Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm², and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm² (focal activation), 1.05 ± 0.32 points/cm² (macro-re-entry) and 1.23 ± 0.26 points/cm² (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm²; re-entry 1.44 ± 0.49 points/cm²; spiral wave 1.50 ± 0.34 points/cm², P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm², P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm² vs. 1.0 ± 0.34 points/cm², P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm².
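The error-versus-density procedure described above can be prototyped in a few lines. The sketch below (Python, assuming numpy and scipy) simulates a focal activation pattern on a 4 × 4 cm sheet, samples it at increasing densities, re-interpolates, and reports the reconstruction error; the focal-source model and conduction velocity are illustrative assumptions, not the authors' exact simulation.

```python
# Hedged sketch: sample a synthetic LAT field at increasing densities,
# re-interpolate, and watch the reconstruction error plateau.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n_grid = 81  # 81 x 81 reference grid over a 4 x 4 cm monolayer
x, y = np.meshgrid(np.linspace(0, 4, n_grid), np.linspace(0, 4, n_grid))
lat_true = np.hypot(x - 2, y - 2) / 0.06  # focal source, assumed velocity

for density in [0.25, 0.5, 1.0, 2.0, 5.0, 10.0]:  # points/cm^2
    n_pts = max(4, int(density * 16))              # 16 cm^2 chamber area
    px, py = rng.uniform(0, 4, n_pts), rng.uniform(0, 4, n_pts)
    samples = np.hypot(px - 2, py - 2) / 0.06
    lat_interp = griddata((px, py), samples, (x, y), method="linear")
    ok = ~np.isnan(lat_interp)                     # ignore extrapolation gaps
    rmse = np.sqrt(np.mean((lat_interp[ok] - lat_true[ok]) ** 2))
    print(f"{density:5.2f} points/cm^2 -> RMSE {rmse:6.2f} ms")
```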
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely the extrema points of the metamodel and the minimum points of a density function, yielding progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples.
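A minimal sketch of the sequential idea, assuming scipy's RBFInterpolator as the metamodel: at each iteration one new point is taken at an extremum of the current metamodel and one at a space-filling location, with a max-min-distance criterion standing in for the paper's density function.

```python
# Sketch of sequential RBF metamodel refinement; the space-filling step is a
# stand-in for the paper's density-function criterion, not its exact rule.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive(x):                       # stand-in for a costly simulation
    return (x[..., 0] - 0.3) ** 2 + np.sin(5 * x[..., 1])

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (8, 2))           # initial design
y = expensive(X)

for it in range(10):
    model = RBFInterpolator(X, y, kernel="thin_plate_spline")
    # (a) exploitation: a local extremum of the current metamodel
    res = minimize(lambda x: model(x[None])[0], rng.uniform(0, 1, 2),
                   bounds=[(0, 1), (0, 1)])
    # (b) exploration: candidate farthest from existing samples
    cand = rng.uniform(0, 1, (256, 2))
    far = cand[np.argmax(np.min(np.linalg.norm(
        cand[:, None] - X[None], axis=-1), axis=1))]
    for x_new in (res.x, far):
        X = np.vstack([X, x_new]); y = np.append(y, expensive(x_new))
print("final design size:", len(X))
```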
Paper SERS chromatography for detection of trace analytes in complex samples
NASA Astrophysics Data System (ADS)
Yu, Wei W.; White, Ian M.
2013-05-01
We report the application of paper SERS substrates for the detection of trace quantities of multiple analytes in a complex sample in the form of paper chromatography. Paper chromatography facilitates the separation of different analytes from a complex sample into distinct sections in the chromatogram, which can then be uniquely identified using SERS. As an example, the separation and quantitative detection of heroin in a highly fluorescent mixture is demonstrated. Paper SERS chromatography has clear applications in law enforcement, food safety, and border protection, and facilitates the rapid detection of chemical and biological threats at the point of sampling.
Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction
ERIC Educational Resources Information Center
Khalafi, Lida; Doolittle, Pamela; Wright, John
2018-01-01
A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…
NASA Astrophysics Data System (ADS)
Kassem, Mohammed A.; Amin, Alaa S.
2015-02-01
A new method to estimate rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by cloud point extraction, using the nonionic surfactant Triton X-114 to extract the complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range 0.5-75 ng mL⁻¹ and the detection limit was 0.15 ng mL⁻¹ of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ≤1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in different complex materials such as a synthetic mixture of alloys and environmental water samples.
NASA Astrophysics Data System (ADS)
Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-01
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the elemental ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ by d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L⁻¹ and 78, respectively. The validity and accuracy of the proposed method were checked by analyzing Cu2+ in a certified reference material (CRM) of serum by both d-CPE and the conventional CPE procedure. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.
ERIC Educational Resources Information Center
Fischer, Dan
2002-01-01
Points out the enthusiasm of students towards the complex chemical survival mechanism of some plants during the early stages of life. Uses allelopathic research to introduce students to conducting experimental research. Includes sample procedures, a timetable, and a sample grading sheet.
Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar
2013-01-01
A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 µg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.
Fast and Robust Stem Reconstruction in Complex Environments Using Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.
2016-06-01
Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. This algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. This algorithm is further facilitated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% is already able to retain the stem fitting quality and simultaneously reduce the computation time significantly by ~88%.
Point prevalence of complex wounds in a defined United Kingdom population.
Hall, Jill; Buckley, Hannah L; Lamb, Karen A; Stubbs, Nikki; Saramago, Pedro; Dumville, Jo C; Cullum, Nicky A
2014-01-01
Complex wounds (superficial-, partial-, or full-thickness skin loss wounds healing by secondary intention) are common; however, there is a lack of high-quality, contemporary epidemiological data. This paper presents point prevalence estimates for complex wounds overall as well as for individual types. A multiservice, cross-sectional survey was undertaken across a United Kingdom city (Leeds, population 751,485) during 2 weeks in spring of 2011. The mean age of people with complex wounds was approximately 70 years, standard deviation 19.41. The point prevalence of complex wounds was 1.47 per 1,000 of the population, 95% confidence interval 1.38 to 1.56. While pressure ulcers and leg ulcers were the most frequent, one in five people in the sample population had a less common wound type. Surveys confined to people with specific types of wound would underestimate the overall impact of complex wounds on the population and health care resources.
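As a quick arithmetic check, the reported confidence interval is reproduced by a normal-approximation (Wald) interval around the point prevalence; the case count below is back-calculated from the reported rate and is an assumption, not a figure given in the paper.

```python
# Back-of-envelope check of the reported prevalence and its 95% CI.
import math

population = 751_485
cases = round(1.47e-3 * population)          # inferred, ~1105 people
p = cases / population
se = math.sqrt(p * (1 - p) / population)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"prevalence {1000*p:.2f} per 1,000 "
      f"(95% CI {1000*lo:.2f} to {1000*hi:.2f})")   # ~1.47 (1.38-1.56)
```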
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from the point of view of both quantum theory and computational complexity theory. We derive a general formula for calculating the output probabilities and, by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
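To make concrete why permanents force approximation schemes, here is the exact Ryser-formula computation, which scales as O(2^n · n) and is only feasible for tiny matrices; the positive-semidefinite Hermitian test matrix is constructed as B†B.

```python
# Exact permanent via Ryser's formula - included to show why permanents are
# only tractable for tiny matrices and why approximate sampling matters.
import itertools
import numpy as np

def permanent_ryser(A):
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# A = B^dagger B is positive-semidefinite Hermitian, as for thermal inputs
B = np.array([[1, 0.5j], [0.2, 1.0]])
A = B.conj().T @ B
print(permanent_ryser(A))
```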
Point sources of potentially endocrine active compounds to aquatic environments such as waste water treatment plants, pulp and paper mills, and animal feeding operations invariably contain complex mixtures of chemicals. The current study investigates the use of targeted in vitro ...
Optical Ptychographic Microscope for Quantitative Bio-Mechanical Imaging
NASA Astrophysics Data System (ADS)
Anthony, Nicholas; Cadenazzi, Guido; Nugent, Keith; Abbey, Brian
The role that mechanical forces play in biological processes such as cell movement and death is becoming of significant interest as we further develop our understanding of the inner workings of cells. The most common method used to obtain stress information is photoelasticity, which maps a sample's birefringence, or its direction-dependent refractive indices, using polarized light. However, this method provides only qualitative data, and for stress information to be useful quantitative data are required. Ptychography is a method for quantitatively determining the phase of a sample's complex transmission function. The technique relies upon the collection of multiple overlapping coherent diffraction patterns from laterally displaced points on the sample. The overlap of measurement points provides complementary information that significantly aids the reconstruction of the complex wavefield exiting the sample and allows for quantitative imaging of weakly interacting specimens. Here we describe recent advances at La Trobe University Melbourne on achieving quantitative birefringence mapping using polarized light ptychography, with applications in cell mechanics.
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10⁴ points within minutes with the use of an average notebook computer.
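For reference, a plain implementation of sample entropy with the Chebyshev (max-norm) template distance; like the paper's algorithm it draws templates from the full series, but none of the speed optimizations are reproduced.

```python
# Plain-vanilla sample entropy; both template counts use the same N - m
# templates, per the standard definition.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()           # common default tolerance
    n = len(x)
    tm = np.lib.stride_tricks.sliding_window_view(x, m)[: n - m]
    tm1 = np.lib.stride_tricks.sliding_window_view(x, m + 1)

    def count_matches(t):           # pairs (i, j), i < j, within tolerance r
        c = 0
        for i in range(len(t) - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += np.count_nonzero(dist <= r)
        return c

    B, A = count_matches(tm), count_matches(tm1)
    return -np.log(A / B) if A and B else np.inf

rng = np.random.default_rng(2)
print(sample_entropy(rng.standard_normal(1000)))   # white noise: high SampEn
```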
Nogueiras, Gloria; Kunnen, E. Saskia; Iborra, Alejandro
2017-01-01
This study adopts a dynamic systems approach to investigate how individuals successfully manage contextual complexity. To that end, we tracked individuals' emotional trajectories during a challenging training course, seeking qualitative changes (turning points), and we tested their relationship with the perceived complexity of the training. The research context was a 5-day higher education course based on process-oriented experiential learning, and the sample consisted of 17 students. The students used a five-point Likert scale to rate the intensity of 16 emotions and the complexity of the training at 8 measurement points. Monte Carlo permutation tests identified 30 turning points in the 272 emotional trajectories analyzed (17 students × 16 emotions). 83% of the turning points indicated a change of pattern in the emotional trajectories that consisted of (a) increasingly intense positive emotions or (b) decreasingly intense negative emotions. These turning points also coincided with particularly complex periods in the training as perceived by the participants (p = 0.003 and p = 0.001, respectively). The relationship between positively trended turning points in the students' emotional trajectories and the complexity of the training may be interpreted as evidence of successful management of the cognitive conflict arising from the clash between the students' prior ways of meaning-making and the challenging demands of the training. One of the strengths of this study is that it provides a relatively simple procedure for identifying turning points in developmental trajectories, which can be applied to various longitudinal experiences that are very common in educational and developmental contexts. Additionally, the findings support the view that the assumption that complex contextual demands lead unfailingly to individuals' learning is incomplete. Instead, it is how individuals manage complexity which may or may not lead to learning. Finally, this study can also be considered a first step in research on the developmental potential of process-oriented experiential learning training.
Improved graphite furnace atomizer
Siemer, D.D.
1983-05-18
A graphite furnace atomizer for use in graphite furnace atomic absorption spectroscopy is described wherein the heating elements are affixed near the optical path and away from the point of sample deposition, so that when the sample is volatilized the spectroscopic temperature at the optical path is at least that of the volatilization temperature, whereby analyte-concomitant complex formation is advantageously reduced. The atomizer may be elongated along its axis to increase the distance between the optical path and the sample deposition point. Also, the atomizer may be elongated along the axis of the optical path, whereby its analytical sensitivity is greatly increased.
Analysis of macromolecules, ligands and macromolecule-ligand complexes
Von Dreele, Robert B [Los Alamos, NM
2008-12-23
A method for determining atomic level structures of macromolecule-ligand complexes through high-resolution powder diffraction analysis and a method for providing suitable microcrystalline powder for diffraction analysis are provided. In one embodiment, powder diffraction data is collected from samples of polycrystalline macromolecule and macromolecule-ligand complex and the refined structure of the macromolecule is used as an approximate model for a combined Rietveld and stereochemical restraint refinement of the macromolecule-ligand complex. A difference Fourier map is calculated and the ligand position and points of interaction between the atoms of the macromolecule and the atoms of the ligand can be deduced and visualized. A suitable polycrystalline sample of macromolecule-ligand complex can be produced by physically agitating a mixture of lyophilized macromolecule, ligand and a solvent.
Direct sampling for stand density index
Mark J. Ducey; Harry T. Valentine
2008-01-01
A direct method of estimating stand density index in the field, without complex calculations, would be useful in a variety of silvicultural situations. We present just such a method. The approach uses an ordinary prism or other angle gauge, but it involves deliberately "pushing the point" or, in some cases, "pulling the point." This adjusts the...
Computer generated hologram from point cloud using graphics processor.
Chen, Rick H-Y; Wilkinson, Timothy D
2009-12-20
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
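The gist of the approach, in a deliberately simplified form: expand the weighted sample into a synthetic population that can then be analyzed with ordinary IID methods. Plain weight-proportional resampling below stands in for the full Polya-urn finite population Bayesian bootstrap, so this illustrates the idea rather than the authors' estimator.

```python
# Highly simplified sketch: real finite-population Bayesian bootstrap uses a
# Polya urn that updates selection probabilities draw by draw; weighted
# resampling with replacement stands in for that scheme here.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(50, 10, size=200)              # observed sample values
w = rng.uniform(1, 30, size=200)              # survey design weights
N = int(round(w.sum()))                       # implied population size

def synthetic_population(y, w, N, rng):
    idx = rng.choice(len(y), size=N, replace=True, p=w / w.sum())
    return y[idx]

# Each synthetic population can be analyzed with ordinary IID methods.
pops = [synthetic_population(y, w, N, rng) for _ in range(10)]
print("design-weighted mean:", np.average(y, weights=w))
print("mean of synthetic-population means:",
      np.mean([p.mean() for p in pops]))
```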
Tiwari, Swapnil; Deb, Manas Kanti; Sen, Bhupendra K
2017-04-15
A new cloud point extraction (CPE) method for the determination of hexavalent chromium, Cr(VI), in food samples is established with subsequent diffuse reflectance Fourier transform infrared (DRS-FTIR) analysis. The method demonstrates enrichment of Cr(VI) after its complexation with 1,5-diphenylcarbazide. The reddish-violet complex formed showed λmax at 540 nm. Micellar phase separation occurred at the cloud point temperature of the non-ionic surfactant Triton X-100, and the complex was entrapped in the surfactant and analyzed using DRS-FTIR. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 1.22 and 4.02 μg mL⁻¹, respectively. Good linearity with a correlation coefficient of 0.94 was found for the concentration range of 1-100 μg mL⁻¹. At 10 μg mL⁻¹ the standard deviation for 7 replicate measurements was 0.11 μg mL⁻¹. The method was successfully applied to commercially marketed food stuffs, and good recoveries (81-112%) were obtained by spiking the real samples.
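The reported LOD and LOQ sit in the usual ≈3.3:10 ratio, consistent with the common LOD = 3.3σ/s, LOQ = 10σ/s convention; the sketch below applies that convention to made-up calibration data (all numbers are assumptions, not the paper's data).

```python
# LOD/LOQ from a linear calibration via the 3.3*sigma/slope and
# 10*sigma/slope convention, on invented calibration points.
import numpy as np

conc = np.array([1, 5, 10, 25, 50, 75, 100.0])      # ug/mL (assumed)
signal = 0.012 * conc + np.array(
    [0.004, -0.003, 0.005, -0.002, 0.006, -0.005, 0.002])  # absorbance

slope, intercept = np.polyfit(conc, signal, 1)
resid_sd = np.std(signal - (slope * conc + intercept), ddof=2)
print(f"LOD = {3.3 * resid_sd / slope:.2f} ug/mL, "
      f"LOQ = {10 * resid_sd / slope:.2f} ug/mL")
```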
Galbeiro, Rafaela; Garcia, Samara; Gaubeur, Ivanise
2014-04-01
Cloud point extraction (CPE) was used to simultaneously preconcentrate trace-level cadmium, nickel and zinc for determination by flame atomic absorption spectrometry (FAAS). 1-(2-Pyridylazo)-2-naphthol (PAN) was used as a complexing agent, and the metal complexes were extracted from the aqueous phase by the surfactant Triton X-114 ((1,1,3,3-tetramethylbutyl)phenyl-polyethylene glycol). Under optimized complexation and extraction conditions, the limits of detection were 0.37 μg L⁻¹ (Cd), 2.6 μg L⁻¹ (Ni) and 2.3 μg L⁻¹ (Zn). The extraction was quantitative, with a preconcentration factor of 30 and enrichment factors estimated to be 42, 40 and 43, respectively. The method was applied to different complex samples, and the accuracy was evaluated by analyzing a water standard reference material (NIST SRM 1643e), yielding results in agreement with the certified values.
NASA Astrophysics Data System (ADS)
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced-order models (ROMs) based on snapshots of high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown in advance, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and for a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.
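One plausible reading of the adding-point strategy, sketched with a minimal fuzzy c-means: cluster the existing snapshots, then add the candidate whose cluster membership is most ambiguous. The membership-entropy criterion is a stand-in, not the paper's exact low-precision indicator.

```python
# Fuzzy c-means on existing snapshots; the next sample is the candidate the
# clustering is least sure about (highest membership entropy).
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=4):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)                      # memberships (n, c)
        centers = (u.T ** m @ X) / np.sum(u.T ** m, axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(5)
snapshots = rng.uniform(0, 1, (40, 2))         # existing parameter samples
centers, _ = fuzzy_cmeans(snapshots)

cand = rng.uniform(0, 1, (500, 2))             # candidate new sample points
d = np.linalg.norm(cand[:, None] - centers[None], axis=-1) + 1e-12
u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** 2, axis=2)  # m = 2
entropy = -np.sum(u * np.log(u + 1e-12), axis=1)
print("next sampling point:", cand[np.argmax(entropy)])
```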
Alberti, Giancarla; Biesuz, Raffaela; Pesavento, Maria
2008-12-01
Different natural water samples were investigated to determine the total concentration and the distribution of species for Cu(II), Pb(II), Al(III) and U(VI). The proposed method, named resin titration (RT), was developed in our laboratory to investigate the distribution of species for metal ions in complex matrices. It is a competition method, in which a complexing resin competes with the natural ligands present in the sample to combine with the metal ions. In the present paper, river, estuarine and seawater samples, collected during a cruise in the Adriatic Sea, were investigated. For each sample, two RTs were performed, using different complexing resins: the iminodiacetic Chelex 100 and the carboxylic Amberlite CG50. In this way, it was possible to detect different classes of ligands. Satisfactory results have been obtained and are commented on critically. They were summarized by principal component analysis (PCA), and the correlations with physicochemical parameters allowed one to follow the evolution of the metals along the considered transect. It should be pointed out that, according to our findings, the ligands responsible for metal ion complexation are not the major components of the water system, since they form considerably weaker complexes.
Kartal Temel, Nuket; Gürkan, Ramazan
2018-03-01
A novel ultrasound-assisted cloud point extraction method was developed for the preconcentration and determination of V(V) in beverage samples. After complexation by pyrogallol in the presence of safranin T at pH 6.0, V(V) ions are extracted as a ternary complex into the micellar phase of Triton X-114. The complex was monitored at 533 nm by spectrophotometry. The matrix effect on the recovery of V(V) from samples spiked at 50 μg L⁻¹ was evaluated. Under optimized conditions, the limits of detection and quantification of the method were 0.58 and 1.93 μg L⁻¹, respectively, in a linear range of 2-500 μg L⁻¹, with sensitivity enhancement and preconcentration factors of 47.7 and 40 for preconcentration from 15 mL of sample solution. The recoveries from spiked samples were in the range of 93.8-103.2%, with a relative standard deviation ranging from 2.6% to 4.1% (25, 100 and 250 μg L⁻¹, n = 5). The accuracy was verified by analysis of two certified samples, and the results were in good agreement with the certified values. The intra-day and inter-day precision were tested by reproducibility (3.3-3.4%) and repeatability (3.4-4.1%) analysis for five replicate measurements of V(V) in quality control samples spiked with 5, 10 and 15 μg L⁻¹. Trace V(V) contents of the selected beverage samples were successfully determined by the developed method.
Wilkison, D.H.; Armstrong, D.J.; Hampton, S.A.
2009-01-01
From 1998 through 2007, over 750 surface-water or bed-sediment samples in the Blue River Basin - a largely urban basin in metropolitan Kansas City - were analyzed for more than 100 anthropogenic compounds. Compounds analyzed included nutrients, fecal-indicator bacteria, suspended sediment, pharmaceuticals and personal care products. Non-point source runoff, hydrologic alterations, and numerous waste-water discharge points resulted in the routine detection of complex mixtures of anthropogenic compounds in samples from basin stream sites. Temporal and spatial variations in concentrations and loads of nutrients, pharmaceuticals, and organic wastewater compounds were observed, primarily related to a site's proximity to point-source discharges and stream-flow dynamics.
Electrical Chips for Biological Point-of-Care Detection.
Reddy, Bobby; Salm, Eric; Bashir, Rashid
2016-07-11
As the future of health care diagnostics moves toward more portable and personalized techniques, there is immense potential to harness the power of electrical signals for biological sensing and diagnostic applications at the point of care. Electrical biochips can be used to both manipulate and sense biological entities, as they can have several inherent advantages, including on-chip sample preparation, label-free detection, reduced cost and complexity, decreased sample volumes, increased portability, and large-scale multiplexing. The advantages of fully integrated electrical biochip platforms are particularly attractive for point-of-care systems. This review summarizes these electrical lab-on-a-chip technologies and highlights opportunities to accelerate the transition from academic publications to commercial success.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.
NASA Astrophysics Data System (ADS)
Molinario, G.; Hansen, M. C.; Potapov, P. V.; Tyukavina, A.; Stehman, S.; Barker, B.; Humber, M.
2017-10-01
The rural complex is the inhabited agricultural land cover mosaic found along the network of rivers and roads in the forest of the Democratic Republic of Congo. It is a product of traditional small-holder shifting cultivation. To date, thanks to its distinction from primary forest, this area has been mapped as relatively homogeneous, leaving the proportions of land cover heterogeneity within it unknown. However, the success of strategies for sustainable development, including land use planning and payment for ecosystem services, such as Reduced Emissions from Deforestation and Degradation, depends on the accurate characterization of the impacts of land use on natural resources, including within the rural complex. We photo-interpreted a simple random sample of 1000 points in the established rural complex, using 3106 high resolution satellite images obtained from the National Geospatial-Intelligence Agency, together with 406 images from Google Earth, spanning the period 2008-2016. Results indicate that nationally the established rural complex includes 5% clearings, 10% active fields, 26% fallows, 34% secondary forest, 2% wetland forest, 11% primary forest, 6% grasslands, 3% roads and settlements and 2% commercial plantations. Only a small proportion of sample points were plantations, while other commercial dynamics, such as logging and mining, were not detected in the sample. The area of current shifting cultivation accounts for 76% of the established rural complex. Added to primary forest (11%), this means that 87% of the rural complex is available for shifting cultivation. At the current clearing rate, it would take ~18 years for a complete rotation of the rural complex to occur. Additional pressure on land results in either the cultivation of non-preferred land types within the rural complex (such as wetland forest), or expansion of agriculture into nearby primary forests, with attendant impacts on emissions, habitat loss and other ecosystem services.
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10 × 10⁻³, 1.11 × 10⁻⁷, and 5.50 × 10⁻⁷ for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
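For orientation, the standard Tsallis form that the qSampEn generalization draws on; the paper's exact qSampEn expression is not reproduced here.

```latex
% Tsallis entropy, recovering Shannon entropy in the limit q -> 1.
S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q = -\sum_i p_i \ln p_i .
```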
Scalable boson sampling with time-bin encoding using a loop-based architecture.
Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P
2014-09-19
We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.
Current trends in sample preparation for cosmetic analysis.
Zhong, Zhixiong; Li, Gongke
2017-01-01
The widespread applications of cosmetics in modern life make their analysis particularly important from a safety point of view. There is a wide variety of restricted ingredients and prohibited substances that primarily influence the safety of cosmetics. Sample preparation for cosmetic analysis is a crucial step as the complex matrices may seriously interfere with the determination of target analytes. In this review, some new developments (2010-2016) in sample preparation techniques for cosmetic analysis, including liquid-phase microextraction, solid-phase microextraction, matrix solid-phase dispersion, pressurized liquid extraction, cloud point extraction, ultrasound-assisted extraction, and microwave digestion, are presented. Furthermore, the research and progress in sample preparation techniques and their applications in the separation and purification of allowed ingredients and prohibited substances are reviewed.
NASA Astrophysics Data System (ADS)
Wu, Peng; Zhang, Yunchang; Lv, Yi; Hou, Xiandeng
2006-12-01
A simple, low cost and highly sensitive method based on cloud point extraction (CPE) for separation/preconcentration and thermospray flame quartz furnace atomic absorption spectrometry was proposed for the determination of ultratrace cadmium in water and urine samples. The analytical procedure involved the formation of analyte-entrapped surfactant micelles by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution and a Triton X-114 solution. When the temperature of the system was higher than the cloud point of Triton X-114, the complex of cadmium-PDC entered the surfactant-rich phase and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, the limit of detection was 0.04 μg/L for cadmium with a sample volume of 10 mL. The analytical results of cadmium in water and urine samples agreed well with those by ICP-MS.
Methyl-CpG island-associated genome signature tags
Dunn, John J
2014-05-20
Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.
Genova, Alessandro; Pavanello, Michele
2015-12-16
In order to approximately satisfy the Bloch theorem, simulations of complex materials involving periodic systems are made n_k times more complex by the need to sample the first Brillouin zone at n_k points. By combining ideas from Kohn-Sham density-functional theory (DFT) and orbital-free DFT, for which no sampling is needed due to the absence of waves, subsystem DFT offers an interesting middle ground capable of sizable theoretical speedups against Kohn-Sham DFT. By splitting the supersystem into interacting subsystems, and mapping their quantum problem onto separate auxiliary Kohn-Sham systems, subsystem DFT allows an optimal topical sampling of the Brillouin zone. We elucidate this concept with two proof-of-principle simulations: a water bilayer on Pt[1 1 1], and a complex system relevant to catalysis, namely a thiophene molecule physisorbed on a molybdenum sulfide monolayer deposited on top of an α-alumina support. For the latter system, a speedup of 300% is achieved against the subsystem DFT reference by using an optimized Brillouin zone sampling (600% against KS-DFT).
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential for driving-network identification in complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving-network prediction: time-course cellular differentiation studies often contain only one sample at each time point, while driving-network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
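A generic way to examine how each time point affects the autocorrelation is the critical-slowing-down indicator: lag-1 autocorrelation in a sliding window, rising toward 1 near a transition. A hedged sketch on a toy expression profile follows; this is a common indicator from the critical-transition literature, not the authors' exact procedure.

```python
# Sliding-window lag-1 autocorrelation as a critical-transition indicator.
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.dot(x[:-1], x[1:]) / denom if denom > 0 else 0.0

rng = np.random.default_rng(6)
t = np.arange(60)
expr = np.sin(t / 20) + rng.normal(0, 0.3, len(t))   # toy expression profile

window = 15
ac = [lag1_autocorr(expr[i:i + window]) for i in range(len(t) - window)]
print("candidate critical window starts at t =", int(np.argmax(ac)))
```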
Sun, Mei; Wu, Qianghua
2010-04-15
A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed. The CPE method was based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), and Triton X-114 was used as the non-ionic surfactant. The main factors affecting cloud point extraction efficiency, such as the pH of the solution, the concentration and kind of complexing agent, the concentration of non-ionic surfactant, and the equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit for Al(III) was 0.06 ng mL⁻¹. The relative standard deviation (n = 7) was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. This method is simple, accurate and sensitive, and can be applied to the determination of ultra-trace aluminum in human albumin.
NASA Astrophysics Data System (ADS)
Haider, Shahid A.; Tran, Megan Y.; Wong, Alexander
2018-02-01
Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and provide diagnostic clues for treatment. Methods for this kind of analysis involve tabletop devices that weigh tens of kilograms with costs on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder between two linear polarizers and a two-dimensional photodetector array to provide an overall complexity reduction in the system. This enables simultaneous measurement of polarization variations across multiple polarizations after they interact with a biological sample, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.
Page, Norman J; Talkington, Raymond W.
1984-01-01
Samples of spinel lherzolite, harzburgite, dunite, and chromitite from the Bay of Islands, Lewis Hills, Table Mountain, Advocate, North Arm Mountain, White Hills Peridotite, Point Rousse, Great Bend and Betts Cove ophiolite complexes in Newfoundland were analyzed for the platinum-group elements (PGE) Pd, Pt, Rh, Ru and Ir. The ranges of concentration (in ppb) observed for all rocks are: less than 0.5 to 77 (Pd), less than 1 to 120 (Pt), less than 0.5 to 20 (Rh), less than 100 to 250 (Ru) and less than 20 to 83 (Ir). Chondrite-normalized PGE ratios suggest differences between rock types and between complexes. Samples of chromitite and dunite show relative enrichment in Ru and Ir and relative depletion in Pt and Pd.
Adams, David T.; Langer, William H.; Hoefen, Todd M.; Van Gosen, Bradley S.; Meeker, Gregory P.
2010-01-01
Natural background levels of Libby-type amphibole in the sediment of the Libby valley in Montana have not, up to this point, been determined. The purpose of this report is to provide the preliminary findings of a study designed by both the U.S. Geological Survey and the U.S. Environmental Protection Agency and performed by the U.S. Geological Survey. The study worked to constrain the natural background levels of fibrous amphiboles potentially derived from the nearby Rainy Creek Complex. The material selected for this study was sampled from three localities, two of which are active open-pit sand and gravel mines. Seventy samples were collected in total and examined using a scanning electron microscope equipped with an energy dispersive X-ray spectrometer. All samples contained varying amounts of feldspars, ilmenite, magnetite, quartz, clay minerals, pyroxene minerals, and non-fibrous amphiboles such as tremolite, actinolite, and magnesiohornblende. Of the 70 samples collected, only three had detectable levels of fibrous amphiboles compatible with those found in the Rainy Creek Complex. The maximum concentration, identified here, of the amphiboles potentially from the Rainy Creek Complex is 0.083 percent by weight.
Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran
2010-05-01
Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
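The point-sampling estimator for Shannon's diversity is easy to prototype: sample a categorical map at random points, estimate class proportions from the hits, and compare with the wall-to-wall value. Map size, class count, and sample size below are arbitrary; note the plug-in estimator's small-sample bias, which the study quantifies.

```python
# Point-sample estimate of Shannon's diversity on a toy land-cover raster.
import numpy as np

rng = np.random.default_rng(7)
landcover = rng.integers(0, 7, size=(500, 500))      # toy 7-class map

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

full = shannon(np.bincount(landcover.ravel()))       # wall-to-wall value

n_points = 200
rows = rng.integers(0, landcover.shape[0], n_points)
cols = rng.integers(0, landcover.shape[1], n_points)
est = shannon(np.bincount(landcover[rows, cols], minlength=7))
print(f"wall-to-wall H = {full:.3f}, point-sample H = {est:.3f}")
```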
Exploring revictimization risk in a community sample of sexual assault survivors.
Chu, Ann T; Deprince, Anne P; Mauss, Iris B
2014-01-01
Previous research points to links between risk detection (the ability to detect danger cues in various situations) and sexual revictimization in college women. Given important differences between college and community samples that may be relevant to revictimization risk (e.g., the complexity of trauma histories), the current study explored the link between risk detection and revictimization in a community sample of women. Community-recruited women (N = 94) reported on their trauma histories in a semistructured interview. In a laboratory session, participants listened to a dating scenario involving a woman and a man that culminated in sexual assault. Participants were instructed to press a button "when the man had gone too far." Unlike in college samples, revictimized community women (n = 47) did not differ in terms of risk detection response times from women with histories of no victimization (n = 10) or single victimization (n = 15). Data from this study point to the importance of examining revictimization in heterogeneous community samples where risk mechanisms may differ from college samples.
Detection of image structures using the Fisher information and the Rao metric.
Maybank, Stephen J
2004-12-01
In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
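For reference, the standard definitions underlying the construction: the Fisher information and the Rao metric it induces on the parameter space, distances under which set the required sampling resolution.

```latex
% Fisher information matrix and the induced Rao (Fisher-Rao) line element.
g_{ij}(\theta)
  = \mathbb{E}\!\left[
      \frac{\partial \log p(x\mid\theta)}{\partial \theta_i}\,
      \frac{\partial \log p(x\mid\theta)}{\partial \theta_j}
    \right],
\qquad
ds^2 = \sum_{i,j} g_{ij}(\theta)\, d\theta_i\, d\theta_j .
```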
The influence of point defects on the thermal conductivity of AlN crystals
NASA Astrophysics Data System (ADS)
Rounds, Robert; Sarkar, Biplab; Alden, Dorian; Guo, Qiang; Klump, Andrew; Hartmann, Carsten; Nagashima, Toru; Kirste, Ronny; Franke, Alexander; Bickermann, Matthias; Kumagai, Yoshinao; Sitar, Zlatko; Collazo, Ramón
2018-05-01
The average bulk thermal conductivity of free-standing physical vapor transport and hydride vapor phase epitaxy single crystal AlN samples with different impurity concentrations is analyzed using the 3ω method in the temperature range of 30-325 K. AlN wafers grown by physical vapor transport show significant variation in thermal conductivity at room temperature with values ranging between 268 W/m K and 339 W/m K. AlN crystals grown by hydride vapor phase epitaxy yield values between 298 W/m K and 341 W/m K at room temperature, suggesting that the same fundamental mechanisms limit the thermal conductivity of AlN grown by both techniques. All samples in this work show phonon resonance behavior resulting from incorporated point defects. Samples shown by optical analysis to contain carbon-silicon complexes exhibit higher thermal conductivity above 100 K. Phonon scattering by point defects is determined to be the main limiting factor for thermal conductivity of AlN within the investigated temperature range.
Lifetime Occupation and Late-Life Cognitive Performance Among Women.
Ribeiro, Pricila Cristina Correa; Lourenço, Roberto Alves
2015-01-01
We examined whether women who had regular jobs throughout life performed better cognitively than older adult housewives. Linear regression was used to compare global cognitive performance scores of housewives (G1) and women exposed to work of low (G2) and high (G3) complexity. The sample comprised 477 older adult Brazilian women, 430 (90.4%) of whom had performed lifelong jobs. In work with data, the G2 group's cognitive performance scores were 1.73 points higher (p = .03), and the G3 group scored 1.76 points higher (p = .02), than G1. In work with things and with people, the G3 group scored, respectively, 2.04 (p < .01) and 2.21 (p < .01) cognitive test points higher than G1. Based on our findings, we suggest that occupation of greater complexity is associated with better cognitive performance in women later in life.
CePt2In7: Shubnikov-de Haas measurements on micro-structured samples under high pressures
NASA Astrophysics Data System (ADS)
Kanter, J.; Moll, P.; Friedemann, S.; Alireza, P.; Sutherland, M.; Goh, S.; Ronning, F.; Bauer, E. D.; Batlogg, B.
2014-03-01
CePt2In7 belongs to the CemMnIn(3m+2n) heavy fermion family, but compared to the CeMIn5 members of this group, exhibits a more two-dimensional electronic structure. At zero pressure the ground state is antiferromagnetically ordered. Under pressure the antiferromagnetic order is suppressed and a superconducting phase is induced, with a maximum Tc above a quantum critical point around 31 kbar. To investigate the changes in the Fermi surface and effective electron masses around the quantum critical point, Shubnikov-de Haas measurements were conducted under high pressures in an anvil cell. The samples were micro-structured and contacted using a Focused Ion Beam (FIB). The Focused Ion Beam enables sample contacting and structuring down to a sub-micrometer scale, making the measurement of several samples with complex shapes and multiple contacts on a single anvil feasible.
NASA Astrophysics Data System (ADS)
Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto
2017-04-01
Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques. Kriging and the Random Forest algorithm are examples of predictors with this aim. The objective of this work was to compare methods of soil attribute spatialization in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) from ice-free areas was tested by using morphometric covariables, and geostatistical models without these covariables. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained by using a Terrestrial Laser Scanner. DTMs were generated from a cloud of points with spatial resolutions of 1, 5, 10, 20 and 30 m. Hence, 40 morphometric covariates were generated. Simple kriging was performed using R software. The same data set, coupled with morphometric covariates, was used to predict values of the studied attributes at non-sampled sites through the Random Forest interpolator. Little difference was observed between the maps generated by the simple kriging and Random Forest interpolators. Also, DTMs with better spatial resolution did not improve the quality of soil attribute prediction. Results revealed that simple kriging can be used as an interpolator when morphometric covariates are not available, with little loss of quality. Further work on soil chemical attribute prediction techniques is needed, especially in periglacial areas with complex landforms.
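As a rough illustration of the two predictors being compared, the sketch below contrasts a coordinates-only geostatistical interpolator with a Random Forest driven by terrain covariates. It is hypothetical throughout: ordinary kriging (via pykrige) stands in for the paper's simple kriging, and the coordinates, attribute values, and covariates are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from pykrige.ok import OrdinaryKriging  # ordinary kriging as a stand-in

rng = np.random.default_rng(0)
n = 106                                      # number of soil samples in the study
x, y = rng.uniform(0, 1000, n), rng.uniform(0, 1000, n)
potassium = rng.lognormal(3.0, 0.5, n)       # fake attribute values
covariates = rng.normal(size=(n, 40))        # 40 morphometric covariates from the DTMs

# (1) Kriging: spatial coordinates and a variogram model only.
ok = OrdinaryKriging(x, y, potassium, variogram_model="spherical")
k_hat, k_var = ok.execute("points", np.array([500.0]), np.array([500.0]))

# (2) Random Forest: morphometric covariates at each location.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(covariates, potassium)
k_hat_rf = rf.predict(covariates[:1])
```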
Toogood, Helen S; Leys, David; Scrutton, Nigel S
2007-11-01
Electron transferring flavoproteins (ETFs) are soluble heterodimeric FAD-containing proteins that function primarily as soluble electron carriers between various flavoprotein dehydrogenases. ETF is positioned at a key metabolic branch point, responsible for transferring electrons from up to 10 primary dehydrogenases to the membrane-bound respiratory chain. Clinical mutations of ETF result in the often fatal disease glutaric aciduria type II. Structural and biophysical studies of ETF in complex with partner proteins have shown that ETF partitions the functions of partner binding and electron transfer between (a) a 'recognition loop', which acts as a static anchor at the ETF-partner interface, and (b) a highly mobile redox-active FAD domain. Together, this enables the FAD domain of ETF to sample a range of conformations, some compatible with fast interprotein electron transfer. This 'conformational sampling' enables ETF to recognize structurally distinct partners, whilst also maintaining a degree of specificity. Complex formation triggers mobility of the FAD domain, an 'induced disorder' mechanism contrasting with the more generally accepted models of protein-protein interaction by induced fit mechanisms. We discuss the implications of the highly dynamic nature of ETFs in biological interprotein electron transfer. ETF complexes point to mechanisms of electron transfer in which 'dynamics drive function', a feature that is probably widespread in biology given the modular assembly and flexible nature of biological electron transfer systems.
NMR study of xenotropic murine leukemia virus-related virus protease in a complex with amprenavir
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furukawa, Ayako; Okamura, Hideyasu; Morishita, Ryo
2012-08-24
Highlights: • The protease (PR) of the XMR virus (XMRV) was successfully synthesized with a cell-free system. • The interface of XMRV PR with an inhibitor, amprenavir (APV), was identified with NMR. • Structural heterogeneity is induced for the two PR protomers in the APV:PR = 1:2 complex. • The structural heterogeneity is transmitted even to regions distant from the interface. • Long-range transmission of structural change may be utilized for drug discovery. -- Abstract: Xenotropic murine leukemia virus-related virus (XMRV) is a virus created through recombination of two murine leukemia proviruses under artificial conditions during the passage of human prostate cancer cells in athymic nude mice. The homodimeric protease (PR) of XMRV plays a critical role in the production of functional viral proteins and is a prerequisite for viral replication. We synthesized XMRV PR using the wheat germ cell-free expression system and carried out structural analysis of XMRV PR in a complex with an inhibitor, amprenavir (APV), by means of NMR. Five different combinatorially 15N-labeled samples were prepared and backbone resonance assignments were made by applying Otting's method, with which the amino acid types of the [1H, 15N] HSQC resonances were automatically identified using the five samples (Wu et al., 2006). A titration experiment involving APV revealed that one APV molecule binds to one XMRV PR dimer. For many residues, two distinct resonances were observed, which is thought to be due to the structural heterogeneity between the two protomers in the APV:XMRV PR = 1:2 complex. PR residues at the interface with APV have been identified on the basis of chemical shift perturbation and identification of the intermolecular NOEs by means of filtered NOE experiments. Interestingly, chemical shift heterogeneity between the two protomers of XMRV PR has been observed not only at the interface with APV but also in regions apart from the interface. This indicates that the structural heterogeneity induced by the asymmetry of the binding of APV to the XMRV PR dimer is transmitted to distant regions. This is in contrast to the case of the APV:HIV-1 PR complex, in which the structural heterogeneity is only localized at the interface. Long-range transmission of the structural change identified for the XMRV PR complex might be utilized for the discovery of a new type of drug.
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and quantify saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture structural variability in vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high-resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric models, 3-D surface reconstruction and rasterised volume, and point cloud elevation histogram modelling techniques to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
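A minimal sketch of what an elevation-histogram biomass model of the kind described here could look like, on synthetic data: TLS return heights are binned into a fixed histogram per plot and plot biomass is regressed on the bin fractions. Bin count, height range, the linear regressor, and all data are our placeholder choices, not the authors'.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def elevation_histogram(z, n_bins=20, z_max=3.0):
    """Fraction of TLS points per height bin for one vegetation plot."""
    counts, _ = np.histogram(np.clip(z, 0, z_max), bins=n_bins, range=(0, z_max))
    return counts / counts.sum()

rng = np.random.default_rng(1)
plots = [rng.gamma(2.0, 0.4, size=5000) for _ in range(30)]   # fake point heights
X = np.vstack([elevation_histogram(z) for z in plots])
biomass = np.array([z.mean() * 4.2 for z in plots])           # fake harvest values

model = LinearRegression().fit(X, biomass)
print(model.score(X, biomass))   # r^2 of the histogram model on training plots
```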
Path optimization method for the sign problem
NASA Astrophysics Data System (ADS)
Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji
2018-03-01
We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When we have singular points of the action or multiple critical points near the original integral surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
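The contour-deformation idea is easy to reproduce on a Gaussian toy action, S(z) = z²/2 + iβz (our illustrative choice, not the model used in the paper). Deforming the path to z = t + if(t) and scoring it by the average phase factor shows how a good f(t) can remove the sign problem entirely:

```python
import numpy as np

beta = 2.0
t = np.linspace(-8.0, 8.0, 4001)
dt = t[1] - t[0]

def avg_phase_factor(f, fprime):
    """Average phase factor of exp(-S) along the path z(t) = t + i f(t)."""
    z = t + 1j * f
    jac = 1.0 + 1j * fprime                         # dz/dt enters the reweighting
    w = np.exp(-(0.5 * z**2 + 1j * beta * z)) * jac
    return np.abs((w * dt).sum()) / (np.abs(w) * dt).sum()

flat = avg_phase_factor(np.zeros_like(t), np.zeros_like(t))
shifted = avg_phase_factor(np.full_like(t, -beta), np.zeros_like(t))
print(flat, shifted)   # ~0.135 on the real axis vs ~1.0 on the shifted path
```

For this action the constant shift f(t) = -β makes the integrand real, so the average phase factor rises from e^(-β²/2) ≈ 0.135 to 1; in the paper, f(t) is instead optimized numerically, e.g. with a neural network.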
Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo
2015-01-01
Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.
Szydzik, C; Gavela, A F; Herranz, S; Roccisano, J; Knoerzer, M; Thurgood, P; Khoshmanesh, K; Mitchell, A; Lechuga, L M
2017-08-08
A primary limitation preventing practical implementation of photonic biosensors within point-of-care platforms is their integration with fluidic automation subsystems. For most diagnostic applications, photonic biosensors require complex fluid handling protocols; this is especially prominent in the case of competitive immunoassays, commonly used for detection of low-concentration, low-molecular weight biomarkers. For this reason, complex automated microfluidic systems are needed to realise the full point-of-care potential of photonic biosensors. To fulfil this requirement, we propose an on-chip valve-based microfluidic automation module, capable of automating such complex fluid handling. This module is realised through application of a PDMS injection moulding fabrication technique, recently described in our previous work, which enables practical fabrication of normally closed pneumatically actuated elastomeric valves. In this work, these valves are configured to achieve multiplexed reagent addressing for an on-chip diaphragm pump, providing the sample and reagent processing capabilities required for automation of cyclic competitive immunoassays. Application of this technique simplifies fabrication and introduces the potential for mass production, bringing point-of-care integration of complex automated microfluidics into the realm of practicality. This module is integrated with a highly sensitive, label-free bimodal waveguide photonic biosensor, and is demonstrated in the context of a proof-of-concept biosensing assay, detecting the low-molecular weight antibiotic tetracycline.
Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.
Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T
2008-09-15
Cadmium concentrations in human urine are typically at or below the 1 µg L(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed, leaving the metal-containing surfactant layer intact. A 25 µL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by the inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ng L(-1) for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.
Rajaram, Kaushik; Losada-Pérez, Patricia; Vermeeren, Veronique; Hosseinkhani, Baharak; Wagner, Patrick; Somers, Veerle; Michiels, Luc
2015-01-01
Over the last three decades, phage display technology has been used for the display of target-specific biomarkers, peptides, antibodies, etc. Phage display-based assays are mostly limited to the phage ELISA, which is notorious for its high background signal and laborious methodology. These problems have recently been overcome by designing a dual-display phage with two different end functionalities, namely, streptavidin (STV)-binding protein at one end and a rheumatoid arthritis-specific autoantigenic target at the other end. Using this dual-display phage, a much higher sensitivity in screening specificities of autoantibodies in a complex serum sample has been achieved compared to the single-display phage system in phage ELISA. Herein, we aimed to develop a novel, rapid, and sensitive dual-display phage assay to detect the presence of autoantibodies in serum samples using quartz crystal microbalance with dissipation monitoring as a sensing platform. The vertical functionalization of the phage over the STV-modified surfaces resulted in clear frequency and dissipation shifts revealing a well-defined viscoelastic signature. Screening for autoantibodies using antihuman IgG-modified surfaces and the dual-display phage with STV magnetic bead complexes allowed us to isolate the target entities from complex mixtures and to achieve a large response as compared to negative control samples. This novel dual-display strategy can be a potential alternative to the time-consuming phage ELISA protocols for the qualitative analysis of serum autoantibodies and can be taken as a departure point to ultimately achieve a point-of-care diagnostic system.
NASA Astrophysics Data System (ADS)
Oriani, Fabio
2017-04-01
The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
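In code terms, the core of Direct Sampling for a univariate time series is a nearest-pattern search over the training data: to simulate each new value, scan the training series for a location whose preceding window is sufficiently similar to the simulation's recent history, then copy the value that follows it. The sketch below is our minimal reading of that loop; window length, distance threshold, and scan budget are illustrative parameters, not values from the cited papers.

```python
import numpy as np

def direct_sampling(train, n_sim, m=5, threshold=0.05, max_scan=2000, seed=0):
    """Simulate n_sim new values, each conditioned on the last m simulated ones."""
    rng = np.random.default_rng(seed)
    sim = list(train[:m])                          # seed the simulation history
    for _ in range(n_sim):
        pattern = np.array(sim[-m:])
        best_i, best_d = m, np.inf
        for i in rng.integers(m, len(train) - 1, size=max_scan):
            d = np.mean(np.abs(train[i - m : i] - pattern))
            if d < best_d:
                best_i, best_d = i, d
            if d <= threshold:                     # "sufficiently similar": stop early
                break
        sim.append(train[best_i])                  # copy the value following the match
    return np.array(sim[m:])

rng = np.random.default_rng(1)
train = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.normal(size=3000)
series = direct_sampling(train, n_sim=500)
```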
Hoang, Phuong Le; Ahn, Sanghoon; Kim, Jeng-o; Kang, Heeshin; Noh, Jiwhan
2017-01-01
In modern high-intensity ultrafast laser processing, detecting the focal position of the working laser beam, at which the intensity is the highest and the beam diameter is the lowest, and immediately locating the target sample at that point are challenging tasks. A system that allows in-situ real-time focus determination and fabrication using a high-power laser has been in high demand among both engineers and scientists. Conventional techniques require the complicated mathematical theory of wave optics, employing interference as well as diffraction phenomena to detect the focal position; however, these methods are ineffective and expensive for industrial application. Moreover, these techniques could not perform detection and fabrication simultaneously. In this paper, we propose an optical design capable of detecting the focal point and fabricating complex patterns on a planar sample surface simultaneously. In-situ real-time focus detection is performed using a bandpass filter, which only allows for the detection of laser transmission. The technique enables rapid, non-destructive, and precise detection of the focal point. Furthermore, it is sufficiently simple for application in both science and industry for mass production, and it is expected to contribute to the next generation of laser equipment, which can be used to fabricate micro-patterns with high complexity. PMID:28671566
Zhang, Xin; Fu, Lingdi; Geng, Yuehua; Zhai, Xiang; Liu, Yanhua
2014-03-01
Here, we administered repeated-pulse transcranial magnetic stimulation to healthy people at the left Guangming (GB37) and a mock point, and calculated the sample entropy of electroencephalogram signals using nonlinear dynamics. Additionally, we compared electroencephalogram sample entropy of signals in response to visual stimulation before, during, and after repeated-pulse transcranial magnetic stimulation at the Guangming. Results showed that electroencephalogram sample entropy at left (F3) and right (FP2) frontal electrodes differed significantly depending on where the magnetic stimulation was administered. Additionally, compared with the mock point, electroencephalogram sample entropy was higher after stimulating the Guangming point. When visual stimulation at Guangming was given before repeated-pulse transcranial magnetic stimulation, significant differences in sample entropy were found at five electrodes (C3, Cz, C4, P3, T8) in parietal cortex, the central gyrus, and the right temporal region compared with when it was given after repeated-pulse transcranial magnetic stimulation, indicating that repeated-pulse transcranial magnetic stimulation at Guangming can affect visual function. Analysis of the electroencephalogram revealed that when visual stimulation preceded repeated-pulse transcranial magnetic stimulation, sample entropy values were higher at the C3, C4, and P3 electrodes and lower at the Cz and T8 electrodes than when visual stimulation followed repeated-pulse transcranial magnetic stimulation. The findings indicate that repeated-pulse transcranial magnetic stimulation at the Guangming evokes different patterns of electroencephalogram signals than repeated-pulse transcranial magnetic stimulation at other nearby points on the body surface, and that repeated-pulse transcranial magnetic stimulation at the Guangming is associated with changes in the complexity of visually evoked electroencephalogram signals in parietal regions, the central gyrus, and temporal regions.
A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2018-03-01
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime across a wide range of modes and sampling rates, from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected, in the case of arrhythmia. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm is above the agreed tolerances by only a fraction of the sample duration. The computational load for the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
NASA Astrophysics Data System (ADS)
Zeng, Yayun; Wang, Jun; Xu, Kaixuan
2017-04-01
A new financial agent-based time series model is developed and investigated by multiscale-continuum percolation system, which can be viewed as an extended version of continuum percolation system. In this financial model, for different parameters of proportion and density, two Poisson point processes (where the radii of points represent the ability of receiving or transmitting information among investors) are applied to model a random stock price process, in an attempt to investigate the fluctuation dynamics of the financial market. To validate its effectiveness and rationality, we compare the statistical behaviors and the multifractal behaviors of the simulated data derived from the proposed model with those of the real stock markets. Further, the multiscale sample entropy analysis is employed to study the complexity of the returns, and the cross-sample entropy analysis is applied to measure the degree of asynchrony of return autocorrelation time series. The empirical results indicate that the proposed financial model can simulate and reproduce some significant characteristics of the real stock markets to a certain extent.
Ulusoy, Halil Ibrahim
2014-01-01
A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 µg/L, and the RSD for five replicate measurements of 100 µg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.
DNA-cisplatin binding mechanism peculiarities studied with single molecule stretching experiments
NASA Astrophysics Data System (ADS)
Crisafuli, F. A. P.; Cesconetto, E. C.; Ramos, E. B.; Rocha, M. S.
2012-02-01
We propose a method to determine the DNA-cisplatin binding mechanism peculiarities by monitoring the mechanical properties of these complexes. To accomplish this task, we have performed single molecule stretching experiments by using optical tweezers, from which the persistence and contour lengths of the complexes can be promptly measured. The persistence length of the complexes as a function of the drug total concentration in the sample was used to deduce the binding data, from which we show that cisplatin binds cooperatively to the DNA molecule, a point which so far has not been stressed in binding equilibrium studies of this ligand.
Characterizing air quality data from complex network perspective.
Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin
2016-02-01
Air quality depends mainly on changes in the emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze topological characteristics of air quality data by the correlation coefficient method. Firstly, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indexes of eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Secondly, the C-C method was applied to determine the structure of the phase space. Points in the reconstructed phase space were considered to be nodes of the mapped network. Then, edges were determined by nodes having a correlation greater than a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we pointed out that similarities and differences in the constructed complex networks revealed influencing factors and their different roles in the real air quality system.
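A compressed sketch of that pipeline, with placeholder embedding parameters where the paper uses the C-C method, and a synthetic series standing in for a PM2.5 record:

```python
import numpy as np
import networkx as nx

def embed(series, dim=4, tau=3):
    """Delay-embed a series: each row is one point of the reconstructed phase space."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i : i + dim * tau : tau] for i in range(n)])

rng = np.random.default_rng(0)
pm25 = rng.gamma(4.0, 25.0, size=400)      # stand-in for one monitoring site's record

pts = embed(pm25)                          # dim and tau are placeholders for C-C estimates
corr = np.corrcoef(pts)                    # pairwise correlations between embedded points

threshold = 0.7                            # in the paper, tuned via degree distribution,
G = nx.Graph()                             # clustering coefficient, and modularity
G.add_nodes_from(range(len(pts)))
ii, jj = np.where(np.triu(corr, k=1) > threshold)
G.add_edges_from(zip(ii, jj))
print(nx.average_clustering(G))            # one of the inspected network properties
```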
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong
2012-11-15
A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ion) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of metal ion with 8-hydroxyquinoline (8-HQ) into the as-formed Triton X-114 surfactant rich phase. Instead of direct injection or analysis, the surfactant rich phase containing the complexes was treated by nitric acid, and the detected ions were back extracted again into aqueous phase at the second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH=7.0, Triton X-114=0.05% (w/v), 8-HQ=2.0×10(-4) mol L(-1), HNO3=0.8 mol L(-1)), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ions were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L(-1), respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L(-1) were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ion in water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Optofluidic analysis system for amplification-free, direct detection of Ebola infection
NASA Astrophysics Data System (ADS)
Cai, H.; Parks, J. W.; Wall, T. A.; Stott, M. A.; Stambaugh, A.; Alfson, K.; Griffiths, A.; Mathies, R. A.; Carrion, R.; Patterson, J. L.; Hawkins, A. R.; Schmidt, H.
2015-09-01
The massive outbreak of highly lethal Ebola hemorrhagic fever in West Africa illustrates the urgent need for diagnostic instruments that can identify and quantify infections rapidly, accurately, and with low complexity. Here, we report on-chip sample preparation, amplification-free detection and quantification of Ebola virus on clinical samples using hybrid optofluidic integration. Sample preparation and target preconcentration are implemented on a PDMS-based microfluidic chip (automaton), followed by single nucleic acid fluorescence detection in liquid-core optical waveguides on a silicon chip in under ten minutes. We demonstrate excellent specificity, a limit of detection of 0.2 pfu/mL and a dynamic range of thirteen orders of magnitude, far outperforming other amplification-free methods. This chip-scale approach and reduced complexity compared to gold standard RT-PCR methods is ideal for portable instruments that can provide immediate diagnosis and continued monitoring of infectious diseases at the point-of-care.
NASA Astrophysics Data System (ADS)
Jorge-Villar, Susana E.; Edwards, Howell G. M.
2013-03-01
Raman spectroscopy is a valuable analytical technique for the identification of biomolecules and minerals in natural samples, which involves little or minimal sample manipulation. In this paper, we evaluate the advantages and disadvantages of this technique applied to the study of extremophiles. Furthermore, we provide a review of the results published, up to the present point in time, of the bio- and geo-strategies adopted by different types of extremophile colonies of microorganisms. We also show the characteristic Raman signatures for the identification of pigments and minerals, which appear in those complex samples.
Hyperspectral microscopic imaging by multiplex coherent anti-Stokes Raman scattering (CARS)
NASA Astrophysics Data System (ADS)
Khmaladze, Alexander; Jasensky, Joshua; Zhang, Chi; Han, Xiaofeng; Ding, Jun; Seeley, Emily; Liu, Xinran; Smith, Gary D.; Chen, Zhan
2011-10-01
Coherent anti-Stokes Raman scattering (CARS) microscopy is a powerful technique to image the chemical composition of complex samples in biophysics, biology and materials science. CARS is a four-wave mixing process. The application of a spectrally narrow pump beam and a spectrally wide Stokes beam excites multiple Raman transitions, which are probed by a probe beam. This generates a coherent directional CARS signal with several orders of magnitude higher intensity relative to spontaneous Raman scattering. Recent advances in the development of ultrafast lasers, as well as photonic crystal fibers (PCF), enable multiplex CARS. In this study, we employed two scanning imaging methods. In one, the detection is performed by a photo-multiplier tube (PMT) attached to the spectrometer. The acquisition of a series of images, while tuning the wavelengths between images, allows for subsequent reconstruction of spectra at each image point. The second method detects the CARS spectrum at each point by a cooled charge-coupled device (CCD) camera. Coupled with point-by-point scanning, it allows for hyperspectral microscopic imaging. We applied this CARS imaging system to study biological samples such as oocytes.
Sm-Nd isotopic systematics of the ancient Gneiss complex, southern Africa
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Hunter, D. R.; Barker, F.
1983-01-01
In order to shed some new light on the question of the absolute and relative ages of the Ancient Gneiss Complex and Onverwacht Group, a Sm-Nd whole-rock and mineral isochron study of the AGC was begun. At this point, the whole-rock study of samples from the Bimodal Suite, selected from those studied for their geochemical characteristics by Hunter et al., is complete. These results and their implications for the chronologic evolution of the Kaapvaal craton and the sources of these ancient rocks are discussed.
Contamination source review for Building E3162, Edgewood Area, Aberdeen Proving Ground, Maryland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, G.A.; Draugelis, A.K.; Rueda, J.
1995-09-01
This report was prepared by Argonne National Laboratory (ANL) to document the results of a contamination source review for Building E3162 at the Aberdeen Proving Ground (APG) in Maryland. The report may be used to assist the US Army in planning for the future use or disposition of this building. The review included a historical records search, physical inspection, photographic documentation, geophysical investigation, and collection of air samples. The field investigations were performed by ANL during 1994 and 1995. Building E3162 (APG designation) is part of the Medical Research Laboratories Building E3160 Complex. This research laboratory complex is located west of Kings Creek, east of the airfield and Ricketts Point Road, and south of Kings Creek Road in the Edgewood Area of APG. The original structures in the E3160 Complex were constructed during World War II. The complex was originally used as a medical research laboratory. Much of the research involved wound assessment involving chemical warfare agents. Building E3162 was used as a holding and study area for animals involved in non-agent burns. The building was constructed in 1952, placed on inactive status in 1983, and remains unoccupied. Analytical results from these air samples revealed no distinguishable difference in hydrocarbon and chlorinated solvent levels between the two background samples and the sample taken inside Building E3162.
NASA Astrophysics Data System (ADS)
Kazami, Sou; Tsunogae, Toshiaki; Santosh, M.; Tsutsumi, Yukiyasu; Takamura, Yusuke
2016-11-01
The Lützow-Holm Complex (LHC) of East Antarctica forms part of a complex subduction-collision orogen related to the amalgamation of the Neoproterozoic supercontinent Gondwana. Here we report new petrological, geochemical, and geochronological data from a metamorphosed and disrupted layered igneous complex from Akarui Point in the LHC which provide new insights into the evolution of the complex. The complex is composed of mafic orthogneiss (edenite/pargasite + plagioclase ± clinopyroxene ± orthopyroxene ± spinel ± sapphirine ± K-feldspar), meta-ultramafic rock (pargasite + olivine + spinel + orthopyroxene), and felsic orthogneiss (plagioclase + quartz + pargasite + biotite ± garnet). The rocks show obvious compositional layering reflecting the chemical variation possibly through magmatic differentiation. The metamorphic conditions of the rocks were estimated using hornblende-plagioclase geothermometry which yielded temperatures of 720-840 °C. The geochemical data of the orthogneisses indicate fractional crystallization possibly related to differentiation within a magma chamber. Most of the mafic-ultramafic samples show enrichment of LILE, negative Nb, Ta, P and Ti anomalies, and constant HFSE contents in primitive-mantle normalized trace element plots suggesting volcanic arc affinity probably related to subduction. The enrichment of LREE and flat HREE patterns in chondrite-normalized REE plot, with the Nb-Zr-Y, Y-La-Nb, and Th/Yb-Nb/Yb plots also suggest volcanic arc affinity. The felsic orthogneiss plotted on Nb/Zr-Zr diagram (low Nb/Zr ratio) and spider diagrams (enrichment of LILE, negative Nb, Ta, P and Ti anomalies) also show magmatic arc origin. The morphology, internal structure, and high Th/U ratio of zircon grains in felsic orthogneiss are consistent with magmatic origin for most of these grains. Zircon U-Pb analyses suggest Early Neoproterozoic (847.4 ± 8.0 Ma) magmatism and protolith formation. Some older grains (1026-882 Ma) are regarded as xenocrysts from basement entrained in the magma through limited crustal reworking. The younger ages (807-667 Ma) might represent subsequent thermal events. The results of this study suggest that the ca. 850 Ma layered igneous complex in Akarui Point was derived from a magma chamber constructed through arc-related magmatism which included components from ca. 1.0 Ga felsic continental crustal basement. The geochemical characteristics and the timing of protolith emplacement from this complex are broadly identical to those of similar orthogneisses from Kasumi Rock and Tama Point in the LHC and the Kadugannawa Complex in Sri Lanka, which record Early Neoproterozoic (ca. 1.0 Ga) arc magmatism. Although the magmatic event in Akarui Point is slightly younger, the thermal event probably continued from ca. 1.0 Ga to ca. 850 Ma or even to ca. 670 Ma. We therefore correlate the Akarui Point igneous complex with those in the LHC and Kadugannawa Complex formed under similar Early Neoproterozoic arc magmatic events during the convergent margin processes prior to the assembly of the Gondwana supercontinent.
A novel image registration approach via combining local features and geometric invariants
Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa
2018-01-01
Image registration is widely used in many fields, but the adaptability of existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified Oriented FAST and Rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse image details. Then, we propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
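The first stage maps naturally onto standard OpenCV primitives. The sketch below is only an approximate analogue of the pipeline: cv2.RANSAC stands in for the paper's progressive sample consensus step, a Lowe-style ratio test stands in for the proposed matching improvements, and the file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

# ORB keypoints and binary descriptors on both images
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance matching with a ratio test to discard ambiguous pairs
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Robust transform estimation (needs at least 4 surviving matches)
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```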
"Paper Machine" for Molecular Diagnostics.
Connelly, John T; Rolland, Jason P; Whitesides, George M
2015-08-04
Clinical tests based on primer-initiated amplification of specific nucleic acid sequences achieve high levels of sensitivity and specificity. Despite these desirable characteristics, these tests have not reached their full potential because their complexity and expense limit their usefulness to centralized laboratories. This paper describes a device that integrates sample preparation and loop-mediated isothermal amplification (LAMP) with end point detection using a hand-held UV source and camera phone. The prototype device integrates paper microfluidics (to enable fluid handling) and a multilayer structure, or a "paper machine", that allows a central patterned paper strip to slide in and out of fluidic path and thus allows introduction of sample, wash buffers, amplification master mix, and detection reagents with minimal pipetting, in a hand-held, disposable device intended for point-of-care use in resource-limited environments. This device creates a dynamic seal that prevents evaporation during incubation at 65 °C for 1 h. This interval is sufficient to allow a LAMP reaction for the Escherichia coli malB gene to proceed with an analytical sensitivity of 1 double-stranded DNA target copy. Starting with human plasma spiked with whole, live E. coli cells, this paper demonstrates full integration of sample preparation with LAMP amplification and end point detection with a limit of detection of 5 cells. Further, it shows that the method used to prepare sample enables concentration of DNA from sample volumes commonly available from fingerstick blood draw.
The relevance of time series in molecular ecology and conservation biology.
Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E
2014-05-01
The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments that are limited by practical considerations to a few stressors and a few levels of these stressors to real world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
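A schematic of the two-step process under stated assumptions: random sampling of a 4-dimensional stressor hyperspace, a made-up nonlinear response with an interaction term standing in for bioassay data, and a small multilayer perceptron as the artificial neural network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_stressors, n_experiments = 4, 60

# Step 1: random sample of the n-dimensional stressor hyperspace
X = rng.uniform(0.0, 1.0, size=(n_experiments, n_stressors))

# Hypothetical biological endpoint with a nonlinear stressor interaction
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2 \
    + rng.normal(0, 0.05, n_experiments)

# Step 2: extract a model of the interactions with an ANN
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(X, y)
print(ann.predict(X[:3]))   # queries of the learned interaction surface
```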
NASA Technical Reports Server (NTRS)
Schultink, G. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A linear regression between percent nonvegetative land and the time variable was completed for the two sample areas. Sample area no. 1 showed an average vegetation loss of 1.901% per year, while the loss for sample area no. 2 amounted to 5.889% per year. Two basic reasons for the difference were assumed to play a role: the difference in access potential and the amount of already fragmented vegetation complexes in existence during the first year of the comparative analysis - 1970. Sample area no. 2 was located closer to potential access points and was more fragmented initially.
Complexity quantification of dense array EEG using sample entropy analysis.
Ramanand, Pravitha; Nampoori, V P N; Sreenivasan, R
2004-09-01
In this paper, a time series complexity analysis of dense array electroencephalogram signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to purely stochastic realm. The present analysis is conducted with an objective of gaining insight into complexity variations related to changing brain dynamics for EEG recorded from the three cases of passive, eyes closed condition, a mental arithmetic task and the same mental task carried out after a physical exertion task. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG and it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue inducing exercise period. This enhances its utility in detecting subtle changes in the brain state that can find wider scope for applications in EEG based brain studies.
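For reference, a direct (unoptimized) implementation of the Richman-Moorman Sample Entropy statistic used throughout this analysis: the negative log ratio of (m+1)-point to m-point template matches within tolerance r, self-matches excluded. The defaults m = 2 and r = 0.2·SD are the conventional choices, and the white-noise input is only a stand-in for an EEG epoch.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series; r is a fraction of the series' SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        # all length-mm templates and their pairwise Chebyshev distances
        templ = np.array([x[i : i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (d <= tol).sum() - len(templ)   # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

eeg = np.random.default_rng(0).normal(size=1000)   # stand-in for an EEG epoch
print(sample_entropy(eeg))
```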
Chen, Li-ding; Peng, Hong-jia; Fu, Bo-Jie; Qiu, Jun; Zhang, Shu-rong
2005-01-01
Surface waters can be contaminated by human activities in two ways: (1) by point sources, such as sewage treatment discharge and storm-water runoff; and (2) by non-point sources, such as runoff from urban and agricultural areas. With point-source pollution effectively controlled, non-point source pollution has become the most important environmental concern in the world. The formation of non-point source pollution is related to both the sources such as soil nutrient, the amount of fertilizer and pesticide applied, the amount of refuse, and the spatial complex combination of land uses within a heterogeneous landscape. Land-use change, dominated by human activities, has a significant impact on water resources and quality. In this study, fifteen surface water monitoring points in the Yuqiao Reservoir Basin, Zunhua, Hebei Province, northern China, were chosen to study the seasonal variation of nitrogen concentration in the surface water. Water samples were collected in low-flow period (June), high-flow period (July) and mean-flow period (October) from 1999 to 2000. The results indicated that the seasonal variation of nitrogen concentration in the surface water among the fifteen monitoring points in the rainfall-rich year is more complex than that in the rainfall-deficit year. It was found that the land use, the characteristics of the surface river system, rainfall, and human activities play an important role in the seasonal variation of N-concentration in surface water.
Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures to finite single values within an observation window, thus not being able to characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with cardiac heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Fifield, Leonard S.; Wollan, Eric J.
2015-11-13
During the last quarter of FY 2015, the following technical progress has been made toward project milestones: 1) PlastiComp used the PlastiComp direct in-line (D-LFT) Pushtrusion system to injection mold 40 30wt% LCF/PP parts with ribs, 40 30wt% LCF/PP parts without ribs, 10 30wt% LCF/PA66 parts with ribs, and 35 30wt% LCF/PA66 parts without ribs. In addition, purge materials from the injection molding nozzle were obtained for fiber length analysis, and molding parameters were sent to PNNL for process modeling. 2) Magna cut samples at four selected locations (named A, B, C and D) from the non-ribbed Magna-molded parts based on a plan discussed with PNNL and the team and shipped these samples to Virginia Tech for fiber orientation and length measurements. 3) Virginia Tech started fiber orientation and length measurements for the samples taken from the complex parts using Virginia Tech's established procedure. 4) PNNL and Autodesk built ASMI models for the complex parts with and without ribs, reviewed process datasheets and performed preliminary analyses of these complex parts using the actual molding parameters received from Magna and PlastiComp to compare predicted to experimental mold filling patterns. 5) Autodesk assisted PNNL in developing the workflow to use Moldflow fiber orientation and length results in ABAQUS® simulations. 6) Autodesk advised the team on the practicality and difficulty of material viscosity characterization from the D-LFT process. 7) PNNL developed a procedure to import fiber orientation and length results from a 3D ASMI analysis to a 3D ABAQUS® model for structural analyses of the complex part for later weight reduction study. 8) In discussion with PNNL and Magna, Toyota developed mechanical test setups and built fixtures for three-point bending and torsion tests of the complex parts. 9) Toyota built a finite element model for the complex parts subjected to torsion loading. 10) PNNL built the 3D ABAQUS® model of the complex ribbed part subjected to 3-point bending. 11) University of Illinois (Prof. C.L. Tucker) advised the team on fiber orientation and fiber length measurement options, modeling issues as well as interpretation of data.
Meyer, Annabel; Focks, Andreas; Radl, Viviane; Welzl, Gerhard; Schöning, Ingo; Schloter, Michael
2014-01-01
In the present study, the influence of the land use intensity on the diversity of ammonia oxidizing bacteria (AOB) and archaea (AOA) in soils from different grassland ecosystems has been investigated in spring and summer of the season (April and July). Diversity of AOA and AOB was studied by TRFLP fingerprinting of amoA amplicons. The diversity from AOB was low and dominated by a peak that could be assigned to Nitrosospira. The obtained profiles for AOB were very stable and neither influenced by the land use intensity nor by the time point of sampling. In contrast, the obtained patterns for AOA were more complex although one peak that could be assigned to Nitrosopumilus was dominating all profiles independent from the land use intensity and the sampling time point. Overall, the AOA profiles were much more dynamic than those of AOB and responded clearly to the land use intensity. An influence of the sampling time point was again not visible. Whereas AOB profiles were clearly linked to potential nitrification rates in soil, major TRFs from AOA were negatively correlated to DOC and ammonium availability and not related to potential nitrification rates.
NASA Astrophysics Data System (ADS)
Kujawinski, E. B.; Longnecker, K.; Alexander, H.; Dyhrman, S.; Jenkins, B. D.; Rynearson, T. A.
2016-02-01
Phytoplankton blooms in coastal areas contribute a large fraction of primary production to the global oceans. Despite their central importance, there are fundamental unknowns in phytoplankton community metabolism, which limit the development of a more complete understanding of the carbon cycle. Within this complex setting, the tools of systems biology hold immense potential for profiling community metabolism and exploring links to the carbon cycle, but have rarely been applied together in this context. Here we focus on phytoplankton community samples collected from a model coastal system over a three-week period. At each sampling point, we combined two assessments of metabolic function: the meta-transcriptome, or the genes that are expressed by all organisms at each sampling point, and the metabolome, or the intracellular molecules produced during the community's metabolism. These datasets are inherently complementary, with gene expression likely to vary in concert with the concentrations of metabolic intermediates. Indeed, preliminary data show coherence in transcripts and metabolites associated with nutrient stress response and with fixed carbon oxidation. To date, these datasets are rarely integrated across their full complexity but together they provide unequivocal evidence of specific metabolic pathways by individual phytoplankton taxa, allowing a more comprehensive systems view of this dynamic environment. Future application of multi-omic profiling will facilitate a more complete understanding of metabolic reactions at the foundation of the carbon cycle.
Minimized state complexity of quantum-encoded cryptic processes
NASA Astrophysics Data System (ADS)
Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.
2016-05-01
The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.
Affinity learning with diffusion on tensor product graph.
Yang, Xingwei; Prasad, Lakshman; Latecki, Longin Jan
2013-01-01
In many applications, we are given a finite set of data points sampled from a data manifold and represented as a graph with edge weights determined by pairwise similarities of the samples. Often the pairwise similarities (which are also called affinities) are unreliable due to noise or due to intrinsic difficulties in estimating similarity values of the samples. As observed in several recent approaches, more reliable similarities can be obtained if the original similarities are diffused in the context of other data points, where the context of each point is a set of points most similar to it. Compared to the existing methods, our approach differs in two main aspects. First, instead of diffusing the similarity information on the original graph, we propose to utilize the tensor product graph (TPG) obtained by the tensor product of the original graph with itself. Since TPG takes into account higher order information, it is not a surprise that we obtain more reliable similarities. However, it comes at the price of higher order computational complexity and storage requirement. The key contribution of the proposed approach is that the information propagation on TPG can be computed with the same computational complexity and the same amount of storage as the propagation on the original graph. We prove that a graph diffusion process on TPG is equivalent to a novel iterative algorithm on the original graph, which is guaranteed to converge. After its convergence we obtain new edge weights that can be interpreted as new, learned affinities. We stress that the affinities are learned in an unsupervised setting. We illustrate the benefits of the proposed approach for data manifolds composed of shapes, images, and image patches on two very different tasks of image retrieval and image segmentation. With learned affinities, we achieve the bull's eye retrieval score of 99.99 percent on the MPEG-7 shape dataset, which is much higher than the state-of-the-art algorithms. When the data points are image patches, the NCut with the learned affinities not only significantly outperforms the NCut with the original affinities, but it also outperforms state-of-the-art image segmentation methods.
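The equivalence described above lends itself to a compact sketch. Below is a minimal numpy version of one standard form of the iteration on the original graph; the kNN context size and the normalize-then-sparsify order are assumptions of this sketch rather than details taken from the abstract:

```python
import numpy as np

def tpg_learned_affinities(W, k=10, iters=200):
    """Affinity learning by diffusion equivalent to a tensor-product-graph walk.

    W : (n, n) symmetric nonnegative similarity matrix.
    k : size of each point's context (its k most similar neighbors).
    Returns the matrix of learned affinities.
    """
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)      # row-stochastic on the full graph
    # Sparsify to each point's context; rows now sum to < 1, which makes the
    # series computed below convergent.
    S = np.zeros_like(P)
    idx = np.argsort(-P, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    S[rows, idx.ravel()] = P[rows, idx.ravel()]
    # Iteration on the original graph; its fixed point corresponds to the
    # diffusion result on the tensor product graph.
    Q = np.eye(n)
    for _ in range(iters):
        Q = S @ Q @ S.T + np.eye(n)
    return Q
```

Note that the whole computation stays at n-by-n cost, which is the point of the equivalence: the n^2-by-n^2 tensor product graph is never formed.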
NASA Technical Reports Server (NTRS)
Strahler, A. H.; Woodcock, C. E.; Logan, T. L.
1983-01-01
A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.
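As a concrete illustration of the raster workflow described (tabulating stratum areas, then drawing random ground-sample locations), here is a short Python sketch; the class raster, cell size, and sample counts are invented for the example, and the VICAR/IBIS software itself is not involved:

```python
import numpy as np

def tabulate_and_sample(class_raster, cell_area_m2, n_per_stratum, seed=None):
    """Tabulate stratum areas from a class raster and draw random cells
    within each stratum as candidate ground-sampling locations."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(class_raster, return_counts=True)
    areas = {int(c): float(n * cell_area_m2) for c, n in zip(classes, counts)}
    samples = {}
    for c in classes:
        rows, cols = np.nonzero(class_raster == c)
        pick = rng.choice(rows.size, size=min(n_per_stratum, rows.size),
                          replace=False)
        samples[int(c)] = list(zip(rows[pick], cols[pick]))
    return areas, samples

# Toy example: a 100x100 raster of three timber strata on 30 m cells.
raster = np.random.default_rng(0).integers(0, 3, size=(100, 100))
areas, points = tabulate_and_sample(raster, cell_area_m2=30 * 30,
                                    n_per_stratum=5, seed=1)
```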
Determination of arsenic species in rice samples using CPE and ETAAS.
Costa, Bruno Elias Dos Santos; Coelho, Nívia Maria Melo; Coelho, Luciana Melo
2015-07-01
A highly sensitive and selective procedure for the determination of arsenate and total arsenic in food by electrothermal atomic absorption spectrometry after cloud point extraction (ETAAS/CPE) was developed. The procedure is based on the formation of a complex of As(V) ions with molybdate in the presence of 50.0 mmol L(-1) sulfuric acid. The complex was extracted into the surfactant-rich phase of 0.06% (w/v) Triton X-114. The variables affecting the complex formation, extraction and phase separation were optimized using factorial designs. Under the optimal conditions, the calibration graph was linear in the range of 0.05-10.0 μg L(-1). The detection and quantification limits were 10 and 33 ng L(-1), respectively, and the corresponding value for the relative standard deviation for 10 replicates was below 5%. Recovery values of between 90.8% and 113.1% were obtained for spiked samples. The accuracy of the method was evaluated by comparison with the results obtained for the analysis of a rice flour sample (certified material IRMM-804) and no significant difference at the 95% confidence level was observed. The method was successfully applied to the determination of As(V) and total arsenic in rice samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Abbasi Tarighat, Maryam; Nabavi, Masoume; Mohammadizadeh, Mohammad Reza
2015-06-15
A new multi-component analysis method based on zero-crossing-point continuous wavelet transformation (CWT) was developed for the simultaneous spectrophotometric determination of Cu(2+) and Pb(2+) ions, based on complex formation with 2-benzyl espiro[isoindoline-1,5 oxasolidine]-2,3,4 trione (BSIIOT). The absorption spectra were evaluated with respect to synthetic ligand concentration, time of complexation and pH. According to the absorbance values, 0.015 mmol L(-1) BSIIOT, a complexation time of 10 min and pH 8.0 were used as optimum values. The complex formation between the BSIIOT ligand and the cations Cu(2+) and Pb(2+) was investigated by application of rank annihilation factor analysis (RAFA). Among the wavelet families, the Daubechies-4 (db4), discrete Meyer (dmey), Morlet (morl) and Symlet-8 (sym8) continuous wavelet transforms were found to be suitable for signal treatment. The new synthetic ligand and the selected mother wavelets were used for the simultaneous determination of strongly overlapped spectra of species without any chemical pretreatment. CWT signals together with the zero-crossing technique were therefore applied directly to the overlapping absorption spectra of Cu(2+) and Pb(2+). The calibration graphs for estimation of Pb(2+) and Cu(2+) were obtained by measuring the CWT amplitudes at the zero-crossing points for Cu(2+) and Pb(2+), respectively, in the wavelet domain. The proposed method was validated by simultaneous determination of Cu(2+) and Pb(2+) ions in red beans, walnut, rice, tea and soil samples. The results obtained with the proposed method were compared with those predicted by partial least squares (PLS) and flame atomic absorption spectrophotometry (FAAS). Copyright © 2015 Elsevier B.V. All rights reserved.
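To make the zero-crossing idea concrete, the sketch below applies a continuous wavelet transform (via the PyWavelets Morlet wavelet, standing in for the paper's wavelet families) to two synthetic overlapping bands: at a zero crossing of one component's transformed spectrum, the mixture's CWT amplitude depends only on the other component, which is the basis of the calibration graphs. All spectra and band positions here are invented for illustration:

```python
import numpy as np
import pywt

wl = np.linspace(400, 700, 600)                  # wavelength grid in nm (invented)
band_a = np.exp(-((wl - 520) / 30) ** 2)         # unit-concentration spectrum of A
band_b = np.exp(-((wl - 545) / 35) ** 2)         # strongly overlapping spectrum of B

def cwt_row(spectrum, scale=40, wavelet="morl"):
    coefs, _ = pywt.cwt(spectrum, [scale], wavelet)
    return coefs[0]

# Zero crossings of B's transformed spectrum: there, any B contribution
# vanishes, so the mixture's CWT amplitude tracks component A alone.
t_b = cwt_row(band_b)
zc = np.nonzero(np.diff(np.sign(t_b)))[0]
i_a = zc[np.argmax(np.abs(cwt_row(band_a))[zc])]   # most A-sensitive crossing

mixture = 0.7 * band_a + 1.3 * band_b              # synthetic two-component mixture
amp_a = cwt_row(mixture)[i_a]   # calibrate this amplitude against standards of A
```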
Scargle, Jeffrey D; Way, M J; Gazis, P R
2017-04-10
We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys.
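The "direct unbinned transform" is simple to state in code. A minimal numpy sketch follows, with a toy point set and k-grid; the shot-noise handling is a naive placeholder rather than the paper's estimator, which also deconvolves the survey window function:

```python
import numpy as np

def unbinned_ft(positions, kvecs):
    """Direct (unbinned) complex Fourier transform of a point set:
    F(k) = sum_j exp(-i k . x_j), evaluated for each row of kvecs."""
    phase = positions @ kvecs.T            # (N, M) array of k . x values
    return np.exp(-1j * phase).sum(axis=0)

# Toy demo: 10^4 random points and a small grid of modes.
rng = np.random.default_rng(1)
x = rng.random((10_000, 3))
k = 2 * np.pi * np.stack(np.meshgrid(*(np.arange(4),) * 3),
                         axis=-1).reshape(-1, 3)
F = unbinned_ft(x, k)
# Naive power estimate with Poisson shot-noise subtraction (placeholder only).
power = (np.abs(F) ** 2 - x.shape[0]) / x.shape[0]
```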
Determination of total selenium in food samples by d-CPE and HG-AFS.
Wang, Mei; Zhong, Yizhou; Qin, Jinpeng; Zhang, Zehua; Li, Shan; Yang, Bingyi
2017-07-15
A dual-cloud point extraction (d-CPE) procedure was developed for the simultaneous preconcentration and determination of trace level Se in food samples by hydride generation-atomic fluorescence spectrometry (HG-AFS). The Se(IV) was complexed with ammonium pyrrolidinedithiocarbamate (APDC) in a Triton X-114 surfactant-rich phase, which was then treated with a mixture of 16% (v/v) HCl and 20% (v/v) H2O2. This converted the Se(IV)-APDC into free Se(IV), which was back extracted into an aqueous phase at the second cloud point extraction stage. This aqueous phase was analyzed directly by HG-AFS. Optimization of the experimental conditions gave a limit of detection of 0.023 μg L-1 with an enhancement factor of 11.8 when 50 mL of sample solution was preconcentrated to 3 mL. The relative standard deviation was 4.04% (c = 6.0 μg L-1, n = 10). The proposed method was applied to determine the Se contents in twelve food samples with satisfactory recoveries of 95.6-105.2%. Copyright © 2016 Elsevier Ltd. All rights reserved.
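A quick consistency check on the quoted figures, under the assumption that the attainable enhancement is bounded by the sample-to-extract volume ratio and that the observed factor reflects losses across the two extraction stages:

```latex
\[
  \mathrm{EF}_{\max} \approx \frac{V_{\text{sample}}}{V_{\text{final}}}
                    = \frac{50\ \text{mL}}{3\ \text{mL}} \approx 16.7,
  \qquad
  \frac{\mathrm{EF}_{\text{obs}}}{\mathrm{EF}_{\max}} = \frac{11.8}{16.7} \approx 0.71.
\]
```

That is, the two-stage procedure recovers roughly 70% of the ideal preconcentration, consistent with the back-extraction step not being fully quantitative.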
Language skills of children during the first 12 months after stuttering onset.
Watts, Amy; Eadie, Patricia; Block, Susan; Mensah, Fiona; Reilly, Sheena
2017-03-01
To describe language development in a sample of young children who stutter during the first 12 months after reported stuttering onset. Language production was analysed in a sample of 66 children who stuttered (aged 2-4 years). The sample was identified from a pre-existing prospective, community-based longitudinal cohort. Data were collected at three time points within the first year after stuttering onset. Stuttering severity was measured, and global indicators of expressive language proficiency (length of utterances and grammatical complexity) were derived from the samples and summarised. Language production abilities of the children who stutter were contrasted with normative data. The majority of children's stuttering was rated as mild in severity, with more than 83% of participants demonstrating very mild or mild stuttering at each of the time points studied. The participants demonstrated developmentally appropriate spoken language skills comparable with available normative data. In the first year following the report of stuttering onset, the language skills of the children who were stuttering progressed in a manner consistent with developmental expectations. Copyright © 2016 Elsevier Inc. All rights reserved.
Expected antenna utilization and overload
NASA Technical Reports Server (NTRS)
Posner, Edward C.
1991-01-01
The trade-offs between the number of antennas at a Deep Space Network (DSN) Deep-Space Communications Complex and the fraction of continuous coverage provided to a set of hypothetical spacecraft are analyzed, assuming random placement of the spacecraft passes during the day. The trade-offs are fairly robust with respect to the randomness assumption. A sample result is that a three-antenna complex provides an average of 82.6 percent utilization of facilities and coverage of nine spacecraft that each have 8-hour passes, whereas perfect phasing of the passes would yield 100 percent utilization and coverage. One key point is that sometimes fewer than three spacecraft are visible, so an antenna is idle, while at other times there aren't enough antennas, and some spacecraft do without service. This point of view may be useful in helping to size the network or to develop a normalization for a figure of merit of DSN coverage.
Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F
2009-05-01
Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of 50-ml sample volume were 0.10 microg L(-1) for PNP, 0.20 microg L(-1) for PAP, and 0.16 microg L(-1) for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms.
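To make the fixed-point bit-width question concrete, here is a minimal Python sketch (not the authors' filter) that quantizes one scalar Kalman-gain computation to a Q16.16 format and measures the representation error; the format choice and the scalar update shown are illustrative assumptions:

```python
import numpy as np

def to_fixed(x, frac_bits, total_bits=32):
    """Quantize to two's-complement fixed point with saturation."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return int(np.clip(round(x * scale), lo, hi))

def from_fixed(x, frac_bits):
    return x / float(1 << frac_bits)

# Scalar Kalman gain K = P / (P + R) in float vs. Q16.16 fixed point.
P, R = 0.3, 0.05
K_float = P / (P + R)

frac_bits = 16
Pq, Rq = to_fixed(P, frac_bits), to_fixed(R, frac_bits)
Kq = (Pq << frac_bits) // (Pq + Rq)       # fixed-point divide keeps Q16.16
print(abs(K_float - from_fixed(Kq, frac_bits)))  # error at this bit-width
```

Sweeping `frac_bits` over candidate formats and propagating the error through a full filter run is one simple way to explore the accuracy/bit-width trade-off the abstract refers to.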
Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E
2005-05-15
We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables, underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n^3) step-complexity is suggested. By using Sorting Points Into Neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity, starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
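The published method has several variants; the following Python sketch implements one simple side-to-side-style iteration, under the assumption that repeatedly sorting points by a linearly weighted distance score concentrates small distances near the diagonal of the permuted matrix:

```python
import numpy as np

def spin_side_to_side(D, iters=100):
    """Search for a linear ordering that concentrates small distances near the
    diagonal of the permuted distance matrix (a simple SPIN-style heuristic)."""
    n = D.shape[0]
    w = np.arange(n) - (n - 1) / 2.0          # linear weight vector
    perm = np.arange(n)
    for _ in range(iters):
        score = D[perm][:, perm] @ w          # weighted score under current order
        new = perm[np.argsort(score)]
        if np.array_equal(new, perm):
            break
        perm = new
    return perm

# Demo: points along a noisy curve, shuffled; the ordering should be recovered.
rng = np.random.default_rng(2)
t = np.sort(rng.random(60))
X = np.c_[t, np.sin(3 * t)] + 0.01 * rng.standard_normal((60, 2))
shuffled = rng.permutation(60)
D = np.linalg.norm(X[shuffled][:, None] - X[shuffled][None], axis=-1)
order = spin_side_to_side(D)
```

An elongated (one-dimensional) structure in the data shows up after reordering as a dark band hugging the diagonal of `D[order][:, order]`.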
Rubínová, Eva; Nikolai, Tomáš; Marková, Hana; Siffelová, Kamila; Laczó, Jan; Hort, Jakub; Vyhnálek, Martin
2014-01-01
The Clock Drawing Test is a frequently used cognitive screening test with several scoring systems in elderly populations. We compare simple and complex scoring systems and evaluate the usefulness of the combination of the Clock Drawing Test with the Mini-Mental State Examination to detect patients with mild cognitive impairment. Patients with amnestic mild cognitive impairment (n = 48) and age- and education-matched controls (n = 48) underwent neuropsychological examinations, including the Clock Drawing Test and the Mini-Mental State Examination. Clock drawings were scored by three blinded raters using one simple (6-point scale) and two complex (17- and 18-point scales) systems. The sensitivity and specificity of these scoring systems used alone and in combination with the Mini-Mental State Examination were determined. Complex scoring systems, but not the simple scoring system, were significant predictors of the amnestic mild cognitive impairment diagnosis in logistic regression analysis. At equal levels of sensitivity (87.5%), the Mini-Mental State Examination showed higher specificity (31.3%, compared with 12.5% for the 17-point Clock Drawing Test scoring scale). The combination of Clock Drawing Test and Mini-Mental State Examination scores increased the area under the curve (0.72; p < .001) and increased specificity (43.8%), but did not increase sensitivity, which remained high (85.4%). A simple 6-point scoring system for the Clock Drawing Test did not differentiate between healthy elderly and patients with amnestic mild cognitive impairment in our sample. Complex scoring systems were slightly more efficient, yet still were characterized by high rates of false-positive results. We found psychometric improvement using combined scores from the Mini-Mental State Examination and the Clock Drawing Test when complex scoring systems were used. The results of this study support the benefit of using combined scores from simple methods.
Filtering Airborne LIDAR Data by an Improved Morphological Method Based on Multi-Gradient Analysis
NASA Astrophysics Data System (ADS)
Li, Y.
2013-05-01
The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Despite many efforts in recent years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. To address bare-ground extraction from LIDAR point clouds of complex landscapes, this paper proposes a novel morphological filtering algorithm based on multi-gradient analysis of the characteristics of the LIDAR data distribution. Firstly, the point clouds are organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method, and objects are removed gradually by iteratively choosing points for an improved opening operation constrained by the multi-gradient. Fifteen sample datasets provided by ISPRS Working Group III/3 are employed to test the proposed filtering algorithm. These datasets include environments that may lead to filtering difficulty. Experimental results show that the proposed algorithm adapts well to various scenes, including urban and rural areas. Omission error, commission error and total error can be simultaneously controlled within a relatively small interval.
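The paper's multi-gradient constraint is specific to its index-mesh structure; as a point of reference, here is a minimal Python sketch of the standard progressive morphological opening baseline that such methods improve upon, with window sizes and thresholds as illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morph_filter(z, cell=1.0, windows=(3, 9, 27),
                             slope=0.3, base=0.5):
    """Label grid cells as ground using progressively larger grey openings with
    slope-dependent elevation-difference thresholds (a standard baseline; the
    paper's multi-gradient constraint on an index mesh is more selective)."""
    ground = np.ones(z.shape, dtype=bool)
    surface = z.copy()
    for w in windows:
        opened = grey_opening(surface, size=(w, w))
        dh_max = base + slope * (w * cell) / 2.0   # threshold grows with window
        ground &= (surface - opened) <= dh_max
        surface = opened
    return ground

# Toy demo on a synthetic minimum-elevation grid (1 m cells).
z = 5.0 + 0.1 * np.random.default_rng(0).random((100, 100))
z[40:45, 40:45] += 8.0                             # a building-like object
mask = progressive_morph_filter(z)                 # False over the object
```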
Ontsira Ngoyi, E N; Obengui; Taty Taty, R; Koumba, E L; Ngala, P; Ossibi Ibara, R B
2014-12-01
The aim of the present work was to describe the mycobacterial species isolated in the antituberculosis center of Pointe-Noire city in Congo Brazzaville. It was a descriptive transversal study, conducted between September 2008 and April 2009 (7 months). A simple random sample was established from patients who came to the antituberculosis center of Pointe-Noire city (reference center for the diagnosis and treatment of tuberculosis). From patients consulting with symptoms suggesting pulmonary tuberculosis, sputum sampling in three sessions was conducted. Ziehl-Neelsen and auramine staining techniques were performed in Pointe-Noire. Culture, molecular hybridization and antibiotic susceptibility testing to first-line antituberculosis drugs (isoniazid, rifampicin, ethambutol, pyrazinamide or streptomycin) using the agar diffusion method were performed in the Cerba Pasteur laboratory in France. Of 77 patients, 24 sputum samples (31.20%) were positive on microscopic examination and 45 (58.44%) by culture and identification by molecular hybridization. The Mycobacterium tuberculosis complex species isolated were M. tuberculosis with 31 cases (68.9%) and M. africanum with 3 cases (6.67%). Non-tuberculous mycobacteria (NTM) were isolated, in association or not with M. tuberculosis, in 9 cases (20%), and the most common species was M. intracellulare. Among the M. tuberculosis isolates, 7 strains (41.20%) were sensitive to the first-line antituberculosis drugs, 8 cases (47%) showed monoresistance, and 2 cases (12%) were multidrug resistant (MDR) to both isoniazid and rifampicin. This study showed the importance of the Mycobacterium tuberculosis complex and non-tuberculous mycobacteria in pulmonary tuberculosis. The data on resistance can help physicians in the treatment of pulmonary tuberculosis. Another study with a larger population is required to confirm these data.
Elliott, Sarah M.; Brigham, Mark E.; Kiesling, Richard L.; Schoenfuss, Heiko L.; Jorgenson, Zachary G.
2018-01-01
The North American Great Lakes are a vital natural resource that provide fish and wildlife habitat, as well as drinking water and waste assimilation services for millions of people. Tributaries to the Great Lakes receive chemical inputs from various point and nonpoint sources, and thus are expected to have complex mixtures of chemicals. However, our understanding of the co‐occurrence of specific chemicals in complex mixtures is limited. To better understand the occurrence of specific chemical mixtures in the US Great Lakes Basin, surface water from 24 US tributaries to the Laurentian Great Lakes was collected and analyzed for diverse suites of organic chemicals, primarily focused on chemicals of concern (e.g., pharmaceuticals, personal care products, fragrances). A total of 181 samples and 21 chemical classes were assessed for mixture compositions. Basin-wide, 1664 mixtures occurred in at least 25% of sites. The most complex mixtures identified comprised 9 chemical classes and occurred in 58% of sampled tributaries. Pharmaceuticals typically occurred in complex mixtures, reflecting pharmaceutical‐use patterns and wastewater facility outfall influences. Fewer mixtures were identified at lake or lake‐influenced sites than at riverine sites. As mixture complexity increased, the probability of a specific mixture occurring more often than by chance greatly increased, highlighting the importance of understanding source contributions to the environment. This empirically based analysis of mixture composition and occurrence may be used to focus future sampling efforts or mixture toxicity assessments.
Complexes of Nitrocellulose with Cupric Chloride,
1985-11-01
… the formation of a complex and of the weight fraction of CC relative to NC, X, determined at the saturation point. The PCC is characteristic of … 3.5 Effect of X on the Rate of Complex Formation: The variation of the ratio (ki/kf) with X is shown in Fig. 4 for sample 11 at C = 59 s-1.
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
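As a deliberately simplified illustration of approach (ii), the following Python sketch computes a one-sided parametric prediction bound as a screening cut point on log-transformed signals; it ignores the plate/run/analyst random-effects structure that the paper's models (and the mixADA package) are designed to handle:

```python
import numpy as np
from scipy import stats

def simple_cut_point(signals, alpha=0.05, log_scale=True):
    """One-sided parametric prediction bound used as a screening cut point.

    Deliberately simplified: ignores plate/run/analyst random effects and
    assumes a (log-)normal model for drug-naive specimens."""
    x = np.log(signals) if log_scale else np.asarray(signals, float)
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    t = stats.t.ppf(1 - alpha, df=n - 1)
    bound = m + t * s * np.sqrt(1 + 1 / n)  # prediction bound, one new specimen
    return float(np.exp(bound)) if log_scale else float(bound)

# Demo on synthetic drug-naive signals.
naive = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.2, size=60)
cut = simple_cut_point(naive)  # specimens above this would screen ADA-positive
```

The contrast with the paper's approach is instructive: the choice between the normal and log-normal branch above is exactly the distributional decision (v) that the abstract warns has a serious impact on the resulting cut point.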
Habitat Complexity Metrics to Guide Restoration of Large Rivers
NASA Astrophysics Data System (ADS)
Jacobson, R. B.; McElroy, B. J.; Elliott, C.; DeLonay, A.
2011-12-01
Restoration strategies on large, channelized rivers typically strive to recover lost habitat complexity, based on the assumption that complexity and biophysical capacity are directly related. Although definition of links between complexity and biotic responses can be tenuous, complexity metrics have appeal because of their potential utility in quantifying habitat quality, defining reference conditions and design criteria, and measuring restoration progress. Hydroacoustic instruments provide many ways to measure complexity on large rivers, yet substantive questions remain about variables and scale of complexity that are meaningful to biota, and how complexity can be measured and monitored cost effectively. We explore these issues on the Missouri River, using the example of channel re-engineering projects that are intended to aid in recovery of the pallid sturgeon, an endangered benthic fish. We are refining understanding of what habitat complexity means for adult fish by combining hydroacoustic habitat assessments with acoustic telemetry to map locations during reproductive migrations and spawning. These data indicate that migrating sturgeon select points with relatively low velocity but adjacent to areas of high velocity (that is, with high velocity gradients); the integration of points defines pathways which minimize energy expenditures during upstream migrations of 10's to 100's of km. Complexity metrics that efficiently quantify migration potential at the reach scale are therefore directly relevant to channel restoration strategies. We are also exploring complexity as it relates to larval sturgeon dispersal. Larvae may drift for as many as 17 days (100's of km at mean velocities) before using up their yolk sac, after which they "settle" into habitats where they initiate feeding. An assumption underlying channel re-engineering is that additional channel complexity, specifically increased shallow, slow water, is necessary for early feeding and refugia. Development of complexity metrics is complicated by the fact that characteristics of channel morphology may increase complexity scores without necessarily increasing biophysical capacity for target species. For example, a cross section that samples depths and velocities across the thalweg (navigation channel) and into lentic habitat may score high on most measures of hydraulic or geomorphic complexity, but does not necessarily provide habitats beneficial to native species. Complexity measures need to be bounded by best estimates of native species requirements. In the absence of specific information, creation of habitat complexity for the sake of complexity may lead to unintended consequences, for example, lentic habitats that increase a complexity score but support invasive species. An additional practical constraint on complexity measures is the need to develop metrics that can be deployed cost-effectively in an operational monitoring program. Design of a monitoring program requires informed choices of measurement variables, definition of reference sites, and design of sampling effort to capture spatial and temporal variability.
NASA Astrophysics Data System (ADS)
McGuire, N. D.; Ewen, R. J.; de Lacy Costello, B.; Garner, C. E.; Probert, C. S. J.; Vaughan, K.; Ratcliffe, N. M.
2014-06-01
Rapid volatile profiling of stool sample headspace was achieved using a combination of a short multi-capillary chromatography column (SMCC), a highly sensitive heated metal oxide semiconductor sensor and artificial neural network software. For direct analysis of biological samples this prototype offers alternatives to conventional gas chromatography (GC) detectors and electronic nose technology. The performance was compared to an identical instrument incorporating a long single capillary column (LSCC). The ability of the prototypes to separate complex mixtures was assessed using gas standards and homogenized in-house 'standard' stool samples, with both capable of detecting more than 24 peaks per sample. The elution time was considerably faster with the SMCC, resulting in a run time of 10 min compared to 30 min for the LSCC. The diagnostic potential of the prototypes was assessed using 50 C. difficile positive and 50 negative samples. The prototypes demonstrated similar capability of discriminating between positive and negative samples, with sensitivity and specificity of 85% and 80% respectively. C. difficile is an important cause of hospital acquired diarrhoea, with significant morbidity and mortality around the world. A device capable of rapidly diagnosing the disease at the point of care would reduce cases, deaths and financial burden.
The case for planetary sample return missions. 2. History of Mars.
Gooding, J L; Carr, M H; McKay, C P
1989-08-01
Principal science goals for exploration of Mars are to establish the chemical, isotopic, and physical state of Martian material, the nature of major surface-forming processes and their time scales, and the past and present biological potential of the planet. Many of those goals can only be met by detailed analyses of atmospheric gases and carefully selected samples of fresh rocks, weathered rocks, soils, sediments, and ices. The high-fidelity mineral separations, complex chemical treatments, and ultrasensitive instrument systems required for key measurements, as well as the need to adapt analytical strategies to unanticipated results, point to Earth-based laboratory analyses on returned Martian samples as the best means for meeting the stated objectives.
Van Tan, Le; Quang Hieu, Tran; Van Cuong, Nguyen
2015-01-01
New complexes of 5,11,17,23-tetra[(2-ethyl acetoethoxyphenyl)(azo)phenyl]calix[4]arene (TEAC) with Pb(II) and Cr(III) were prepared in basic solution with a mixture of MeOH and H2O as solvent. The ratio of TEAC to metal ion in the complexes was found to be 1:1 under the investigated conditions. The complex formation constants (based on the Benesi-Hildebrand method) for TEAC-Pb(II) and TEAC-Cr(III) were 4.03 × 10^4 and 1.2 × 10^4, respectively. Additionally, the molar extinction coefficients were 5 × 10^4 and 1.42 × 10^4 for TEAC-Pb(II) and TEAC-Cr(III), respectively. The H-Point Standard Addition Method (HPSAM) was applied for the simultaneous determination of the Cr(III) and Pb(II) complexes with TEAC at concentration ratios from 2:1 to 1:20 (w/w). The proposed method was successfully utilized to investigate lead and chromium contents in plating wastewater samples. The results for several analyzed samples were found to be in satisfactory agreement with those acquired using the inductively coupled plasma mass spectrometry (ICP-MS) technique. PMID:25984379
Atomic force microscopy imaging of macromolecular complexes.
Santos, Sergio; Billingsley, Daniel; Thomson, Neil
2013-01-01
This chapter reviews amplitude modulation (AM) AFM in air and its applications to high-resolution imaging and interpretation of macromolecular complexes. We discuss single DNA molecular imaging and DNA-protein interactions, such as those with topoisomerases and RNA polymerase. We show how relative humidity can have a major influence on resolution and contrast and how it can also affect conformational switching of supercoiled DNA. Four regimes of AFM tip-sample interaction in air are defined and described, and relate to water perturbation and/or intermittent mechanical contact of the tip with either the molecular sample or the surface. Precise control and understanding of the AFM operational parameters is shown to allow the user to switch between these different regimes: an interpretation of the origins of topographical contrast is given for each regime. Perpetual water contact is shown to lead to a high-resolution mode of operation, which we term SASS (small amplitude small set-point) imaging, and which maximizes resolution while greatly decreasing tip and sample wear and any noise due to perturbation of the surface water. Thus, this chapter provides sufficient information to reliably control the AFM in the AM AFM mode of operation in order to image both heterogeneous samples and single macromolecules including complexes, with high resolution and with reproducibility. A brief introduction to AFM, its versatility and applications to biology is also given while providing references to key work and general reviews in the field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrault, Joël, E-mail: joel.barrault@univ-poitiers.fr; Makhankova, Valeriya G., E-mail: leram@univ.kiev.ua; Khavryuchenko, Oleksiy V.
2012-03-15
From the selective transformation of the heterometallic (Zn-Mn or Cu-Mn) carboxylate complexes with 2,2′-bipyridyl by thermal degradation at relatively low (350 °C) temperature, it was possible to get either well defined spinel ZnMn2O4 over zinc oxide or well dispersed copper particles surrounded by a manganese oxide (Mn3O4) in a core-shell like structure. Morphology of the powder surface was examined by scanning electron microscopy with energy dispersive X-ray microanalysis (SEM/EDX). Surface composition was determined by X-ray photoelectron spectroscopy (XPS). Specific surface of the powders by nitrogen adsorption was found to be 33±0.2 and 9±0.06 m2 g-1 for the Zn-Mn and Cu-Mn samples, respectively, which is comparable to those of commercial products. - Graphical abstract: From the selective transformation of heterometallic (Zn-Mn or Cu-Mn) carboxylate complexes, it was possible to get either well defined spinel ZnMn2O4 over zinc oxide or well dispersed copper particles surrounded by a manganese oxide (Mn3O4) in a core-shell like structure. Highlights: • Thermal degradation of heterometallic complexes results in finely dispersed particles. • Core-shell Cu/Mn3O4 particles are obtained. • A ZnMn2O4 spinel layer covers the ZnO particles.
NASA Astrophysics Data System (ADS)
Khodabakhshi, M.; Jafarpour, B.
2013-12-01
Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions that can be computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short period of sampling burn-in time where unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning. This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
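A toy Python sketch of the feedback mechanism described above; the facies simulator and data-mismatch function are trivial stand-ins (real use would call an MPS engine and a flow solver), and the burn-in length, tolerance, and map update are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
NX = NY = 32
facies_true = rng.random((NX, NY)) < 0.3     # stand-in for the "true" field

def simulate_facies(prob_map):
    """Stand-in for MPS simulation: draw facies cell-wise from the map."""
    return rng.random(prob_map.shape) < prob_map

def mismatch(facies):
    """Stand-in for the flow-data objective (fraction of mismatched cells)."""
    return float(np.mean(facies != facies_true))

prob_map = np.full((NX, NY), 0.3)            # prior facies proportion
accepted, tol, burn_in = [], 0.25, 200
for it in range(2000):
    candidate = simulate_facies(prob_map)
    if mismatch(candidate) < tol:
        accepted.append(candidate)
    if accepted and it > burn_in:            # feed sampling history back
        prob_map = np.mean(accepted, axis=0)
```

The key design choice mirrors the abstract: proposals stay unconditional at first, and only after enough acceptances does the empirical facies probability map begin steering subsequent simulations toward the data.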
Scanning Quantum Cryogenic Atom Microscope
NASA Astrophysics Data System (ADS)
Yang, Fan; Kollár, Alicia J.; Taylor, Stephen F.; Turner, Richard W.; Lev, Benjamin L.
2017-03-01
Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed-matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented dc-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (approximately 2 μm) or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly 100 points with an effective field sensitivity of 600 pT/√Hz for each point during the same time as a point-by-point scanner measures these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly 2 orders of magnitude improvement in magnetic flux sensitivity (down to 10^-6 Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns in a system where samples may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge-transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
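The quoted parallel-measurement advantage is consistent with simple √N averaging, under the assumption that all N ≈ 100 points integrate for the full scan duration rather than 1/N of it:

```latex
\[
  \delta B_{\text{parallel}}
  \approx \frac{\delta B_{\text{sequential}}}{\sqrt{N}}
  = \frac{6\ \text{nT}/\sqrt{\text{Hz}}}{\sqrt{100}}
  = 600\ \text{pT}/\sqrt{\text{Hz}}\ \text{per point}.
\]
```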
An open-population hierarchical distance sampling model.
Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott
2015-02-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
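A minimal Python simulation of the data-generating process the authors describe; the Markovian formulation shown is one natural reading of the abstract, and the detection scale, truncation distance, and parameter values are illustrative (the fitted model would estimate them from the data):

```python
import numpy as np

def simulate_surveys(n_points=100, T=6, lam=10.0, r=0.95,
                     sigma=25.0, w=100.0, seed=0):
    """Markovian abundance with half-normal distance-sampling detection:
    N[., 0] ~ Poisson(lam); N[., t] ~ Poisson(r * N[., t-1]);
    individuals at uniform distances d in [0, w] are detected with
    probability exp(-d^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    N = np.empty((n_points, T), dtype=int)
    N[:, 0] = rng.poisson(lam, n_points)
    for t in range(1, T):
        N[:, t] = rng.poisson(r * N[:, t - 1])
    detections = []
    for j in range(n_points):
        for t in range(T):
            d = rng.uniform(0.0, w, N[j, t])
            seen = rng.random(N[j, t]) < np.exp(-d**2 / (2 * sigma**2))
            detections.append((j, t, d[seen]))
    return N, detections

N, dets = simulate_surveys()   # data to which the Markov model would be fit
```

Repeating such simulations across values of r and numbers of survey points is exactly the kind of power analysis the authors ran for the island scrub-jay monitoring design.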
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
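A small Python illustration of why split-sample design matters when judging transferability: the same polynomial models are scored under random folds and under climate-blocked folds that force extrapolation. The synthetic SWE-temperature relation and the fold scheme are assumptions for demonstration only:

```python
import numpy as np

def cv_error(x, y, degree, folds):
    """Mean squared test error of a polynomial fit under a fold assignment."""
    errs = []
    for f in np.unique(folds):
        train, test = folds != f, folds == f
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(4)
temp = np.sort(rng.uniform(-8.0, 4.0, 300))        # mean winter temperature, C
swe = 400.0 - 35.0 * temp + 20.0 * rng.standard_normal(300)  # synthetic SWE, mm

random_folds = rng.integers(0, 5, 300)
# Climate-blocked folds: each fold is a contiguous temperature band, so testing
# forces extrapolation beyond the training climate.
blocked_folds = np.digitize(temp, np.quantile(temp, [0.2, 0.4, 0.6, 0.8]))

for deg in (1, 3, 7):
    print(deg, cv_error(temp, swe, deg, random_folds),
          cv_error(temp, swe, deg, blocked_folds))
```

Under blocked folds the high-degree fits degrade much faster than under random folds, which is the pseudoreplication effect the abstract warns about: random splits flatter complex models.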
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, Andrew B; St. Brice, Lois; Rodriguez, Jr., Miguel
2014-01-01
Clostridium thermocellum has emerged as a leading bioenergy-relevant microbe due to its ability to solubilize cellulose into carbohydrates, mediated by multi-component membrane-attached complexes termed cellulosomes. To probe microbial cellulose utilization rates, it is desirable to be able to measure the concentrations of saccharolytic enzymes and estimate the total amount of cellulosome present on a mass basis. Current cellulase determination methodologies involve labor-intensive purification procedures and only allow for indirect determination of abundance. We have developed a method using multiple reaction monitoring mass spectrometry (MRM-MS) to simultaneously quantitate both enzymatic and structural components of the cellulosome protein complex in samples ranging in complexity from purified cellulosomes to whole cell lysates, as an alternative to a previously-developed enzyme-linked immunosorbent assay (ELISA) method of cellulosome quantitation. The precision of the cellulosome mass concentration in technical replicates is better than 5% relative standard deviation for all samples, indicating high precision for determination of the mass concentration of cellulosome components.
Thunborg, Charlotta; Salzmann-Erikson, Martin
2017-01-01
Communication skills are vital for successful relationships between patients and health care professionals. Failure to communicate may lead to a lack of understanding and may result in strained interactions. Our theoretical point of departure was to make use of chaos and complexity theories. The aim was to examine the features of strained interactions and to discuss their relevance for health care settings. A netnography study design was applied. Data were purposefully sampled, and video clips (122 minutes from 30 video clips) from public online venues were used. The results are presented in four categories: 1) unpredictability, 2) sensitivity dependence, 3) resistibility, and 4) iteration. They are all features of strained interactions. Strained interactions are a complex phenomenon that exists in health care settings. The findings provide health care professionals guidance to understand the complexity and the features of strained interactions.
NASA Astrophysics Data System (ADS)
Mukasa, Samuel B.; Dalziel, Ian W. D.
1996-11-01
Zircon U-Pb and muscovite 40Ar/39Ar isotopic ages have been determined on rocks from the southernmost Andes and South Georgia Island, North Scotia Ridge, to provide absolute time constraints on the kinematic evolution of southwestern Gondwanaland, until now known mainly from stratigraphic relations. The U-Pb systematics of four zircon fractions from one sample show that proto-marginal basin magmatism in the northern Scotia arc, creating the peraluminous Darwin granite suite and submarine rhyolite sequences of the Tobifera Formation, had begun by the Middle Jurassic (164.1 ± 1.7 Ma). Seven zircon fractions from two other Darwin granites are discordant with non-linear patterns, suggesting a complex history of inheritances and Pb loss. Reference lines drawn through these points on concordia diagrams give upper intercept ages of ca. 1500 Ma, interpreted as a minimum age for the inherited zircon component. This component is believed to have been derived from sedimentary rocks in the Gondwanaland margin accretionary wedge that forms the basement of the region, or else directly from the cratonic "back stop" of that wedge. Ophiolitic remnants of the Rocas Verdes marginal basin preserved in the Larsen Harbour complex on South Georgia yield the first clear evidence that Gondwanaland fragmentation had resulted in the formation of oceanic crust in the Weddell Sea region by the Late Jurassic (150 ± 1 Ma). The geographic pattern in the observed age range of 8 to 13 million years in these ophiolitic materials, while not definitive, is in keeping with propagation of the marginal basin floor northwestward from South Georgia Island to the Sarmiento Complex in southern Chile. Rocks of the Beagle granite suite, emplaced post-tectonically within the uplifted marginal basin floor, have complex zircon U-Pb systematics with gross discordances dominated by inheritances in some samples and Pb loss in others. Of eleven samples processed, only two had sufficient amounts of zircon for multiple fractions, and only one yielded collinear points. These points lie close to the lower concordia intercept for which the age is 68.9 ± 1.0 Ma, but their upper intercept is not well known. Inasmuch as this age is similar to the 40Ar/39Ar age of secondary muscovite growing in extensional fractures of pulled-apart feldspar phenocrysts in a Beagle suite granitic pluton (plateau age is 68.1 ± 0.4 Ma), we interpret the two dates as good time constraints for cooling following a period of extensional deformation probably related to the tectonic denudation of the high-grade metamorphic complex of Cordillera Darwin in Tierra del Fuego.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
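The "reduce then sample" idea above lends itself to a compact illustration. The following is a minimal sketch, assuming a toy one-parameter problem: an expensive forward model is replaced by a cheap polynomial surrogate inside a random-walk Metropolis sampler. The names full_forward and build_surrogate are hypothetical stand-ins, not the SAGUARO codes.

    import numpy as np

    # Minimal sketch of "reduce then sample": replace an expensive forward
    # model with a cheap surrogate inside a random-walk Metropolis sampler.
    # The quadratic-plus-sine "full_forward" and its polynomial surrogate
    # are toy stand-ins for an expensive simulation and a reduced model.

    def full_forward(theta):
        # expensive simulation placeholder
        return np.sin(3.0 * theta) + 0.5 * theta**2

    def build_surrogate(n_train=25):
        # fit a cheap polynomial reduced model over the prior range
        t = np.linspace(-2, 2, n_train)
        coeffs = np.polyfit(t, full_forward(t), deg=8)
        return lambda theta: np.polyval(coeffs, theta)

    def metropolis(log_post, theta0, n_steps=5000, step=0.3, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        samples, theta, lp = [], theta0, log_post(theta0)
        for _ in range(n_steps):
            prop = theta + step * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        return np.array(samples)

    y_obs, noise = full_forward(0.7), 0.1
    surrogate = build_surrogate()   # every posterior evaluation is now cheap
    log_post = lambda th: -0.5 * ((y_obs - surrogate(th)) / noise) ** 2 - 0.5 * th**2
    chain = metropolis(log_post, theta0=0.0)
    print("posterior mean after burn-in:", chain[1000:].mean())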
Eves, E Eugene; Murphy, Ethan K; Yakovlev, Vadim V
2007-01-01
The paper discusses characteristics of a new modeling-based technique for determining dielectric properties of materials. Complex permittivity is found with an optimization algorithm designed to match complex S-parameters obtained from measurements and from 3D FDTD simulation. The method is developed on a two-port (waveguide-type) fixture and deals with complex reflection and transmission characteristics at the frequency of interest. A computational part is constructed as an inverse-RBF-network-based procedure that reconstructs the dielectric constant and loss factor of the sample from the FDTD modeling data sets and the measured reflection and transmission coefficients. As such, it is applicable to samples and cavities of arbitrary configurations provided that the geometry of the experimental setup is adequately represented by the FDTD model. The practical implementation of the method considered in this paper is a section of a WR975 waveguide containing a sample of a liquid in a cylindrical cutout of a rectangular Teflon cup. The method is run in two stages and employs two databases: the first, built for a sparse grid on the complex permittivity plane, locates a domain containing the anticipated solution, and the second, a denser grid covering the determined domain, finds the exact location of the complex permittivity point. Numerical tests demonstrate that the computational part of the method is highly accurate even when the modeling data are represented by relatively small data sets. When working with reflection and transmission coefficients measured in an actual experimental fixture and reconstructing a low dielectric constant and loss factor, the technique may be less accurate. It is shown that the employed neural network is capable of finding the complex permittivity of the sample when experimental data on the reflection and transmission coefficients are numerically dispersive (noise-contaminated). A special modeling test is proposed for validating the results; it confirms that the values of complex permittivity for several liquids (including salt water, acetone and three types of alcohol) at 915 MHz are reconstructed with satisfactory accuracy.
Influence of Landscape Morphology and Vegetation Cover on the Sampling of Mixed Igneous Bodies
NASA Astrophysics Data System (ADS)
Perugini, Diego; Petrelli, Maurizio; Poli, Giampiero
2010-05-01
A plethora of evidence indicates that magma mixing processes can take place at any evolutionary stage of magmatic systems and that they are extremely common in both plutonic and volcanic environments (e.g. Bateman, 1995). Furthermore, recent studies have shown that the magma mixing process is governed by chaotic dynamics whose evolution in space and time generates complex compositional patterns that can span several length scales, producing fractal domains (e.g. Perugini et al., 2003). The fact that magma mixing processes can produce igneous bodies exhibiting large compositional complexity raises the key question of the potential pitfalls that may be associated with the sampling of these systems for petrological studies. In particular, since commonly only exiguous portions of the whole magmatic system are available as outcrops for sampling, it is important to address whether the sampling may be considered representative of the complexity of the magmatic system. We attempt to address this crucial point by performing numerical simulations of chaotic magma mixing processes in 3D. The numerical system used in the simulations is the so-called ABC (Arnold-Beltrami-Childress) flow (e.g. Galluccio and Vulpiani, 1994), which is able to generate the contemporaneous occurrence of chaotic and regular streamlines in which the mixing efficiency is differently modulated. This numerical system has already been successfully utilized as a kinematic template to reproduce magma mixing structures observed on natural outcrops (Perugini et al., 2007). The best conditions for sampling are evaluated considering different landscape morphologies and percentages of vegetation cover. In particular, synthetic landscapes with different degrees of roughness are numerically reproduced using the Random Mid-point Displacement Method (RMDM; e.g. Fournier et al., 1982) in two dimensions and superimposed on the compositional fields generated by the magma mixing simulation; a sketch of this construction follows the references below. Vegetation cover is generated using a random Brownian motion process in 2D. Such an approach allows us to produce vegetation patches that closely match the general topology of natural vegetation (e.g., Mandelbrot, 1982). Results show that the goodness of sampling is strongly dependent on the roughness of the landscape, with highly irregular morphologies being the best candidates to give the most complete information on the whole magma body. Conversely, sampling on flat or nearly flat surfaces should be avoided because they may contain misleading information about the magmatic system. Contrary to common sense, vegetation cover does not appear to significantly influence the representativeness of sampling if sample collection occurs on topographically irregular outcrops. Application of the proposed method for sampling area selection is straightforward. The irregularity of natural landscapes and the percentage of vegetation can be estimated using digital elevation models (DEM) of the Earth's surface and satellite images by employing a variety of methods (e.g., Develi and Babadagli, 1998), thus giving one the opportunity to select a priori the best outcrops for sampling. References: Bateman R (1995) The interplay between crystallization, replenishment and hybridization in large felsic magma chambers. Earth Sci Rev 39: 91-106. Develi K, Babadagli T (1998) Quantification of natural fracture surfaces using fractal geometry. Math Geol 30: 971-998. Fournier A, Fussel D, Carpenter L (1982) Computer rendering of stochastic models. Comm ACM 25: 371-384. Galluccio S, Vulpiani A (1994) Stretching of material lines and surfaces in systems with Lagrangian chaos. Physica A 212: 75-98. Mandelbrot BB (1982) The fractal geometry of nature. W. H. Freeman, San Francisco. Perugini D, Petrelli M, Poli G (2007) A virtual voyage through 3D structures generated by chaotic mixing of magmas and numerical simulations: a new approach for understanding spatial and temporal complexity of magma dynamics. Visual Geosciences, doi:10.1007/s10069-006-0004-x. Perugini D, Poli G, Mazzuoli R (2003) Chaotic advection, fractals and diffusion during mixing of magmas: evidences from lava flows. J Volcanol Geotherm Res 124: 255-279.
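As promised above, here is a minimal sketch of the random midpoint displacement idea, assuming a 1D elevation profile for brevity (the study uses the 2D variant): each pass subdivides the profile and perturbs the new midpoints with a variance that shrinks with scale. The hurst parameter and all values are illustrative only.

    import numpy as np

    # Hedged sketch of random midpoint displacement: recursively subdivide
    # a profile, displacing each midpoint by a random amount whose scale
    # shrinks per level. Smaller "hurst" gives rougher synthetic terrain.

    def midpoint_displacement(n_levels=8, hurst=0.7, init_scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        profile = np.array([0.0, 0.0])            # endpoints of the profile
        scale = init_scale
        for _ in range(n_levels):
            mids = 0.5 * (profile[:-1] + profile[1:])
            mids += rng.normal(0.0, scale, size=mids.size)
            # interleave old points and displaced midpoints
            new = np.empty(profile.size + mids.size)
            new[0::2], new[1::2] = profile, mids
            profile = new
            scale *= 2.0 ** (-hurst)              # reduce displacement per level
        return profile

    terrain = midpoint_displacement()
    print(terrain.size, "elevation samples; roughness std:", terrain.std().round(3))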
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances; size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This presents a so-far-unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.
Chen, Grace Dongqing; Alberts, Catharina Johanna
2009-01-01
The low concentration and complex sample matrix of many clinical and environmental viral samples presents a significant challenge in the development of low-cost, point-of-care viral assays. To address this problem, we investigated the use of a microfluidic passive magnetic separator combined with an on-chip mixer to both purify and concentrate whole-particle HIV-1 virions. Virus-containing plasma samples are first mixed to allow specific binding of the viral particles with antibody-conjugated superparamagnetic nanoparticles, and several passive mixer geometries were assessed for their mixing efficiencies. The virus-nanoparticle complexes are then separated from the plasma in a novel magnetic separation chamber, where packed micron-sized ferromagnetic particles serve as high magnetic gradient concentrators for an externally applied magnetic field. Thereafter, a viral lysis buffer was flowed through the chip and the released HIV proteins were assayed off-chip. Viral protein extraction efficiencies of 62% and 45% were achieved at 10 μL/min and 30 μL/min throughputs, respectively. More importantly, an 80-fold concentration was observed for an initial sample volume of 1 mL, and a 44-fold concentration for an initial sample volume of 0.5 mL. The system is broadly applicable to microscale sample preparation of any viral sample and can be used for nucleic acid extraction as well as 40-80-fold enrichment of target viruses. PMID:19954210
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For faults that are not modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Accelerated high-resolution photoacoustic tomography via compressed sensing
NASA Astrophysics Data System (ADS)
Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward
2016-12-01
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
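The core claim, that few measurements suffice when the underlying image is sparse, can be illustrated with a far simpler solver than the TV-Bregman scheme used in the paper. Below is a minimal sketch assuming a generic l1-sparse signal and a random measurement matrix rather than a PAT forward operator; ISTA stands in for the variational reconstruction.

    import numpy as np

    # Minimal sketch of sparsity-constrained recovery from sub-sampled
    # linear measurements via ISTA (iterative soft thresholding). This l1
    # version only illustrates the shared principle; A is a random mixing
    # matrix, not a photoacoustic forward model.

    def ista(A, y, lam=0.05, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L       # gradient step on 0.5*||Ax - y||^2
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                        # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = ista(A, A @ x_true)
    print("recovery error:", np.linalg.norm(x_hat - x_true).round(3))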
THE POPULATION OF COMPACT RADIO SOURCES IN THE ORION NEBULA CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbrich, J.; Meingast, S.; Rivilla, V. M.
We present a deep centimeter-wavelength catalog of the Orion Nebula Cluster (ONC), based on a 30 hr single-pointing observation with the Karl G. Jansky Very Large Array in its high-resolution A-configuration using two 1 GHz bands centered at 4.7 and 7.3 GHz. A total of 556 compact sources were detected in a map with a nominal rms noise of 3 μJy bm⁻¹, limited by complex source structure and the primary beam response. Compared to previous catalogs, our detections increase the sample of known compact radio sources in the ONC by more than a factor of seven. The new data show complex emission on a wide range of spatial scales. Following a preliminary correction for the wideband primary-beam response, we determine radio spectral indices for 170 sources whose index uncertainties are less than ±0.5. We compare the radio to the X-ray and near-infrared point-source populations, noting similarities and differences.
Xu, Zhanfeng; Bunker, Christopher E; Harrington, Peter de B
2010-11-01
Monitoring changes in jet fuel physical properties is important because fuel used in high-performance aircraft must meet rigorous specifications. Near-infrared (NIR) spectroscopy is a fast method to characterize fuels. Because of the complexity of NIR spectral data, chemometric techniques are used to extract relevant information from the spectra to accurately classify the physical properties of complex fuel samples. In this work, discrimination of fuel types and classification of flash point, freezing point, boiling point (10%, v/v), boiling point (50%, v/v), and boiling point (90%, v/v) of jet fuels (JP-5, JP-8, Jet A, and Jet A1) were investigated. Each physical property was divided into three classes, low, medium, and high ranges, using two evaluations with different class boundary definitions. The class boundaries function as thresholds for alarming when fuel properties change. Optimal partial least squares discriminant analysis (oPLS-DA), a fuzzy rule-building expert system (FuRES), and support vector machines (SVM) were used to build calibration models between the NIR spectra and the classes of each physical property, and the three methods were compared with respect to prediction accuracy. The calibration models were validated by applying bootstrap Latin partitions (BLP), which give a measure of precision. Prediction accuracies of 97 ± 2% for the flash point, 94 ± 2% for the freezing point, 99 ± 1% for the boiling point (10%, v/v), 98 ± 2% for the boiling point (50%, v/v), and 96 ± 1% for the boiling point (90%, v/v) were obtained by FuRES for one boundary definition. Both FuRES and SVM obtained statistically better prediction accuracies than oPLS-DA. The results indicate that, combined with chemometric classifiers, NIR spectroscopy could be a fast method to monitor changes in jet fuel physical properties.
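A hedged sketch of the discriminant-analysis step may help. The snippet below fakes three classes of "spectra" and runs a PLS-DA-style classifier by regressing one-hot labels on intensities; it illustrates only the oPLS-DA idea with made-up data, and does not reproduce FuRES, SVM, or the bootstrap Latin partition validation.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hedged sketch of PLS-DA on synthetic "spectra": regress one-hot class
    # labels on spectral intensities, then assign each sample to the class
    # with the largest predicted response.

    rng = np.random.default_rng(1)
    n_per_class, n_wavelengths = 40, 120
    X, y = [], []
    for cls in range(3):                        # low / medium / high property class
        peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 40 - 15 * cls) / 8) ** 2)
        X.append(peak + 0.05 * rng.standard_normal((n_per_class, n_wavelengths)))
        y += [cls] * n_per_class
    X, y = np.vstack(X), np.array(y)

    Y = np.eye(3)[y]                            # one-hot encode class labels
    pls = PLSRegression(n_components=5).fit(X, Y)
    pred = pls.predict(X).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())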
Fully Integrated Microfluidic Device for Direct Sample-to-Answer Genetic Analysis
NASA Astrophysics Data System (ADS)
Liu, Robin H.; Grodzinski, Piotr
Integration of microfluidics technology with DNA microarrays enables building complete sample-to-answer systems that are useful in many applications such as clinical diagnostics. In this chapter, a fully integrated microfluidic device [1] that consists of microfluidic mixers, valves, pumps, channels, chambers, heaters, and a DNA microarray sensor to perform DNA analysis of complex biological sample solutions is presented. This device can perform on-chip sample preparation (including magnetic bead-based cell capture, cell preconcentration and purification, and cell lysis) of complex biological sample solutions (such as whole blood), polymerase chain reaction, DNA hybridization, and electrochemical detection. A few novel microfluidic techniques were developed and employed. A micromixing technique based on a cavitation microstreaming principle was implemented to enhance target cell capture from whole blood samples using immunomagnetic beads. This technique was also employed to accelerate the DNA hybridization reaction. Thermally actuated paraffin-based microvalves were developed to regulate flows. Electrochemical pumps and thermopneumatic pumps were integrated on the chip to provide pumping of liquid solutions. The device is completely self-contained: no external pressure sources, fluid storage, mechanical pumps, or valves are necessary for fluid manipulation, thus eliminating possible sample contamination and simplifying device operation. Pathogenic bacteria detection from ~mL whole blood samples and single-nucleotide polymorphism analysis directly from diluted blood were demonstrated. The device provides a cost-effective solution to direct sample-to-answer genetic analysis, and thus has a potential impact in the fields of point-of-care genetic analysis, environmental testing, and biological warfare agent detection.
Garnero, Claudia; Chattah, Ana Karina; Aloisio, Carolina; Fabietti, Luis; Longhi, Marcela
2018-05-10
Norfloxacin, an antibiotic that exists in different solid forms, has very unfavorable properties in terms of solubility and stability. Binary complexes of norfloxacin (solid form C) with β-cyclodextrin were prepared by the kneading method and by physical mixing. Their effect on the solubility, the dissolution rate, and the chemical and physical stability of norfloxacin was evaluated. To perform stability studies, the solid samples were stored under accelerated storage conditions for a period of 6 months. Physical stability was monitored through powder X-ray diffraction, high-resolution ¹³C solid-state nuclear magnetic resonance, and scanning electron microscopy. The results showed evidence that the kneaded complex increased and modulated the dissolution rate of norfloxacin C. Furthermore, it was demonstrated that the photochemical stability was increased in the complex, without affecting its physical stability. The results point to the conclusion that the new kneaded complex of norfloxacin constitutes an alternative tool to formulate a potential oral drug delivery system with improved oral bioavailability.
O'Mahony, Fiach C.; Papkovsky, Dmitri B.
2006-01-01
A simple method has been developed for the analysis of aerobic bacteria in complex samples such as broth and food homogenates. It employs commercial phosphorescent oxygen-sensitive probes to monitor the oxygen consumption of samples containing bacteria using standard microtiter plates and fluorescence plate readers. As bacteria grow in aqueous medium, at a certain point they begin to deplete dissolved oxygen, which is seen as an increase in probe fluorescence above the baseline signal. The time required to reach a threshold signal is used either to enumerate bacteria based on a predetermined calibration or to assess the effects of various effectors on the growth of test bacteria by comparison with an untreated control. This method allows for the sensitive (down to a single cell), rapid (0.5 to 12 h) enumeration of aerobic bacteria without the need to conduct lengthy (48 to 72 h) and tedious colony counts on agar plates. It also allows for screening a wide range of chemical and environmental samples for their toxicity. These assays have been validated with different bacteria, including Escherichia coli, Micrococcus luteus, and Pseudomonas fluorescens, and with the enumeration of total viable counts in broth and industrial food samples (packaged ham, chicken, and mince meat), and comparison with established agar plating and optical density (600 nm) assays is given. PMID:16461677
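The calibration step described above, converting time-to-threshold into a cell count, is easy to sketch. The following assumes an idealized linear relation between threshold time and log10 of the initial count; all numbers are invented for illustration.

    import numpy as np

    # Hedged sketch of threshold-time enumeration: in growth-based assays
    # the time to reach a signal threshold falls roughly linearly with
    # log10 of the initial cell count, so a least-squares calibration line
    # converts observed threshold times into counts.

    log10_cfu = np.array([1, 2, 3, 4, 5, 6])                   # calibration inocula
    t_threshold = np.array([11.8, 10.1, 8.2, 6.1, 4.2, 2.1])   # hours to threshold

    slope, intercept = np.polyfit(log10_cfu, t_threshold, 1)

    def enumerate_cfu(t_obs):
        # invert the calibration line: time -> log10(initial count)
        return 10 ** ((t_obs - intercept) / slope)

    print("estimated CFU for a 7.0 h threshold time: %.0f" % enumerate_cfu(7.0))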
Allometry and Ecology of the Bilaterian Gut Microbiome
Sherrill-Mix, Scott; McCormick, Kevin; Lauder, Abigail; Bailey, Aubrey; Zimmerman, Laurie; Li, Yingying; Django, Jean-Bosco N.; Bertolani, Paco; Colin, Christelle; Hart, John A.; Hart, Terese B.; Georgiev, Alexander V.; Sanz, Crickette M.; Morgan, David B.; Atencia, Rebeca; Cox, Debby; Muller, Martin N.; Sommer, Volker; Piel, Alexander K.; Stewart, Fiona A.; Speede, Sheri; Roman, Joe; Wu, Gary; Taylor, Josh; Bohm, Rudolf; Rose, Heather M.; Carlson, John; Mjungu, Deus; Schmidt, Paul; Gaughan, Celeste; Bushman, Joyslin I.; Schmidt, Ella; Bittinger, Kyle; Collman, Ronald G.; Hahn, Beatrice H.
2018-01-01
Classical ecology provides principles for construction and function of biological communities, but to what extent these apply to the animal-associated microbiota is just beginning to be assessed. Here, we investigated the influence of several well-known ecological principles on animal-associated microbiota by characterizing gut microbial specimens from bilaterally symmetrical animals (Bilateria) ranging from flies to whales. A rigorously vetted sample set containing 265 specimens from 64 species was assembled. Bacterial lineages were characterized by 16S rRNA gene sequencing. Previously published samples were also compared, allowing analysis of over 1,098 samples in total. A restricted number of bacterial phyla was found to account for the great majority of gut colonists. Gut microbial composition was associated with host phylogeny and diet. We identified numerous gut bacterial 16S rRNA gene sequences that diverged deeply from previously studied taxa, identifying opportunities to discover new bacterial types. The number of bacterial lineages per gut sample was positively associated with animal mass, paralleling known species-area relationships from island biogeography and implicating body size as a determinant of community stability and niche complexity. Samples from larger animals harbored greater numbers of anaerobic communities, specifying a mechanism for generating more-complex microbial environments. Predictions for species/abundance relationships from models of neutral colonization did not match the data set, pointing to alternative mechanisms such as selection of specific colonists by environmental niche. Taken together, the data suggest that niche complexity increases with gut size and that niche selection forces dominate gut community construction. PMID:29588401
Stable isotopes of water in estimation of groundwater dependence in peatlands
NASA Astrophysics Data System (ADS)
Isokangas, Elina; Rossi, Pekka; Ronkanen, Anna-Kaisa; Marttila, Hannu; Rozanski, Kazimierz; Kløve, Bjørn
2016-04-01
Peatland hydrology and ecology can be irreversibly affected by anthropogenic actions or climate change. Especially sensitive are groundwater-dependent areas, which are difficult to determine. Environmental tracers such as stable isotopes of water are efficient tools to identify these dependent areas and study water flow patterns in peatlands. In this study, the groundwater dependence of a Finnish peatland complex situated next to an esker aquifer was investigated. Groundwater seepage areas in the peatland were localized by thermal imaging, and the subsoil structure was determined using ground penetrating radar. Water samples were collected for stable isotopes of water (δ¹⁸O and δ²H), temperature, pH and electrical conductivity at 133 locations of the studied peatland (depth of 10 cm) at approximately 100 m intervals during 4-11 August 2014. In addition, 10 vertical profiles were sampled (10, 30, 60 and 90 cm depth) for the same parameters and for hydraulic conductivity. Cavity ring-down spectroscopy (CRDS) was applied to measure δ¹⁸O and δ²H values. The local meteoric water line was determined using precipitation samples from Nuoritta station, located 17 km west of the study area, and the local evaporation line was defined using water samples from lake Sarvilampi, situated on the studied peatland complex. Both the near-surface spatial survey and the depth profiles of peatland water revealed a very wide range in stable isotope composition, from approximately -13.0 to -6.0‰ for δ¹⁸O and from -94 to -49‰ for δ²H, pointing to a spatially varying influence of groundwater input from the nearby esker aquifer. In addition, the position of the data points with respect to the local meteoric water line showed a spatially varying degree of evaporation of peatland water. Stable isotope signatures of peatland water in combination with thermal images delineated the specific groundwater-dependent areas. By combining the information gained from different types of observations, the conceptual hydrological model of the studied peatland complex, including groundwater - surface water interaction, was built in a new, innovative way.
Multibeam 3D Underwater SLAM with Probabilistic Registration.
Palomer, Albert; Ridao, Pere; Ribas, David
2016-04-20
This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) approach using a multibeam echosounder to produce highly consistent underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential of falling into local minima during registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real-world datasets: first, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan-and-tilt unit.
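To make the registration step concrete, here is a minimal sketch of the coarse point-to-point ICP stage in 2D, assuming synthetic point clouds: nearest-neighbour association via a KD-tree, then a closed-form rigid transform from an SVD. The probabilistic weighting and the point-to-plane refinement of the paper are not reproduced.

    import numpy as np
    from scipy.spatial import cKDTree

    # Hedged sketch of point-to-point ICP: alternately match nearest
    # neighbours (a KD-tree keeps association near O(n log n)) and solve
    # for the rigid transform in closed form via SVD.

    def icp_point_to_point(src, dst, n_iter=30):
        R, t = np.eye(2), np.zeros(2)
        tree = cKDTree(dst)
        for _ in range(n_iter):
            moved = src @ R.T + t
            _, idx = tree.query(moved)              # nearest-neighbour association
            matched = dst[idx]
            mu_s, mu_d = moved.mean(0), matched.mean(0)
            H = (moved - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            dR = Vt.T @ U.T
            if np.linalg.det(dR) < 0:               # guard against reflections
                Vt[-1] *= -1
                dR = Vt.T @ U.T
            R, t = dR @ R, dR @ (t - mu_s) + mu_d   # compose incremental update
        return R, t

    rng = np.random.default_rng(2)
    cloud = rng.standard_normal((300, 2))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    R_est, t_est = icp_point_to_point(cloud, cloud @ R_true.T + np.array([0.5, -0.2]))
    print("recovered rotation:\n", R_est.round(3))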
Amputation effects on the underlying complexity within transtibial amputee ankle motion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurdeman, Shane R., E-mail: shanewurdeman@gmail.com; Advanced Prosthetics Center, Omaha, Nebraska 68134; Myers, Sara A.
2014-03-15
The presence of chaos in walking is considered to provide a stable, yet adaptable means for locomotion. This study examined whether lower limb amputation and subsequent prosthetic rehabilitation resulted in a loss of complexity in amputee gait. Twenty-eight individuals with transtibial amputation participated in a 6 week, randomized cross-over design study in which they underwent a 3 week adaptation period to two separate prostheses. One prosthesis was deemed “more appropriate” and the other “less appropriate” based on matching/mismatching activity levels of the person and the prosthesis. Subjects performed a treadmill walking trial at self-selected walking speed at multiple points of the adaptation period, while kinematics of the ankle were recorded. Bilateral sagittal plane ankle motion was analyzed for underlying complexity through the pseudoperiodic surrogation analysis technique. Results revealed the presence of underlying deterministic structure with both prostheses and in both the prosthetic and sound leg ankle (discriminant measure: largest Lyapunov exponent). Results also revealed that the prosthetic ankle may be more likely to suffer loss of complexity than the sound ankle, and a “more appropriate” prosthesis may be better suited to help restore a healthy complexity of movement within the prosthetic ankle motion compared to a “less appropriate” prosthesis (discriminant measure: sample entropy). Results from sample entropy are less likely to be affected by intracycle periodic dynamics than those from the largest Lyapunov exponent. Adaptation does not seem to influence complexity in the system for experienced prosthesis users.
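Sample entropy, the discriminant measure named above, has a compact standard definition: the negative log ratio of template matches of length m+1 to matches of length m within a tolerance r. A minimal brute-force sketch, with illustrative signals rather than ankle kinematics, follows.

    import numpy as np

    # Hedged sketch of sample entropy: more regular signals yield more
    # template matches at length m+1 relative to length m, hence lower
    # entropy. Brute-force O(n^2), fine for short series.

    def sample_entropy(x, m=2, r=0.2):
        x = np.asarray(x, dtype=float)
        tol = r * x.std()

        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - m)])
            d = np.abs(templates[:, None, :] - templates[None, :, :]).max(-1)
            n = len(templates)
            return ((d <= tol).sum() - n) / 2       # exclude self-matches

        return -np.log(count_matches(m + 1) / count_matches(m))

    rng = np.random.default_rng(3)
    t = np.linspace(0, 20 * np.pi, 600)
    periodic = np.sin(t)
    noisy = periodic + 0.3 * rng.standard_normal(t.size)
    print("SampEn periodic: %.3f  noisy: %.3f"
          % (sample_entropy(periodic), sample_entropy(noisy)))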
Floyd A. Johnson
1961-01-01
This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...
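Although the abstract is truncated, the ratio double sampling estimator it names is standard enough to sketch: counts are cheap and taken at every point, volumes are measured only at a subsample, and the small-sample volume-to-count ratio scales the large-sample mean count. The data below are synthetic, and the simple estimator shown is one common form, not necessarily the exact one in this report.

    import numpy as np

    # Hedged sketch of ratio double sampling: estimate mean volume per
    # point by combining a large sample of tree counts with a small
    # subsample where volume is also measured.

    rng = np.random.default_rng(4)
    counts_large = rng.poisson(8.0, size=400)            # counts at all points
    sub = rng.choice(400, size=40, replace=False)        # points also measured
    counts_small = counts_large[sub]
    volumes_small = 2.5 * counts_small + rng.normal(0, 2, size=40)

    ratio = volumes_small.sum() / counts_small.sum()     # volume per counted tree
    mean_volume = ratio * counts_large.mean()            # ratio double sampling
    print("estimated mean volume per point: %.2f" % mean_volume)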
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo
2009-10-01
Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration is also introduced into the object wavefront, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using a number of points smaller than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and garlic epidermal cells, and the experimental results are consistent with those of the double exposure method.
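The aberration-removal step lends itself to a short sketch: fit a low-order polynomial surface to the unwrapped phase by least squares on a sparse subset of points, then subtract the fitted curvature. The synthetic parabolic "aberration" and Gaussian "object" below are placeholders, not hologram data.

    import numpy as np

    # Hedged sketch of phase aberration compensation: least squares fit of
    # a quadratic surface on a sparse point grid, subtracted from the full
    # unwrapped phase map.

    ny, nx = 256, 256
    Y, X = np.mgrid[0:ny, 0:nx] / 256.0
    aberration = 40.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2)   # curvature from the objective
    obj = np.exp(-((X - 0.3) ** 2 + (Y - 0.6) ** 2) / 0.002)
    phase = aberration + obj                                 # synthetic unwrapped phase

    step = 16                                   # fit on a sparse subset of points
    xs, ys = X[::step, ::step].ravel(), Y[::step, ::step].ravel()
    ps = phase[::step, ::step].ravel()
    # design matrix for a quadratic surface a + bx + cy + dx^2 + exy + fy^2
    G = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
    coef, *_ = np.linalg.lstsq(G, ps, rcond=None)

    Gf = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel(),
                          X.ravel()**2, (X * Y).ravel(), Y.ravel()**2])
    corrected = phase - (Gf @ coef).reshape(ny, nx)
    print("residual aberration std: %.4f" % corrected.std())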
Identifying the starting point of a spreading process in complex networks.
Comin, Cesar Henrique; Costa, Luciano da Fontoura
2011-11-01
When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
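A minimal sketch of the idea, assuming a toy SI spread on a synthetic scale-free graph: infect from a hidden seed for a few rounds, then rank the nodes of the infected subgraph by centrality and check that the seed scores near the top. Degree centrality stands in for the several measures compared in the paper.

    import networkx as nx
    import random

    # Hedged sketch of centrality-based source identification: the seed of
    # a simple SI spread tends to rank highest by centrality within the
    # sampled (infected) subgraph.

    random.seed(5)
    G = nx.barabasi_albert_graph(500, 3, seed=5)
    source = 42

    infected = {source}
    for _ in range(6):                           # a few SI propagation rounds
        frontier = set()
        for u in infected:
            frontier.update(v for v in G.neighbors(u) if random.random() < 0.5)
        infected |= frontier

    sub = G.subgraph(infected)                   # the "sampled network"
    ranking = sorted(nx.degree_centrality(sub).items(), key=lambda kv: -kv[1])
    print("true source:", source, "| top-ranked candidates:", ranking[:3])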
Using Systems Thinking to train future leaders in global health.
Paxton, Anne; Frost, Laura J
2017-07-09
Systems Thinking provides a useful set of concepts and tools that can be used to train students to be effective and innovative global health leaders in an ever-changing and often chaotic world. This paper describes an experiential, multi-disciplinary curriculum that uses Systems Thinking to frame and analyse global health policies and practices. The curriculum uses case studies and hands-on activities to deepen students' understanding of the following concepts: complex adaptive systems, dynamic complexity, inter-relationships, feedback loops, policy resistance, mental models, boundary critique, leverage points, and multi-disciplinary, multi-sectoral, and multi-stakeholder thinking and action. A sample of Systems Thinking tools for analysing global health policies and practices is also introduced.
NASA Astrophysics Data System (ADS)
Bullen, T. D.; Bailey, S. W.; McGuire, K. J.; Brousseau, P.; Ross, D. S.; Bourgault, R.; Zimmer, M. A.
2010-12-01
Understanding the origin of metals in watersheds, as well as the transport and cycling processes that affect them, is of critical importance to watershed science. Metals can be derived both from weathering of minerals in the watershed soils and bedrock and from atmospheric deposition, and can have highly variable residence times in the watershed due to cycling through plant communities and retention in secondary mineral phases prior to release to drainage waters. Although much has been learned about metal cycling and transport through watersheds using simple “box model” approaches that define unique input, output and processing terms, the fact remains that watersheds are inherently complex and variable in terms of substrate structure, hydrologic flowpaths and the influence of plants, all of which affect the chemical composition of water that ultimately passes through the watershed outlet. In an effort to unravel some of this complexity at a watershed scale, we have initiated an interdisciplinary, hydropedology-focused study of the hydrologic reference watershed (Watershed 3) at the Hubbard Brook Experimental Forest in New Hampshire, USA. This 41 hectare headwater catchment consists of a beech-birch-maple-spruce forest growing on soils developed on granitoid glacial till that mantles Paleozoic metamorphic bedrock. Soils vary from lateral spodosols downslope from bedrock exposures near the watershed crest, to vertical and bi-modal spodosols along hillslopes, to umbrepts at toe-slope positions and inferred hydrologic pinch points created by bedrock and till structure. Using a variety of chemical and isotope tracers (e.g., K/Na, Ca/Sr, Sr/Ba, Fe/Mn, ⁸⁷Sr/⁸⁶Sr, Ca-Sr-Fe stable isotopes) on water, soil and plant samples in an end-member mixing analysis approach, we are attempting to discretize the watershed according to soil types encountered along determined hydrologic flowpaths, in order to better constrain the various biogeochemical processes that control the delivery of metals to the watershed outlet. Our initial results reveal that along the numerous first-order streams that drain the watershed, chemical and Sr isotope compositions are highly variable from sample point to sample point on a given day and from season to season, reflecting the complex nature of hydrologic flowpaths that deliver water to the streams and hinting at the importance of groundwater seeps that appear to concentrate along the central axis of the watershed.
Novel pH sensing semiconductor for point-of-care detection of HIV-1 viremia
Gurrala, R.; Lang, Z.; Shepherd, L.; Davidson, D.; Harrison, E.; McClure, M.; Kaye, S.; Toumazou, C.; Cooke, G. S.
2016-01-01
The timely detection of viremia in HIV-infected patients receiving antiviral treatment is key to ensuring effective therapy and preventing the emergence of drug resistance. In high HIV burden settings, the cost and complexity of diagnostics limit their availability. We have developed a novel complementary metal-oxide semiconductor (CMOS) chip-based, pH-mediated, point-of-care HIV-1 viral load monitoring assay that simultaneously amplifies and detects HIV-1 RNA. A novel low-buffer HIV-1 pH-LAMP (loop-mediated isothermal amplification) assay was optimised and incorporated into a pH-sensitive CMOS chip. Screening of 991 clinical samples (164 on the chip) yielded a sensitivity of 95% (in vitro) and 88.8% (on-chip) at >1000 RNA copies/reaction across a broad spectrum of HIV-1 viral clades. Median time to detection was 20.8 minutes in samples with >1000 copies of RNA. The sensitivity, specificity and reproducibility are close to that required to produce a point-of-care device which would be of benefit in resource-poor regions, and the assay could be performed on a USB stick or similar low-power device. PMID:27829667
NASA Astrophysics Data System (ADS)
Walicka, A.; Jóźków, G.; Borkowski, A.
2018-05-01
Fluvial transport is an important aspect of hydrological and geomorphological studies. Knowledge of the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide any information about the long-term horizontal movement of the rocks. This information can be gained by means of terrestrial laser scanning (TLS). However, this is a complex issue consisting of several stages of data processing. In this study, a methodology for segmenting individual rocks from a TLS point cloud is proposed as the first step of a semi-automatic algorithm for detecting the movement of individual rocks. The proposed algorithm is executed in two steps. First, the point cloud is classified into rocks or background using only geometrical information. Second, the DBSCAN algorithm is executed iteratively on points classified as rocks until only one stone is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, several segments that correspond to individual rocks are formed. Numerical tests were executed on two test samples, and the results of the semi-automatic segmentation were compared to results acquired by manual segmentation. The proposed methodology successfully segmented 76% and 72% of the rocks in test sample 1 and test sample 2, respectively.
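The iterative second step can be sketched compactly. The toy version below re-runs DBSCAN with a tighter eps on any cluster whose spatial extent suggests more than one stone; this size test is a simplified stand-in for the PCA-plus-peak-detection criterion in the paper, and the point cloud is synthetic.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Hedged sketch of iterative DBSCAN segmentation: recurse with a
    # smaller eps on clusters that still look like several stones.

    def split_rocks(points, eps=0.5, min_samples=10, max_extent=2.0):
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        rocks = []
        for lbl in set(labels) - {-1}:           # -1 is DBSCAN noise
            cluster = points[labels == lbl]
            extent = cluster.max(0) - cluster.min(0)
            if extent.max() > max_extent and eps > 0.1:
                rocks += split_rocks(cluster, eps=eps * 0.6,
                                     min_samples=min_samples,
                                     max_extent=max_extent)
            else:
                rocks.append(cluster)
        return rocks

    rng = np.random.default_rng(6)
    centers = np.array([[0, 0, 0], [1.5, 0.5, 0], [5, 5, 0]])
    cloud = np.vstack([c + 0.25 * rng.standard_normal((200, 3)) for c in centers])
    print("rocks found:", len(split_rocks(cloud)))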
Probability and surprisal in auditory comprehension of morphologically complex words.
Balling, Laura Winther; Baayen, R Harald
2012-10-01
Two auditory lexical decision experiments document for morphologically complex words two points at which the probability of a target word given the evidence shifts dramatically. The first point is reached when morphologically unrelated competitors are no longer compatible with the evidence. Adapting terminology from Marslen-Wilson (1984), we refer to this as the word's initial uniqueness point (UP1). The second point is the complex uniqueness point (CUP) introduced by Balling and Baayen (2008), at which morphologically related competitors become incompatible with the input. Later initial as well as complex uniqueness points predict longer response latencies. We argue that the effects of these uniqueness points arise due to the large surprisal (Levy, 2008) carried by the phonemes at these uniqueness points, and provide independent evidence that how cumulative surprisal builds up in the course of the word co-determines response latencies. The presence of effects of surprisal, both at the initial uniqueness point of complex words, and cumulatively throughout the word, challenges the Shortlist B model of Norris and McQueen (2008), and suggests that a Bayesian approach to auditory comprehension requires complementation from information theory in order to do justice to the cognitive cost of updating probability distributions over lexical candidates. Copyright © 2012 Elsevier B.V. All rights reserved.
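The information-theoretic quantity driving the argument, surprisal, is simply -log2 of a conditional probability, accumulated phoneme by phoneme. A minimal sketch with an invented probability sequence (the spikes mimic an initial and a complex uniqueness point) follows.

    import math

    # Hedged sketch of cumulative surprisal over a word's phonemes:
    # surprisal of each phoneme is -log2 of its conditional probability
    # given the preceding ones; uniqueness points appear as large spikes.
    # The probabilities below are invented for illustration only.

    cond_probs = [0.9, 0.8, 0.7, 0.2,    # low-probability phoneme: UP1 spike
                  0.85, 0.9, 0.15,       # second spike: complex uniqueness point
                  0.95, 0.9, 0.95]

    cumulative = 0.0
    for i, p in enumerate(cond_probs, start=1):
        s = -math.log2(p)                # surprisal in bits
        cumulative += s
        print(f"phoneme {i:2d}: surprisal {s:5.2f} bits, cumulative {cumulative:5.2f}")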
2013-07-01
drugs at potentially very low concentrations in a variety of complex media such as saliva, blood, urine, and other bodily fluids. These handheld... flow assays, such as pregnancy tests, disease and drug abuse screens, and blood protein markers, exhibit widespread use. Recent parallel advances... gadgets have the potential to lower the cost of diagnosis and save immense amounts of time by removing the need to collect, preserve, and ship samples
Zhang, Bin Bin; Shi, Yi; Chen, Hui; Zhu, Qing Xia; Lu, Feng; Li, Ying Wei
2018-01-02
By coupling surface-enhanced Raman spectroscopy (SERS) with thin-layer chromatography (TLC), a powerful method for detecting complex samples was successfully developed. However, in the TLC-SERS method, metal nanoparticles serving as the SERS-active substrate are likely to disturb the detection of target compounds, particularly in overlapping compounds after TLC development. In addition, the SERS detection of compounds that are invisible under both visible light and UV 254/365 after TLC development is still a significant challenge. In this study, we demonstrated a facile strategy to fabricate a TLC plate with metal-organic framework-modified gold nanoparticles as a separable SERS substrate, on which all separated components, including overlapping and invisible compounds, could be detected by a point-by-point SERS scan along the developing direction. Rhodamine 6G (R6G) was used as a probe to evaluate the performance of the substrate. The results indicated that the substrate provided good sensitivity and reproducibility, and optimal SERS signals could be collected in 5 s. Furthermore, this new substrate exhibited a long shelf life. Thus, our method has great potential for the sensitive and rapid detection of overlapping and invisible compounds in complex samples after TLC development. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Quadtree of TIN: a new algorithm of dynamic LOD
NASA Astrophysics Data System (ADS)
Zhang, Junfeng; Fei, Lifan; Chen, Zhen
2009-10-01
Currently, real-time visualization of large-scale digital elevation models mainly employs the regular GRID structure based on a quadtree, or triangle simplification methods based on a triangulated irregular network (TIN). Compared with GRID, TIN is a refined means of expressing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to quickly realize view-dependent level-of-detail (LOD) representation. GRID is a simple method for realizing terrain LOD, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it controls detail according to viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution performance for large-scale terrain in real time.
Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun
2017-03-01
A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomenon of the non-ionic surfactant Triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature increases above its cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including pH of the aqueous solution, concentration of DDTC, amount of surfactant, and incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL⁻¹ Ag⁺ in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL⁻¹ for both Zn²⁺ and Cu²⁺, 80 μg·mL⁻¹ for Pb²⁺, 1000 μg·mL⁻¹ for Mn²⁺, and 100 μg·mL⁻¹ for both Cd²⁺ and Ni²⁺. The calibration curve was linear in the range of 1-500 ng·mL⁻¹ with a limit of detection (LOD) of 0.3 ng·mL⁻¹. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.
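The quantification shared by this and the neighbouring CPE methods, a linear calibration plus a 3-sigma-of-blank detection limit, can be sketched in a few lines. All instrument readings below are invented placeholders.

    import numpy as np

    # Hedged sketch of the quantification step: fit a linear calibration
    # over the working range, then take the detection limit as
    # 3 x (standard deviation of blank readings) / slope.

    conc = np.array([1, 5, 25, 100, 250, 500], dtype=float)    # ng/mL standards
    absorb = np.array([0.004, 0.018, 0.090, 0.352, 0.880, 1.760])
    blank_reads = np.array([0.0009, 0.0012, 0.0010, 0.0011, 0.0008])

    slope, intercept = np.polyfit(conc, absorb, 1)
    lod = 3 * blank_reads.std(ddof=1) / slope

    def quantify(a_sample):
        # invert the calibration line: absorbance -> concentration
        return (a_sample - intercept) / slope

    print("LOD: %.2f ng/mL; sample at A=0.120 -> %.1f ng/mL"
          % (lod, quantify(0.120)))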
Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor
2016-08-01
A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) using the o,o-diethyldithiophosphate (DDTP) complex, which was generated from an in vitro extract using polyethylene glycol tert-octylphenyl ether (Triton X-114) as a surfactant, prior to its detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step of the organic species prior to HG-AFS detection was included for the accurate quantification of total As. The limit of detection was 1.34 μg kg⁻¹ and 1.90 μg kg⁻¹ for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing the certified reference material ERM BC-211 (rice powder). The corn and rice samples that were analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.
Khan, Sumaira; Kazi, Tasneem G; Baig, Jameel A; Kolachi, Nida F; Afridi, Hassan I; Wadhwa, Sham Kumar; Shah, Abdul Q; Kandhro, Ghulam A; Shah, Faheem
2010-10-15
A cloud point extraction (CPE) method has been developed for the determination of trace quantities of vanadium ions in pharmaceutical formulations (PF), dialysate (DS) and parenteral solutions (PS). The CPE of vanadium (V) using 8-hydroxyquinoline (oxine) as a complexing reagent, mediated by the nonionic surfactant Triton X-114, was investigated. The parameters that affect the extraction efficiency of CPE, such as pH of the sample solution, concentrations of oxine and Triton X-114, equilibration temperature and shaking time, were investigated in detail. The validity of the CPE of V was checked by the standard addition method in real samples. The extracted surfactant-rich phase was diluted with nitric acid in ethanol prior to electrothermal atomic absorption spectrometry. Under these conditions, preconcentration of 50 mL sample solutions yielded an enrichment factor of 125. The lower limit of detection obtained under the optimal conditions was 42 ng/L. The proposed method has been successfully applied to the determination of trace quantities of V in various pharmaceutical preparations with satisfactory results. The concentrations of V in the PF, DS and PS samples were found in the ranges of 10.5-15.2, 0.65-1.32 and 1.76-6.93 μg/L, respectively. 2010 Elsevier B.V. All rights reserved.
Herwig, Ela; Marchetti-Deschmann, Martina; Wenz, Christian; Rüfer, Andreas; Redl, Heinz; Bahrami, Soheyl; Allmaier, Günter
2015-06-01
Sepsis represents a significant cause of mortality in intensive care units, and early diagnosis of sepsis is essential to increase the survival rate of patients. Among others, C-reactive protein (CRP) is commonly used as a sepsis marker. In this work we introduce immunoprecipitation combined with microchip capillary gel electrophoresis (IP-MCGE) for the detection and quantification of CRP in serum samples. First, high-abundance proteins (HSA, IgG) are removed from serum samples using affinity spin cartridges; then the remaining proteins are labeled with a fluorescent dye and incubated with an anti-CRP antibody, and the antigen/antibody complex is precipitated with protein G-coated magnetic beads. After precipitation the complex is eluted from the beads and loaded onto the MCGE system. CRP could be reliably detected and quantified, with a detection limit of 25 ng/μl in serum samples and 126 pg/μl in matrix-free samples. The overall sensitivity (LOQ = 75 ng/μl, R² = 0.9668) of the method is lower than that of some specially developed methods (e.g., immunoradiometric assay) but is comparable to those of clinically accepted ELISA methods. The straightforward sample preparation (not prone to mistakes), reduced sample and reagent volumes (including the antibodies), and high throughput (10 samples/3 h) are advantages, and therefore IP-MCGE bears potential for point-of-care diagnosis. Copyright © 2015 Elsevier Inc. All rights reserved.
A one-way shooting algorithm for transition path sampling of asymmetric barriers
NASA Astrophysics Data System (ADS)
Brotzakis, Z. Faidon; Bolhuis, Peter G.
2016-10-01
We present a novel transition path sampling shooting algorithm for the efficient sampling of complex (biomolecular) activated processes with asymmetric free energy barriers. The method employs a fictitious potential that biases the shooting point toward the transition state. The method is similar in spirit to the aimless shooting technique by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)], but is targeted for use with the one-way shooting approach, which has been shown to be more effective than two-way shooting algorithms in systems dominated by diffusive dynamics. We illustrate the method on a 2D Langevin toy model, the association of two peptides and the initial step in dissociation of a β-lactoglobulin dimer. In all cases we show a significant increase in efficiency.
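The fictitious-potential bias at the heart of the algorithm reduces to reweighting candidate shooting frames. Here is a minimal sketch under strong simplifications: a 1D order parameter stands in for the path, and a quadratic bias with its minimum at the presumed barrier top concentrates shooting points there. The form of the bias and all values are assumptions for illustration.

    import numpy as np

    # Hedged sketch of biased shooting-point selection: draw the next
    # shooting index with Boltzmann-like weights from a fictitious bias
    # potential that is lowest near the presumed transition state.

    def select_shooting_index(order_param, bias, beta=1.0, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        w = np.exp(-beta * bias(order_param))
        return rng.choice(order_param.size, p=w / w.sum())

    path = np.linspace(-1.0, 1.0, 201)       # order parameter along one path
    bias = lambda q: 5.0 * q ** 2            # minimum at q = 0 (barrier top)

    rng = np.random.default_rng(7)
    picks = [select_shooting_index(path, bias, rng=rng) for _ in range(1000)]
    print("mean order parameter of shooting points: %.3f" % path[picks].mean())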
Spectroscopic analysis of the powdery complex chitosan-iodine
NASA Astrophysics Data System (ADS)
Gegel, Natalia O.; Babicheva, Tatyana S.; Belyakova, Olga A.; Lugovitskaya, Tatyana N.; Shipovskaya, Anna B.
2018-04-01
A chitosan-iodine complex was obtained by modification of polymer powder in the vapor of an iodine-containing sorbate and studied by electronic and IR spectroscopy and optical rotatory dispersion. The electronic spectra of an aqueous solution of the modified chitosan (both the freshly prepared sample and one stored for a year) showed intense absorption bands of triiodide and iodate ions, as well as of polyiodide ions bound to the macromolecule by exciton coupling with charge transfer. Analysis of the IR spectra shows destruction of the network of intramolecular and intermolecular hydrogen bonds in the iodinated chitosan powder in comparison with the source polymer, and the formation of a new chemical substance. For example, the absorption band of the deformation vibrations of the hydroxyl group disappears in the modified sample, and that of the protonated amino group shifts toward shorter wavelengths. The intensity of the stretching vibration band of the glucopyranose ring atoms is significantly reduced. Heating the modified sample at a temperature below the thermal degradation point of the polymer stabilizes the chitosan-iodine complex. Based on our studies, the hydroxyl and amino groups of the aminopolysaccharide have been identified as the centers of retention of polyiodide chains in the chitosan matrix.
Wang, Zhuo; Jin, Shuilin; Liu, Guiyou; Zhang, Xiurui; Wang, Nan; Wu, Deliang; Hu, Yang; Zhang, Chiping; Jiang, Qinghua; Xu, Li; Wang, Yadong
2017-05-23
The development of single-cell RNA sequencing has enabled profound discoveries in biology, ranging from the dissection of the composition of complex tissues to the identification of novel cell types and dynamics in some specialized cellular environments. However, the large-scale generation of single-cell RNA-seq (scRNA-seq) data collected at multiple time points remains a challenge for effectively measuring gene expression patterns in transcriptome analysis. We present an algorithm based on the Dynamic Time Warping score (DTWscore), combined with time-series data, that enables the detection of gene expression changes across scRNA-seq samples and the recovery of potential cell types from complex mixtures of multiple cell types. The DTWscore successfully classifies cells of different types using the most highly variable genes from time-series scRNA-seq data. The study was confined to methods that are implemented and available within the R framework. Sample datasets and R packages are available at https://github.com/xiaoxiaoxier/DTWscore.
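The dynamic time warping distance underlying DTWscore is the classic quadratic dynamic program, sketched below on invented expression trajectories (a flat profile versus a delayed induction); this illustrates the distance only, not the gene-selection or classification logic of the package.

    import numpy as np

    # Hedged sketch of dynamic time warping: a gene whose trajectory
    # differs strongly between two groups gets a large DTW distance even
    # when the series are shifted or unevenly paced. O(len(a)*len(b)).

    def dtw(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    t = np.linspace(0, 2 * np.pi, 30)
    flat = np.zeros_like(t)                     # unchanging expression
    induced = np.maximum(np.sin(t - 1.0), 0)    # delayed induction pattern
    print("DTW(flat, flat): %.2f" % dtw(flat, flat))
    print("DTW(flat, induced): %.2f" % dtw(flat, induced))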
A precision multi-sampler for deep-sea hydrothermal microbial mat studies
NASA Astrophysics Data System (ADS)
Breier, J. A.; Gomez-Ibanez, D.; Reddington, E.; Huber, J. A.; Emerson, D.
2012-12-01
A new tool was developed for deep-sea microbial mat studies by remotely operated vehicles and was successfully deployed during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Mat Sampler allows for discrete, controlled material collection from complex microbial structures, vertical-profiling within thick microbial mats and particulate and fluid sample collection from venting seafloor fluids. It has a reconfigurable and expandable sample capacity based on magazines of 6 syringes, filters, or water bottles. Multiple magazines can be used such that 12-36 samples can be collected routinely during a single dive; several times more if the dive is dedicated for this purpose. It is capable of hosting in situ physical, electrochemical, and optical sensors, including temperature and oxygen probes in order to guide sampling and to record critical environmental parameters at the time and point of sample collection. The precision sampling capability of this instrument will greatly enhance efforts to understand the structured, delicate, microbial mat communities that grow in diverse benthic habitats.
Belu, A; Schnitker, J; Bertazzo, S; Neumann, E; Mayer, D; Offenhäusser, A; Santoro, F
2016-07-01
The preparation of biological cells for either scanning or transmission electron microscopy requires a complex process of fixation, dehydration and drying. Critical point drying is commonly used for samples investigated with a scanning electron beam, whereas resin infiltration is typically used for transmission electron microscopy. Critical point drying may cause cracks at the cellular surface and a sponge-like morphology of nondistinguishable intracellular compartments. Resin-infiltrated biological samples result in a solid block of resin, which can be further processed by mechanical sectioning; however, that does not allow a top-view examination of small cell-cell and cell-surface contacts. Here, we propose a method for removing excess resin from biological samples before effective polymerization. In this way the cells are embedded in an ultra-thin layer of epoxy resin. In contrast to standard methods, this novel method enables scanning electron microscopy imaging of individual cells not only on nanostructured planar surfaces but also on topologically challenging substrates with high-aspect-ratio three-dimensional features. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Altunay, Nail; Gürkan, Ramazan
2015-05-15
A new cloud-point extraction (CPE) method for the determination of antimony species in biological and beverage samples has been established with flame atomic absorption spectrometry (FAAS). The method is based on the formation of competitive ion-pairing complexes of Sb(III) and Sb(V) with Victoria Pure Blue BO (VPB(+)) at pH 10. The antimony species were individually detected by FAAS. Under the optimized conditions, the calibration range for Sb(V) is 1-250 μg L(-1) with a detection limit of 0.25 μg L(-1) and a sensitivity enhancement factor of 76.3, while the calibration range for Sb(III) is 10-400 μg L(-1) with a detection limit of 5.15 μg L(-1) and a sensitivity enhancement factor of 48.3. The precision, as relative standard deviation, is in the range of 0.24-2.35%. The method was successfully applied to the speciative determination of antimony species in the samples. The validation was verified by analysis of certified reference materials (CRMs). Copyright © 2014 Elsevier Ltd. All rights reserved.
Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail
2015-08-01
The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years because arsenic species are considered carcinogenic and can occur at high concentrations in these samples. This communication describes a new cloud-point extraction (CPE) method for the determination of low quantities of arsenic species in samples purchased from the local market, using UV-visible spectrophotometry (UV-Vis). The method is based on the selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S blank/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1) with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
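The detection limit quoted above follows the standard 3S(blank)/m definition. As a minimal sketch, with made-up calibration readings rather than the paper's measurements, the slope m, correlation coefficient and LOD can be computed as:

```python
import numpy as np

# hypothetical calibration data: As(V) standards (ug/L) vs. absorbance
conc = np.array([4, 25, 50, 100, 200, 300, 450], dtype=float)
absorbance = np.array([0.006, 0.035, 0.071, 0.142, 0.280, 0.424, 0.633])

m, b = np.polyfit(conc, absorbance, 1)   # slope m, intercept b
s_blank = 0.0005                         # std dev of blank readings (assumed)

lod = 3 * s_blank / m                    # LOD = 3*S_blank/m, as in the abstract
r = np.corrcoef(conc, absorbance)[0, 1]  # correlation coefficient
print(f"slope={m:.5f}, LOD={lod:.2f} ug/L, r={r:.4f}")
```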
Surface active complexes formed between keratin polypeptides and ionic surfactants.
Pan, Fang; Lu, Zhiming; Tucker, Ian; Hosking, Sarah; Petkov, Jordan; Lu, Jian R
2016-12-15
Keratins are a group of important proteins in skin and hair, and as biomaterials they can provide desirable properties such as strength, biocompatibility, and moisture regain and retention. The aim of this work is to develop water-soluble keratin polypeptides from sheep wool and then explore how their surface adsorption behaves with and without surfactants. Successful preparation of keratin samples was demonstrated by identification of the key components from gel electrophoresis and by the reproducible production of gram-scale samples with and without SDS (sodium dodecylsulphate) during wool fibre dissolution. SDS micelles could reduce the formation of disulphide bonds between keratins during extraction, reducing inter-molecular crosslinking and improving keratin polypeptide solubility. However, zeta potential measurements of the two polypeptide batches demonstrated almost identical pH-dependent surface charge distributions with isoelectric points around pH 3.5, showing complete removal of SDS during purification by dialysis. In spite of the different solubilities of the two batches of keratin samples prepared, surface tension measurements and dynamic light scattering revealed very similar adsorption and aggregation behavior. Mixing of keratin polypeptides with SDS and C12TAB (dodecyltrimethylammonium bromide) led to the formation of keratin-surfactant complexes that were substantially more effective at reducing surface tension than the polypeptides alone, showing great promise for the delivery of keratin polypeptides via the surface active complexes. Neutron reflection measurements revealed the coexistence of surfactant and keratin polypeptides at the interface, thus providing structural support for the observed surface tension changes associated with the formation of the surface active complexes. Copyright © 2016. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadric surfaces, with multiple radiation sources that have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron transport and primary and secondary photon transport in a spherical reactor shield configuration.
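Importance sampling, which FASTER-III uses to concentrate histories on the detectors, can be illustrated with a generic one-dimensional toy; this is only the general technique, not the program's actual biasing scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate a rare "transmission" probability P(X > 5) for X ~ Exp(1).
# Naive sampling wastes most histories; importance sampling biases the
# source toward the region that matters and reweights each history.
threshold, n = 5.0, 10_000

scale_q = 5.0                                 # biased density q: Exp(mean 5)
x = rng.exponential(scale_q, size=n)
weights = np.exp(-x) / (np.exp(-x / scale_q) / scale_q)   # p(x)/q(x)
estimate = np.mean((x > threshold) * weights)

print(estimate, np.exp(-threshold))           # estimate vs exact answer e^-5
```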
Somerson, Jacob; Plaxco, Kevin W
2018-04-15
The ability to measure the concentration of specific small molecules continuously and in real time in complex sample streams would impact many areas of agriculture, food safety, and food production. Monitoring for mycotoxin taint in real time during food processing, for example, could improve public health. Towards this end, we describe here an inexpensive electrochemical DNA-based sensor that supports real-time monitoring of the mycotoxin ochratoxin A in a flowing stream of foodstuffs.
Language decline across the life span: findings from the Nun Study.
Kemper, S; Greiner, L H; Marquis, J G; Prenovost, K; Mitzner, T L
2001-06-01
The present study examines language samples from the Nun Study. Measures of grammatical complexity and idea density were obtained from autobiographies written over a 60-year span. Participants who had met criteria for dementia were contrasted with those who did not. Grammatical complexity initially averaged 4.78 (on a 0-to-7-point scale) for participants who did not meet criteria for dementia and declined .04 units per year; grammatical complexity for participants who met criteria for dementia initially averaged 3.86 and declined .03 units per year. Idea density averaged 5.35 propositions per 10 words initially for participants who did not meet criteria for dementia and declined an average of .03 units per year, whereas idea density averaged 4.34 propositions per 10 words initially for participants who met criteria for dementia and declined .02 units per year. Adult experiences, in general, did not moderate these declines.
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
Gumus, Abdurrahman; Ahsan, Syed; Dogan, Belgin; Jiang, Li; Snodgrass, Ryan; Gardner, Andrea; Lu, Zhengda; Simpson, Kenneth; Erickson, David
2016-01-01
The use of point-of-care (POC) devices in limited-resource settings, where access to commonly used infrastructure such as water and electricity can be restricted, represents simultaneously one of the best application fits for POC systems and one of the most challenging places to deploy them. Of the many challenges involved in these systems, the preparation and processing of complex samples like stool, vomit, and biopsies are particularly difficult due to the high number and varied nature of mechanical and chemical interferents present in the sample. Previously we have demonstrated the ability to use solar-thermal energy to perform PCR-based nucleic acid amplifications. In this work we demonstrate how the technique, using similar infrastructure, can also be used for solar-thermal sample processing to extract and isolate Vibrio cholerae nucleic acids from fecal samples. The use of opto-thermal energy enables sunlight to drive thermal lysing reactions in large volumes without the need for external electrical power. Using the system, we demonstrate the ability to reach a 95°C threshold in less than 5 minutes and to maintain the sample temperature stable to within ±2°C following the ramp-up. The system is demonstrated to provide linear results between 10(4) and 10(8) CFU/mL when the released nucleic acids are quantified via traditional means. Additionally, we couple the sample processing unit with our previously demonstrated solar-thermal PCR and tablet-based detection system to demonstrate very low power sample-in-answer-out detection. PMID:27231636
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different features (such as point density, distribution, and complexity). Some filtering algorithms for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm on mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
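For readers unfamiliar with CSF, the following toy sketch conveys the physical idea: invert the cloud, drop a grid "cloth" onto it under gravity with neighbour springs, and keep as ground the points near the settled cloth. All parameters and the wrap-around boundary handling are simplifications of my own, not the published algorithm.

```python
import numpy as np

def csf_ground_filter(points, cell=1.0, rigidness=2, threshold=0.5,
                      gravity=0.1, iters=200):
    """Toy cloth-simulation filter, heavily simplified from the CSF idea."""
    xy, z = points[:, :2], -points[:, 2]               # invert the cloud
    x0, y0 = xy.min(axis=0)
    ix = ((xy[:, 0] - x0) / cell).astype(int)
    iy = ((xy[:, 1] - y0) / cell).astype(int)
    surf = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
    np.maximum.at(surf, (ix, iy), z)                   # collision surface
    cloth = np.full_like(surf, z.max() + 1.0)          # cloth starts on top
    for _ in range(iters):
        cloth -= gravity                               # gravity step
        np.maximum(cloth, surf, out=cloth)             # stop at the terrain
        for _ in range(rigidness):                     # internal springs
            nbr = (np.roll(cloth, 1, 0) + np.roll(cloth, -1, 0) +
                   np.roll(cloth, 1, 1) + np.roll(cloth, -1, 1)) / 4.0
            cloth = np.maximum((cloth + nbr) / 2.0, surf)  # (edges wrap: toy)
    return np.abs(z - cloth[ix, iy]) < threshold       # True = ground

# ground points follow a gentle slope; a 'building' point sticks out
pts = np.array([[x, 0.0, 0.02 * x] for x in range(30)] + [[15.0, 0.0, 5.0]])
print(csf_ground_filter(pts).sum(), "of", len(pts), "points kept as ground")
```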
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
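The conventional method that the paper improves on is the trapezoidal-rule (Fourier-series) approximation of the Bromwich integral with step Δω = π/T. Below is a minimal sketch of that baseline, using a direct sum rather than the FFT/FHT machinery; the parameter choices are illustrative only.

```python
import numpy as np

def bromwich_trapezoid(F, t, sigma=1.0, T=None, n_terms=4000):
    """Trapezoidal-rule approximation of the Bromwich integral
    (the conventional scheme; aliasing error ~ exp(-2*sigma*T),
    truncation error shrinks as n_terms grows)."""
    t = np.asarray(t, dtype=float)
    if T is None:
        T = 2.0 * t.max()                      # keep t well inside the period
    k = np.arange(1, n_terms + 1)
    Fk = F(sigma + 1j * k * np.pi / T)         # samples on the Bromwich line
    phases = np.exp(1j * np.pi * np.outer(t, k) / T)
    series = 0.5 * np.real(F(sigma + 0j)) + (phases * Fk).real.sum(axis=1)
    return np.exp(sigma * t) / T * series

# sanity check: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
ts = np.linspace(0.1, 4.0, 20)
approx = bromwich_trapezoid(lambda s: 1.0 / (s + 1.0), ts)
print(np.max(np.abs(approx - np.exp(-ts))))    # modest truncation error
```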
Damage Detection in Rotorcraft Composite Structures Using Thermography and Laser-Based Ultrasound
NASA Technical Reports Server (NTRS)
Anastasi, Robert F.; Zalameda, Joseph N.; Madaras, Eric I.
2004-01-01
New rotorcraft structural composite designs incorporate lower structural weight, reduced manufacturing complexity, and improved threat protection. These new structural concepts require nondestructive evaluation inspection technologies that can potentially be field-portable and able to inspect complex geometries for damage or structural defects. Two candidate technologies were considered: thermography and laser-based ultrasound (Laser UT). Both have the advantage of being non-contact inspection methods, with thermography being a full-field imaging method and Laser UT a point-scanning technique. These techniques were used to inspect composite samples that contained both embedded flaws and impact damage of various sizes and shapes. Results showed that the inspection techniques were able to detect both embedded and impact damage with varying degrees of success.
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
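For context, the classical O(log N) convex-polygon test that the paper's O(1) preprocessing-based method improves upon can be sketched as follows; this is the standard wedge binary search, not the paper's algorithm.

```python
def cross(o, a, b):
    """2-D cross product of vectors (o->a) and (o->b)."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def point_in_convex_polygon(p, poly):
    """O(log N) test; poly is a CCW list of vertices of a convex polygon."""
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0:        # right of the first edge fan
        return False
    if cross(poly[0], poly[n-1], p) > 0:      # left of the last edge fan
        return False
    lo, hi = 1, n - 1                         # binary search for the wedge
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[hi], p) >= 0  # inside the closing edge?

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_convex_polygon((1, 1), square),   # True
      point_in_convex_polygon((3, 1), square))   # False
```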
p-type doping by platinum diffusion in low phosphorus doped silicon
NASA Astrophysics Data System (ADS)
Ventura, L.; Pichaud, B.; Vervisch, W.; Lanois, F.
2003-07-01
In this work we show that the cooling rate following a platinum diffusion strongly influences the electrical conductivity in weakly phosphorus-doped silicon. Diffusions were performed at a temperature of 910 °C for 8-32 hours in 0.6, 30, and 60 Ω cm phosphorus-doped silicon samples. Spreading resistance profile analyses clearly show an n-type to p-type conversion under the surface when samples are cooled slowly. On the other hand, a compensation of the phosphorus donors can only be observed when samples are quenched. One Pt-related acceptor deep level at 0.43 eV from the valence band is assumed to be at the origin of the type conversion mechanism. Its concentration increases as the applied cooling rate is lowered. A complex formation with fast species such as interstitial Pt atoms or intrinsic point defects is expected. In 0.6 Ω cm phosphorus-doped silicon, no acceptor deep level in the lower band gap is detected by DLTS measurement. This rules out pairing between phosphorus and platinum and suggests the possibility of a Fermi-level-controlled complex formation.
Eddhif, Balkis; Allavena, Audrey; Liu, Sylvie; Ribette, Thomas; Abou Mrad, Ninette; Chiavassa, Thierry; d'Hendecourt, Louis Le Sergeant; Sternberg, Robert; Danger, Gregoire; Geffroy-Rodier, Claude; Poinot, Pauline
2018-03-01
The present work aims at developing two LC-HRMS setups for the screening of organic matter in astrophysical samples. Their analytical development has been demonstrated on a 100-µg residue coming from the photo-thermo-chemical processing of a cometary ice analog produced in the laboratory. The first 1D-LC-HRMS setup combines a serially coupled column configuration with HRMS detection. It allowed discrimination among different chemical families (amino acids, sugars, nucleobases and oligopeptides) in only one chromatographic run, without either a priori acid hydrolysis or chemical derivatisation. The second setup is a dual-LC configuration which connects a series of trapping columns with analytical reverse-phase columns. By coupling these two distinct LC units on-line with HRMS detection, high-mass compounds (350
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could be used for more generic smooth interpolation.
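One simple way to realize wavelet-smoothed fill-in of masked samples is an iterative projection: damp the detail coefficients (smoothing) and re-impose the known samples each pass. The sketch below uses PyWavelets and is a generic stand-in, not the lifting-based scheme of the report; the wavelet choice, damping factor and iteration count are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def fill_masked(data, mask, wavelet="bior2.2", level=3, iters=50, damp=0.7):
    """Iteratively damp wavelet details and re-impose known samples,
    so the 'holes' relax toward a smooth extension of their surroundings."""
    out = data.copy()
    out[mask] = data[~mask].mean()              # crude initial guess in holes
    for _ in range(iters):
        coeffs = pywt.wavedec2(out, wavelet, level=level)
        coeffs = [coeffs[0]] + [tuple(damp * d for d in details)
                                for details in coeffs[1:]]
        smooth = pywt.waverec2(coeffs, wavelet)[:out.shape[0], :out.shape[1]]
        out[mask] = smooth[mask]                # update only masked samples
    return out

# toy field with a rectangular hole
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
field = np.sin(4 * x) * np.cos(3 * y)
mask = np.zeros_like(field, bool)
mask[20:36, 20:36] = True
filled = fill_masked(field, mask)
print(np.abs(filled[mask] - field[mask]).max())  # fill-in error in the hole
```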
Wang, Lin; Lv, Xiangguo; Jin, Chongrui; Guo, Hailin; Shu, Huiquan; Fu, Qiang; Sa, Yinglong
2018-02-01
To develop a standardized PU-score (posterior urethral stenosis score), with the goal of using this scoring system as a preliminary predictor of surgical complexity and prognosis of posterior urethral stenosis, we retrospectively reviewed records of all patients who underwent posterior urethral surgery at our institution from 2013 to 2015. The PU-score is based on 5 components, namely etiology (1 or 2 points), location (1-3 points), length (1-3 points), urethral fistula (1 or 2 points), and posterior urethral false passage (1 point). We calculated the score of all patients and analyzed its association with surgical complexity, stenosis recurrence, intraoperative blood loss, erectile dysfunction, and urinary incontinence. There were 144 patients who underwent low-complexity urethral surgery (direct vision internal urethrotomy, anastomosis with or without crural separation) with a mean score of 5.1 points, whereas 143 underwent high-complexity urethroplasty (anastomosis with inferior pubectomy or urethrorectal fistula repair, perineal or scrotal skin flap urethroplasty, bladder flap urethroplasty) with a mean score of 6.9 points. An increasing PU-score was predictive of higher surgical complexity (P = .000), higher recurrence (P = .002), more intraoperative blood loss (P = .000), and a decrease of preoperative (P = .037) or postoperative erectile function (P = .047). However, no association was observed between the PU-score and urinary incontinence (P = .213). The PU-score is a novel and meaningful scoring system that describes the essential factors determining the complexity and prognosis of posterior urethral stenosis. Copyright © 2017. Published by Elsevier Inc.
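Given the component points, the total PU-score is simply their sum. A minimal sketch follows; the clinical criteria that map findings to points within each range are in the paper and are not reproduced here, and treating false passage as 0 or 1 is an assumption.

```python
def pu_score(etiology, location, length, fistula, false_passage):
    """Total PU-score as the sum of the five component scores from the
    abstract: etiology 1-2, location 1-3, length 1-3, fistula 1-2,
    false passage 0-1 (assumed absent = 0)."""
    for value, lo, hi in [(etiology, 1, 2), (location, 1, 3),
                          (length, 1, 3), (fistula, 1, 2),
                          (false_passage, 0, 1)]:
        assert lo <= value <= hi, "component out of range"
    return etiology + location + length + fistula + false_passage

# e.g. a score near the high-complexity group mean of 6.9 points
print(pu_score(2, 2, 2, 1, 0))  # 7
```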
NASA Astrophysics Data System (ADS)
Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.
2017-02-01
Point-of-care diagnostics are of interest in the medical, security and food industries, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide, and different methods to detect fraudulent additives have been investigated for over a century. Laboratory-based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. The rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with a focusing fibre-optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (relative standard deviation) within four minutes. Limit-of-detection intervals for PLS calibrations ranged between 140-520 ppm for the four N-rich compounds and between 0.7-3.6 % for sucrose. The portability of the system and the reliability and reproducibility of this technique open opportunities for general, reagentless adulteration screening of biological fluids as well as milk at the point of collection.
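A PLS calibration of the kind described can be sketched with scikit-learn; the spectra and concentrations below are synthetic stand-ins, not the paper's data, and the single Gaussian "adulterant band" is an assumption for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# synthetic stand-in: rows = Raman spectra of spiked milk, y = adulterant ppm
rng = np.random.default_rng(1)
y = rng.uniform(0, 2000, 60)                           # e.g. melamine, ppm
peak = np.exp(-0.5 * ((np.arange(600) - 250) / 8) ** 2)  # one adulterant band
X = y[:, None] * peak[None, :] * 1e-4 + rng.normal(0, 0.01, (60, 600))

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()     # cross-validated fit
rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
print(f"RMSECV = {rmsecv:.0f} ppm")
```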
Comparison of VFA titration procedures used for monitoring the biogas process.
Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini
2014-05-01
Titrimetric determination of volatile fatty acid (VFA) content is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, giving varying results when different titration procedures are used. Currently, no standardized procedure exists, and it is therefore difficult to compare performance among plants. The aim of this study was to evaluate four titration procedures for determination of VFA levels in digested manure samples and compare the results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in the literature. The results showed that optimal titration results were obtained when 40 mL of four-times-diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed a linear correlation between titration results and GC measurements. However, titration generally overestimated the VFA content compared with GC measurements when samples had low VFA concentrations, i.e. around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e. around 5 g/L. It was further found that the studied ionisable interfering components had the lowest effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentrations. An extended 5-point titration procedure with pH correction was best at handling interference from bicarbonate, phosphate and lactate at low VFA concentrations. Conversely, the simplest titration procedure, with only two pH end-points, showed the highest accuracy among all titration procedures at high VFA concentrations. All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of choice, due to its simplicity and accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Betekhtin, V. I.; Kadomtsev, A. G.; Kardashev, B. K.
2006-08-01
The effect of the amplitude of vibrational deformation on the elastic modulus and internal friction of microcrystalline aluminum samples produced by equal-channel angular pressing was studied. The samples have various deformation and thermal histories. The elastic and inelastic (microplastic) properties of the samples are investigated. As the degree of plastic deformation increases, Young's modulus E, the amplitude-independent decrement δi, and the microplastic flow stress σ increase. As the annealing temperature increases, the quantities δi and σ decrease noticeably and the modulus E exhibits more complex behavior. The experimental data are discussed under the assumption that the dislocation mobility depends on both the spectrum of point defects and the internal stresses, whose level is determined by the degree of plastic deformation and the temperature of subsequent annealing. The concept of internal stresses is also used to analyze the data on the effect of the degree of deformation and annealing on the rupture strength of the samples.
On-chip wavelength multiplexed detection of cancer DNA biomarkers in blood
Cai, H.; Stott, M. A.; Ozcelik, D.; Parks, J. W.; Hawkins, A. R.; Schmidt, H.
2016-01-01
We have developed an optofluidic analysis system that processes biomolecular samples starting from whole blood and then analyzes and identifies multiple targets on a silicon-based molecular detection platform. We demonstrate blood filtration, sample extraction, target enrichment, and fluorescent labeling using programmable microfluidic circuits. We detect and identify multiple targets using a spectral multiplexing technique based on wavelength-dependent multi-spot excitation on an antiresonant reflecting optical waveguide chip. Specifically, we extract two types of melanoma biomarkers, the mutated cell-free nucleic acids BRAF V600E and NRAS, from whole blood. We detect and identify these two targets simultaneously using the spectral multiplexing approach with up to a 96% success rate. These results point the way toward a full front-to-back chip-based optofluidic compact system for high-performance analysis of complex biological samples. PMID:28058082
Effects of three phosphate industrial sites on ground-water quality in central Florida, 1979 to 1980
Miller, R.L.; Sutcliffe, Horace
1984-01-01
Geologic, hydrologic, and water quality data and information on test holes collected in the vicinity of gypsum stack complexes at two phosphate chemical plants and one phosphatic clayey waste disposal pond at a phosphate mine and beneficiation plant in central Florida are presented. The data were collected from September 1979 to October 1980 at the AMAX Phosphate, Inc. chemical plant, Piney Point; the USS Agri-Chemicals chemical plant, Bartow; and the International Minerals and Chemical Corporation Clear Springs mine, Bartow. Approximately 5,400 field and laboratory water quality determinations on water samples collected from about 100 test holes and 28 surface-water, 5 rainfall, and other sampling sites at phosphate industry beneficiation and chemical plant waste disposal operations are tabulated. Maps are included to show sampling sites. (USGS)
Miller, Ronald L.; Sutcliffe, Horace
1982-01-01
This report is a compilation of geologic, hydrologic, and water-quality data and information on test holes collected in the vicinity of gypsum stack complexes at two phosphate chemical plants and one phosphatic clayey waste disposal pond at a phosphate mine and beneficiation plant in central Florida. The data were collected from September 1979 to October 1980 at the AMAX Phosphate, Inc., chemical plant, Piney Point; the USS AgriChemicals chemical plant, Bartow; and the International Minerals and Chemical Corporation Clear Springs mine, Bartow. Approximately 5,400 field and laboratory water-quality determinations were made on water samples collected from about 78 test holes and 31 surface-water, rainfall, and other sampling sites at phosphate industry beneficiation and chemical plant waste-disposal operations. Maps show locations of sampling sites. (USGS)
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.
Pharmaceutical applications using NIR technology in the cloud
NASA Astrophysics Data System (ADS)
Grossmann, Luiz; Borges, Marco A.
2017-05-01
NIR technology has been available for a long time, certainly more than 50 years. Without any doubt, it has found many niche applications, especially in the pharmaceutical, food, agriculture and other industries, due to its flexibility. It offers a number of advantages over other analytical technologies: virtually no need for sample preparation; usually no sample destruction and subsequent discard; fast results; little operator training; and small operating costs. However, the key point about NIR technology is that it is more related to statistics than to chemistry; in other words, we are more concerned with analyzing and distinguishing features within the data than with looking deeply into the chemical entities themselves. A simple scan in the NIR range usually produces a large number of data points. Usually we decompose the signals into hundreds of predictor variables and use complex algorithms to predict classes or quantify specific content. NIR is all about math, especially the conversion of chemical information into numbers. Easier said than done. An NIR signal is a very complex one. Usually the signal responses are not specific to a particular material; rather, each group's responses add up, giving a spectral reading low specificity. This paper proposes a simple and efficient method to analyze and compare NIR spectra for the purpose of identifying the presence of active pharmaceutical ingredients in finished products, using low-cost NIR scanning devices connected to the internet cloud.
Active control of acoustic field-of-view in a biosonar system.
Yovel, Yossi; Falk, Ben; Moss, Cynthia F; Ulanovsky, Nachum
2011-09-01
Active-sensing systems abound in nature, but little is known about the systematic strategies that these systems use to scan the environment. Here, we addressed this question by studying echolocating bats, animals that have the ability to point their biosonar beam at a confined region of space. We trained Egyptian fruit bats to land on a target, under conditions of varying levels of environmental complexity, and measured their echolocation and flight behavior. The bats modulated the intensity of their biosonar emissions, and the spatial region they sampled, in a task-dependent manner. We report here that Egyptian fruit bats selectively change the emission intensity and the angle between the beam axes of sequentially emitted clicks, according to the distance to the target and depending on the level of environmental complexity. In so doing, they effectively adjusted the spatial sector sampled by a pair of clicks, the "field-of-view." We suggest that the exact point within the beam that is directed towards an object (e.g., the beam's peak, maximal slope, etc.) is influenced by three competing task demands: detection, localization, and angular scanning, where the third factor is modulated by the field-of-view. Our results suggest that lingual echolocation (based on tongue clicks) is in fact much more sophisticated than previously believed. They also reveal a new parameter under active control in animal sonar: the angle between consecutive beams. Our findings suggest that acoustic scanning of space by mammals is highly flexible and modulated much more selectively than previously recognized.
Goebel, L; Zurakowski, D; Müller, A; Pape, D; Cucchiarini, M; Madry, H
2014-10-01
To compare the 2D and 3D MOCART systems obtained with 9.4 T high-field magnetic resonance imaging (MRI) for the ex vivo analysis of osteochondral repair in a translational model and to correlate the data with semiquantitative histological analysis. Osteochondral samples representing all levels of repair (sheep medial femoral condyles; n = 38) were scanned in a 9.4 T high-field MRI. The 2D and adapted 3D MOCART systems were used for grading after point allocation to each category, and scores were correlated between corresponding categories of the two MOCART systems. The data were then correlated with corresponding categories of an elementary (Wakitani) and a complex (Sellers) histological scoring system as gold standards. Correlations between most 2D and 3D MOCART score categories were high, while mean total point values of 3D MOCART scores tended to be 15.8-16.1 points higher than the 2D MOCART scores based on a Bland-Altman analysis. "Defect fill" and "total points" of both MOCART scores correlated with corresponding categories of the Wakitani and Sellers scores (all P ≤ 0.05). "Subchondral bone plate" also correlated between the 3D MOCART and Sellers scores (P < 0.001). Most categories of the 2D and 3D MOCART systems correlate, while total scores were generally higher using the 3D MOCART system. The structural categories "total points" and "defect fill" can reliably be assessed by 9.4 T MRI evaluation using either system, and "subchondral bone plate" using the 3D MOCART score. High-field MRI is valuable for objectively evaluating osteochondral repair in translational settings. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
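The Bland-Altman comparison used to quantify the 2D-versus-3D score offset reduces to the mean of the paired differences (bias) and its 95% limits of agreement. A minimal sketch with made-up paired scores, not the study's data:

```python
import numpy as np

def bland_altman(scores_2d, scores_3d):
    """Bias and 95% limits of agreement between paired scores."""
    d = np.asarray(scores_3d, float) - np.asarray(scores_2d, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # 95% limits of agreement
    return bias, bias - half_width, bias + half_width

s2d = [45, 60, 55, 70, 40, 65]          # hypothetical 2D MOCART totals
s3d = [62, 75, 70, 88, 55, 80]          # hypothetical 3D MOCART totals
print(bland_altman(s2d, s3d))           # positive bias, as reported (~16 pts)
```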
Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem
2016-04-01
A simple and rapid dispersive liquid-liquid microextraction procedure based on ionic liquid-assisted microemulsion (IL-µE-DLLME) combined with cloud point extraction has been developed for the preconcentration of copper (Cu(2+)) in drinking water and in serum samples of adolescent female hepatitis C (HCV) patients. In this method a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu(2+) in real water and aqueous acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. The phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters such as pH, oxine concentration, and centrifugation time and rate was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with a relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 Riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between obtained and certified values of Cu(2+). The developed procedure was successfully applied to the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples. Copyright © 2015 Elsevier Inc. All rights reserved.
Combining SVM and flame radiation to forecast BOF end-point
NASA Astrophysics Data System (ADS)
Wen, Hongyuan; Zhao, Qi; Xu, Lingfei; Zhou, Munchun; Chen, Yanru
2009-05-01
Because of the complex reactions in the basic oxygen furnace (BOF) for steelmaking, the main end-point control methods face insurmountable difficulties. Aiming at these problems, a support vector machine (SVM) method for forecasting the BOF steelmaking end-point is presented, based on flame radiation information. The rationale is that the furnace flame reflects the carbon-oxygen reaction, which is the major reaction in the steelmaking furnace. The system can acquire spectrum and image data quickly in the adverse steelmaking environment. The structure of an SVM and a multilayer feed-forward neural network are similar, but the SVM model can overcome the inherent defects of the latter. The model is trained and used for forecasting with appropriate variables drawn from the light and image characteristic information. The model training process follows the structural risk minimization (SRM) criterion, and the design parameters can be adjusted automatically according to the sampled data during training. Experimental results indicate that the prediction precision of the SVM model and the execution time both meet the requirements of online end-point judgment.
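The forecasting step amounts to training a kernel SVM on flame-derived features. A toy sketch with synthetic features follows; the real feature set comes from the flame spectra and images, and all values here are made up.

```python
import numpy as np
from sklearn.svm import SVC

# synthetic stand-in: rows = flame features near the end of the blow
# (e.g. band intensities, flame area), labels = end-point reached or not
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 200) > 0).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # margin control reflects SRM
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```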
Challenges in early clinical development of adjuvanted vaccines.
Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David
2015-06-08
A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well-characterized assay should be selected as the primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis-generating plans, including systems biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24 h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.; ...
2016-02-03
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures for determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than U sorption on the measured solid-phase Fe alone. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required reducing the number of estimated parameters by fixing two of them. The best-fit models used fixed values for the equilibrium constants, with the sorption site densities estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph, which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute-aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested on two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
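The quadric error metric at the heart of such cost functions accumulates plane quadrics over a node's incident triangles and evaluates v^T Q v for a candidate position v. A minimal geometric sketch of the standard (Garland-Heckbert) form, with the attribute terms of the generalized metric omitted:

```python
import numpy as np

def plane_quadric(p, q, r):
    """4x4 quadric of the plane through three points."""
    n = np.cross(q - p, r - p)
    n = n / np.linalg.norm(n)
    plane = np.append(n, -n.dot(p))      # [a, b, c, d] with ax+by+cz+d = 0
    return np.outer(plane, plane)

def quadric_error(Q, v):
    """Squared plane-distance cost v^T Q v used to rank node removals."""
    vh = np.append(v, 1.0)               # homogeneous coordinates
    return vh @ Q @ vh

# cost of a node: sum of quadrics over its incident triangles (one here)
p0, p1, p2 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
Q = plane_quadric(p0, p1, p2)
print(quadric_error(Q, np.array([0.3, 0.3, 0.5])))  # 0.25 = (0.5)^2
```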
NASA Astrophysics Data System (ADS)
Rahman, Abu Zayed Mohammad Saliqur; Cao, Xingzhong; Wang, Baoyi; Evslin, Jarah; Xu, Qiu; Atobe, Kozo
2016-12-01
We investigated neutron-irradiation-induced point defects in spinel single crystals using a synchrotron VUV-UV source and positron lifetime spectroscopy. Photoexcitation (PE) spectra near 230 nm and their corresponding photoluminescence (PL) spectra at 475 nm were attributed to F-centers. With increasing irradiation temperature and fluence, PE efficiency and PL intensity decreased dramatically. Positron lifetimes (PLT) of neutron-irradiated and non-irradiated samples were measured to identify the cation vacancies. A PLT of 250 ps was obtained in a neutron-irradiated (20 K) sample, which is tentatively attributed to an aluminum monovacancy. Decreasing PLT with higher irradiation indicates the formation of oxygen-vacancy complex centers.
NASA Astrophysics Data System (ADS)
Giovanis, D. G.; Shields, M. D.
2018-07-01
This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
MacDonald, Morgan C; Juran, Luke; Jose, Jincy; Srinivasan, Sekar; Ali, Syed I; Aronson, Kristan J; Hall, Kevin
2016-01-01
Point-of-use water treatment has received widespread application in the developing world to help mitigate waterborne infectious disease. This study examines the efficacy of a combined filter and chemical disinfection technology in removing bacterial contaminants, and more specifically changes in its performance resulting from seasonal weather variability. During a 12-month field trial in Chennai, India, mean log-reductions were 1.51 for E. coli and 1.67 for total coliforms, and the highest concentrations of indicator bacteria in treated water samples were found during the monsoon season. Analysis of variance revealed significant differences in the microbial load of indicator organisms (coliforms and E. coli) between seasons, storage time since treatment (TST), and samples with and without chlorine residuals. Findings indicate that the bacteriological quality of drinking water treated in the home is determined by a complex interaction of environmental and sociological conditions. Moreover, while the effect of disinfection was independent of season, the impact of TST on water quality was found to be seasonally dependent.
López-García, Ignacio; Vicente-Martínez, Yesica; Hernández-Córdoba, Manuel
2015-01-01
The cloud point extraction (CPE) of silver nanoparticles (AgNPs) by Triton X-114 allows chromium(III) ions to be transferred to the surfactant-rich phase, where they can be measured by electrothermal atomic absorption spectrometry. Using a 20 mL sample and 50 μL of Triton X-114 (30% w/v), the enrichment factor was 1150, and calibration graphs were obtained in the 5-100 ng L(-1) chromium range in the presence of 5 µg L(-1) AgNPs. Speciation of trivalent and hexavalent chromium was achieved by carrying out two CPE experiments, one of them in the presence of ethylenediaminetetraacetate. In the first experiment, in the absence of the complexing agent, the total chromium concentration was obtained, while the analytical signal measured in the presence of this chemical gave the chromium(VI) concentration, with that of chromium(III) calculated by difference. The reliability of the procedure was verified using three standard reference materials before application to water, beer and wine samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Araújo, Éverton José Ferreira de; Silva, Oskar Almeida; Rezende-Júnior, Luís Mário; Sousa, Ian Jhemes Oliveira; Araújo, Danielle Yasmin Moura Lopes de; Carvalho, Rusbene Bruno Fonseca de; Pereira, Sean Telles; Gutierrez, Stanley Juan Chavez; Ferreira, Paulo Michel Pinheiro; Lima, Francisco das Chagas Alves
2017-08-01
This study performed a physicochemical characterization of the inclusion complex formed between Riparin A and β-cyclodextrin (Rip A/β-CD) and compared the cytotoxic potential of the incorporated Rip A against Artemia salina larvae. Samples were analyzed by phase solubility diagram, dissolution profile, differential scanning calorimetry, X-ray diffraction, infrared spectroscopy, proton nuclear magnetic resonance, scanning electron microscopy and artemicidal action. The Riparin A/β-cyclodextrin complexes presented increased water solubility, an AL-type solubility diagram and a Kst constant of 373 L/mol. Thermal analysis demonstrated reduction of the melting peak of complexed Rip A at 116.2 °C. Infrared spectroscopy confirmed the formation of inclusion complexes, 1H NMR pointed to an interaction with H-3 of the β-CD cavity, alterations in the crystalline nature of Rip A when incorporated within β-CD were observed, and the inclusion complexes presented higher cytotoxicity against A. salina nauplii, with an LC50 value of 117.2 (84.9-161.8) μg/mL. Thus, Rip A was incorporated into β-CD with high efficiency and its water solubility was improved. This improved solubility was corroborated by the cytotoxic evaluation, and these outcomes support improved biological properties for the Riparin A/β-cyclodextrin complexes.
NASA Astrophysics Data System (ADS)
Michael, H. A.; Tan, F.; Yoo, K.; Imhoff, P. T.
2017-12-01
While organo-mineral complexes can protect organic matter (OM) from biodegradation, their impact on soil mineral weathering is not clear. Previous bench-scale experiments that focused on specific OM and minerals showed that the adsorption of OM to mineral surfaces accelerates the dissolution of some minerals. However, the impact of natural organo-mineral complexes on mineral dissolution under unsaturated conditions is not well known. In this study, soil samples prepared from an undisturbed forest site were used to determine mineral weathering rates under differing conditions of OM sorption to minerals. Two types of soil samples were generated: 1) soil with OM (C horizon soil from 84-100 cm depth), and 2) soil without OM (the same soil as in 1) but with OM removed by heating to 350 °C for 24 h). Soil samples were column-packed and subjected to intermittent infiltration and drainage to mimic natural rainfall events. Each soil sample type was run in duplicate. The unsaturated condition was created by applying gas pressure to the column, and the unsaturated chemical weathering rates during each cycle were calculated from the effluent concentrations. During a single cycle, when the same gas pressure was applied, soils with OM retained more moisture than OM-removed media, indicating increased water retention capacity under the influence of OM. This is consistent with the water retention data measured by evaporation experiments (HYPROP) and the dew point method (WP4C Potential Meter). Correspondingly, silicon (Si) denudation rates indicated that dissolution of silicate minerals was 2-4 times higher in OM soils, suggesting that organo-mineral complexes accelerate mineral dissolution under unsaturated conditions. When combining data from all cycles, the results showed that Si denudation rates were positively related to soil water content: the denudation rate increased with increasing water content. Therefore, natural mineral chemical weathering under unsaturated conditions, while widely considered to be facilitated by biological and chemical activities, may also be affected by soil retention properties.
Decision Making in Health and Medicine
NASA Astrophysics Data System (ADS)
Hunink, Myriam; Glasziou, Paul; Siegel, Joanna; Weeks, Jane; Pliskin, Joseph; Elstein, Arthur; Weinstein, Milton C.
2001-11-01
Decision making in health care means navigating through a complex and tangled web of diagnostic and therapeutic uncertainties, patient preferences and values, and costs. In addition, medical therapies may include side effects, surgery may lead to undesirable complications, and diagnostic technologies may produce inconclusive results. In many clinical and health policy decisions it is necessary to counterbalance benefits and risks, and to trade off competing objectives such as maximizing life expectancy vs optimizing quality of life vs minimizing the required resources. This textbook plots a clear course through these complex and conflicting variables. It clearly explains and illustrates tools for integrating quantitative evidence-based data and subjective outcome values in making clinical and health policy decisions. An accompanying CD-ROM features solutions to the exercises, PowerPoint® presentations of the illustrations, and sample models and tables.
Zhang, Xiao-Tai; Wang, Shu; Xing, Guo-Wen
2017-02-01
Ginsenosides are a large family of triterpenoid saponins from Panax ginseng that possess various important biological functions. Due to the very similar structures of these complex glycoconjugates, it is crucial to develop a powerful analytical method to identify ginsenosides qualitatively and quantitatively. We herein report an eight-channel fluorescent sensor array, acting as an artificial tongue, to achieve discriminative sensing of ginsenosides. The fluorescent cross-responsive array was constructed from four boronlectins bearing flexible boronic acid moieties (FBAs) with multiple reactive sites and two linear poly(phenylene-ethynylene)s (PPEs). An "on-off-on" response pattern was afforded on the basis of superquenching of the fluorescent indicator PPEs and an analyte-induced allosteric indicator displacement (AID) process. Most importantly, the canonical distribution of ginsenoside data points analyzed by linear discriminant analysis (LDA) was highly correlated with the inherent molecular structures of the analytes, and the absence of overlaps among the five point groups reflected the effectiveness of the sensor array in the discrimination process. Almost all of the unknown ginsenoside samples at different concentrations were correctly identified on the basis of the established mathematical model. Our work provides a general and constructive method to improve the quality assessment and control of ginseng and its extracts, and is useful for discriminating other complex glycoconjugate families.
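The LDA step that produces the canonical score plot can be sketched with scikit-learn; the 8-channel response patterns below are synthetic stand-ins for the measured fluorescence data, and the class structure is invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# synthetic stand-in for the 8-channel array: rows = fluorescence response
# patterns, labels = 5 ginsenoside classes (real training data in the paper)
rng = np.random.default_rng(3)
centers = rng.normal(size=(5, 8))
X = np.vstack([c + 0.15 * rng.normal(size=(12, 8)) for c in centers])
y = np.repeat(np.arange(5), 12)

lda = LinearDiscriminantAnalysis(n_components=2)
scores = lda.fit_transform(X, y)   # canonical scores for the 2-D group plot
print("training classification accuracy:", lda.score(X, y))
```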
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited to probabilistic (or deterministic) inversion than to unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
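A minimal dense VAE of the kind described can be sketched in PyTorch; the layer sizes, patch size and latent dimension below are illustrative choices, not the paper's architecture.

```python
import torch
from torch import nn

class GeoVAE(nn.Module):
    """Minimal dense variational autoencoder for binary media patches."""
    def __init__(self, n_pix=64 * 64, n_latent=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pix, 400), nn.ReLU())
        self.mu = nn.Linear(400, n_latent)
        self.logvar = nn.Linear(400, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 400), nn.ReLU(),
                                 nn.Linear(400, n_pix), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return self.dec(z), mu, logvar

def vae_loss(xhat, x, mu, logvar):
    # reconstruction term + KL divergence toward the standard normal prior
    bce = nn.functional.binary_cross_entropy(xhat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# after training, inversion explores z ~ N(0, I); decoding any draw yields
# a realization with training-image statistics (untrained here, for shape)
model = GeoVAE()
z = torch.randn(1, 20)
realization = (model.dec(z) > 0.5).float().reshape(64, 64)
```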
NASA Astrophysics Data System (ADS)
Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.
2017-10-01
Biodiversity is commonly referred to as species diversity, but in forest ecosystems variability in structural and functional characteristics can also be treated as a measure of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing a forest ecosystem with high spatial resolution, permitting measurement of the physical characteristics of a forest ecosystem from a biodiversity viewpoint. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter in mapping biodiversity indicators, such as structural complexity as well as the amount of deciduous and dead trees at plot level in southern boreal forests. Standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator, resulting in a mean error of 0.5 m, with a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviations of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. The spectral features describing brightness (i.e. higher reflectance values) were prevailing in feature selection, but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably, but at the same time can be expected to be underestimated, with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error, whereas identifying deciduous species was more challenging at plot level.
Gürkan, Ramazan; Korkmaz, Sema; Altunay, Nail
2016-08-01
A new ultrasonic-thermostatic-assisted cloud point extraction procedure (UTA-CPE) was developed for preconcentration of trace levels of vanadium (V) and molybdenum (Mo) in milk, vegetables and foodstuffs prior to determination via flame atomic absorption spectrometry (FAAS). The method is based on the ion-association of stable anionic oxalate complexes of V(V) and Mo(VI) with [9-(diethylamino)benzo[a]phenoxazin-5-ylidene]azanium sulfate (Nile blue A) at pH 4.5, followed by extraction of the formed ion-association complexes into the micellar phase of polyoxyethylene(7.5)nonylphenyl ether (PONPE 7.5). UTA-CPE is greatly simplified and accelerated compared to traditional cloud point extraction (CPE). The analytical parameters optimized were solution pH, the concentrations of the complexing reagents (oxalate and Nile blue A), the PONPE 7.5 concentration, electrolyte concentration, sample volume, temperature and ultrasonic power. Under optimum conditions, the calibration curves for Mo(VI) and V(V) were linear in the concentration ranges of 3-340 µg L(-1) and 5-250 µg L(-1), with high sensitivity enhancement factors (EFs) of 145 and 115, respectively. The limits of detection (LODs) for Mo(VI) and V(V) were 0.86 and 1.55 µg L(-1), respectively. The proposed method demonstrated good performance, with relative standard deviations (RSD) of ≤3.5% and spiked recoveries of 95.7-102.3%. The accuracy of the method was assessed by analysis of two standard reference materials (SRMs) and recoveries of spiked solutions. The method was successfully applied to the determination of trace amounts of Mo(VI) and V(V) in milk, vegetables and foodstuffs with satisfactory results. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hegrová, Jitka; Steiner, Oliver; Goessler, Walter; Tanda, Stefan; Anděl, Petr
2017-09-01
A comprehensive overview of the influence of transport on the environment is presented in this study. The complex analysis of soil and needle samples provides an extensive set of data describing elemental contamination of the environment near roads. Traffic pollution (including winter road treatment) has a significant negative influence on the environment. Besides sodium and chlorine from winter maintenance, many other elements are emitted into the environment. Three possible sources of contamination are assumed for the evaluation of environmental contamination: car emissions, winter maintenance, and abrasion from brakes and clutches. The chemical analysis focused on the characterization of the samples from an inorganic point of view. The influence of the contamination potential on the sodium and chlorine content in samples of 1st-year and 2nd-year needles of Norway spruce (Picea abies) and Scots pine (Pinus sylvestris) is discussed; these species were chosen because of their heightened sensitivity towards salinization. Additional soil samples were taken from each sampling site and analyzed to gain insight into the sodium and chlorine distribution. Statistical evaluation was used to interpret the complex interaction patterns between element concentrations in needles of different ages, based on the character of the localities, including distance from the road and element concentrations in soils. The study was conducted in different parts of the Czech Republic. The resulting database is a source of valuable information about the influence of transport on the environment.
Application of the Attagene FACTORIAL™ assay to ...
Bioassays can be used to evaluate the integrated effects of complex mixtures of both known and unidentified contaminants present in environmental samples. However, such bio-monitoring approaches have typically focused on only one or a few pathways (e.g. estrogen receptor, androgen receptor) despite the fact that the chemicals in a mixture may exhibit a range of biological activities. High-throughput screening approaches that can rapidly assess samples for a broad diversity of biological activities offer a means to provide a more comprehensive characterization of complex mixtures. The Attagene Factorial™ platform is a high-throughput, cell-based assay utilized by US EPA's ToxCast Program, which provides high-content assessment of over 90 different gene regulatory pathways and all 48 human nuclear receptors (NRs). This assay has previously been used in a preliminary screening of surface water extracts from sites across the Great Lakes. In the current study, surface water samples from 38 sites were collected, extracted, and screened through the Factorial assay as part of a USGS nationwide stream assessment. All samples were evaluated in a six-point, 3-fold dilution series and analyzed using the ToxCast Data Pipeline (TCPL) to generate dose-response curves and corresponding half-maximal activity concentration (AC50) estimates. A total of 27 assay endpoints responded to extracts from one or more sites, with up to 14 assays active for a single extract. The four
A temperature characteristic research and compensation design for micro-machined gyroscope
NASA Astrophysics Data System (ADS)
Fu, Qiang; di, Xin-Peng; Chen, Wei-Ping; Yin, Liang; Liu, Xiao-Wei
2017-02-01
Stability over the full temperature range is a key requirement for a MEMS angular velocity sensor based on capacitive detection. The correlation between the drive force and the zero-point of the sensor is summarized from the temperature characteristics of the air damping and the resonant frequency of the sensor header. A constant-transconductance, high-linearity amplifier is designed to realize a low-phase-drift, low-amplitude-drift interface circuit over the full temperature range. The chip is fabricated in a standard 0.5 μm CMOS process. Compensation by the drive force is adopted for the zero-point drift caused by the stiffness of the physical structure and by air damping. Moreover, the drive force can be obtained directly from the drive circuit, avoiding complex sampling. Test results show that the zero-point drift is lower than 30°/h (1-sigma) over the temperature range from -40 °C to 60 °C after third-order compensation by the drive force.
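Third-order compensation of the kind reported can be illustrated generically: fit a cubic polynomial to the measured zero-point drift as a function of temperature, then subtract the modeled drift at run time. This is a minimal sketch with synthetic calibration data, not the authors' circuit-level implementation; all names and numbers below are illustrative.

```python
import numpy as np

# Synthetic calibration data: zero-point drift (deg/h) vs temperature (deg C).
temps = np.linspace(-40.0, 60.0, 21)
drift = 0.002 * temps**3 / 100 + 0.05 * temps \
        + np.random.default_rng(1).normal(0, 2, temps.size)

# Third-order (cubic) fit of drift against temperature.
coeffs = np.polyfit(temps, drift, deg=3)

def compensate(raw_rate, temperature):
    """Subtract the modeled zero-point drift at the current temperature."""
    return raw_rate - np.polyval(coeffs, temperature)

print(compensate(raw_rate=100.0, temperature=25.0))
```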
Buelow, Janice M; Johnson, Cynthia S; Perkins, Susan M; Austin, Joan K; Dunn, David W
2013-04-01
Caregivers of children with both epilepsy and learning problems need assistance to manage their child's complex medical and mental health problems. We tested the cognitive behavioral intervention "Creating Avenues for Parent Partnership" (CAPP) which was designed to help caregivers develop knowledge as well as the confidence and skills to manage their child's condition. The CAPP intervention consisted of a one-day cognitive behavioral program and three follow-up group sessions. The sample comprised 31 primary caregivers. Caregivers reported that the program was useful (mean = 3.66 on a 4-point scale), acceptable (mean = 4.28 on a 5-point scale), and "pretty easy" (mean = 1.97 on a 4-point scale). Effect sizes were small to medium in paired t tests (comparison of intervention to control) and paired analysis of key variables in the pre- and post-tests. The CAPP program shows promise in helping caregivers build skills to manage their child's condition. Copyright © 2013 Elsevier Inc. All rights reserved.
Frequency-Dependence of Relative Permeability in Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowler, N.
2006-03-06
A study to characterize metal plates by means of a model-based, broadband, four-point potential drop measurement technique has shown that the relative permeability of alloy 1018 low-carbon steel is complex and a function of frequency. A magnetic relaxation is observed at approximately 5 kHz. The relaxation can be described in terms of a parametric (Cole-Cole) model. Factors which influence the frequency, amplitude and breadth of the relaxation, such as applied current amplitude, sample geometry and disorder (e.g. percent carbon content and surface condition), are considered.
GKI chloride in water, analysis method. GKI boron in water, analysis method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morriss, L.L.
1979-05-01
Procedures for the chemical analysis of chlorides and boron in water are presented. Chlorides can be titrated with mercuric nitrate to form mercuric chloride. At pH 2.3 to 2.8, diphenylcarbazone indicates the end point of this titration by formation of a purple complex with mercury ions. When a sample of water containing boron is acidified and evaporated in the presence of curcumin, a red colored product called rosocyanine is formed. This is dissolved and can be measured photometrically or visually. (DMC)
Hartmann, Georg; Schuster, Michael
2013-01-25
The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NP ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L(-1) is achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L(-1). The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L(-1) is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
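The pivoted-QR step described above is straightforward to reproduce: form a modal basis from snapshot data (here POD via the SVD), QR-factorize the transposed basis with column pivoting, and take the first p pivot indices as sensor locations; reconstruction is then a least-squares fit of the modes to the sensor readings. A minimal sketch on synthetic data, assuming p = r sensors for r modes (the general setup allows other choices):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)

# Synthetic flow data: n-dimensional state, m snapshots, built from r coherent structures.
n, m, r = 500, 200, 10
modes_true = rng.standard_normal((n, r))
X = modes_true @ rng.standard_normal((r, m))

# POD basis from the SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Psi = U[:, :r]                           # n x r basis of coherent structures

# Pivoted QR on Psi^T selects the p most informative rows (sensor locations).
p = r
_, _, pivots = qr(Psi.T, pivoting=True)
sensors = pivots[:p]

# Reconstruct a full state from p point measurements.
x_true = X[:, 0]
y = x_true[sensors]                      # readings at the chosen sensors
a, *_ = np.linalg.lstsq(Psi[sensors, :], y, rcond=None)
x_hat = Psi @ a
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```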
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194
Measurement of helium isotopes in soil gas as an indicator of tritium groundwater contamination.
Olsen, Khris B; Dresel, P Evan; Evans, John C; McMahon, William J; Poreda, Robert
2006-05-01
The focus of this study was to define the shape and extent of tritium groundwater contamination emanating from a legacy burial ground and to identify vadose zone sources of tritium using helium isotopes (3He and 4He) in soil gas. Helium isotopes were measured in soil-gas samples collected from 70 sampling points around the perimeter and downgradient of a burial ground that contains buried radioactive solid waste. The soil-gas samples were analyzed for helium isotopes using rare gas mass spectrometry. 3He/4He ratios, reported as normalized to the air ratio (RA), were used to locate the tritium groundwater plume emanating from the burial ground. The 3He (excess) suggested that the general location of the tritium source is within the burial ground. This study clearly demonstrated the efficacy of the 3He method for application to similar sites elsewhere within the DOE weapons complex.
Grey W. Pendleton
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
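One way to read the Ising-model variant is as a constrained optimization: spins at sampled nodes stay fixed, and free spins are flipped only when the flip moves the simulated nearest-neighbor correlation energy toward the value estimated from the sample. The sketch below follows that reading with a greedy, single-level (binary) scheme; it is a schematic interpretation, not the authors' algorithm, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 64
truth = np.sign(rng.standard_normal((L, L)) + 0.3)     # synthetic binary field
observed = rng.uniform(size=(L, L)) < 0.2              # 20% of nodes sampled
spins = np.where(observed, truth, np.sign(rng.standard_normal((L, L))))

def nn_energy(s):
    """Normalized nearest-neighbor correlation energy of a spin field."""
    return -(np.mean(s[1:, :] * s[:-1, :]) + np.mean(s[:, 1:] * s[:, :-1])) / 2

def sample_energy(field, mask):
    """The same statistic estimated only from pairs of observed nodes."""
    num, den = 0.0, 0
    for a, b in (((slice(1, None), slice(None)), (slice(None, -1), slice(None))),
                 ((slice(None), slice(1, None)), (slice(None), slice(None, -1)))):
        both = mask[a] & mask[b]
        num += np.sum(field[a][both] * field[b][both])
        den += np.sum(both)
    return -num / den

target = sample_energy(truth, observed)

# Greedy conditional simulation: sampled spins stay fixed; a free-spin flip is
# kept only if it moves the simulated energy toward the sample-based target.
free = np.argwhere(~observed)
cost = abs(nn_energy(spins) - target)
for sweep in range(20):
    for i, j in free[rng.permutation(len(free))]:
        spins[i, j] *= -1
        new_cost = abs(nn_energy(spins) - target)
        if new_cost <= cost:
            cost = new_cost
        else:
            spins[i, j] *= -1                          # revert the flip
```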
Effect of sample initial magnetic field on the metal magnetic memory NDT result
NASA Astrophysics Data System (ADS)
Moonesan, Mahdi; Kashefi, Mehrdad
2018-08-01
One of the major concerns regarding the use of the Metal Magnetic Memory (MMM) technique is the complex effect of residual magnetization on the output signals. The present study investigates the influence of residual magnetic field on stress-induced magnetization. To this end, various initial magnetic fields were induced on a low carbon steel sample, and for each level of residual magnetic field the sample was subjected to a set of 4-point bending tests, and the corresponding MMM signals were collected from the surface of the bent sample using a tailored metal magnetic memory scanning device. Results showed a strong correlation between the sample's residual magnetic field and its corresponding level of stress-induced magnetic field. It was observed that the sample magnetic field increases with applied bending stress as long as the initial residual magnetic field is low (i.e. <117 mG), but starts decreasing at higher levels of initial residual magnetic field. In addition, the effect of bending stress on the MMM output of a notched sample was investigated. The result, again, showed that MMM signals exhibit a drop at the stress concentration zone when the sample has a high level of initial residual magnetic field.
NASA Astrophysics Data System (ADS)
Serafini, John; Hossain, A.; James, R. B.; Guziewicz, M.; Kruszka, R.; Słysz, W.; Kochanowska, D.; Domagala, J. Z.; Mycielski, A.; Sobolewski, Roman
2017-07-01
We present our studies on both photoconductive (PC) and electro-optic (EO) responses of (Cd,Mg)Te single crystals. In an In-doped Cd0.92Mg0.08Te single crystal, subpicosecond electrical pulses were optically generated via a PC effect, coupled into a transmission line, and, subsequently, detected using an internal EO sampling scheme, all in the same (Cd,Mg)Te material. For photo-excitation and EO sampling, we used femtosecond optical pulses generated by the same Ti:sapphire laser with wavelengths of 410 and 820 nm, respectively. The shortest transmission line distance between the optical excitation and EO sampling points was 75 μm. By measuring the transient waveforms at different distances from the excitation point, we calculated the transmission-line complex propagation factor, as well as the THz frequency attenuation factor and the propagation velocity, all of which allowed us to reconstruct the electromagnetic transient generated directly at the excitation point, showing that the original PC transient was subpicosecond in duration with a fall time of ˜500 fs. Finally, the measured EO retardation, together with the amount of the electric-field penetration, allowed us to determine the magnitude of the internal EO effect in our (Cd,Mg)Te crystal. The obtained THz-frequency EO coefficient was equal to 0.4 pm/V, which is at the lower end among the values reported for CdTe-based ternaries, apparently due to misorientation of the tested crystal, which resulted in non-optimal EO measurement conditions.
Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H
2014-01-01
A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
Serafini, John; Hossain, A.; James, R. B.; ...
2017-07-03
We present our studies on both photoconductive (PC) and electro-optic (EO) responses of (Cd,Mg)Te single crystals. In an In-doped Cd0.92Mg0.08Te single crystal, subpicosecond electrical pulses were optically generated via a PC effect, coupled into a transmission line, and, subsequently, detected using an internal EO sampling scheme, all in the same (Cd,Mg)Te material. For photo-excitation and EO sampling, we used femtosecond optical pulses generated by the same Ti:sapphire laser with wavelengths of 410 and 820 nm, respectively. The shortest transmission line distance between the optical excitation and EO sampling points was 75 μm. By measuring the transient waveforms at different distances from the excitation point, we calculated the transmission-line complex propagation factor, as well as the THz frequency attenuation factor and the propagation velocity, all of which allowed us to reconstruct the electromagnetic transient generated directly at the excitation point, showing that the original PC transient was subpicosecond in duration with a fall time of ~500 fs. Finally, the measured EO retardation, together with the amount of the electric-field penetration, allowed us to determine the magnitude of the internal EO effect in our (Cd,Mg)Te crystal. The obtained THz-frequency EO coefficient was equal to 0.4 pm/V, which is at the lower end among the values reported for CdTe-based ternaries, due to a twinned structure and misalignment of the tested (Cd,Mg)Te crystal.
On an Integral with Two Branch Points
ERIC Educational Resources Information Center
de Oliveira, E. Capelas; Chiacchio, Ary O.
2006-01-01
The paper considers a class of real integrals performed by using a convenient integral in the complex plane. A complex integral containing a multi-valued function with two branch points is transformed into another integral containing a pole and a unique branch point. As a by-product we obtain a new class of integrals which can be calculated in a…
Development of a mass spectrometer system for the measurement of inert gases in meteorites
NASA Technical Reports Server (NTRS)
Palma, R. L.
1983-01-01
The study of the inert gases in meteorites has provided many clues as to the origin and evolution of the solar system. Particularly crucial and complex are the gases krypton and xenon. To accurately measure the isotopic compositions of these gases requires a mass spectrometer of high sensitivity and resolution. A previously unused and largely untested mass spectrometer system was brought to the point where it was ready for routine sample analyses. This involved, among other things, focusing the ion beam for optimal peak shape and sensitivity, documenting the instrument's response to a series of characteristic tests such as multiplier gain checks, and interfacing the instrument to a computer to run the sample analyses. Following this testing and setting up, three iron meteorite samples were to be analyzed for argon, krypton, and xenon. The three samples were shown in prior work to possibly contain primordial heavy inert gases. Although these analyses have not yet been carried out, it is anticipated that they will be completed in the near future.
Integrated Blood Barcode Chips
Fan, Rong; Vermesh, Ophir; Srivastava, Alok; Yen, Brian K.H.; Qin, Lidong; Ahmad, Habib; Kwong, Gabriel A.; Liu, Chao-Chao; Gould, Juliane; Hood, Leroy; Heath, James R.
2008-01-01
Blood comprises the largest version of the human proteome [1]. Changes of plasma protein profiles can reflect physiological or pathological conditions associated with many human diseases, making blood the most important fluid for clinical diagnostics [2-4]. Nevertheless, only a handful of plasma proteins are utilized in routine clinical tests. This is due to a host of reasons, including the intrinsic complexity of the plasma proteome [1], the heterogeneity of human diseases and the fast kinetics associated with protein degradation in sampled blood [5]. Simple technologies that can sensitively sample large numbers of proteins over broad concentration ranges, from small amounts of blood, and within minutes of sample collection, would assist in solving these problems. Herein, we report on an integrated microfluidic system, called the Integrated Blood Barcode Chip (IBBC). It enables on-chip blood separation and the rapid measurement of a panel of plasma proteins from small quantities of blood samples including a fingerprick of whole blood. This platform holds potential for inexpensive, non-invasive, and informative clinical diagnoses, particularly, for point-of-care. PMID:19029914
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daghbouj, N.; Faculté des Sciences de Monastir, Université de Monastir, Monastir; Cherkashin, N., E-mail: nikolay.cherkashin@cemes.fr
2016-04-07
Hydrogen and helium co-implantation is nowadays used to efficiently transfer thin Si layers and fabricate silicon on insulator wafers for the microelectronic industry. The synergy between the two implants, which is reflected in the dramatic reduction of the total fluence needed to fracture silicon, has been reported to be strongly influenced by the implantation order. Contradictory conclusions on the mechanisms involved in the formation and thermal evolution of defects and complexes have been drawn. In this work, we have experimentally studied in detail the characteristics of Si samples co-implanted with He and H, comparing the defects which are formed following each implantation and after annealing. We show that the second implant always ballistically destroys the stable defects and complexes formed after the first implant and that the redistribution of these point defects among new complexes drives the final difference observed in the samples after annealing. When H is implanted first, He precipitates in the form of nano-bubbles and agglomerates within H-related platelets and nano-cracks. When He is implanted first, the whole He fluence is ultimately used to pressurize H-related platelets which quickly evolve into micro-cracks and surface blisters. We provide detailed scenarios describing the atomic mechanisms involved during and after co-implantation and annealing which well explain our results and the reasons for the apparent contradictions reported at the state of the art.
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam.
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C
2016-12-26
The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the low atmosphere (deep turbulence or strong turbulence) with high density data samples. In contrast with the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of a plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction process respectively. As a result, we show that the plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof of concept experiment, we show that adaptive optics (AO) phase correction can be instantaneously achieved without going through a phase reconstruction process under the concept of plenoptic mapping. The plenoptic mapping technology has high potential for applications in imaging, free space optical (FSO) communication and directed energy (DE) where atmospheric turbulence distortion needs to be compensated.
Letter Report: Stable Hydrogen and Oxygen Isotope Analysis of B-Complex Perched Water Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Brady D.; Moran, James J.; Nims, Megan K.
Fine-grained sediments associated with the Cold Creek Unit at Hanford have caused the formation of a perched water aquifer in the deep vadose zone at the B Complex area, which includes waste sites in the 200-DV-1 Operable Unit and the single-shell tank farms in Waste Management Area B-BX-BY. High levels of contaminants, such as uranium, technetium-99, and nitrate, make this aquifer a continuing source of contamination for the groundwater located a few meters below the perched zone. Analysis of deuterium (2H) and oxygen-18 (18O) was performed on nine perched water samples from three different wells. Samples represent time points from hydraulic tests performed on the perched aquifer using the three wells. The isotope analyses showed that the perched water had δ2H and δ18O ratios consistent with the regional meteoric water line, indicating that local precipitation events at the Hanford site likely account for recharge of the perched water aquifer. Data from the isotope analysis can be used along with pumping and recovery data to help understand the perched water dynamics related to aquifer size and hydraulic control of the aquifer in the future.
Statistical assessment on a combined analysis of GRYN-ROMN-UCBN upland vegetation vital signs
Irvine, Kathryn M.; Rodhouse, Thomas J.
2014-01-01
As of 2013, Rocky Mountain and Upper Columbia Basin Inventory and Monitoring Networks have multiple years of vegetation data, Greater Yellowstone Network has three years of vegetation data, and monitoring is ongoing in all three networks. Our primary objective is to assess whether a combined analysis of these data, aimed at exploring correlations with climate and weather data, is feasible. We summarize the core survey design elements across protocols and point out the major statistical challenges for a combined analysis at present. The dissimilarity in response designs between the ROMN and UCBN-GRYN network protocols presents a statistical challenge that has not yet been resolved. However, the UCBN and GRYN data implement a similar response design and are therefore compatible; a combined analysis of these two networks is feasible and will be pursued in future. When data collected by different networks are combined, the merged dataset is (likely) described by a complex survey design, i.e., one characterized by unequal probability sampling, varying stratification, and clustering (see Lohr 2010, Chapter 7, for a general overview). Statistical analysis of complex survey data requires modifications to standard methods, one of which is to include survey design weights within a statistical model. We focus on this issue for a combined analysis of upland vegetation from these networks, leaving other topics for future research. We conduct a simulation study on the possible effects of equal versus unequal probability selection of points on parameter estimates of temporal trend using available packages within the R statistical computing environment. We find that, as written, using lmer or lm for trend detection in a continuous response, and clm and clmm for visually estimated cover classes, with "raw" GRTS design weights specified for the weight argument, leads to substantially different results and/or computational instability. However, when only fixed effects are of interest, the survey package (svyglm and svyolr) may be suitable for a model-assisted analysis of trend. We provide possible directions for future research into combined analysis for ordinal and continuous vital sign indicators.
DuBois, Debra C; Piel, William H; Jusko, William J
2008-01-01
High-throughput data collection using gene microarrays has great potential as a method for addressing the pharmacogenomics of complex biological systems. Similarly, mechanism-based pharmacokinetic/pharmacodynamic modeling provides a tool for formulating quantitative testable hypotheses concerning the responses of complex biological systems. As the response of such systems to drugs generally entails cascades of molecular events in time, a time series design provides the best approach to capturing the full scope of drug effects. A major problem in using microarrays for high-throughput data collection is sorting through the massive amount of data in order to identify probe sets and genes of interest. Due to its inherent redundancy, a rich time series containing many time points and multiple samples per time point allows for the use of less stringent criteria of expression, expression change and data quality for initial filtering of unwanted probe sets. The remaining probe sets can then become the focus of more intense scrutiny by other methods, including temporal clustering, functional clustering and pharmacokinetic/pharmacodynamic modeling, which provide additional ways of identifying the probes and genes of pharmacological interest. PMID:15212590
Ultrasonic emissions during ice nucleation and propagation in plant xylem.
Charrier, Guillaume; Pramsohler, Manuel; Charra-Vaskou, Katline; Saudreau, Marc; Améglio, Thierry; Neuner, Gilbert; Mayr, Stefan
2015-08-01
Ultrasonic acoustic emission analysis enables nondestructive monitoring of damage in dehydrating or freezing plant xylem. We studied acoustic emissions (AE) in freezing stems during ice nucleation and propagation, by combining acoustic and infrared thermography techniques and controlling the ice nucleation point. Ultrasonic activity in freezing samples of Picea abies showed two distinct phases: the first on ice nucleation and propagation (up to 50 AE s(-1); inversely proportional to the distance to the ice nucleation point), and the second (up to 2.5 AE s(-1)) after dissipation of the exothermal heat. Identical patterns were observed in other conifer and angiosperm species. The complex AE patterns are explained by the low water potential of ice at the ice-liquid interface, which induced numerous and strong signals. Ice propagation velocities were estimated via AE (during the first phase) and infrared thermography. Acoustic activity ceased before the second phase, probably because the exothermal heating and the volume expansion of ice caused decreasing tensions. Results indicate cavitation events at the ice front leading to AE. Ultrasonic emission analysis enabled new insights into the complex process of xylem freezing and might be used to monitor ice propagation in natura. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
A Practical Philosophy of Complex Climate Modelling
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...
2016-05-10
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action ("Lefschetz thimble"). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) for reconstructing independent N×N resolution points would actually require a hologram made up of N×N sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation can be reduced by using an interpolation technique for reconstructed image points. We have made a mosaic hologram which consists of K×K subholograms with N×N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK×NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK×NK sample points is synthesized from K×K subholograms which are successively calculated from the data of N×N sample points and also successively plotted.
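The decomposition described above, an NK×NK-point hologram synthesized from K×K subholograms, each an N×N transform of the sample data multiplied by a weighting factor, is, in signal-processing terms, an interleaved evaluation of a zero-padded DFT. A numerical sketch of that bookkeeping follows; the weighting factor here is the linear phase ramp implied by the interleaving, which may differ from the weighting used in the paper.

```python
import numpy as np

N, K = 8, 4                       # N x N samples, K x K subholograms
rng = np.random.default_rng(3)
f = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

n = np.arange(N)
H = np.zeros((N * K, N * K), dtype=complex)
for b in range(K):                # subhologram offsets along each axis
    for d in range(K):
        # Weighting factor: linear phase ramp for offset (b, d) on the fine grid.
        w = np.exp(-2j * np.pi * b * n / (N * K))[:, None] \
          * np.exp(-2j * np.pi * d * n / (N * K))[None, :]
        sub = np.fft.fft2(f * w)              # one N x N subhologram
        H[b::K, d::K] = sub                   # interleave onto the NK x NK grid

# The mosaic equals the DFT of the zero-padded N x N data on the NK x NK grid.
assert np.allclose(H, np.fft.fft2(f, s=(N * K, N * K)))
```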
NASA Astrophysics Data System (ADS)
Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang
2016-11-01
Compared to remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy of identifying destroyed buildings. However, after an earthquake damaged buildings show so many different characteristics that the most commonly used pre-processing methods cannot distinguish tree points from damaged-building points. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore methods to distinguish between the two. We propose a new method that searches a spatial neighbourhood around each point and computes the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. By human-computer interaction, we select point clouds of typical undamaged buildings, collapsed buildings and trees as samples from airborne LiDAR point cloud data acquired after the 2010 MW 7.0 Haiti earthquake, determine through testing the R value that distinguishes trees from buildings, and apply this R value to test areas. The experimental results show that the proposed method can distinguish building points (undamaged and damaged) from tree points effectively, but is limited in areas where buildings are varied, damage is complex and trees are dense, so further improvement is needed.
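The neighbourhood ratio described above can be computed directly from per-point return counts with a k-d tree. A minimal sketch, assuming each LiDAR point carries x, y, z coordinates and a num_returns attribute; the search radius and the decision threshold on R are illustrative, not the values calibrated in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_tree_points(xyz, num_returns, radius=2.0, r_threshold=0.3):
    """Label points as tree (True) or building (False) from the ratio R of
    neighbours whose number of returns per pulse exceeds 1."""
    tree = cKDTree(xyz)
    multi = num_returns > 1
    labels = np.zeros(len(xyz), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(xyz, r=radius)):
        R = multi[nbrs].mean() if nbrs else 0.0
        labels[i] = R > r_threshold   # vegetation yields many multi-return pulses
    return labels

# Toy usage with synthetic points and return counts.
rng = np.random.default_rng(4)
xyz = rng.uniform(0, 50, size=(1000, 3))
num_returns = rng.integers(1, 4, size=1000)
is_tree = classify_tree_points(xyz, num_returns)
```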
NASA Astrophysics Data System (ADS)
Agrò, Alessandro; Zanella, Elena; Le Pennec, Jean-Luc; Temel, Abidin
2017-04-01
Pyroclastic flow deposits, known as ash-flow tuffs or ignimbrites, are invaluable materials for paleomagnetic studies, with many applications for geological and tectonic purposes. However, little attention has been paid to evaluating the consistency and reliability of the paleomagnetic data when results are obtained on a single volcanic unit with uneven magnetic mineralogy. In this work we investigate this issue by concentrating on the Kızılkaya ignimbrite, the youngest large-volume unit of the Neogene ignimbrite sequence of the Central Anatolian Volcanic Province in Turkey, providing evidence of significant magnetic heterogeneities in ignimbrite deposits (magnetic mineralogy, susceptibility, magnetic remanence, coercivity, etc.) and emphasizing the importance of a stratigraphic sampling strategy for this type of volcanic rock in order to obtain reliable paleomagnetic data. Six sections were sampled at different stratigraphic heights within the devitrified portion of the ignimbrite. Isothermal remanence measurements point to low-Ti titanomagnetite as the main magnetic carrier at all sites; at some sites, the occurrence of oxidized Ti-magnetite and hematite is revealed. The bulk susceptibility (km) decreases vertically at two out of six sections: its value for the topmost samples is commonly one order of magnitude lower than that of the samples at the base. In most cases, low km values relate to high coercivity of remanence (BCR) values, which range from 25 to > 400 mT, and to low S-ratios (measured at 0.3 T) between 0.28 and 0.99. These data point to the occurrence of oxidized magnetic phases. We therefore consider the km parameter a reliable proxy to check the oxidation stage of the ignimbrite and to detect the presence of oxidized Ti-magnetite and hematite within the deposit. The characteristic remanent magnetization is determined after stepwise thermal and AF demagnetization and is clearly isolated by principal component analysis at most sites. For these sites, the site-mean paleomagnetic direction is consistent with data from the literature. At a few other sites, the remanence is more complex: the direction moves along a great circle during demagnetization and no stable end-point is reached. The occurrence of oxidized Ti-magnetite or hematite, as well as of two remanence components with overlapping coercivity and blocking temperature spectra, suggests that the Kızılkaya ignimbrite acquired first a thermal remanent magnetization and then, during the final cooling or a short time later, a secondary remanent magnetization component, which is interpreted as a CRM acquired during post-emplacement devitrification processes. Although the Kızılkaya ignimbrite is a single cooling unit, its magnetic properties vary substantially both laterally and vertically within the deposit. The Kızılkaya case shows that thick pyroclastic deposits should be sampled using a stratigraphic approach, at different sites and at different stratigraphic heights at each individual sampling location; otherwise, under-sampling may significantly affect the paleomagnetic results. When sampling must be performed within a short time or on very poorly preserved deposits, we recommend drilling the lower-central portion in the most strongly welded and devitrified facies. Such a sampling strategy avoids complications arising from the potential presence of a pervasive secondary CRM masking the original ChRM.
NASA Astrophysics Data System (ADS)
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-01
In this paper, two salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both combine with uranyl to form stable complexes, [UO2(2+)-Sal1] and [UO2(2+)-Sal2]. [UO2(2+)-Sal1] was used as a ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO2(2+)-Sal2] was used as a catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO2(2+)-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which decreases the RF intensity of PRY, was studied. The decrease in RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was thus developed for the determination of trace uranium(VI) after dCPE. The combination of the photocatalytic RF technique and the dCPE procedure endows the method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve was linear from 0.067 to 6.57 ng mL-1; the linear regression equation was ΔF = 438.0 c (ng mL-1) + 175.6 with a correlation coefficient r = 0.9981. The limit of detection was 0.066 ng mL-1. The proposed method was successfully applied to the separation and determination of uranium in real samples with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed.
Active Control of Acoustic Field-of-View in a Biosonar System
Yovel, Yossi; Falk, Ben; Moss, Cynthia F.; Ulanovsky, Nachum
2011-01-01
Active-sensing systems abound in nature, but little is known about systematic strategies that are used by these systems to scan the environment. Here, we addressed this question by studying echolocating bats, animals that have the ability to point their biosonar beam to a confined region of space. We trained Egyptian fruit bats to land on a target, under conditions of varying levels of environmental complexity, and measured their echolocation and flight behavior. The bats modulated the intensity of their biosonar emissions, and the spatial region they sampled, in a task-dependent manner. We report here that Egyptian fruit bats selectively change the emission intensity and the angle between the beam axes of sequentially emitted clicks, according to the distance to the target, and depending on the level of environmental complexity. In so doing, they effectively adjusted the spatial sector sampled by a pair of clicks—the "field-of-view." We suggest that the exact point within the beam that is directed towards an object (e.g., the beam's peak, maximal slope, etc.) is influenced by three competing task demands: detection, localization, and angular scanning—where the third factor is modulated by field-of-view. Our results suggest that lingual echolocation (based on tongue clicks) is in fact much more sophisticated than previously believed. They also reveal a new parameter under active control in animal sonar—the angle between consecutive beams. Our findings suggest that acoustic scanning of space by mammals is highly flexible and modulated much more selectively than previously recognized. PMID:21931535
High-speed holographic system for full-field transient vibrometry of the human tympanic membrane
NASA Astrophysics Data System (ADS)
Dobrev, I.; Harrington, E. J.; Cheng, T.; Furlong, C.; Rosowski, J. J.
2014-07-01
Understanding of the human hearing process requires quantification of the transient response of the human ear, and of the human tympanic membrane (TM or eardrum) in particular. Current state-of-the-art medical methods for quantifying the transient acousto-mechanical response of the TM provide only averaged acoustic information or local information at a few points. This may be insufficient to fully describe the complex patterns unfolding across the full surface of the TM. Existing engineering systems for full-field nanometer measurements of transient events, typically based on holographic methods, constrain the maximum sampling speed and/or require complex experimental setups. We have developed and implemented a new high-speed (i.e., > 40 Kfps) holographic system (HHS) with a hybrid spatio-temporal local correlation phase sampling method that allows quantification of the full-field nanometer transient (i.e., > 10 kHz) displacement of the human TM. The temporal accuracy and resolution of the HHS are validated against a laser Doppler vibrometer (LDV) on both artificial membranes and human TMs. The high temporal (i.e., < 24 μs) and spatial (i.e., > 100k data points) resolution of our HHS enables simultaneous measurement of the time waveform across the full surface of the TM. These capabilities allow quantification of spatially-dependent motion parameters such as energy propagation delays and surface wave speeds, which can be used to infer local material properties across the surface of the TM. The HHS could provide a new tool for the investigation of the auditory system, with applications in medical research, in-vivo clinical diagnosis, and hearing-aid design.
Complex Seismic Anisotropy at the Edges of a Very-low Velocity Province in the Lowermost Mantle
NASA Astrophysics Data System (ADS)
Wang, Y.; Wen, L.
2005-12-01
A prominent very-low velocity province (VLVP) in the lowermost mantle has been revealed and extensively mapped out in recent seismic studies (e.g., Wang and Wen, 2004). Seismic evidence unambiguously indicates that the VLVP is compositionally distinct, and its seismic structure can be best explained by partial melting driven by a compositional change produced in the early Earth's history (Wen, 2001; Wen et al., 2001; Wang and Wen, 2004). In this presentation, we study the seismic anisotropic behavior inside the VLVP and its surrounding area using SKS and SKKS waveform data. We collect 272 deep earthquakes recorded by more than 80 stations in the Kaapvaal seismic array in southern Africa from 1997 to 1999. Based on data quality, we choose SKS and SKKS waveform data for 16 earthquakes to measure the anisotropic parameters, the fast polarization direction and the splitting time, using the method of Silver and Chan (1991). A total of 162 high-quality measurements are obtained based on statistical analysis of the shear wave splitting results. The obtained anisotropy exhibits different patterns for the SKS and SKKS phases sampling inside the VLVP and at the edges of the VLVP. When the SKS and SKKS phases sample inside the VLVP, their fast polarization directions exhibit a pattern that strongly correlates with stations, gradually changing from 11°N to 80°N across the seismic array from south to north and rotating back to the north direction over short distances for several northernmost stations. The anisotropy pattern obtained from the analysis of the SKKS phases is the same as that from the SKS phases. However, when the SKS and SKKS phases sample at the edges of the VLVP, the measured anisotropy exhibits a very complex pattern. The obtained fast polarization directions change rapidly over a small distance, and they no longer correlate with stations; the measurements obtained from the SKS analysis also differ from those from the SKKS analysis. Because the SKS and SKKS phases have similar propagation paths in the lithosphere beneath the array but different sampling points near the core-mantle boundary, anisotropy in the lithosphere should have a similar influence on both phases. Therefore, the similar anisotropy obtained from the SKS and SKKS phases sampling inside the VLVP, and its correlation with seismic stations, suggest that the observed anisotropy variation across the seismic array is mainly due to anisotropy in the lithosphere beneath the Kaapvaal seismic array, and that the interior of the VLVP is isotropic or weakly anisotropic. On the other hand, for the SKS and SKKS phases sampling at the edges of the VLVP, the observed complex anisotropy pattern and the lack of correlation between the results from the SKS and SKKS analyses indicate that part of that anisotropy has to originate from the lowermost mantle near the exit points of these phases at the core-mantle boundary, revealing a complex flow pattern at the edges of the VLVP.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform points sets and non-uniform points sets, first preprocessed by the adaptive sampling algorithm on a real world object were then tested using the local adjustment sampling method. The results show that the initial point sets when preprocessed by adaptive sampling using triangle patches, are moved the least amount of distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
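The recursive-subdivision idea behind the adaptive strategy can be sketched for a height-field surface: split a triangle whenever the surface deviates from the flat triangle by more than a tolerance, so that samples concentrate at edges and corners. The sketch below is schematic and assumes a simple midpoint-deviation error metric; the paper's CAD-oriented metric is likely different.

```python
import numpy as np

def surface(x, y):
    """Example height field with a sharp step edge (stand-in for a part surface)."""
    return np.where(x > 0.5, 1.0, 0.0)

def adaptive_sample(tri, tol=0.05, depth=0, max_depth=8, out=None):
    """Recursively subdivide a triangle (3 x 2 array of xy vertices) until the
    surface at its centroid is within tol of the flat-triangle estimate."""
    if out is None:
        out = []
    centroid = tri.mean(axis=0)
    z_flat = np.mean([surface(*v) for v in tri])   # planar estimate at centroid
    z_true = surface(*centroid)
    if depth >= max_depth or abs(z_true - z_flat) <= tol:
        out.append(np.append(centroid, z_true))    # accept one sample point
        return out
    mids = (tri + np.roll(tri, -1, axis=0)) / 2    # edge midpoints
    for child in (np.array([tri[0], mids[0], mids[2]]),
                  np.array([tri[1], mids[1], mids[0]]),
                  np.array([tri[2], mids[2], mids[1]]),
                  mids):
        adaptive_sample(child, tol, depth + 1, max_depth, out)
    return out

pts = adaptive_sample(np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]))
# Sample points cluster near the x = 0.5 step, mimicking edge-seeking behaviour.
```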
Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets
NASA Astrophysics Data System (ADS)
Levit, C.; Gazis, P. R.
2006-05-01
Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our ability to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample) the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied on several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
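A minimal sketch of the iterative idea follows, assuming a projected-gradient solver for the classical MVB problem, minimize w^H R w subject to w^H a = 1: each iteration costs one matrix-vector product, i.e. O(L^2), and the previous imaging point's weights seed the next point's iteration. The step size, iteration counts, and toy covariances are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def iterative_mvb_weights(R, a, w0=None, mu=0.01, n_iter=20):
    """Approximate the minimum-variance weights (min w^H R w s.t.
    w^H a = 1) by projected gradient descent, avoiding an explicit
    inversion of R. Warm-starting from a neighboring imaging point's
    weights (w0) is what cuts the iteration count."""
    w = a / (a.conj() @ a) if w0 is None else w0.astype(complex)
    for _ in range(n_iter):
        w = w - mu * (R @ w)                              # gradient step on w^H R w
        w = w + a * (1 - a.conj() @ w) / (a.conj() @ a)   # re-impose w^H a = 1
    return w

# toy use: neighboring points have similar covariance, so the previous
# solution seeds the next one and far fewer iterations are needed
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 200))
R1 = X @ X.T / 200 + 1e-3 * np.eye(16)
a = np.ones(16)
w_prev = iterative_mvb_weights(R1, a)
R2 = R1 + 0.01 * np.eye(16)                               # nearby imaging point
w_next = iterative_mvb_weights(R2, a, w0=w_prev, n_iter=5)
```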
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.R. Jr.
1984-04-01
Reservoir characterization of Mesaverde meanderbelt sandstones is used to determine directional continuity of permeable zones. A 500-m (1600-ft) wide fluvial meanderbelt in the Mesaverde Group is exposed as laterally continuous 3-10-m (10-33-ft) high sandstone cliffs north of Rangely, Colorado. Forty-eight detailed measured sections through 3 point bar complexes oriented at right angles to the long axis of deposition and 1 complex oriented parallel to deposition were prepared. Sections were tied together by detailed sketches delineating and tracing major bounding surfaces such as scours and clay drapes. These complexes contain 3 to 8 multilateral sandstone packages separated by 5-20-cm (2-8-in.) interbedded siltstone and shale beds. Component facies are point bars, crevasse splays, chute bars, and floodplain/overbank deposits. Two types of lateral accretion surfaces are recognized in the point bar facies. Gently dipping lateral accretion surfaces contain fining-upward sandstone packages in which large-scale trough cross-bedding at the base grades upward into ripples and plane beds. Steeply dipping lateral accretion surfaces enclose beds characterized by climbing ripple cross-laminations. Bounding surfaces draped by shale lags can seal vertically stacked point bars from reservoir communication. Scoured boundaries allow communication in some stacked point bars. Crevasse splays showing climbing ripples form tongues of very fine-grained sandstone which flank point bars. Chute channels commonly cut upper point bar surfaces at their downstream end. Chute facies are upward-fining with small-scale troughs and common dewatering structures. Siltstones and shales underlie the point bar complexes and completely encase the meanderbelt system. Bounding surfaces at the base of the complexes are erosional and contain large shale rip-up clasts.
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
NASA Astrophysics Data System (ADS)
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, factorial design of experiments with missing points). In such cases the size of the data set can be very large, and one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. We also introduce a regularization procedure that takes into account anisotropy of the data set and avoids degeneracy of the regression model.
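The kind of structure such algorithms exploit can be shown in a few lines. For data on a two-factor Cartesian product grid, the kernel matrix is a Kronecker product, so the usual O(n^3) GP solve collapses to eigendecompositions of the small per-factor kernels. The toy grid below illustrates this trick only; it is not the authors' exact algorithm, which additionally handles missing points and regularization.

```python
import numpy as np

def rbf(x, ls=0.2):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# On a product grid the kernel is K = K1 kron K2, so we never form the
# 1200 x 1200 matrix: eigendecompose the 30 x 30 and 40 x 40 factors.
x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)   # n = 1200 points
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
gen = np.random.default_rng(1)
Y = np.sin(3 * X1) * np.cos(2 * X2) + 0.05 * gen.standard_normal(X1.shape)

K1, K2, noise = rbf(x1), rbf(x2), 1e-2
l1, V1 = np.linalg.eigh(K1)
l2, V2 = np.linalg.eigh(K2)
lam = np.outer(l1, l2) + noise          # eigenvalues of K1 kron K2 + noise*I
A = V1.T @ Y @ V2                       # rotate data into the joint eigenbasis
Alpha = V1 @ (A / lam) @ V2.T           # alpha = (K + noise*I)^{-1} y
mean = K1 @ Alpha @ K2.T                # posterior mean on the grid
```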
Relationship between road traffic noisescape and urban form in Hong Kong.
Lam, Kin-Che; Ma, Weichun; Chan, Pak Kin; Hui, Wing Chi; Chung, King Lam; Chung, Yi-tak Teresa; Wong, Chun Yin; Lin, Hui
2013-12-01
This paper reports on a study which explored the possible relationship between road traffic noisescape and urban form in Hong Kong. A total of 212 residential complexes from 11 contrasting urban forms were sampled, and their noise levels assessed both at dwelling and neighbourhood scales by noise mapping. Its findings indicate that residential complexes with different urban forms have significantly different noisescape attributes. There is a strong correlation between the noise characteristics and morphological indicators at the dwelling scale. A less obstreperous noisescape is associated with urban forms with lower road and building densities, and with building arrangements which provide self-noise screening. These findings suggest that urban form is an influential determinant of the noisescape in the urban environment, and they point to the need to rethink the conventional approach to managing the urban acoustic environment.
Unraveling snake venom complexity with 'omics' approaches: challenges and perspectives.
Zelanis, André; Tashima, Alexandre Keiji
2014-09-01
The study of snake venom proteomes (venomics) has been experiencing a burst of reports; however, achieving comprehensive knowledge of the dynamic range of proteins present within a single venom and of the set of post-translational modifications (PTMs), as well as the lack of a comprehensive database of venom proteins, are among the main challenges in venomics research. The phenotypic plasticity of snake venom proteomes, together with their inherent toxin proteoform diversity, points to the need for integrative analysis in order to better understand their actual complexity. In this regard, such a systems venomics task should encompass the integration of data from transcriptomic and proteomic studies (especially the venom gland proteome), the identification of biological PTMs, and the estimation of artifactual proteomes and peptidomes generated by sample handling procedures. Copyright © 2014 Elsevier Ltd. All rights reserved.
Molecular diagnosis of α-thalassemia in a multiethnic population.
Gilad, Oded; Shemer, Orna Steinberg; Dgany, Orly; Krasnov, Tanya; Nevo, Michal; Noy-Lotan, Sharon; Rabinowicz, Ron; Amitai, Nofar; Ben-Dor, Shifra; Yaniv, Isaac; Yacobovich, Joanne; Tamary, Hannah
2017-06-01
α-Thalassemia, one of the most common genetic diseases, is caused by deletions or point mutations affecting one to four α-globin genes. Molecular diagnosis is important to prevent the most severe forms of the disease. However, the diagnosis of α-thalassemia is complex due to a high variability of the genetic defects involved, with over 250 described mutations. We summarize herein the findings of genetic analyses of DNA samples referred to our laboratory for the molecular diagnosis of α-thalassemia, along with a detailed clinical description. We utilized a diagnostic algorithm including Gap-PCR, to detect known deletions, followed by sequencing of the α-globin gene, to identify known and novel point mutations, and multiplex ligation-dependent probe amplification (MLPA) for the diagnosis of rare or novel deletions. α-Thalassemia was diagnosed in 662 of 975 samples referred to our laboratory. Most commonly found were deletions (75.3%, including two novel deletions previously described by us); point mutations comprised 25.4% of the cases, including five novel mutations. Our population included mostly Jews (of Ashkenazi and Sephardic origin) and Muslim Arabs, who presented with a higher rate of point mutations and hemoglobin H disease. Overall, we detected 53 different genotype combinations causing a spectrum of clinical phenotypes, from asymptomatic to severe anemia. Our work constitutes the largest group of patients with α-thalassemia originating in the Mediterranean whose clinical characteristics and molecular basis have been determined. We suggest a diagnostic algorithm that leads to an accurate molecular diagnosis in multiethnic populations. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Vinther, Kristina H; Tveskov, Claus; Möller, Sören; Auscher, Soren; Osmanagic, Armin; Egstrup, Kenneth
2017-06-01
Our aim was to investigate the association between premature atrial complexes and the risk of recurrent stroke or death in patients with ischemic stroke in sinus rhythm. In a prospective cohort study, we used 24-hour Holter recordings to evaluate premature atrial complexes in patients consecutively admitted with ischemic strokes. Excessive premature atrial complexes were defined as >14 premature atrial complexes per hour and 3 or more runs of premature atrial complexes per 24 hours. During follow-up, 48-hour Holter recordings were performed after 6 and 12 months. Among patients in sinus rhythm, the association between excessive premature atrial complexes and the primary end point of recurrent stroke or death was estimated in both crude and adjusted Cox proportional hazards models. We further evaluated excessive premature atrial complexes versus atrial fibrillation in relation to the primary end point. Of the 256 patients included, 89 had atrial fibrillation. Of the patients in sinus rhythm (n = 167), 31 had excessive premature atrial complexes. During a median follow-up of 32 months, 50 patients (30% of patients in sinus rhythm) had recurrent strokes (n = 20) or died (n = 30). In both crude and adjusted models, excessive premature atrial complexes were associated with the primary end point, but not with newly diagnosed atrial fibrillation. Compared with patients in atrial fibrillation, those with excessive premature atrial complexes had similarly high risks of the primary end point. In patients with ischemic stroke and sinus rhythm, excessive premature atrial complexes were associated with a higher risk of recurrent stroke or death. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute in a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the subsequent ratio-matching step identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to identify two internal sources of pollution directly related to terminal activity. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
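A schematic of the two-step screening (PCA to shortlist candidate sources, then ratio-matching of diagnostic concentration ratios) might look as follows; the simulated data, the choice of component, the chemical pairs, and the 20% tolerance are all illustrative assumptions rather than the authors' criteria.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# 81 sampling points x 28 chemicals, simulated stand-in concentrations
rng = np.random.default_rng(42)
X = rng.lognormal(size=(81, 28))
Z = StandardScaler().fit_transform(np.log(X))
scores = PCA(n_components=3).fit_transform(Z)
candidates = np.argsort(scores[:, 0])[-8:]   # extreme scores flag candidate sources

def ratio_match(sample, fingerprint, pairs, tol=0.2):
    """True if the sample reproduces the fingerprint's concentration
    ratios for each chemical pair within a relative tolerance."""
    for i, j in pairs:
        r_s, r_f = sample[i] / sample[j], fingerprint[i] / fingerprint[j]
        if abs(r_s - r_f) > tol * r_f:
            return False
    return True

fingerprint = X[candidates[0]]               # e.g. a suspected PAH profile
matches = [p for p in candidates if ratio_match(X[p], fingerprint, [(0, 1), (2, 3)])]
```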
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
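A stripped-down version of such a simulated annealing selection is sketched below, assuming a simple space-filling objective (mean distance from every candidate location to its nearest retained point) in place of the study's variation-weighted accuracy criterion; the cooling schedule and counts are illustrative.

```python
import numpy as np

def anneal_sample_points(coords, k, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
    """Pick k of the candidate soil sampling points by simulated
    annealing: propose swapping one retained point for an unused one,
    accept improvements always and worsenings with a temperature-
    dependent probability."""
    rng = np.random.default_rng(seed)
    n = len(coords)

    def cost(idx):
        d = np.linalg.norm(coords[:, None, :] - coords[None, idx, :], axis=2)
        return d.min(axis=1).mean()          # space-filling proxy objective

    current = rng.choice(n, size=k, replace=False)
    c_cur, t = cost(current), t0
    for _ in range(n_iter):
        cand = current.copy()
        cand[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), cand))
        c_new = cost(cand)
        if c_new < c_cur or rng.random() < np.exp((c_cur - c_new) / t):
            current, c_cur = cand, c_new
        t *= cooling
    return current

pts = np.random.default_rng(1).uniform(size=(200, 2))  # candidate locations
kept = anneal_sample_points(pts, k=40)
```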
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE), and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r^2 = 0.84), the two-sampling point model (0.5 and 8 h; r^2 = 0.93), the three-sampling point model (0.5, 2, and 8 h; r^2 = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r^2 = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90% confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two-, and three-sampling point models were not able to predict midazolam AUC accurately.
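The model-building step is ordinary multiple linear regression of the reference AUC on concentrations at the retained time points, validated with the bias and precision metrics named above. A minimal sketch with simulated stand-in profiles (the coefficients and noise are assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 288
C = rng.lognormal(mean=1.0, sigma=0.4, size=(n, 4))  # conc. at 0.5, 1, 2, 8 h
auc = C @ np.array([0.8, 1.1, 2.0, 5.5]) + rng.normal(0, 1, n)  # reference AUC

train, val = np.arange(45), np.arange(45, n)          # 45 build / 243 validate
X = np.column_stack([np.ones(len(train)), C[train]])
beta, *_ = np.linalg.lstsq(X, auc[train], rcond=None)

pred = np.column_stack([np.ones(len(val)), C[val]]) @ beta
err = pred - auc[val]
mpe = np.mean(err / auc[val]) * 100           # bias, %
mae = np.mean(np.abs(err / auc[val])) * 100   # precision, %
rmse = np.sqrt(np.mean(err ** 2))
```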
Jia, Jingjing; Li, Huajiao; Zhou, Jinsheng; Jiang, Meihui; Dong, Di
2018-03-01
Research on the price fluctuation transmission of the carbon trading pilot markets is of great significance for the establishment of China's unified carbon market and its development in the future. In this paper, the carbon market transaction prices of Beijing, Shanghai, Tianjin, Shenzhen, and Guangdong from December 29, 2013 to March 26, 2016 were selected as sample data. From the perspective of complex network theory, we construct a price fluctuation transmission network model of the five pilot carbon markets in China, with the purposes of analyzing the topological features of this network, including point intensity, weighted clustering coefficient, betweenness centrality, and community structure, and of elucidating the characteristics and transmission mechanism of price fluctuation in China's five pilot cities. The results for point intensity and weighted clustering coefficient show that carbon prices in the five markets generally remained stable and transmitted smoothly, and that price fragmentation is serious; at certain points, however, prices exhibit mass fluctuation phenomena. The result for betweenness centrality reflects that a small number of price fluctuations can control the whole market's carbon price transmission, and that price fluctuation evolves in an alternating manner. The study provides direction for the scientific management of the carbon price. Policy makers should take a positive role in promoting market activity, preventing the risks that may arise from mass trade and scientifically forecasting the volatility of trading prices, which will provide experience for the establishment of a unified carbon market in China.
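The three network measures named above are standard and easy to reproduce once price moves are symbolized into states. A small sketch with networkx, using a toy symbol sequence in place of the five markets' data (note that betweenness_centrality treats edge weights as distances, a simplification here):

```python
import networkx as nx

# symbolize daily moves (u/d/s = up/down/steady) into 3-day states and
# link consecutive states, weighting edges by transition counts
moves = "uudssduusdduussdudsu"
states = [moves[i:i + 3] for i in range(len(moves) - 2)]
G = nx.DiGraph()
for a, b in zip(states, states[1:]):
    w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
    G.add_edge(a, b, weight=w)

strength = dict(G.degree(weight="weight"))             # point intensity
clustering = nx.clustering(G.to_undirected(), weight="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")
```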
Thomas B. Lynch; Jeffrey H. Gove
2013-01-01
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
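The estimator in the first sentence can be written compactly; assuming the usual CHS formulation, the per-point estimate of cubic volume per unit area is the basal area factor times the sum of the tallied critical heights:

```latex
% \hat{V}: CHS estimate at one sample point; F = HPS basal area factor,
% h_{c,i} = critical height of the i-th tallied tree, m trees tallied.
\hat{V} = F \sum_{i=1}^{m} h_{c,i}
```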
Cho, Hui Hun; Kim, Si Hyun; Heo, Jun Hyuk; Moon, Young Eel; Choi, Young Hun; Lim, Dong Cheol; Han, Kwon-Hoon; Lee, Jung Heon
2016-06-21
We report the development of a colorimetric sensor that allows for the quantitative measurement of the acid content via acid-base titration in a single step. In order to create the sensor, we used a cobalt coordination system (Co-complex sensor) that changes from greenish blue colored Co(H2O)4(OH)2 to pink colored Co(H2O)6^2+ after neutralization. Greenish blue and pink are two complementary colors with a strong contrast. As a certain amount of acid is introduced to the Co-complex sensor, a portion of the greenish blue colored Co(H2O)4(OH)2 changes to pink colored Co(H2O)6^2+, producing a different color. As the ratio of greenish blue and pink in the Co-complex sensor is determined by the amount of neutralization reaction occurring between Co(H2O)4(OH)2 and an acid, the sensor produced a spectrum of green, yellow green, brown, orange, and pink colors depending on the acid content. In contrast, the color change appeared only beyond the end point for normal acid-base titration. When we mixed this Co-complex sensor with different concentrations of citric acid, tartaric acid, and malic acid, three representative organic acids in fruits, we observed distinct color changes for each sample. This color change could also be observed in real fruit juice. When we treated the Co-complex sensor with real tangerine juice, it generated diverse colors depending on the concentration of citric acid in each sample. These results provide a new angle on simple but quantitative measurements of analytes for on-site usage in various applications, such as in food, farms, and the drug industry.
Saukkoriipi, Annika; Bratcher, Holly B.; Bloigu, Aini; Juvonen, Raija; Silvennoinen-Kassinen, Sylvi; Peitso, Ari; Harju, Terttu; Vainio, Olli; Kuusi, Markku; Maiden, Martin C. J.; Leinonen, Maija; Käyhty, Helena; Toropainen, Maija
2012-01-01
The relationship between carriage and the development of invasive meningococcal disease is not fully understood. We investigated the changes in meningococcal carriage in 892 military recruits in Finland during a nonepidemic period (July 2004 to January 2006) and characterized all of the oropharyngeal meningococcal isolates obtained (n = 215) by using phenotypic (serogrouping and serotyping) and genotypic (porA typing and multilocus sequence typing) methods. For comparison, 84 invasive meningococcal disease strains isolated in Finland between January 2004 and February 2006 were also analyzed. The rate of meningococcal carriage was significantly higher at the end of military service than on arrival (18% versus 2.2%; P < 0.001). Seventy-four percent of serogroupable carriage isolates belonged to serogroup B, and 24% belonged to serogroup Y. Most carriage isolates belonged to the carriage-associated ST-60 clonal complex. However, 21.5% belonged to the hyperinvasive ST-41/44 clonal complex. Isolates belonging to the ST-23 clonal complex were cultured more often from oropharyngeal samples taken during the acute phase of respiratory infection than from samples taken at health examinations at the beginning and end of military service (odds ratio [OR], 6.7; 95% confidence interval [95% CI], 2.7 to 16.4). The ST-32 clonal complex was associated with meningococcal disease (OR, 17.8; 95% CI, 3.8 to 81.2), while the ST-60 clonal complex was associated with carriage (OR, 10.7; 95% CI, 3.3 to 35.2). These findings point to the importance of meningococcal vaccination for military recruits and also to the need for an efficacious vaccine against serogroup B isolates. PMID:22135261
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
Influence of processing conditions on point defects and luminescence centers in ZnO
NASA Astrophysics Data System (ADS)
Zhong, J.; Kitai, A. H.; Mascher, P.
1993-12-01
Positron lifetime spectroscopy and cathodoluminescence were employed to study luminescence centers in ZnO. The samples were high-purity polycrystalline ceramics sintered at temperatures ranging from 800 to 1400 °C for 2 to 40 h. Scanning electron microscopy shows that as annealing temperatures and/or times increase, the average grain size increases and can reach 30 microns for samples sintered at 1200 °C. At the same time, the positron bulk lifetime approaches theoretically estimated single-crystal values, while the integrated luminescence intensity increases significantly. A further increase of the sintering temperature beyond 1200 °C results in a decrease in the luminescence intensity, in good agreement with the only weak luminescence observed in single-crystalline material. The positron lifetime spectra clearly show the existence of a dominant vacancy-type defect, most likely a complex involving V(sub Zn), or the divacancy, V(sub Zn)V(sub O), independent of sample thermal history. The concentration of this center steadily decreases with increasing sintering temperatures. It is concluded that the yellow luminescence centers are related to charged zinc vacancies trapped in the grain boundary regions. We propose that the observed broadness of the spectra likely originates from the modification of the electronic configuration of the luminescence centers due to their complex environment. A direct connection between the positron and the luminescence results could not be established; instead, they appear to reflect two relatively independent aspects of the samples. It could be shown, however, that positron annihilation measurements can be used effectively to monitor the evolution of the microstructure of the samples, in good agreement with scanning electron micrographs.
Fiebig, Lukas; Laux, Ralf; Binder, Rudolf; Ebner, Thomas
2016-10-01
1. Liquid chromatography (LC)-high resolution mass spectrometry (HRMS) techniques have proved to be well suited for the identification of predicted and unexpected drug metabolites in complex biological matrices. 2. To efficiently discriminate between drug-related and endogenous matrix compounds, however, sophisticated postacquisition data mining tools, such as control comparison techniques, are needed. For preclinical absorption, distribution, metabolism and excretion (ADME) studies, which usually lack a placebo-dosed control group, the question arises how high-quality control data can be obtained using only a minimum number of control animals. 3. In the present study, the combination of LC-traveling wave ion mobility separation (TWIMS)-HRMS(E) and multivariate data analysis was used to study the polymer patterns of the frequently used formulation constituents polyethylene glycol 400 and polysorbate 80 in rat plasma and urine after oral and intravenous administration, respectively. 4. Complex peak patterns of both constituents were identified, underlining the general importance of a vehicle-dosed control group in ADME studies for control comparison. Furthermore, the detailed analysis of administration route, blood sampling time and gender influences on both the vehicle peak pattern and the endogenous matrix background revealed that high-quality control data are obtained when (i) control animals receive an intravenous dose of the vehicle, (ii) the blood sampling time point is the same for the analyte and control sample and (iii) analyte and control samples of the same gender are compared.
Healthcare teams as complex adaptive systems: Focus on interpersonal interaction.
Pype, Peter; Krystallidou, Demi; Deveugele, Myriam; Mertens, Fien; Rubinelli, Sara; Devisch, Ignaas
2017-11-01
The aim of this study is to test the feasibility of a tool to objectify the functioning of healthcare teams operating in the complexity zone, and to evaluate its usefulness in identifying areas for team quality improvement. We distributed the Complex Adaptive Leadership (CAL™) Organisational Capability Questionnaire (OCQ) to all members of one palliative care team (n=15) and to palliative care physicians in Flanders, Belgium (n=15). Group discussions were held on feasibility aspects and on the low-scoring topics. Data were analysed by calculating descriptive statistics (sum score, mean and standard deviation). The one-sample t-test was used to detect differences within each group. Both groups of participants reached mean scores ranging from good to excellent. The one-sample t-test showed statistically significant differences between participants' sum scores within each group (p<0.001). Group discussion led to suggestions for quality improvement, e.g. enhanced feedback strategies between team members. The questionnaire used in our study shows itself to be a feasible and useful instrument for the evaluation of the palliative care team's day-to-day operations and for identifying areas for quality improvement. The CAL™ OCQ is a promising instrument to evaluate the functioning of any healthcare team. A group discussion on the questionnaire scores can serve as a starting point to identify targets for quality improvement initiatives. Copyright © 2017 Elsevier B.V. All rights reserved.
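The analysis itself is a one-sample t-test on each group's sum scores; a minimal sketch with illustrative numbers (the scores and the reference value are assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

# hypothetical sum scores for one 15-member team, tested against an
# assumed reference value of 70
scores = np.array([74, 81, 69, 77, 85, 72, 79, 70, 83, 76, 71, 80, 78, 73, 82])
print(scores.sum(), scores.mean(), scores.std(ddof=1))  # descriptive statistics
t, p = stats.ttest_1samp(scores, popmean=70)
```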
Integrated fluorescence correlation spectroscopy device for point-of-care clinical applications
Olson, Eben; Torres, Richard; Levene, Michael J.
2013-01-01
We describe an optical system which reduces the cost and complexity of fluorescence correlation spectroscopy (FCS), intended to increase the suitability of the technique for clinical use. Integration of the focusing optics and sample chamber into a plastic component produces a design which is simple to align and operate. We validate the system by measurements on fluorescent dye, and compare the results to a commercial instrument. In addition, we demonstrate its application to measurements of concentration and multimerization of the clinically relevant protein von Willebrand factor (vWF) in human plasma. PMID:23847733
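The quantity an FCS measurement reports is the normalized intensity autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2; a direct, minimal estimator from a binned photon-count trace is sketched below (the binning and the Poisson stand-in trace are illustrative).

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Normalized FCS correlation G(tau) from a binned count trace;
    real instruments use multi-tau correlators, but the estimated
    quantity is the same."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    G = np.empty(max_lag)
    for k in range(1, max_lag + 1):
        G[k - 1] = np.mean(dI[:-k] * dI[k:]) / I.mean() ** 2
    return G

trace = np.random.default_rng(3).poisson(5.0, size=100_000)  # stand-in trace
G = fcs_autocorrelation(trace, max_lag=200)
```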
Detecting recurrence domains of dynamical systems by symbolic dynamics.
beim Graben, Peter; Hutt, Axel
2013-04-12
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
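A recurrence plot is itself one line of linear algebra: R_ij = 1 when states x_i and x_j fall within a ball of radius eps, the object whose checkerboard texture the proposed algorithm partitions. The sketch below builds this matrix; eps and the test trajectory are illustrative, and the grammar-based partitioning itself is not shown.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R_ij = 1 if ||x_i - x_j|| < eps;
    eps plays the role of the ball radius around sampling points."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:
        x = x.T                                  # univariate series -> column
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    return (d < eps).astype(int)

t = np.linspace(0, 8 * np.pi, 400)
R = recurrence_plot(np.column_stack([np.sin(t), np.cos(t)]), eps=0.3)
```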
NASA Technical Reports Server (NTRS)
Deutsch, A.; Buhl, D.; Brockmeyer, P.; Lakomy, R.; Flucks, M.
1992-01-01
Within the framework of the Sudbury project a considerable number of Sr-Nd isotope analyses were carried out on petrographically well-defined samples of different breccia units. Together with isotope data from the literature these data are reviewed under the aspect of a self-consistent impact model. The crucial point of this model is that the Sudbury Igneous Complex (SIC) is interpreted as a differentiated impact melt sheet without any need for an endogenic 'magmatic' component such as 'impact-triggered' magmatism or 'partial' impact melting of the crust and mixing with a mantle-derived magma.
Stanescu, T; Jaffray, D
2018-05-25
Magnetic resonance imaging is expected to play a more important role in radiation therapy given the recent developments in MR-guided technologies. MR images need to consistently show high spatial accuracy to facilitate RT-specific tasks such as treatment planning and in-room guidance. The present study investigates a new harmonic analysis method for the characterization of complex 3D fields derived from MR images affected by system-related distortions. An interior Dirichlet problem based on solving the Laplace equation with boundary conditions (BCs) was formulated for the case of a 3D distortion field. The second-order boundary value problem (BVP) was solved using a finite element method (FEM) for several quadric geometries, i.e., sphere, cylinder, cuboid, D-shaped, and ellipsoid. To stress-test the method and generalize it, the BVP was also solved for more complex surfaces such as a Reuleaux 9-gon and the MR imaging volume of a scanner featuring a high degree of surface irregularities. The BCs were formatted from reference experimental data collected with a linearity phantom featuring a volumetric grid structure. The method was validated by comparing the harmonic analysis results with the corresponding experimental reference fields. The harmonic fields were found to be in good agreement with the baseline experimental data for all geometries investigated. In the case of quadric domains, the percentages of sampling points with residual values larger than 1 mm were 0.5% and 0.2% for the axial components and vector magnitude, respectively. For the general case of a domain defined by the available MR imaging field of view, the reference data showed a peak distortion of about 12 mm, and 79% of the sampling points carried a distortion magnitude larger than 1 mm (tolerance intrinsic to the experimental data). The upper limits of the residual values after comparison with the harmonic fields showed max and mean of 1.4 mm and 0.25 mm, respectively, with only 1.5% of sampling points exceeding 1 mm. A novel harmonic analysis approach relying on finite element methods was introduced and validated for multiple volumes with surface shape functions ranging from simple to highly complex. Since a boundary value problem is solved, the method requires input data from only the surface of the desired domain of interest. It is believed that the harmonic method will facilitate (a) the design of new phantoms dedicated to the quantification of MR image distortions in large volumes and (b) an integrative approach of combining multiple imaging tests specific to radiotherapy into a single test object for routine imaging quality control. This article is protected by copyright. All rights reserved.
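The interior Dirichlet idea is that boundary values alone determine the harmonic field everywhere inside the domain. As a minimal stand-in for the paper's 3D FEM solver, the sketch below runs Jacobi iterations of the discrete Laplace equation on a 2D square with fixed boundary values, as might be applied to one component of a distortion field; the geometry, boundary data, and tolerances are illustrative.

```python
import numpy as np

def solve_laplace_dirichlet(boundary, n_iter=5000, tol=1e-6):
    """Jacobi iteration for the interior Dirichlet problem on a square
    grid: interior values relax to the average of their neighbors while
    boundary values stay fixed."""
    u = boundary.copy()
    interior = np.zeros_like(u, dtype=bool)
    interior[1:-1, 1:-1] = True
    for _ in range(n_iter):
        new = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        delta = np.abs(new - u)[interior].max()
        u[interior] = new[interior]
        if delta < tol:
            break
    return u

b = np.zeros((64, 64))
b[0, :] = 1.0            # measured distortion along one edge of the boundary
field = solve_laplace_dirichlet(b)
```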
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently avoids setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
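The panorama construction rests on the scanner's spherical geometry: each return's azimuth and elevation index a pixel, and any per-point attribute (range, intensity, normals, color) becomes an image layer. A minimal sketch follows, with angular resolutions as illustrative assumptions; a real scan's fixed angular increments define the grid directly.

```python
import numpy as np

def scan_to_panorama(points, values, az_res=0.005, el_res=0.005):
    """Map a terrestrial scan (x, y, z per point plus one attribute)
    onto a 2D panorama image via spherical coordinates."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    el = np.arcsin(z / r)                       # elevation
    col = ((az + np.pi) / az_res).astype(int)
    row = ((el - el.min()) / el_res).astype(int)
    img = np.full((row.max() + 1, col.max() + 1), np.nan)
    img[row, col] = values                      # one input layer, e.g. range
    return img

gen = np.random.default_rng(5)
pts = gen.normal(size=(10_000, 3)) + np.array([5.0, 0.0, 0.0])
range_layer = scan_to_panorama(pts, np.linalg.norm(pts, axis=1))
```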
Which skills and factors better predict winning and losing in high-level men's volleyball?
Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria
2013-09-01
The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.
Lateral Flow Immunoassays for Ebola Virus Disease Detection in Liberia.
Phan, Jill C; Pettitt, James; George, Josiah S; Fakoli, Lawrence S; Taweh, Fahn M; Bateman, Stacey L; Bennett, Richard S; Norris, Sarah L; Spinnler, David A; Pimentel, Guillermo; Sahr, Phillip K; Bolay, Fatorma K; Schoepp, Randal J
2016-10-15
Lateral flow immunoassays (LFIs) are point-of-care diagnostic assays that are designed for single use outside a formal laboratory, with in-home pregnancy tests being the best-known example. Although the LFI has some limitations over more-complex immunoassay procedures, such as reduced sensitivity and the potential for false-positive results when using complex sample matrices, the assay has the benefits of a rapid time to result and ease of use. These benefits make it an attractive option for obtaining rapid results in an austere environment. In an outbreak of any magnitude, a field-based rapid diagnostic assay would allow proper patient transport and for safe burials to be conducted without the delay caused by transport of samples between remote villages and testing facilities. Use of such point-of-care instruments in the ongoing Ebola virus disease (EVD) outbreak in West Africa would have distinct advantages in control and prevention of local outbreaks, but proper understanding of the technology and interpretation of results are important. In this study, a LFI, originally developed by the Naval Medical Research Center for Ebola virus environmental testing, was evaluated for its ability to detect the virus in clinical samples in Liberia. Clinical blood and plasma samples and postmortem oral swabs submitted to the Liberian Institute for Biomedical Research, the National Public Health Reference Laboratory for EVD testing, were tested and compared to results of real-time reverse transcription-polymerase chain reaction (rRT-PCR), using assays targeting Ebola virus glycoprotein and nucleoprotein. The LFI findings correlated well with those of the real-time RT-PCR assays used as benchmarks. Rapid antigen-detection tests such as LFIs are attractive alternatives to traditional immunoassays but have reduced sensitivity and specificity, resulting in increases in false-positive and false-negative results. An understanding of the strengths, weaknesses, and limitations of a particular assay lets the diagnostician choose the correct situation to use the correct assay and properly interpret the results. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
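The two-stage structure (coarse detection of candidate minima of the minimum-modulus eigenvalue, then local bound-constrained minimization) can be imitated in a few lines with NumPy and SciPy; the grid density, acceptance threshold, and toy matrix below are illustrative, and CCOMP's actual sub-algorithms are considerably more efficient.

```python
import numpy as np
from scipy.optimize import minimize

def min_mod_eig(z, matrix_fn):
    return np.abs(np.linalg.eigvals(matrix_fn(z))).min()

def find_roots(matrix_fn, re, im, n=60):
    """Scan |lambda_min| of A(z) on a rectangle, then refine each
    local minimum of the scan by bound-constrained minimization."""
    xs, ys = np.linspace(re[0], re[1], n), np.linspace(im[0], im[1], n)
    grid = np.array([[min_mod_eig(x + 1j * y, matrix_fn) for x in xs] for y in ys])
    roots = []
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if grid[i, j] == grid[i - 1:i + 2, j - 1:j + 2].min():  # candidate
                res = minimize(lambda p: min_mod_eig(p[0] + 1j * p[1], matrix_fn),
                               x0=[xs[j], ys[i]], bounds=[re, im])
                if res.fun < 1e-5:
                    roots.append(complex(res.x[0], res.x[1]))
    return roots

# toy determinantal equation det A(z) = 0 with roots at z = 1 and z = -2j
A = lambda z: np.array([[z - 1.0, 0.0], [0.0, z + 2.0j]])
roots = find_roots(A, re=(-3.0, 3.0), im=(-3.0, 3.0))
```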
Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira
2009-09-28
Simple and rapid pre-concentration techniques, viz. cloud point extraction (CPE) and solid phase extraction (SPE), were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L^-1 HNO3 in methanol. Total iAs in water samples was adsorbed onto titanium dioxide (TiO2); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The obtained results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.
Hierarchical Probabilistic Inference of Cosmic Shear
NASA Astrophysics Data System (ADS)
Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
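Assuming the textbook approach to such computations, the minimum number of counts needed to estimate a mean within a margin E at confidence level 1 - alpha follows from the pilot variance; this is illustrative of the calculation, not necessarily the authors' exact formula:

```latex
% s^2: variance estimated from the 82 pilot point counts; E: allowable
% error; t: Student's t quantile (solved iteratively, since t depends on n).
n \ge \left( \frac{t_{\alpha/2,\, n-1}\; s}{E} \right)^{2}
```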
Whiley, Harriet; Keegan, Alexandra; Fallowfield, Howard; Bentham, Richard
2014-01-01
Inhalation of potable water presents a potential route of exposure to opportunistic pathogens and hence warrants significant public health concern. This study used qPCR to detect opportunistic pathogens Legionella spp., L. pneumophila and MAC at multiple points along two potable water distribution pipelines. One used chlorine disinfection and the other chloramine disinfection. Samples were collected four times over the year to provide seasonal variation and the chlorine or chloramine residual was measured during collection. Legionella spp., L. pneumophila and MAC were detected in both distribution systems throughout the year and were all detected at a maximum concentration of 10^3 copies/mL in the chlorine disinfected system and 10^6, 10^3 and 10^4 copies/mL respectively in the chloramine disinfected system. The concentrations of these opportunistic pathogens were primarily controlled throughout the distribution network through the maintenance of disinfection residuals. At a dead-end and when the disinfection residual was not maintained, significant (p < 0.05) increases in concentration were observed when compared to the concentration measured closest to the processing plant in the same pipeline and sampling period. Total coliforms were not present in any water sample collected. This study demonstrates the ability of Legionella spp., L. pneumophila and MAC to survive the potable water disinfection process and highlights the need for greater measures to control these organisms along the distribution pipeline and at point of use. PMID:25046636
Gourlay-Francé, C; Bressy, A; Uher, E; Lorgeoux, C
2011-01-01
The occurrence and the partitioning of polycyclic aromatic hydrocarbons (PAHs) and seven metals (Al, Cd, Cr, Cu, Ni, Pb and Zn) were investigated in activated sludge wastewater treatment plants by means of passive and active sampling. Concentrations of total dissolved and particulate contaminants were determined in wastewater at several points across the treatment system by means of grab sampling. Truly dissolved PAHs were sampled by means of semipermeable membrane devices. Labile (inorganic and weakly complexed) dissolved metals were also sampled using the diffusive gradient in thin film technique. This study confirms the robustness and the validity of these two passive sampling techniques in wastewater. All contaminant concentrations decreased in wastewater along the treatment, although dissolved and labile concentrations sometimes increased for substances with less affinity for organic matter. Solid-liquid and dissolved organic matter/water partitioning constants were estimated. The high variability of both partitioning constants for a single substance and the poor relation between K(D) and K(OW) show that the binding capacities of particles and organic matter are not uniform within the treatment and that processes other than equilibrium sorption affect contaminant repartition and fate in wastewater.
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
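The metamodel half of the method can be sketched with off-the-shelf Gaussian processes: a kriging model of the low-fidelity (LF) code, passed through a polynomial scaling, serves as the trend for the high-fidelity (HF) model, and a second kriging model absorbs the residual. The functions, sample sizes, and quadratic scaling below are illustrative assumptions, and the adaptive sampling loop is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

lf = lambda x: np.sin(8 * x)                        # cheap model
hf = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x        # expensive model

X_lf = np.linspace(0, 1, 40)[:, None]               # many LF runs
X_hf = np.linspace(0, 1, 8)[:, None]                # few HF runs
gp_lf = GaussianProcessRegressor(RBF(0.1)).fit(X_lf, lf(X_lf.ravel()))

z = gp_lf.predict(X_hf)                             # LF prediction at HF points
A = np.column_stack([np.ones_like(z), z, z ** 2])   # polynomial scaling of trend
coef, *_ = np.linalg.lstsq(A, hf(X_hf.ravel()), rcond=None)

resid = hf(X_hf.ravel()) - A @ coef
gp_d = GaussianProcessRegressor(RBF(0.1), alpha=1e-8).fit(X_hf, resid)

def vf_predict(X):
    zt = gp_lf.predict(X)
    trend = np.column_stack([np.ones_like(zt), zt, zt ** 2]) @ coef
    return trend + gp_d.predict(X)                  # scaled LF trend + discrepancy
```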
Stough, Con; King, Rebecca; Papafotiou, Katherine; Swann, Phillip; Ogden, Edward; Wesnes, Keith; Downey, Luke A
2012-04-01
This study investigated the acute (3-h) and 24-h post-dose cognitive effects of oral 3,4-methylenedioxymethamphetamine (MDMA), d-methamphetamine, and placebo in a within-subject double-blind laboratory-based study in order to compare the effect of these two commonly used illicit drugs on a large number of recreational drug users. Sixty-one abstinent recreational users of illicit drugs comprised the participant sample, with 33 females and 28 males, mean age 25.45 years. The three testing sessions involved oral consumption of 100 mg MDMA, 0.42 mg/kg d-methamphetamine, or a matching placebo. The drug administration was counter-balanced, double-blind, and medically supervised. Cognitive performance was assessed during drug peak (3 h) and at 24 h post-dosing time-points. Blood samples were also taken to quantify the levels of drug present at the cognitive testing time-points. Blood concentrations of both methamphetamine and MDMA at drug peak samples were consistent with levels observed in previous studies. The major findings concern poorer performance in the MDMA condition at peak concentration for the trail-making measures and an index of working memory (trend level), and more accurate performance on a choice reaction task within the methamphetamine condition. Most of the differences in performance between the MDMA, methamphetamine, and placebo treatments diminished by the 24-h testing time-point, although some performance improvements subsisted for choice reaction time for the methamphetamine condition. Further research into the acute effects of amphetamine preparations is necessary to further quantify the acute disruption of aspects of human functioning crucial to complex activities such as attention, selective memory, and psychomotor performance.
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modelling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and of soil units and sample points for the respective single-level models. Additionally, three functions, the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity (with CPP) and spatiotemporal correlation (with [ARMA(1,1)]) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R^2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
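A two-level model of this kind can be expressed directly in standard mixed-model software. The sketch below fits random intercepts for sample points nested in soil units with statsmodels on simulated stand-in data; note that MixedLM does not provide the CPP variance function or the ARMA(1,1) correlation structure used in the paper, so only the nesting is shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulate NDVI-rainfall data for 5 soil units x 10 points x 12 years
rng = np.random.default_rng(11)
rows = []
for u in range(5):
    bu = rng.normal(0, 0.1)                    # soil-unit random effect
    for p in range(10):
        bp = rng.normal(0, 0.05)               # sample-point random effect
        rain = rng.uniform(200, 600, 12)
        ndvi = 0.2 + bu + bp + 0.0008 * rain + rng.normal(0, 0.02, 12)
        rows += [dict(soil=u, point=f"{u}-{p}", rain=r, ndvi=v)
                 for r, v in zip(rain, ndvi)]
df = pd.DataFrame(rows)

# random intercept per soil unit, plus a variance component for points
# nested within soil units
m = smf.mixedlm("ndvi ~ rain", df, groups="soil",
                re_formula="~1", vc_formula={"point": "0 + C(point)"}).fit()
print(m.summary())
```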
Effect of point defects on the amorphization of metallic alloys during ion implantation. [NiTi
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedraza, D.F.; Mansur, L.K.
1985-01-01
A theoretical model of radiation-induced amorphization of ordered intermetallic compounds is developed. The mechanism is proposed to be the buildup of lattice defects to very high concentrations, which destabilizes the crystalline structure. Because simple point defects do not normally reach such levels during irradiation, a new defect complex containing a vacancy and an interstitial is hypothesized. Crucial properties of the complex are that the interstitial sees a local chemical environment similar to that of an atom in the ordered lattice, that the formation of the complex prevents mutual recombination and that the complex is immobile. The evolution of a disorder based on complexes is not accompanied by like point defect aggregation. The latter leads to the development of a sink microstructure in alloys that do not become amorphous. For electron irradiation, the complexes form by diffusional encounters. For ion irradiation, complexes are also formed directly in cascades. The possibility of direct amorphization in cascades is also included. Calculations for the compound NiTi show reasonable agreement with measured amorphization kinetics.
Understanding Long-Term Variations in an Elephant Piosphere Effect to Manage Impacts
Landman, Marietjie; Schoeman, David S.; Hall-Martin, Anthony J.; Kerley, Graham I. H.
2012-01-01
Surface water availability is a key driver of elephant impacts on biological diversity. Thus, understanding the spatio-temporal variations of these impacts in relation to water is critical to their management. However, elephant piosphere effects (i.e. the radial pattern of attenuating impact) are poorly described, with few long-term quantitative studies. Our understanding is further confounded by the complexity of systems with elephant (i.e. fenced, multiple water points, seasonal water availability, varying population densities) that likely limit the use of conceptual models to predict these impacts. Using 31 years of data on shrub structure in the succulent thickets of the Addo Elephant National Park, South Africa, we tested elephant effects at a single water point. Shrub structure showed a clear sigmoid response with distance from water, declining at both the upper and lower limits of sampling. Adjacent to water, this decline caused a roughly 300-m radial expansion of the grass-dominated habitats that replace shrub communities. Despite the clear relationship between shrub structure and ecological functioning in thicket, the extent of elephant effects varied between these features with distance from water. Moreover, these patterns co-varied with other confounding variables (e.g. the location of neighboring water points), which limits our ability to predict such effects in the absence of long-term data. We predict that elephant have the ability to cause severe transformation in succulent thicket habitats with abundant water supply and elevated elephant numbers. However, these piosphere effects are complex, suggesting that a more integrated understanding of elephant impacts on ecological heterogeneity may be required before water availability is used as a tool to manage impacts. We caution against the establishment of water points in novel succulent thicket habitats, and advocate a significant reduction in water provisioning at our study site, albeit with greater impacts at each water point. PMID:23028942
From Invention to Innovation: Risk Analysis to Integrate One Health Technology in the Dairy Farm
Lombardo, Andrea; Boselli, Carlo; Amatiste, Simonetta; Ninci, Simone; Frazzoli, Chiara; Dragone, Roberto; De Rossi, Alberto; Grasso, Gerardo; Mantovani, Alberto; Brajon, Giovanni
2017-01-01
Current Hazard Analysis Critical Control Points (HACCP) approaches are mainly suited to the food industry, while their application in primary food production is still rudimentary. The European food safety framework calls for science-based support for the primary producers' mandate of legal, scientific, and ethical responsibility in food supply. The multidisciplinary and interdisciplinary project ALERT pivots on the development of the technological invention (BEST platform) and the application of its measurable (bio)markers—as well as scientific advances in risk analysis—at strategic points of the milk chain for time- and cost-effective early identification of unwanted and/or unexpected events of both microbiological and toxicological nature. Health-oriented innovation is complex and subject to multiple variables. Through field activities in a dairy farm in central Italy, we explored individual components of the dairy farm system to overcome concrete challenges for the application of translational science in real life and (veterinary) public health. Based on an HACCP-like approach in animal production, the farm characterization focused on points of particular attention (POPAs) and critical control points to draw a farm management decision tree under the One Health view (environment, animal health, food safety). The analysis was based on the integrated use of checklists (environment; agricultural and zootechnical practices; animal health and welfare) and laboratory analyses of well water, feed and silage, individual fecal samples, and bulk milk. The understanding of complex systems is a condition for accomplishing true innovation through new technologies. BEST is a detection and monitoring system in support of production security, quality and safety: a grid of its (bio)markers can find direct application at critical points for early identification of potential hazards or anomalies. HACCP-like self-monitoring in primary production is feasible, as is the biomonitoring of live food-producing animals as a sentinel population for One Health. PMID:29218304
High frequency lateral flow affinity assay using superparamagnetic nanoparticles
NASA Astrophysics Data System (ADS)
Lago-Cachón, D.; Rivas, M.; Martínez-García, J. C.; Oliveira-Rodríguez, M.; Blanco-López, M. C.; García, J. A.
2017-02-01
Lateral flow assay is one of the simplest and most widespread techniques in medical diagnosis for point-of-care testing. Although it has traditionally been a positive/negative test, some work has lately been done to add quantitative abilities to lateral flow assays. One of the most successful strategies involves magnetic beads and magnetic sensors. Recently, a new technique of superparamagnetic nanoparticle detection has been reported, based on the increase of the impedance induced by the nanoparticles on an RF-current-carrying copper conductor. This method requires no external magnetic field, which reduces the system complexity. In this work, nitrocellulose membranes were installed on the sensor, and impedance measurements were carried out while the sample diffused by capillarity along the membrane. The impedance of the sensor changes because of the presence of magnetic nanoparticles. The results prove the potential of the method for point-of-care testing of biochemical substances and for nanoparticle capillary flow studies.
Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction.
Zhou, Bo; Altamirano, Carlos Andres Velez; Zurian, Heber Cruz; Atefi, Seyed Reza; Billing, Erik; Martinez, Fernando Seoane; Lukowicz, Paul
2017-11-09
In this paper, we developed a fully textile sensing fabric for tactile touch sensing as a robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm2 area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures, inspired by the social and emotional interactions of typical people-to-people or people-to-pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are then calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine each stage's and segment's contribution. The best performing feature-classifier combination can recognize the gestures with 93.3% accuracy for a known group of participants, and 89.1% for strangers.
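A minimal sketch of the two-stage feature pipeline described above (spatial frame descriptors, then temporal statistics plus wavelet features) might look as follows; the array shapes, threshold, and wavelet choice ('db4', 3 levels) are our assumptions, not necessarily the paper's.

    import numpy as np
    import pywt  # PyWavelets

    frames = np.random.rand(250, 20, 20)  # stand-in for a 5-s gesture at 50 Hz
    flat = frames.reshape(len(frames), -1)

    # Stage 1: reduce each 20x20 pressure map to a few frame descriptors.
    descriptors = np.column_stack([
        flat.mean(axis=1),            # mean pressure
        flat.max(axis=1),             # peak pressure
        (flat > 0.5).sum(axis=1),     # contact area (assumed threshold)
    ])

    # Stage 2: temporal features = basic statistics + wavelet sub-band energies.
    feats = []
    for ch in descriptors.T:
        feats += [ch.mean(), ch.std(), ch.min(), ch.max()]
        for coef in pywt.wavedec(ch, "db4", level=3):
            feats.append(float(np.sum(coef ** 2)))
    print(len(feats), "features for one gesture window")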
Optimal Ventilation Control in Complex Urban Tunnels with Multi-Point Pollutant Discharge
DOT National Transportation Integrated Search
2017-10-01
Zhen Tan (ORCID ID 0000-0003-1711-3557); H. Oliver Gao (ORCID ID 0000-0002-7861-9634). We propose an optimal ventilation control model for complex urban vehicular tunnels with distributed pollutant discharge points. The control problem is formulated as...
NASA Astrophysics Data System (ADS)
Herries, A. I. R.; Kovacheva, M.; Kostadinova, M.; Shaw, J.
2007-07-01
Archaeomagnetic results are presented from a series of burnt structures at the Thracian site of Halka Bunar. Archaeointensity and archaeodirectional studies were undertaken on three kilns from a pottery production complex. This has been dated to the late 4th and early 3rd century B.C. (325-280 B.C.) based on coins found associated with the kilns [Tonkova, M., 2003. Newly discovered Thracian Centre of the Early Hellenistic Age at the Spring "Halka Bunar" in the Land of C. Gorno Belevo. Annuary of the Institute of Archaeology with Museum. Bulgarian Academy Sci. 2, 148-196 (in Bulgarian)]. These data provide a new point for the Bulgarian archaeomagnetic curve (Dec: 348.70 ± 5.79, Inc: 62.20 ± 2.70, and Fa: 77.23 ± 2.17 μT). The kilns are thought to have been used for producing different types of pottery in a range of heating atmospheres and at different temperatures. Therefore, special attention was paid to the magnetic mineralogy of the samples and its effect on the palaeodata. The orange clay samples of Kiln 3 were dominated by fine to ultra-fine grained single-domain and superparamagnetic magnetite, with a small proportion of haematite; these samples were heated in a high-temperature oxidising environment. Kiln 2 was probably used to make grey ware pottery; its samples are light grey and were dominated by stable single-domain magnetite formed by high-temperature heating in a more reducing environment. The mottled samples of Kiln 4 showed a variable mineralogy with characteristics of both the Kiln 2 and Kiln 3 samples; this kiln was probably used to make traditional mottled Thracian ware pottery and was heated to lower temperatures in a mixed heating environment. Samples heated in an oxidising environment gave more reliable Thellier results than samples heated in a reducing environment in antiquity, as the latter altered heavily on re-heating. A fourth kiln and a destruction feature from different trenches than the kiln complex were also investigated to establish their age. Archaeodirectional data were not recoverable from these two structures due to post-burning disturbance. The mean archaeointensity from Kiln 5 (78.0 ± 1.7 μT) is consistent with that from the main kiln complex (mean 77.23 ± 2.17 μT), and Kiln 5 is therefore considered contemporary; it was probably not used to make pottery. The destruction feature records much lower archaeointensity values (mean 65.1 ± 1.1 μT). When this value is compared with the existing reference points of the Bulgarian database, it suggests that this feature is younger than the kilns (250-140 B.C.). Multiple-age use of the site is therefore confirmed, with a main period of occupation in the late 4th and early 3rd century B.C. and another phase of occupation in the mid 3rd to mid 2nd century B.C.
Huang, Shuai; Mo, Ting-Ting; Norris, Tom; Sun, Si; Zhang, Ting; Han, Ting-Li; Rowan, Angela; Xia, Yin-Yin; Zhang, Hua; Qi, Hong-Bo; Baker, Philip N
2017-10-12
Complex lipids are important constituents of the central nervous system. Studies have shown that supplementation with complex milk lipids (CML) in pregnancy may increase the level of fetal gangliosides (GA), with the potential to improve cognitive outcomes. We aim to recruit approximately 1500 pregnant women in the first trimester (11-14 weeks) and randomise them into one of the three treatment groups: standard maternal milk formulation, CML-enhanced maternal milk formulation or no maternal milk intervention with standard pregnancy advice (ie, the standard care). Maternal lifestyle and demographic data will be collected throughout the pregnancy, as well as biological samples (eg, blood, hair, urine, buccal smear, cord blood, cord and placenta samples). Data from standard obstetric care recorded in hospital maternity notes (eg, ultrasound reports, results of oral glucose tolerance test and pregnancy outcome data) will also be extracted. Postnatal follow-up will be at 6 weeks and 12 months of age, at which point infant cognitive development will be assessed (Bayley Scales of Infant Development I). This project was approved by the Ethics Committee of Chongqing Medical University. Dissemination of findings will take the form of publications in peer-reviewed journals and presentations at national and international conferences. ChiCTR-IOR-16007700; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine
2017-11-16
Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Surface-geophysical surveys of electrical resistivity guided borings at 25 sites on delta sandbars, where the direct-push method was used in April 2015 to recover duplicate 3.8-centimeter-diameter, 3.7-meter-long cores. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to measure particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted.Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent.For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. 
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
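The pass/fail logic used throughout the assessment reduces to a one-line screen; the helper below simply encodes the criteria quoted above (10 percent fines as the API minimum, stricter 6 or 8 percent variants), with names of our choosing.

    def passes_crush_test(postcrush_fines_pct, max_fines_pct=10.0):
        # Passes at the tested stress (34.5 MPa here) if no more than
        # max_fines_pct of the sample is finer than the pre-crush
        # dominant size class after crushing.
        return postcrush_fines_pct <= max_fines_pct

    # A reach B sandbar with 3 percent post-crush fines passes both the
    # API minimum and the stricter 6 percent criterion for 70/140 sand.
    print(passes_crush_test(3.0), passes_crush_test(3.0, max_fines_pct=6.0))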
Simulating evolution of protein complexes through gene duplication and co-option.
Haarsma, Loren; Nelesen, Serita; VanAndel, Ethan; Lamine, James; VandeHaar, Peter
2016-06-21
We present a model of the evolution of protein complexes with novel functions through gene duplication, mutation, and co-option. Under a wide variety of input parameters, digital organisms evolve complexes of 2-5 bound proteins which have novel functions but whose component proteins are not independently functional. Evolution of complexes with novel functions happens more quickly as gene duplication rates increase, point mutation rates increase, protein complex functional probability increases, protein complex functional strength increases, and protein family size decreases. Evolution of complexity is inhibited when the metabolic costs of making proteins exceed the fitness gain of having functional proteins, or when point mutation rates get so large that functional proteins undergo deleterious mutations faster than new functional complexes can evolve. Copyright © 2016 Elsevier Ltd. All rights reserved.
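A heavily simplified toy of the duplication-mutation-co-option loop is sketched below; it is not the authors' digital-organism platform, and all rates are invented.

    import random

    random.seed(1)
    P_DUP, P_MUT, P_COOPT = 0.05, 0.10, 0.01  # invented per-generation rates

    def generation(genome, complexes):
        for g in list(genome):
            if random.random() < P_DUP:
                genome.append(g)                 # gene duplication
        genome = [g ^ 1 if random.random() < P_MUT else g
                  for g in genome]               # point mutation (bit flip)
        if len(genome) >= 2 and random.random() < P_COOPT:
            complexes.append(tuple(random.sample(range(len(genome)), 2)))
        return genome, complexes                 # co-opted candidate complexes

    genome, complexes = [0, 1], []
    for _ in range(100):
        genome, complexes = generation(genome, complexes)
    print(len(genome), "genes,", len(complexes), "candidate complexes")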
P-value interpretation and alpha allocation in clinical trials.
Moyé, L A
1998-08-01
Although much value has been placed on type I error event probabilities in clinical trials, interpretive difficulties often arise that are directly related to clinical trial complexity. Deviations of the trial execution from its protocol, the presence of multiple treatment arms, and the inclusion of multiple end points complicate the interpretation of an experiment's reported alpha level. The purpose of this manuscript is to formulate the discussion of P values (and power for studies showing no significant differences) on the basis of the event whose relative frequency they represent. Experimental discordance (discrepancies between the protocol's directives and the experiment's execution) is linked to difficulty in alpha and beta interpretation. Mild experimental discordance leads to an acceptable adjustment for alpha or beta, while severe discordance results in their corruption. Finally, guidelines are provided for allocating type I error among a collection of end points in a prospectively designed, randomized controlled clinical trial. When considering secondary end point inclusion in clinical trials, investigators should increase the sample size to preserve the type I error rates at acceptable levels.
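The allocation guideline in the last sentence amounts to splitting the trial-wide alpha prospectively across end points, for example Bonferroni-style; the split below is purely illustrative.

    # One primary and two secondary end points sharing a two-sided
    # trial-wide alpha of 0.05 (the particular split is our example).
    alpha_total = 0.05
    allocation = {"primary": 0.04, "secondary_1": 0.005, "secondary_2": 0.005}
    assert abs(sum(allocation.values()) - alpha_total) < 1e-12

    p_values = {"primary": 0.031, "secondary_1": 0.004, "secondary_2": 0.12}
    for endpoint, alpha in allocation.items():
        verdict = "significant" if p_values[endpoint] < alpha else "not significant"
        print(endpoint, "P =", p_values[endpoint], "alpha =", alpha, "->", verdict)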
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
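The core trick, proposing candidates that already satisfy a chosen point group, can be illustrated in a few lines; this toy generator builds Cs-symmetric geometries (mirror plane z = 0) and is only in the spirit of SASS, with all counts and ranges invented.

    import numpy as np

    rng = np.random.default_rng(3)

    def random_cs_structure(n_on_plane=2, n_pairs=2, box=3.0):
        atoms = []
        for _ in range(n_on_plane):
            x, y = rng.uniform(-box, box, 2)
            atoms.append((x, y, 0.0))          # atom on the mirror plane
        for _ in range(n_pairs):
            x, y, z = rng.uniform(-box, box, 3)
            atoms += [(x, y, z), (x, y, -z)]   # symmetry-equivalent pair
        return np.array(atoms)                 # candidate for optimization

    print(random_cs_structure())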
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
DUCTILE-PHASE TOUGHENED TUNGSTEN FOR PLASMA-FACING MATERIALS IN FUSION REACTORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henager, Charles H.; Setyawan, Wahyu; Roosendaal, Timothy J.
2017-05-01
Tungsten (W) and W-alloys are the leading candidates for plasma-facing components in nuclear fusion reactor designs because of their high melting point, strength retention at high temperatures, high thermal conductivity, and low sputtering yield. However, tungsten is brittle and does not exhibit the required fracture toughness for licensing in nuclear applications. A promising approach to increasing the fracture toughness of W-alloys is ductile-phase toughening (DPT). In this method, a ductile phase is included in a brittle matrix to prevent or inhibit crack propagation by crack blunting, crack bridging, crack deflection, and crack branching. Model examples of DPT tungsten are explored in this study, including W-Cu and W-Ni-Fe powder product composites. Three-point and four-point notched and/or pre-cracked bend samples were tested at several strain rates and temperatures to help understand deformation, cracking, and toughening in these materials. Data from these tests are used for developing and calibrating crack-bridging models. Finite element damage mechanics models are introduced as a modeling method that appears to capture the complexity of crack growth in these materials.
Computed Potential Energy Surfaces and Minimum Energy Pathways for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)
1994-01-01
Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. For some dynamics methods, global potential energy surfaces are required. In this case, it is necessary to obtain the energy at a complete sampling of all the energetically accessible arrangements of the nuclei, and then a fitting function must be obtained to interpolate between the computed points. In other cases, characterization of the stationary points and the reaction pathway connecting them is sufficient. These properties may be readily obtained using analytical derivative methods. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications including global potential energy surfaces, H + O2, H + N2, O(3P) + H2, and reaction pathways for complex reactions, including reactions leading to NO and soot formation in hydrocarbon combustion.
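When a global surface is needed, the "fit a function through the computed points" step can be done, for instance, with radial basis function interpolation; the sketch below uses an invented analytic stand-in for the ab initio energies.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    geoms = rng.uniform(0.5, 3.0, size=(200, 2))          # e.g. two bond lengths
    energies = np.sin(geoms[:, 0]) * np.cos(geoms[:, 1])  # stand-in for E(q)

    # Global interpolant evaluated between the computed points.
    pes = RBFInterpolator(geoms, energies, kernel="thin_plate_spline")
    print(pes(np.array([[1.2, 2.1], [2.7, 0.9]])))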
Design of refractive laser beam shapers to generate complex irradiance profiles
NASA Astrophysics Data System (ADS)
Li, Meijie; Meuret, Youri; Duerr, Fabian; Vervaeke, Michael; Thienpont, Hugo
2014-05-01
A Gaussian laser beam is reshaped to have a specific irradiance distribution in many applications in order to ensure optimal system performance. Refractive optics are commonly used for laser beam shaping. A refractive laser beam shaper is typically formed either by two plano-aspheric lenses or by one thick lens with two aspherical surfaces. Ray mapping is a general optical design technique for refractive beam shapers based on geometric optics. This design technique in principle allows any rotationally symmetric irradiance profile to be generated, yet in the literature ray mapping has mainly been developed to transform a Gaussian irradiance profile into a uniform profile. For more complex profiles, especially those with low intensity in the inner region, like a Dark Hollow Gaussian (DHG) irradiance profile, the ray mapping technique is not directly applicable in practice: the numerical effort of calculating the aspherical surface points and fitting a surface with sufficient accuracy increases considerably. In this work we evaluate different sampling approaches and surface fitting methods. This allows us to propose and demonstrate a comprehensive numerical approach to efficiently design refractive laser beam shapers that generate rotationally symmetric collimated beams with complex irradiance profiles. Ray tracing analysis for several complex irradiance profiles demonstrates excellent performance of the designed lenses and the versatility of our design procedure.
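For rotationally symmetric beams, the ray-mapping condition is conservation of encircled energy, and the map r -> R(r) follows from equating the two normalized cumulative integrals; the profiles and radii below are illustrative only.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    r = np.linspace(0.0, 3.0, 2000)
    I_in = np.exp(-2.0 * r**2)                     # input Gaussian
    R = np.linspace(0.0, 3.0, 2000)
    I_out = np.exp(-((R - 1.5) / 0.3) ** 2)        # ring-like (dark center)

    E_in = cumulative_trapezoid(I_in * r, r, initial=0)
    E_out = cumulative_trapezoid(I_out * R, R, initial=0)
    E_in, E_out = E_in / E_in[-1], E_out / E_out[-1]

    # R(r) such that E_out(R(r)) = E_in(r); this mapping then drives the
    # calculation of the aspheric surface points.
    R_of_r = np.interp(E_in, E_out, R)
    print(R_of_r[::400])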
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
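The statistical comparison reported above is a paired t-test on matched beach-day values; a sketch with invented log10 concentrations:

    import numpy as np
    from scipy import stats

    # Hypothetical paired log10 E. coli concentrations (CFU/100 mL):
    # arithmetic average of multiple points vs composite, same beach-day.
    avg_multi = np.array([2.10, 1.85, 2.40, 1.95, 2.70, 2.05])
    composite = np.array([2.04, 1.90, 2.35, 2.02, 2.61, 2.10])

    t_stat, p_val = stats.ttest_rel(avg_multi, composite)
    print("t = %.2f, p = %.4f" % (t_stat, p_val))  # large p: no evidence of difference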
Intensity of Territorial Marking Predicts Wolf Reproduction: Implications for Wolf Monitoring
García, Emilio J.
2014-01-01
Background: The implementation of intensive and complex approaches to monitor large carnivores is resource demanding and restricted to endangered species, small populations, or small distribution ranges. Wolf monitoring over large spatial scales is difficult, but the management of such a contentious species requires regular estimations of abundance to guide decision-makers. The integration of wolf marking behaviour with simple sign counts may offer a cost-effective alternative for monitoring the status of wolf populations over large spatial scales. Methodology/Principal Findings: We used a multi-sampling approach, based on the collection of visual and scent wolf marks (faeces and ground scratching) and the assessment of wolf reproduction using howling and observation points, to test whether the intensity of marking behaviour around the pup-rearing period (summer-autumn) could reflect wolf reproduction. Between 1994 and 2007 we collected 1,964 wolf marks over a total of 1,877 km surveyed and we searched for the pups' presence (1,497 howling and 307 observation points) in 42 sampling sites with a regular presence of wolves (120 sampling sites/year). The number of wolf marks was ca. 3 times higher in sites with a confirmed presence of pups (20.3 vs. 7.2 marks). We found a significant relationship between the number of wolf marks (mean and maximum relative abundance index) and the probability of wolf reproduction. Conclusions/Significance: This research establishes a real-time relationship between the intensity of wolf marking behaviour and wolf reproduction. We suggest a conservative cutting point of 0.60 for the probability of wolf reproduction to monitor wolves on a regional scale, combined with the use of the mean relative abundance index of wolf marks in a given area. We show how the integration of wolf behaviour with simple sampling procedures permits rapid, real-time, and cost-effective assessments of the breeding status of wolf packs, with substantial implications for monitoring wolves at large spatial scales. PMID:24663068
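The mark-count-to-reproduction-probability relationship is naturally modeled as a logistic regression, with the 0.60 probability cut-off applied to its predictions; the data below are invented for illustration.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical sites: mean relative abundance index of wolf marks and
    # whether pups were confirmed (1) or not (0).
    marks = np.array([2.1, 4.0, 25.3, 12.0, 18.9, 1.0, 22.4, 6.0, 30.1, 3.5])
    pups = np.array([0, 0, 1, 0, 1, 0, 1, 1, 1, 0])

    model = sm.Logit(pups, sm.add_constant(marks)).fit(disp=0)
    p_repro = model.predict(sm.add_constant(marks))
    print(np.column_stack([marks, p_repro.round(2), p_repro > 0.60]))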
Verberkmoes, Nathan C; Hervey, W Judson; Shah, Manesh; Land, Miriam; Hauser, Loren; Larimer, Frank W; Van Berkel, Gary J; Goeringer, Douglas E
2005-02-01
There is currently a great need for rapid detection and positive identification of biological threat agents, as well as microbial species in general, directly from complex environmental samples. This need is most urgent in the area of homeland security, but also extends into the medical, environmental, and agricultural sciences. Mass-spectrometry-based analysis is one of the leading technologies in the field, with a diversity of methodologies for biothreat detection. Over the past few years, "shotgun" proteomics has become one method of choice for the rapid analysis of complex protein mixtures by mass spectrometry. Recently, it was demonstrated that this methodology is capable of distinguishing a target species against a large database of background species from a single-component sample or from dual-component mixtures at roughly equal concentrations. Here, we examine the potential of shotgun proteomics to analyze a target species in a background of four contaminant species. We tested the capability of a common commercial mass-spectrometry-based shotgun proteomics platform for the detection of the target species (Escherichia coli) at four different concentrations and four different analysis times. We also tested the effect of database size on positive identification of the four microbes used in this study by testing a small (13-species) database and a large (261-species) database. The results clearly indicated that this technology could easily identify the target species at 20% in the background mixture at a 60, 120, 180, or 240 min analysis time with the small database. The results also indicated that the target species could easily be identified at 20% or 6%, but could not be identified at 0.6% or 0.06%, in either a 240 min analysis or a 30 h analysis with the small database. The effect of the large database was severe for the target species, which could not be detected above the background at any concentration used in this study, although the three other microbes were clearly identified above the background when analyzed with the large database. This study points to the potential application of this technology for biological threat agent detection but highlights many areas of needed research before the technology will be useful on real-world samples.
Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.
Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît
2011-01-01
Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach incurs a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules possible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
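The surface-based idea reduces to multi-source Dijkstra on the mesh graph alone, seeding all convex-hull vertices at depth zero; the toy graph below stands in for an SES triangulation.

    import heapq

    def travel_depth(adjacency, hull_vertices):
        depth = {v: float("inf") for v in adjacency}
        heap = [(0.0, v) for v in hull_vertices]
        for v in hull_vertices:
            depth[v] = 0.0                      # hull vertices start at zero
        heapq.heapify(heap)
        while heap:
            d, v = heapq.heappop(heap)
            if d > depth[v]:
                continue                        # stale queue entry
            for w, edge_len in adjacency[v]:
                if d + edge_len < depth[w]:
                    depth[w] = d + edge_len
                    heapq.heappush(heap, (d + edge_len, w))
        return depth

    toy_mesh = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 0.5)], 2: [(1, 0.5)]}
    print(travel_depth(toy_mesh, hull_vertices=[0]))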
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Exploring hyperspectral imaging data sets with topological data analysis.
Duponchel, Ludovic
2018-02-13
Analytical chemistry is rapidly changing. Indeed, we acquire ever more data in order to go ever further in the exploration of complex samples. Hyperspectral imaging has not escaped this trend. It quickly became a tool of choice for molecular characterisation of complex samples in many scientific domains, the main reason being that it simultaneously provides spectral and spatial information. As a result, chemometrics provided many exploration tools (PCA, clustering, MCR-ALS …) well-suited to such data structures at an early stage. However, we are today facing a new challenge considering the ever-increasing number of pixels in the data cubes we have to manage. The idea is therefore to introduce the new paradigm of Topological Data Analysis in order to explore hyperspectral imaging data sets, highlighting its nice properties and specific features. With this paper, we shall also point out that conventional chemometric methods are often based on variance analysis or simply impose a data model which implicitly defines the geometry of the data set, and we will show that this is not always appropriate in the framework of hyperspectral imaging data set exploration. Copyright © 2017 Elsevier B.V. All rights reserved.
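One concrete way to run such an exploration is the Mapper algorithm as implemented in the kepler-mapper library; the cube, lens, cover, and clusterer choices below are ours, not the paper's.

    import numpy as np
    import kmapper as km
    from sklearn.cluster import KMeans

    cube = np.random.rand(50, 50, 100)          # x, y, spectral bands (simulated)
    pixels = cube.reshape(-1, cube.shape[-1])   # one spectrum per pixel

    mapper = km.KeplerMapper(verbose=0)
    lens = mapper.fit_transform(pixels, projection="l2norm")  # 1-D filter
    graph = mapper.map(lens, pixels,
                       cover=km.Cover(n_cubes=15, perc_overlap=0.3),
                       clusterer=KMeans(n_clusters=2, n_init=10))
    mapper.visualize(graph, path_html="hsi_mapper.html",
                     title="Topology of a hyperspectral data set")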
Mapping of epistatic quantitative trait loci in four-way crosses.
He, Xiao-Hong; Qin, Hongde; Hu, Zhongli; Zhang, Tianzhen; Zhang, Yuan-Ming
2011-01-01
Four-way crosses (4WC) involving four different inbred lines often appear in commercial plant and animal breeding programs. Direct mapping of quantitative trait loci (QTL) in these commercial populations is both economical and practical. However, the existing statistical methods for mapping QTL in a 4WC population are built on the single-QTL genetic model. This simple genetic model fails to take into account QTL interactions, which play an important role in the genetic architecture of complex traits. In this paper, therefore, we attempted to develop a statistical method to detect epistatic QTL in 4WC populations. Conditional probabilities of QTL genotypes, computed by the multi-point single locus method, were used to sample the genotypes of all putative QTL in the entire genome. The sampled genotypes were used to construct the design matrix for QTL effects. All QTL effects, including main and epistatic effects, were simultaneously estimated by the penalized maximum likelihood method. The proposed method was confirmed by a series of Monte Carlo simulation studies and by real data analysis in cotton. The new method will provide novel tools for the genetic dissection of complex traits, construction of QTL networks, and analysis of heterosis.
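As a stand-in for the penalized maximum likelihood step (the sketch uses an L1 penalty rather than the paper's estimator), main and pairwise epistatic effects can be screened jointly over a design matrix of marker products; the data are simulated.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    G = rng.choice([-1.0, 0.0, 1.0], size=(200, 20))  # 200 individuals, 20 loci
    y = 1.5 * G[:, 3] - 2.0 * G[:, 7] * G[:, 12] + rng.normal(0, 0.5, 200)

    # Columns: 20 main effects, then all pairwise (epistatic) products.
    X = PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False).fit_transform(G)
    fit = Lasso(alpha=0.05).fit(X, y)
    print("selected effect columns:", np.flatnonzero(np.abs(fit.coef_) > 0.2))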
Soomro, Rubina; Ahmed, M. Jamaluddin; Memon, Najma; Khan, Humaira
2008-01-01
A simple, highly sensitive, selective, and rapid spectrophotometric method for the determination of trace gold, based on the rapid reaction of gold(III) with bis(salicylaldehyde)orthophenylenediamine (BSOPD) in aqueous and micellar media, has been developed. BSOPD reacts with gold(III) in slightly acidic solution to form a 1:1 brownish-yellow complex, which has an absorption maximum at 490 nm in both aqueous and micellar media. The most remarkable point of this method is that the molar absorptivities of the gold-BSOPD complex formed in the presence of the nonionic Triton X-100 surfactant are almost 10 times higher than the value observed in aqueous solution, resulting in an increase in the sensitivity and selectivity of the method. The apparent molar absorptivities were found to be 2.3 × 10^4 L mol^−1 cm^−1 and 2.5 × 10^5 L mol^−1 cm^−1 in aqueous and micellar media, respectively. The reaction is instantaneous; the maximum absorbance was obtained after 10 min at 490 nm and remains constant for over 24 h at room temperature. Linear calibration graphs were obtained for 0.1–30 mg L^−1 and 0.01–30 mg L^−1 of gold(III) in aqueous and surfactant media, respectively. The interference from over 50 cations, anions, and complexing agents has been studied at 1 mg L^−1 of Au(III); most metal ions can be tolerated in considerable amounts in aqueous micellar solutions. The Sandell's sensitivity, the limit of detection, and the relative standard deviation (n = 9) were found to be 5 ng cm^−2, 1 ng mL^−1, and 2%, respectively, in aqueous micellar solutions. The sensitivity and selectivity are remarkably higher than those of other reagents in the literature. The proposed method was successfully used for the determination of gold in several standard reference materials (alloys and steels), environmental water samples (potable and polluted), biological samples (blood and urine), geological and soil samples, and complex synthetic mixtures. The results agree well with those obtained for the same samples by atomic absorption spectrophotometry (AAS). PMID:19609392
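Given the reported molar absorptivities, quantitation follows directly from the Beer-Lambert law, c = A/(εl); the path length and example absorbance below are assumed.

    M_AU = 196.97          # g/mol, gold
    EPS_AQUEOUS = 2.3e4    # L mol^-1 cm^-1 (reported, aqueous medium)
    EPS_MICELLAR = 2.5e5   # L mol^-1 cm^-1 (reported, Triton X-100 medium)

    def gold_mg_per_L(absorbance, epsilon, path_cm=1.0):
        molar = absorbance / (epsilon * path_cm)   # Beer-Lambert: c = A/(e*l)
        return molar * M_AU * 1000.0               # mol/L -> mg/L

    # e.g. an absorbance of 0.25 at 490 nm in each medium:
    print(gold_mg_per_L(0.25, EPS_AQUEOUS), gold_mg_per_L(0.25, EPS_MICELLAR))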
Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J
2018-06-06
Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of the parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with a similar emission peak wavelength.
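A much-reduced 1-D caricature of the kriging-plus-adaptive-sampling loop (MARIA itself is multiparametric and coupled to the reactor) can be written with a Gaussian process surrogate; the response function, kernel, and acquisition rule are all our assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def true_wavelength(x):            # invented stand-in for the reactor
        return 520.0 + 150.0 * x + 10.0 * np.sin(6 * x)

    target = 620.0
    X = np.array([[0.0], [0.5], [1.0]])    # initial composition samples
    y = true_wavelength(X.ravel())

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True)
    grid = np.linspace(0, 1, 401).reshape(-1, 1)
    for _ in range(8):
        gp.fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        acq = np.abs(mu - target) - sd     # favor near-target, uncertain points
        x_next = grid[np.argmin(acq)]
        X = np.vstack([X, [x_next]])
        y = np.append(y, true_wavelength(x_next[0]))
    print("best composition so far:", X[np.argmin(np.abs(y - target))])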
In situ Analysis of North American Diamond: Implications for Diamond Growth Modeling
NASA Astrophysics Data System (ADS)
Schulze, D. J.; Van Rythoven, A. D.; Hauri, E.; Wang, J.
2014-12-01
Diamond crystals from three North American kimberlite occurrences were investigated with cathodoluminescence (CL) and secondary ion mass spectrometry (SIMS) to determine their growth history, carbon isotope composition, and nitrogen content. Samples analyzed include sixteen from Lynx (Quebec), twelve from Kelsey Lake (Colorado), and eighteen from A154 South (Diavik mine, Northwest Territories). Growth histories, as revealed by the CL images, vary from simple to highly complex depending on the individual stone. Deformation lamellae are evident in CL images of the Lynx crystals, which are typically brownish in color. Two to five points per diamond were analyzed by SIMS for carbon isotope composition (δ13CPDB) and three to seven points for nitrogen content. The results for the A154 South (δ13CPDB = -6.76 to -1.68 ‰) and Kelsey Lake (δ13CPDB = -11.81 to -2.43 ‰) stones (mixed peridotitic and eclogitic suites) are similar to earlier reported values. The Lynx kimberlite stones have anomalously high carbon isotope ratios, ranging from -3.58 to +1.74 ‰. The Lynx diamond suite is almost entirely peridotitic. The unusually high (i.e. >-5 ‰) δ13C values of the Lynx diamonds, as well as those from Wawa, Ontario and Renard, Quebec, may indicate an anomalous carbon reservoir for the Superior cratonic mantle relative to other cratons. In addition to the heavier carbon isotope values, the Lynx samples have very low nitrogen contents (<100 ppm). Nitrogen contents for Kelsey Lake and Diavik samples are more typical and range to ~1100 ppm. Comparison of observed core-to-rim variations in nitrogen content and carbon isotopes with modeled Rayleigh fractionation trends for published diamond growth mechanisms allows for evaluation of carbon speciation and other parent-fluid conditions. Observed trends that closely follow the modeled data are rare, but appear to suggest diamond growth from carbonate-bearing fluids at Lynx and Diavik, and growth from a methane-bearing fluid at Kelsey Lake. However, the majority of crystals appear to have very complex growth histories that are clearly the result of multiple growth and resorption events. Trends observed in most of the samples from this study are chaotic, and no consistent patterns are seen.
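The Rayleigh fractionation comparison invoked above uses the textbook closed-system form; the sketch below (with illustrative fractionation factors, not fitted values) shows why carbonate-like and methane-like fluids predict opposite core-to-rim δ13C trends.

    def rayleigh_delta(delta0, f, alpha):
        # delta13C (permil) of the residual fluid after a fraction (1 - f)
        # has precipitated as diamond; alpha = instantaneous diamond/fluid
        # fractionation factor (textbook Rayleigh form).
        return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

    for alpha, label in ((0.9965, "carbonate-like: fluid gets heavier"),
                         (1.0035, "methane-like: fluid gets lighter")):
        trend = [round(rayleigh_delta(-5.0, f, alpha), 2) for f in (1.0, 0.6, 0.3)]
        print(label, trend)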
Park, Si Hong; Kim, Sun Ae; Lee, Sang In; Rubinelli, Peter M.; Roto, Stephanie M.; Pavlidis, Hilary O.; McIntyre, Donald R.; Ricke, Steven C.
2017-01-01
Feed supplements are utilized in the poultry industry as a means of improving growth performance and reducing pathogens. The aim of the present study was to evaluate the effects of Diamond V Original XPC™ (XPC, a fermented product generated from yeast cultures) on Salmonella Typhimurium ST 97, along with its potential for modulation of the cecal microbiota, using an anaerobic in vitro mixed culture assay. Cecal slurries obtained from three broiler chickens at each of three sampling ages (14, 28, and 42 days) were generated and exposed to a 24 h pre-incubation period with the various treatments: XPC (1% XPC, ceca, and feed), CO (ceca only), and NC (negative control, consisting of ceca and feed). The XPC, CO, and NC groups were each challenged with S. Typhimurium and subsequently plated on selective media at 0, 24, and 48 h. Plating results indicated that the XPC treatment significantly reduced the survival of S. Typhimurium at the 24 h plating time point for both the 28- and 42-day bird sampling ages, while S. Typhimurium reduction in the NC group appeared to eventually reach the same population survival level at the 48 h plating time point. For microbiome analysis, Trial 1 revealed that the XPC, CO, and NC groups exhibited a similar pattern of taxa summary; however, more Bacteroidetes were observed in the CO group at 24 and 48 h. There were no significant differences (P > 0.05) in alpha diversity among samples based on day, hour, or treatment. For beta diversity analysis, a pattern shift was observed when samples clustered according to sampling hour. In Trial 2, both the XPC and NC groups exhibited the highest Firmicutes level at 0 h, but Bacteroidetes became dominant at 6 h. Alpha diversity was more complex in the initial contents from older birds and became less complex after 6 h of incubation. Beta diversity analysis clustered as a function of treatment (NC and XPC groups) and by individual hours, including 6, 12, 24, and 48 h. Overall, addition of XPC influenced microbiome diversity in a similar fashion to the profile of the NC group. PMID:28659891
NASA Technical Reports Server (NTRS)
White, D. R.
1976-01-01
A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample processing experiences indicate that only a few operating personnel are required to process the sample efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine and scientific, associated with handling an unknown entity such as the lunar sample. Lunar sample handling necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample were better defined. Although the complex successfully handled the processing of Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission just as the vacuum system was reaching its full potential.
Coran, Silvia A; Giannellini, Valerio; Bambagiotti-Alberti, Massimo
2004-08-06
An HPTLC-densitometric method, based on an external standard approach, was developed in order to obtain a novel procedure for routine analysis of secoisolariciresinol diglucoside (SDG) in flaxseed with a minimum of sample pre-treatment. Optimization of TLC conditions for the densitometric scanning was achieved by eluting HPTLC silica gel plates in a horizontal developing chamber. Quantitation of SDG was performed in single-beam reflectance mode using a computer-controlled densitometric scanner and applying a five-point calibration in the 1.00-10.00 microg/spot range. Although no sample preparation was required, the proposed HPTLC-densitometric procedure proved reliable even while using an external standard approach. The proposed method is precise, reproducible and accurate and can be employed profitably in place of HPLC for the determination of SDG in complex matrices.
Investigation of metal ions sorption of brown peat moss powder
NASA Astrophysics Data System (ADS)
Kelus, Nadezhda; Blokhina, Elena; Novikov, Dmitry; Novikova, Yaroslavna; Chuchalin, Vladimir
2017-11-01
To investigate the regularities of sorptive extraction of heavy metal ions from aqueous electrolyte solutions by cellulose and its derivatives, it is necessary to identify the likely sorption mechanism and to choose a model describing the process. The present article investigates the sorption regularities of aliovalent metals on brown peat moss powder. The results show that the sorption isotherm of Al3+ ions is described by the Freundlich isotherm, while the sorption isotherms of Na+ and Ni2+ are described by the Langmuir isotherm. To identify the sorption mechanisms of brown peat moss powder, IR spectra of the initial powder samples and of samples after Ni(II) sorption were studied. The metal ion binding mechanism of brown peat moss powder points to ion exchange, physical adsorption, and complex formation with hydroxyl and carboxyl groups.
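The two isotherm models named above lend themselves to a direct nonlinear fit. The sketch below shows one way to do this in Python; the scipy routine is standard, but all data values, initial guesses, and units are hypothetical illustrations, not the study's measurements.

```python
# Minimal sketch: fitting Langmuir and Freundlich isotherms to equilibrium
# sorption data. All data values below are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm (monolayer sorption): q = q_max*K_L*c/(1 + K_L*c)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

def freundlich(c_eq, k_f, n):
    """Freundlich isotherm (heterogeneous surface): q = K_F * c**(1/n)."""
    return k_f * c_eq ** (1.0 / n)

# Hypothetical equilibrium concentrations (mmol/L) and sorbed amounts (mmol/g).
c_eq = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
q_ni = np.array([0.08, 0.30, 0.45, 0.58, 0.70, 0.74])  # Ni2+-like, saturating
q_al = np.array([0.05, 0.15, 0.25, 0.41, 0.78, 1.30])  # Al3+-like, power law

lang_p, _ = curve_fit(langmuir, c_eq, q_ni, p0=[1.0, 1.0])
freu_p, _ = curve_fit(freundlich, c_eq, q_al, p0=[0.2, 1.5])
print("Langmuir fit (q_max, K_L):", lang_p)
print("Freundlich fit (K_F, n):  ", freu_p)
```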
Versatile electrophoresis-based self-test platform.
Guijt, Rosanne M
2015-03-01
Lab on a Chip technology offers the possibility to extract chemical information from a complex sample in a simple, automated way without the need for a laboratory setting. In the health care sector, this chemical information could be used as a diagnostic tool, for example to inform dosing. In this issue, the research underpinning a family of electrophoresis-based point-of-care devices for self-testing of ionic analytes in various sample matrices is described [Electrophoresis 2015, 36, 712-721.]. Hardware, software, and methodological changes made to improve the overall analytical performance in terms of accuracy, precision, detection limit, and reliability are discussed. In addition to the main focus on lithium monitoring, new applications are included, such as use of the platform for veterinary purposes and for sodium and creatinine measurements. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Biofunctionalization of silica-coated magnetic particles mediated by a peptide
NASA Astrophysics Data System (ADS)
Care, Andrew; Chi, Fei; Bergquist, Peter L.; Sunna, Anwar
2014-08-01
A linker peptide sequence with affinity to silica-containing materials was fused to Streptococcus protein G', an antibody-binding protein. This recombinant fusion protein, linker-protein G (LPG), was produced in E. coli, exhibited strong affinity to silica-coated magnetic particles, and was able to bind to them at different pHs, indicating truly pH-independent binding. LPG was used as an anchorage point for the oriented immobilization of antibodies onto the surface of the particles. These particle-bound "LPG-antibody complexes" mediated the binding and recovery of different cell types (e.g., human stem cells, Legionella, Cryptosporidium and Giardia), enabling their rapid and simple visualization and identification. This strategy was also used for the efficient capture of Cryptosporidium oocysts from water samples. These results demonstrate that LPG can mediate the direct biofunctionalization of silica-coated magnetic particles without the need for complex surface chemical modification.
Fully developed turbulence and complex time singularities
NASA Astrophysics Data System (ADS)
Dombre, T.; Gagne, Y.; Hopfinger, E.
The hypothesis of Frisch and Morf (1981), relating intermittent bursts observed in high-pass-filtered turbulent-flow data to complex time singularities in the solution of the Navier-Stokes equations, is tested experimentally. Velocity signals filtered at high-pass frequency 1 kHz and low-pass frequency 6 kHz are recorded for 7 min at sampling frequency 20 kHz in a flow of mean velocity 6.1 m/s, with mesh length d = 7.5 cm, observation point x/d = 40, R_λ = 67, dissipation length η = 0.5 mm, and Kolmogorov frequency f_K ≈ 2 kHz. The results are presented in graphs, and it is shown that the exponential behavior of the energy spectrum settles well before f_K, the spectra of individual bursts having exponential behavior and δ* values consistent with the Frisch-Morf hypothesis, at least for high-amplitude events.
Evaluation of several two-dimensional gel electrophoresis techniques in cardiac proteomics.
Li, Zhao Bo; Flint, Paul W; Boluyt, Marvin O
2005-09-01
Two-dimensional gel electrophoresis (2-DE) is currently the best method for separating complex mixtures of proteins, and its use is gradually becoming more common in cardiac proteome analysis. A number of variations in basic 2-DE have emerged, but their usefulness in analyzing cardiac tissue has not been evaluated. The purpose of the present study was to systematically evaluate the capabilities and limitations of several 2-DE techniques for separating proteins from rat heart tissue. Immobilized pH gradient strips of various pH ranges, parameters of protein loading and staining, subcellular fractionation, and detection of phosphorylated proteins were studied. The results provide guidance for proteome analysis of cardiac and other tissues in terms of selection of the isoelectric point separating window for cardiac proteins, accurate quantitation of cardiac protein abundance, stabilization of technical variation, reduction of sample complexity, enrichment of low-abundance proteins, and detection of phosphorylated proteins.
Pinderhughes, Ellen E; Zhang, Xian; Agerbak, Susanne
2015-12-01
Drawing on a model of ethnic-racial socialization (E-RS; Pinderhughes, 2013), this study examined hypothesized relations among parents' role variables (family ethnic identity and acknowledgment of cultural and racial differences), cultural socialization (CS) behaviors, and children's self-perceptions (ethnic self-label and feelings about self-label). The sample comprised 44 U.S.-based parents and their daughters ages 6 to 9 who were adopted from China. Correlation analyses revealed that parents' role variables and CS behaviors were related, and children's ethnic self-label was related to family ethnic identity and CS behaviors. Qualitative analyses point to complexities in children's ethnic identity and between family and children's ethnic identities. Together, these findings provide support for the theoretical model and suggest that although ethnic identity among international transracial adoptees (ITRAs) has similarities to that of nonadopted ethnic minority children, their internal experiences are more complex. © 2015 Wiley Periodicals, Inc.
Defining Long-Duration Traverses of Lunar Volcanic Complexes with LROC NAC Images
NASA Technical Reports Server (NTRS)
Stopar, J. D.; Lawrence, S. J.; Joliff, B. L.; Speyerer, E. J.; Robinson, M. S.
2016-01-01
A long-duration lunar rover [e.g., 1] would be ideal for investigating large volcanic complexes like the Marius Hills (MH) (approximately 300 x 330 km), where widely spaced sampling points are needed to explore the full geologic and compositional variability of the region. Over these distances, a rover would encounter varied surface morphologies (ranging from impact craters to rugged lava shields), each of which needs to be considered during the rover design phase. Previous rovers, including Apollo, Lunokhod, and most recently Yutu, successfully employed pre-mission orbital data for planning (at scales significantly coarser than that of the surface assets). LROC was specifically designed to provide mission-planning observations at scales useful for accurate rover traverse planning (crewed and robotic) [2]. After-the-fact analyses of the planning data can help improve predictions of future rover performance [e.g., 3-5].
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of a M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields.
In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
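As a concrete illustration of the SR-versus-LH contrast discussed above, the hedged sketch below draws a small lognormal conductivity sample both ways. It deliberately ignores spatial correlation (a real geostatistical simulator would impose a variogram, e.g., through factorization of a covariance matrix), and the distribution parameters are hypothetical.

```python
# Minimal sketch (not the authors' code): comparing simple random (SR) and
# Latin hypercube (LH) sampling of a lognormal conductivity distribution.
import numpy as np
from scipy.stats import lognorm, qmc

rng = np.random.default_rng(0)
N, M = 20, 3            # N realizations, M (illustrative) locations
sigma, mu = 1.0, 0.0    # hypothetical log-conductivity parameters

# SR sampling: independent uniforms mapped through the inverse CDF.
u_sr = rng.random((N, M))
k_sr = lognorm.ppf(u_sr, s=sigma, scale=np.exp(mu))

# LH sampling: one sample per equal-probability stratum in each dimension,
# giving a more representative spread of values for the same N.
u_lh = qmc.LatinHypercube(d=M, seed=0).random(n=N)
k_lh = lognorm.ppf(u_lh, s=sigma, scale=np.exp(mu))

print("SR sample mean:", k_sr.mean(), " LH sample mean:", k_lh.mean())
print("target mean:   ", lognorm.mean(s=sigma, scale=np.exp(mu)))
```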
Biener, Gabriel; Stoneman, Michael R; Acbas, Gheorghe; Holz, Jessica D; Orlova, Marianna; Komarova, Liudmila; Kuchin, Sergei; Raicu, Valerică
2013-12-27
Multiphoton micro-spectroscopy, employing diffraction optics and electron-multiplying CCD (EMCCD) cameras, is a suitable method for determining protein complex stoichiometry, quaternary structure, and spatial distribution in living cells using Förster resonance energy transfer (FRET) imaging. The method provides highly resolved spectra of molecules or molecular complexes at each image pixel, and it does so on a timescale shorter than that of molecular diffusion, which scrambles the spectral information. Acquisition of an entire spectrally resolved image, however, is slower than that of broad-bandwidth microscopes because it takes longer to collect the same number of photons at each emission wavelength as in a broad bandwidth. Here, we demonstrate an optical micro-spectroscopic scheme that employs a laser beam shaped into a line to excite multiple sample voxels in parallel. The method presents dramatically increased sensitivity and/or acquisition speed and, at the same time, has excellent spatial and spectral resolution, similar to point-scan configurations. When applied to FRET imaging using an oligomeric FRET construct expressed in living cells and consisting of a FRET acceptor linked to three donors, the technique based on line-shaped excitation provides higher accuracy compared to the point-scan approach, and it reduces artifacts caused by photobleaching and other undesired photophysical effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erkelens, C.
1995-04-01
This report details the archaeological investigation of a 200 foot wide sample corridor extending approximately 9 miles along the southern portion of Maui within the present districts of Hana and Makawao. The survey team documented a total of 51 archaeological sites encompassing 233 surface features. Archaeological sites are abundant throughout the region and only become scarce where vegetation has been bulldozed for ranching activities. At the sea-land transition points for the underwater transmission cable, both Ahihi Bay and Huakini Bay are subjected to seasonal erosion and redeposition of their boulder shorelines. The corridor at the Ahihi Bay transition point runs through the Maonakala Village Complex, which is an archaeological site on the State Register of Historic Places within a State Natural Area Reserve. Numerous other potentially significant archaeological sites lie within the project corridor. It is likely that rerouting of the corridor in an attempt to avoid known sites would result in other undocumented sites located outside the sample corridor being impacted. Given the distribution of archaeological sites, there is no alternative route that can be suggested that is likely to avoid encountering sites. Twelve charcoal samples were obtained for potential taxon identification and radiocarbon analysis. Four of these samples were subsequently submitted for dating and species identification. Bird bones from various locations within a lava tube were collected for identification. Sediment samples for subsequent pollen analysis were obtained from within two lava tubes. With these three sources of information it is hoped that paleoenvironmental data can be recovered that will enable a better understanding of the setting for Hawaiian habitation of the area.
Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks
NASA Astrophysics Data System (ADS)
Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L.; Carr, Lincoln D.
2017-12-01
We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation obtain the critical point for both quantum Ising and Bose-Hubbard models to a high degree of accuracy in finite-size scaling for three classes of quantum phase transitions, Z2, mean field superfluid to Mott insulator, and a Berzinskii-Kosterlitz-Thouless crossover.
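To make the adjacency-matrix construction concrete, the sketch below computes weighted network density, clustering, and disparity from a placeholder mutual information matrix; the random matrix stands in for actual matrix-product-state output, and the measure definitions are common choices that may differ in detail from the authors'.

```python
# Minimal sketch (assumptions flagged): treating a quantum mutual information
# matrix I_ij as a weighted adjacency matrix and computing complex-network
# measures. The matrix below is a random placeholder, not model data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n = 8
I = rng.random((n, n))
I = (I + I.T) / 2              # symmetrize: mutual information is symmetric
np.fill_diagonal(I, 0.0)       # no self-edges

G = nx.from_numpy_array(I)     # edge weights taken from matrix entries

density = I.sum() / (n * (n - 1))                  # weighted network density
clustering = nx.average_clustering(G, weight="weight")
strength = I.sum(axis=1)
disparity = np.mean(np.sum((I / strength[:, None]) ** 2, axis=1))

print(f"density={density:.3f} clustering={clustering:.3f} disparity={disparity:.3f}")
```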
Utilization of microwave energy for decontamination of oil polluted soils.
Iordache, Daniela; Niculae, Dumitru; Francisc, Ioan Hathazi
2010-01-01
Soil oil (petroleum) product pollution represents a great environmental threat as it may contaminate the neighboring soils and surface and underground water. Liquid fuel contamination may occur anywhere during oil (petroleum) product transportation, storing, handling and utilization. Recovery of polluted soil is a complex process owing to the wide range of physical, chemical, and biological soil properties, which should be analyzed in connection with the behavior of contaminated soil under a microwave field. The soil, like any other non-metallic material, can be heated through microwave energy absorption due to dielectric losses, expressed by its complex dielectric constant. Oil polluted soil behaves differently in a microwave field depending on the nature, structure and amount of the polluting fuel. Decontamination is performed through volatilization and retrieval of organic contaminant volatile components. After decontamination only a fixed soil residue remains, which cannot penetrate the underground anymore. In carrying out the soil recovery process by means of this technology, soil characteristics such as soil type, temperature, and moisture should also be considered. The first part of the paper presents the theoretical aspects relating to the behavior of the polluted soil samples in the microwave field, as well as the related experimental data. Experimental data from the analysis of soils with different pollution levels show that the degree of pollutant recovery is high, changing the initial pollution classification of the soils. The paper graphically presents the generated and absorbed microwave power levels in the soil samples, the soil temperature during the experiments, and the specific processing parameters in the microwave field. It also presents the design of the microwave equipment intended for in situ treatment of contaminated soil.
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-05
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both can combine with uranyl to form stable complexes: [UO2(2+)-Sal1] and [UO2(2+)-Sal2]. [UO2(2+)-Sal1] was used as a ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO2(2+)-Sal2] was used as a catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO2(2+)-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which leads to a decrease in the RF intensity of PRY, was studied. The decrease in RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was developed for the determination of trace uranium(VI) after dCPE. The combination of the photocatalytic RF technique and the dCPE procedure endows the presented method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve was linear over the range 0.067 to 6.57 ng mL(-1); the linear regression equation was ΔF = 438.0c (ng mL(-1)) + 175.6 with a correlation coefficient r = 0.9981. The limit of detection was 0.066 ng mL(-1). The proposed method was successfully applied to the separation and determination of uranium in real samples with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Regmi, Abiral; Sarangadharan, Indu; Chen, Yen-Wen; Hsu, Chen-Pin; Lee, Geng-Yen; Chyi, Jen-Inn; Shiesh, Shu-Chu; Lee, Gwo-Bin; Wang, Yu-Lin
2017-08-01
Fibrinogen found in blood plasma is an important protein biomarker for potentially fatal diseases such as cardiovascular diseases. This study focuses on the development of an assay to detect plasmatic fibrinogen using electrical double layer gated AlGaN/GaN high electron mobility transistor biosensors without the complex sample pre-treatment methods used in traditional assays. The test results in buffer solution and clinical plasma samples show high sensitivity, specificity, and dynamic range. The sensor exhibits an ultra-low detection limit of 0.5 g/l and a detection range of 0.5-4.5 g/l in 1× PBS with 1% BSA. The concentration-dependent sensor signal in human serum samples demonstrates the specificity to fibrinogen in a highly dense matrix of background proteins. The sensor does not require complicated automation, and quantitative results are obtained in 5 min with <5 μl sample volume. This sensing technique is ideal for rapid blood-based diagnostics such as point-of-care (POC) tests, homecare tests, or personalized healthcare.
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
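The paper's own beam-hardening code is Matlab; the following is a hedged Python analogue of the same idea, fitting a quadratic surface to a reconstructed slice by least squares and keeping the residual. The placeholder input array and the choice to restore the mean grey level are assumptions made for this sketch.

```python
# Hedged Python analogue of the beam-hardening (BH) correction described
# above: fit a best-fit quadratic surface to a reconstructed slice and keep
# the residual as the BH-corrected image. 'slice_img' is a placeholder.
import numpy as np

def bh_correct(slice_img):
    ny, nx = slice_img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y, z = xx.ravel(), yy.ravel(), slice_img.ravel()
    # Design matrix for z ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(ny, nx)
    # Subtract the fitted BH trend; restoring the mean keeps the grey level.
    return slice_img - surface + surface.mean()

slice_img = np.random.default_rng(2).random((64, 64))  # placeholder slice
corrected = bh_correct(slice_img)
print(corrected.shape, corrected.mean())
```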
Ophus, Colin; Rasool, Haider I.; Linck, Martin; ...
2016-11-30
We develop an automatic and objective method to measure and correct residual aberrations in atomic-resolution HRTEM complex exit waves for crystalline samples aligned along a low-index zone axis. Our method uses the approximate rotational point symmetry of a column of atoms or single atom to iteratively calculate a best-fit numerical phase plate for this symmetry condition, and does not require information about the sample thickness or precise structure. We apply our method to two experimental focal series reconstructions, imaging a β-Si3N4 wedge with O and N doping, and a single-layer graphene grain boundary. We use peak and lattice fitting to evaluate the precision of the corrected exit waves. We also apply our method to the exit wave of a Si wedge retrieved by off-axis electron holography. In all cases, the software correction of the residual aberration function improves the accuracy of the measured exit waves.
Grande, J A; Carro, B; Borrego, J; de la Torre, M L; Valente, T; Santisteban, M
2013-04-15
This study describes the spatial evolution of the hydrogeochemical parameters that characterise an estuary strongly affected by Acid Mine Drainage (AMD). The studied estuarine system receives AMD from the Iberian Pyrite Belt (SW Spain) and, simultaneously, is affected by the presence of an industrial chemical complex. Water sampling was performed in 2008, comprising four sampling campaigns in order to capture seasonality. The results show that the estuary can be divided into three areas of different behaviour according to the hydrogeochemical variable concentrations that define each sampling station: an area dominated by tidal influence; at the opposite end, a second area comprising the points located in the headwaters of the two rivers, which are not influenced by seawater; and finally an area that can be defined as the mixing zone. These areas shift along the hydrological year due to seasonal chemical variations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hegde, Vibha; Murkey, Laxmi Suresh
2017-05-01
The purpose of an endodontic obturation is to obtain a fluid-tight hermetic seal of the entire root canal system. Different materials and techniques have evolved to achieve this desired gap-free, fluid-tight seal, given the anatomic complexity of the root canal system. The aim was to compare the microgap occurring in root canals obturated with hydrophilic versus hydrophobic systems using scanning electron microscopy. Sixty extracted human single-rooted premolars were decoronated and instrumented using NiTi rotary instruments. The samples (n=20 per group) were divided into three groups and obturated as follows: Group A (control), gutta-percha with AH Plus; Group B, C-point with Smartpaste Bio; and Group C, gutta-percha with Guttaflow 2. The samples were split longitudinally into two halves and the microgap was observed under the scanning electron microscope in the apical 3 mm of the root canal. Group A (control) showed a mean difference of 8.54 compared to 5.76 in Group C. Group B showed the lowest mean difference of 0.83, suggesting that the hydrophilic system (C-point/Smartpaste Bio) produced the least microgap compared to the hydrophobic groups. The novel hydrophilic obturating system (C-point/Smartpaste Bio) showed a better seal and the least microgap compared to gutta-percha/Guttaflow 2 and gutta-percha/AH Plus, which showed gaps at the sealer-dentin interface due to lower penetration and bonding of these hydrophobic obturating systems.
Sensor-Topology Based Simplicial Complex Reconstruction from Mobile Laser Scanning
NASA Astrophysics Data System (ADS)
Guinard, S.; Vallet, B.
2018-05-01
We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds acquired by Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of self-connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and the collinearity of remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.
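A deliberately simplified sketch of the edge-filtering step is shown below: points are taken in sensor order, and a candidate edge is kept when it extends a locally collinear structure. The angular threshold, the toy scanline, and the omission of the perpendicularity test and triangle construction are all simplifying assumptions.

```python
# Minimal sketch (a simplification of the method above): filter candidate
# edges between sensor-adjacent MLS points, keeping those that extend a
# locally collinear structure. Threshold and points are illustrative.
import numpy as np

def collinear(p0, p1, p2, tol_deg=10.0):
    """True if edges p0->p1 and p1->p2 are within tol_deg of collinear."""
    u, v = p1 - p0, p2 - p1
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < tol_deg

# Hypothetical scanline of 3D points (sensor topology = point ordering).
pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0.01, 0], [0.9, 0.5, 0.2]])
edges = [(i, i + 1) for i in range(len(pts) - 1)]
kept = [(i, j) for (i, j) in edges
        if i == 0 or collinear(pts[i - 1], pts[i], pts[j])]
print(kept)   # the direction-breaking edge (2, 3) is dropped here
```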
Method for visualization and presentation of priceless old prints based on precise 3D scan
NASA Astrophysics Data System (ADS)
Bunsch, Eryk; Sitnik, Robert
2014-02-01
Graphic prints and manuscripts constitute a major part of the cultural heritage objects created by most known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes in external conditions (temperature, humidity). Today it is possible to use advanced digitization techniques for the documentation and visualization of such objects. When presentation of the original heritage object is impossible, a method is needed that allows documentation, and then presentation to the audience, of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy", consists of three series of woodcuts by Albrecht Dürer. The measurement system consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, especially the absence of ultraviolet or infrared radiation emission toward the measured object. Documentation of one page from the book requires about 380 directional measurements, which constitute about 3 billion sample points. The distance between points in the cloud is 20 μm, i.e., a measurement sampling density (MSD) of 2500 points per square millimetre, which makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of the software, which operates directly on point clouds, is the ability to freely manipulate the virtual light source.
Spiraling in Urban Streams: A Novel Approach to Link Geomorphic Structure with Ecosystem Function
NASA Astrophysics Data System (ADS)
Bean, R. A.; Lafrenz, M. D.
2011-12-01
The goal of this study is to quantify the relationship between channel complexity and nutrient spiraling along several reaches of an urbanized watershed in Portland, Oregon. Much research points to the effect urbanization has on watershed hydrology and nutrient loading at the watershed scale for catchments of various sizes. However, the flux of nutrients over short reaches within a stream channel has been less studied because of the effort and costs associated with fieldwork and subsequent laboratory analysis of both surface and hyporheic water samples. In this study we explore a novel approach to capturing connectivity through nutrient spiraling along several short reaches (less than 100 m) within the highly urbanized Fanno Creek watershed (4400 hectares). We measure channel complexity (sinuosity, bed material texture, organic matter) and use these measurements to determine spatial autocorrelation of 50 reaches in Fanno Creek, a small, urban watershed in Portland, Oregon. Using ion-selective electrodes, the fluxes of nitrate and ammonia are measured within each reach, which when combined with channel geometry and velocity measurements allow us to transform the values of nitrate and ammonia fluxes into spiraling metrics. Along each sampled reach, we collected three surface water samples to characterize nutrient amounts at the upstream, midstream, and downstream positions of the reach. Two additional water samples were taken from the left and right bank hyporheic zones at a depth just below the armor layer of the channel bed using mini-piezometers and a hand-pumped vacuum device, which we constructed for this purpose. Adjacent to the hyporheic samples, soil cores were collected and analyzed for organic matter composition, bulk density, and texture. We hypothesize that spiraling metrics will respond significantly to the measured channel complexity values and will be a more robust predictor of nutrient flux than land cover characteristics in the area draining to each reach. Initial results show significant differences in hyporheic and surface water concentrations within the same reach, indicating that sources and sinks of mineral nitrogen can be found within stream channels over very short distances. The implication of this study is that channel complexity is an important driver of nutrient flux in a watershed, and that this technique can be applied in future studies to better characterize the ecosystem services of stream channels from short reaches to entire catchments.
An integrated paper-based sample-to-answer biosensor for nucleic acid testing at the point of care.
Choi, Jane Ru; Hu, Jie; Tang, Ruihua; Gong, Yan; Feng, Shangsheng; Ren, Hui; Wen, Ting; Li, XiuJun; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng
2016-02-07
With advances in point-of-care testing (POCT), lateral flow assays (LFAs) have been explored for nucleic acid detection. However, biological samples generally contain complex compositions and low amounts of target nucleic acids, and currently require laborious off-chip nucleic acid extraction and amplification processes (e.g., tube-based extraction and polymerase chain reaction (PCR)) prior to detection. To the best of our knowledge, even though the integration of DNA extraction and amplification into a paper-based biosensor has been reported, a combination of LFA with the aforementioned steps for simple colorimetric readout has not yet been demonstrated. Here, we demonstrate for the first time an integrated paper-based biosensor incorporating nucleic acid extraction, amplification and visual detection or quantification using a smartphone. A handheld battery-powered heating device was specially developed for nucleic acid amplification in POC settings, which is coupled with this simple assay for rapid target detection. The biosensor can successfully detect Escherichia coli (as a model analyte) in spiked drinking water, milk, blood, and spinach with a detection limit as low as 10-1000 CFU mL(-1), and Streptococcus pneumoniae in clinical blood samples, highlighting its potential use in medical diagnostics, food safety analysis and environmental monitoring. Compared to the lengthy conventional assay, which requires more than 5 hours for the entire sample-to-answer process, our integrated biosensor takes about 1 hour. The integrated biosensor holds great potential for detection of various target analytes for wide applications in the near future.
Karreman, Matthia A.; Mercier, Luc; Schieber, Nicole L.; Shibue, Tsukasa; Schwab, Yannick; Goetz, Jacky G.
2014-01-01
Correlative microscopy combines the advantages of both light and electron microscopy to enable imaging of rare and transient events at high resolution. Performing correlative microscopy in complex and bulky samples such as an entire living organism is a time-consuming and error-prone task. Here, we investigate correlative methods that rely on the use of artificial and endogenous structural features of the sample as reference points for correlating intravital fluorescence microscopy and electron microscopy. To investigate tumor cell behavior in vivo with ultrastructural accuracy, a reliable approach is needed to retrieve single tumor cells imaged deep within the tissue. For this purpose, fluorescently labeled tumor cells were subcutaneously injected into a mouse ear and imaged using two-photon-excitation microscopy. Using near-infrared branding, the position of the imaged area within the sample was labeled at the skin level, allowing for its precise recollection. Following sample preparation for electron microscopy, concerted usage of the artificial branding and anatomical landmarks enables targeting and approaching the cells of interest while serial sectioning through the specimen. We describe here three procedures showing how three-dimensional (3D) mapping of structural features in the tissue can be exploited to accurately correlate between the two imaging modalities, without having to rely on the use of artificially introduced markers of the region of interest. The methods employed here facilitate the link between intravital and nanoscale imaging of invasive tumor cells, enabling correlating function to structure in the study of tumor invasion and metastasis. PMID:25479106
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a detectable difference of ±25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
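For readers who want to reproduce the flavor of this arithmetic, the sketch below applies a generic normal-approximation, two-group minimum-sample-size formula to the MSE values quoted in the abstract. This is a standard textbook calculation, not necessarily the authors' exact procedure, and the detectable differences are hypothetical.

```python
# Hedged sketch: a standard two-group minimum-sample-size formula for
# detecting a difference d between means given an ANOVA error variance
# (MSE). Generic calculation, not necessarily the authors' exact method.
from scipy.stats import norm

def n_per_group(mse, d, alpha=0.05, power=0.80):
    """n = 2 * (z_{1-alpha/2} + z_{power})^2 * MSE / d^2 per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z ** 2) * mse / d ** 2

for label, mse in [("total individuals", 9.28), ("species", 3.79)]:
    # hypothetical detectable differences of 1 and 2 counting units
    print(label, [round(n_per_group(mse, d)) for d in (1.0, 2.0)])
```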
Logic programming to infer complex RNA expression patterns from RNA-seq data.
Weirick, Tyler; Militello, Giuseppe; Ponomareva, Yuliya; John, David; Döring, Claudia; Dimmeler, Stefanie; Uchida, Shizuka
2018-03-01
To meet the increasing demand in the field, numerous long noncoding RNA (lncRNA) databases are available. Given many lncRNAs are specifically expressed in certain cell types and/or time-dependent manners, most lncRNA databases fall short of providing such profiles. We developed a strategy using logic programming to handle the complex organization of organs, their tissues and cell types as well as gender and developmental time points. To showcase this strategy, we introduce 'RenalDB' (http://renaldb.uni-frankfurt.de), a database providing expression profiles of RNAs in major organs focusing on kidney tissues and cells. RenalDB uses logic programming to describe complex anatomy, sample metadata and logical relationships defining expression, enrichment or specificity. We validated the content of RenalDB with biological experiments and functionally characterized two long intergenic noncoding RNAs: LOC440173 is important for cell growth or cell survival, whereas PAXIP1-AS1 is a regulator of cell death. We anticipate RenalDB will be used as a first step toward functional studies of lncRNAs in the kidney.
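A toy sketch of the logic-programming idea follows, written in Python rather than an actual logic language: anatomy is encoded as part-of facts, and organ-level expression is derived from cell-level facts by forward chaining. All names and rules here are invented for illustration and do not reproduce RenalDB's schema.

```python
# Toy sketch of the logic-programming idea described above: encode anatomy
# as part-of facts and derive organ-level expression from cell-level facts
# by forward chaining. Facts and names are invented for illustration.
part_of = {"podocyte": "glomerulus", "glomerulus": "kidney"}
expressed_in = {("lncRNA-X", "podocyte")}

def ancestors(unit):
    """Yield every anatomical unit that transitively contains 'unit'."""
    while unit in part_of:
        unit = part_of[unit]
        yield unit

# Rule: expressed_in(R, U) and part_of*(U, V)  =>  expressed_in(R, V)
derived = set(expressed_in)
for rna, unit in expressed_in:
    derived.update((rna, anc) for anc in ancestors(unit))

print(sorted(derived))   # lncRNA-X is inferred as expressed in kidney too
```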
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first and second derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
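A minimal sketch of such derivative features appears below. Summarizing each derivative by its extrema is an illustrative guess at the feature set, but the derivative computation itself matches the stated 2N-3 operation count.

```python
# Minimal sketch of derivative-based spike features: first and second
# discrete derivatives of the waveform, summarized by their extrema.
# The extrema-based summary is an illustrative guess at the feature set.
import numpy as np

def derivative_features(spike):
    d1 = np.diff(spike)    # first derivative, N-1 points
    d2 = np.diff(d1)       # second derivative, N-2 points
    # computing both costs (N-1) + (N-2) = 2N-3 subtractions, matching
    # the complexity stated in the abstract
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

spike = np.sin(np.linspace(0, np.pi, 48)) ** 3   # toy spike waveform
print(derivative_features(spike))
```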
Automated Protist Analysis of Complex Samples: Recent Investigations Using Motion and Thresholding
2012-01-01
Report No. CG-D-15-13, U.S. Coast Guard Research and Development Center (CG-926 R&DC), Chelsea Street, New London, CT 06320. B. Nelson, et al. Distribution Statement A: approved for public release; distribution is unlimited. January 2012.
Electrochemical treatment of pharmaceutical wastewater by combining anodic oxidation with ozonation.
Menapace, Hannes M; Diaz, Nicolas; Weiss, Stefan
2008-07-01
Wastewater effluents from sewage treatment plants (STP) are important point sources for residues of pharmaceuticals and complexing agents in the aquatic environment. For this reason a research project, which started in December 2006, was established to eliminate pharmaceutical substances and complexing agents found in wastewater as micropollutants. For the treatment process a combination of anodic oxidation by boron-doped diamond (BDD) electrodes and ozonation is examined and presented. For the ozone production a non-conventional, separate reactor was used, in which ozone was generated by electrolysis with diamond electrodes. For the determination of the achievable remediation rates, four complexing agents (e.g., EDTA, NTA) and eight pharmaceutical substances (e.g., diazepam, carbamazepine) were analyzed in several test runs under different conditions (varied flux, varied current density for the diamond electrode and the ozone-producing electrode of the ozone generator, different packing materials for the column in the ozone injection system). The flow rates of the treated water samples were varied from 3 L/h up to 26 L/h. For the anodic oxidation, the influence of the current density was examined in the range between 22.7 and 45.5 mA/cm(2); for the ozone-producing reactor, two current densities (1.8 A/cm(2) and 2.0 A/cm(2)) were tested. Matrix effects were investigated by test runs with samples from the effluent of an STP and synthetic wastewater. The impact of the organic material in the samples could therefore be determined by comparing the redox potential and the achievable elimination rates of the investigated substances. Comparing both technologies, anodic oxidation appears superior to ozonation in every investigated respect. With the anodic oxidation technology used, elimination rates of up to 99% were reached for the investigated pharmaceutical substances at a current density of 45.5 mA/cm(2) and a maximum sample flux of 26 L/h.
Basic principles and recent observations of rotationally sampled wind
NASA Technical Reports Server (NTRS)
Connell, James R.
1995-01-01
The concept of rotationally sampled wind speed is described. The unusual wind characteristics that result from rotationally sampling the wind are shown first for early measurements made using an 8-point ring of anemometers on a vertical plane array of meteorological towers. Quantitative characterization of the rotationally sampled wind is made in terms of the power spectral density function of the wind speed. Verification of the importance of the new concept is demonstrated with spectral analyses of the response of the MOD-OA blade flapwise root bending moment and the corresponding rotational analysis of the wind measured immediately upwind of the MOD-OA using a 12-point ring of anemometers on a 7-tower vertical plane array. The Pacific Northwest Laboratory (PNL) theory of the rotationally sampled wind speed power spectral density function is tested successfully against the wind spectrum measured at the MOD-OA vertical plane array. A single-tower empirical model of the rotationally sampled wind speed is also successfully tested against the measurements from the full vertical plane array. Rotational measurements of the wind velocity with hotfilm anemometers attached to rotating blades are shown to be accurate and practical for research on winds at the blades of wind turbines. Some measurements at the rotor blade of a MOD-2 turbine using the hotfilm technique in a pilot research program are shown. They are compared and contrasted to the expectations based upon application of the PNL theory of rotationally sampled wind to the MOD-2 size and rotation rate but without teeter, blade bending, or rotor induction accounted for. Finally, the importance of temperature layering and of wind modifications due to flow over complex terrain is demonstrated by the use of hotfilm anemometer data, and meteorological tower and acoustic doppler sounder data from the MOD-2 site at Goodnoe Hills, Washington.
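The essential character of rotational sampling can be reproduced in a few lines: sampling a sheared wind profile along a rotating blade-tip path concentrates spectral energy at the rotation rate and its harmonics. The sketch below is illustrative only; the shear profile, rotor rate, and gust term are arbitrary stand-ins, not the PNL model.

```python
# Illustrative sketch (not PNL's model): rotationally sampling a sheared
# wind profile along a blade-tip path and inspecting the resulting power
# spectral density. All numbers are arbitrary stand-ins.
import numpy as np
from scipy.signal import welch

fs, T = 20.0, 600.0                     # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
rotor_hz = 0.6                          # hypothetical rotor rotation rate
z = 30 + 15 * np.sin(2 * np.pi * rotor_hz * t)   # blade-tip height (m)

# Power-law shear profile plus one slow ambient gust component.
wind = 6.0 * (z / 30.0) ** 0.2 + 0.5 * np.sin(2 * np.pi * 0.05 * t)

f, pxx = welch(wind, fs=fs, nperseg=4096)
peak = f[np.argmax(pxx[1:]) + 1]        # skip the zero-frequency bin
print(f"dominant rotational peak near {peak:.2f} Hz (expect ~{rotor_hz} Hz)")
```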
1983-10-01
types such as the Alberta, Plainview, Scotts Bluff, Eden Valley and Hell Gap (Plano Complex). A private collector from Sheyenne, North Dakota--on the...Grafton) (Michlovic 1979). An apparently early type point of the Plano Complex (Alberta point) was found near the Manitoba community of Manitou (Pettipas...with the DL-S Burial Complex include miniature, smooth mortuary vessels, sometimes decorated with incised thunderbird designs and/or raised lizards or...
Exploring patient experiences of neo-adjuvant chemotherapy for breast cancer.
Beaver, Kinta; Williamson, Susan; Briggs, Jean
2016-02-01
Neo-adjuvant chemotherapy is recommended for 'inoperable' locally advanced and inflammatory breast cancers. For operable breast cancers, trials indicate no survival differences between chemotherapy given pre or post-surgery. Communicating evidence based information to patients is complex and studies examining patient experiences of neo-adjuvant chemotherapy are lacking. This study aims to explore the experiences of women who received neo-adjuvant chemotherapy for breast cancer. A qualitative approach using in-depth interviews with 20 women who had completed neo-adjuvant chemotherapy for breast cancer. Interview data were analysed using thematic analysis. The sample included a relatively young group of women, with caring responsibilities. Five main themes emerged: coping with the rapid transition from 'well' to 'ill', information needs and decision making, needing support and empathy, impact on family, and creating a new 'normal'. More support was needed towards the end of chemotherapy, when side effects were at their most toxic, and decisions about forthcoming surgery were being made. Some women were referred to psychological services, but usually when a crisis point had been reached. Information and support would have been beneficial at key time points. This information is vital in developing services and interventions to meet the complex needs of these patients and potentially prevent late referral to psychological services. Specialist oncology nurses are able to develop empathetic relationships with patients and have the experience, knowledge and skills to inform and support women experiencing neo-adjuvant chemotherapy. Targeting key time points and maintaining relationship throughout neo-adjuvant chemotherapy would be highly beneficial. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
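The sketch below conveys the multi-point infill idea in a generic form: candidate points are scored by a criterion that balances the predicted objective against metamodel uncertainty, and a distance penalty spreads the q selected points. Both the criterion and the penalty are illustrative stand-ins for the authors' formulation, which also handles constraints and design uncertainty.

```python
# Conceptual sketch only: a greedy multi-point infill step balancing the
# predicted objective against metamodel uncertainty. The criterion and the
# spreading penalty are illustrative, not the paper's formulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def multi_point_infill(gp, candidates, q=3, w=1.0):
    """Pick q infill points: low predicted value, high std, mutually spread."""
    mu, std = gp.predict(candidates, return_std=True)
    score = mu - w * std                 # lower is better (minimization)
    chosen = []
    for _ in range(q):
        i = int(np.argmin(score))
        chosen.append(candidates[i])
        # distance penalty keeps the q points from clustering together
        d = np.linalg.norm(candidates - candidates[i], axis=1)
        score = score + np.exp(-(d / d.mean()) ** 2)
        score[i] = np.inf
    return np.array(chosen)

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(8, 2))
y = (X ** 2).sum(axis=1)                 # toy objective function
gp = GaussianProcessRegressor().fit(X, y)
print(multi_point_infill(gp, rng.uniform(-2, 2, size=(200, 2))))
```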
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
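A software model of the core operation such an ALU performs, complex fixed-point multiplication with rounding and saturation, is sketched below. The Q1.23 number format and round-half-up mode are assumptions for the example; the abstract does not specify the ASIC's internal conventions.

```python
# Illustrative software model (not the ASIC itself): 24-bit fixed-point
# complex multiplication with rounding and saturation, the core operation
# of an FFT butterfly. Q1.23 format is an assumption for this example.
FRAC_BITS = 23                       # Q1.23: 1 sign bit, 23 fraction bits
Q_MAX, Q_MIN = 2**23 - 1, -(2**23)

def sat(x):
    """Clamp a scaled integer to the 24-bit two's-complement range."""
    return max(Q_MIN, min(Q_MAX, x))

def fx_mul(a, b):
    """Multiply two Q1.23 integers, rounding the 46-bit product."""
    return sat((a * b + (1 << (FRAC_BITS - 1))) >> FRAC_BITS)

def cmul(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi) in fixed point: 4 multiplies, 2 adds."""
    return (sat(fx_mul(ar, br) - fx_mul(ai, bi)),
            sat(fx_mul(ar, bi) + fx_mul(ai, br)))

def to_q(x):
    """Convert a float in [-1, 1) to Q1.23."""
    return sat(round(x * (1 << FRAC_BITS)))

re, im = cmul(to_q(0.5), to_q(0.25), to_q(0.70710678), to_q(-0.70710678))
print(re / (1 << FRAC_BITS), im / (1 << FRAC_BITS))  # ~0.5303, ~-0.1768
```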
Systems Thinking Tools as Applied to Community-Based Participatory Research: A Case Study
ERIC Educational Resources Information Center
BeLue, Rhonda; Carmack, Chakema; Myers, Kyle R.; Weinreb-Welch, Laurie; Lengerich, Eugene J.
2012-01-01
Community-based participatory research (CBPR) is being used increasingly to address health disparities and complex health issues. The authors propose that CBPR can benefit from a systems science framework to represent the complex and dynamic characteristics of a community and identify intervention points and potential "tipping points."…
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14–42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
Point of Injury Sampling Technology for Battlefield Molecular Diagnostics
2011-11-14
Injury" Sampling Technology for Battlefield Molecular Diagnostics November 14, 2011 Sponsored by Defense Advanced Research Projects Agency (DOD...Date of Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract...PHASE I FINAL REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-11-C-0222 (UNCLASSIFIED) P.I: Bernardo
Morar, Adriana; Sala, Claudia; Imre, Kálmán
2015-01-15
Reported human salmonellosis cases have increased in Romania. Antibiotic susceptibility testing of Salmonella strains isolated from pork and chicken meat indicates a worrying multidrug resistance pattern. This study aimed to investigate the occurrence of Salmonella and to evaluate the antibiotic resistance of Salmonella strains in a pig slaughterhouse-processing complex, which receives animals from 30% of the large industrialized swine farms in Romania. A total of 108 samples, including pork (n = 47), packaged pork products (n = 44), scald water sludge (n = 8), and detritus from the hair removal machine of the slaughterhouse (n = 9), were examined for the presence of Salmonella through standard methods. The antibiotic susceptibility of the isolated strains to 17 antibiotics was tested using the Vitek 2 system. Twenty-six (24.1%) samples were found to be Salmonella positive; this included 25.5% of meat samples and 15.9% of packaged products, as well as samples from two different points of the slaughter process (41.2%). Resistance was observed against tetracycline (61.5%), ampicillin (50%), piperacillin (50%), trimethoprim-sulfamethoxazole (34.6%), amoxicillin/clavulanic acid (26.9%), nitrofurantoin (23.1%), cefazolin (15.4%), piperacillin/tazobactam (7.7%), imipenem (3.8%), ciprofloxacin (3.8%), and norfloxacin (3.8%). No resistance towards cefoxitin, cefotaxime, ceftazidime, cefepime, amikacin, and gentamicin was found. Our study demonstrated the occurrence of multidrug-resistant Salmonella strains in the investigated pork production complex and highlighted it as a potential source of human infections. The results demonstrate the seriousness of antibiotic resistance of Salmonella in Romania, while providing useful insight for the treatment of human salmonellosis by specialists.
Explicit analytical tuning rules for digital PID controllers via the magnitude optimum criterion.
Papadopoulos, Konstantinos G; Yadav, Praveen K; Margaris, Nikolaos I
2017-09-01
Analytical tuning rules for digital PID type-I controllers are presented, regardless of the process complexity. This explicit solution allows control engineers 1) to examine accurately the effect of the controller's sampling time on the control loop's performance, both in the time and the frequency domain, 2) to decide when the control action has to be I or PI, and when the derivative D term has to be added or omitted, and 3) to apply this control action to a series of stable benchmark processes regardless of their complexity. These advantages are considered critical in industrial applications, since 1) the choice of the digital controller's sampling time is most often based on heuristics and past experience, 2) there is little a priori knowledge of the controlled process, making the choice of the type of controller a trial-and-error exercise, and 3) model parameters often change with the control loop's operating point, which makes retuning the controller's parameters a challenging issue. The basis of the proposed control law is the principle of PID tuning via the Magnitude Optimum criterion. The final control law incorporates the controller's sampling time T_s within the explicit solution for the controller's parameters. Finally, the potential of the proposed method is demonstrated by comparing its performance with conventional PID tuning when controlling the same process. Further investigation regarding the choice of the controller's sampling time T_s is also presented, and useful conclusions for control engineers are derived. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
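The role the sampling time T_s plays in a digital PID law can be seen in a generic velocity-form implementation. This is a minimal sketch with placeholder gains and a toy first-order plant, not the paper's Magnitude Optimum tuning rules.

```python
# Generic velocity-form digital PID; gains are placeholders, not the
# paper's Magnitude Optimum tuning rules.
def make_pid(Kp, Ki, Kd, Ts):
    state = {"u": 0.0, "e1": 0.0, "e2": 0.0}
    def step(e):
        # u[k] = u[k-1] + Kp*(e[k]-e[k-1]) + Ki*Ts*e[k] + Kd/Ts*(e[k]-2e[k-1]+e[k-2])
        du = (Kp * (e - state["e1"])
              + Ki * Ts * e
              + Kd / Ts * (e - 2 * state["e1"] + state["e2"]))
        state["u"] += du
        state["e2"], state["e1"] = state["e1"], e
        return state["u"]
    return step

# Toy closed loop: first-order plant x' = (-x + u)/tau, discretized at Ts.
Ts, tau, x, r = 0.05, 1.0, 0.0, 1.0
pid = make_pid(Kp=2.0, Ki=1.5, Kd=0.1, Ts=Ts)
for k in range(200):
    u = pid(r - x)
    x += Ts * (-x + u) / tau
print(round(x, 3))  # settles near the setpoint r = 1.0
```

Note how T_s multiplies the integral term and divides the derivative term: this is exactly where a poorly chosen sampling time degrades loop performance.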
NASA Astrophysics Data System (ADS)
Oba, Masaki; Miyabe, Masabumi; Akaoka, Katsuaki; Wakaida, Ikuo
2016-02-01
We used laser-induced fluorescence imaging with a varying beam focal point to observe ablation plumes from metal and oxide samples of gadolinium. The plumes expand vertically when the focal point is far from the sample surface. In contrast, the plume becomes hemispherical when the focal point is on the sample surface. In addition, the internal plume structure and the composition of the ablated atomic and ionic particles also vary significantly. The fluorescence intensity of a plume from a metal sample is greater than that from an oxide sample, which suggests that the number of monatomic species produced in each plume differs. For both the metal and oxide samples, the most intense fluorescence from atomic (ionic) species is observed with the beam focal point at 3-4 mm (2 mm) from the sample surface.
Roger, P-M; Tabutin, J; Blanc, V; Léotard, S; Brofferio, P; Léculé, F; Redréau, B; Bernard, E
2015-06-01
Care for patients with prosthetic joint infections (PJI) requires multidisciplinary collaboration, in particular for complex presentations. An audit of PJI care is therefore justified in using multidisciplinary criteria. We report an audit of hip and knee PJI, with emphasis on homogeneity of care, length of hospital stay (LOS), and mortality. Fifteen quality-of-care criteria were chosen: 5 diagnostic tools, 5 therapeutic aspects, and 5 multidisciplinary criteria. Among these, 6 were scored on a 20-point scale: surgical bacterial samples, surgical strategy, multidisciplinary discussion, antibiotic treatment, monitoring of antibiotic toxicity, and prevention of thrombosis. We included PJI diagnosed between 2010 and 2012 in 6 different hospitals. PJI were defined as complex in case of severe comorbid conditions, multidrug-resistant bacteria, or the need for more than one surgery. Eighty-two PJI were included, 70 of which were complex (85%); the median score was 15, with a significant difference among hospitals: from 9 to 17.5 points, P < 0.001. The median LOS was 17 days and was not related to the criterion score; 16% of the patients required intensive care and 13% died. The cure rate was 41%, 33% were lost to follow-up, and 13% experienced therapeutic failure. Cure was associated with a higher score than an unfavorable outcome in the univariate analysis (median [range]): 16 [9-18] vs. 13 [4-18], P = 0.002. Care for patients with PJI was heterogeneous, and our quality criteria correlated with outcome. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
de Vries, R.
2004-02-01
Electrostatic complexation of flexible polyanions with the whey proteins α-lactalbumin and β-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that α-lactalbumin complexes much more strongly than β-lactoglobulin. For α-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for β-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches.
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2×2 complex non-Hermitian random matrices.
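The nearest-neighbor case of such identities is easy to check numerically: after rescaling to unit mean, the nearest-neighbor distances of a 2D homogeneous Poisson process follow the GOE Wigner surmise P(s) = (π/2) s exp(-π s²/4). A quick simulation sketch (edge effects in the unit square are ignored; agreement is approximate):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((20000, 2))               # ~ homogeneous Poisson points, unit square
d = cKDTree(pts).query(pts, k=2)[0][:, 1]  # nearest-neighbor distances (skip self)
s = d / d.mean()                           # rescale to unit mean spacing

# GOE Wigner surmise CDF: F(s) = 1 - exp(-pi s^2 / 4)
for q in (0.5, 1.0, 1.5, 2.0):
    emp = (s <= q).mean()
    theory = 1.0 - np.exp(-np.pi * q**2 / 4)
    print(f"s={q}: empirical CDF {emp:.3f} vs surmise {theory:.3f}")
```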
NASA Astrophysics Data System (ADS)
Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.
2017-09-01
This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
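For orientation, correction factors of this kind enter the standard thin-sheet four-point-probe formula multiplicatively. A minimal sketch, using a placeholder correction value rather than one of the paper's tabulated factors:

```python
import math

def sheet_resistance(V, I, correction=1.0):
    """Ideal infinite-thin-sheet four-point-probe formula,
    R_s = (pi / ln 2) * V / I ~ 4.532 * V / I,
    scaled by a geometry/placement correction factor
    (1.0 corresponds to the ideal infinite sheet)."""
    return (math.pi / math.log(2)) * (V / I) * correction

# Placeholder correction factor for a finite, asymmetrically probed sample.
rs = sheet_resistance(V=1.2e-3, I=1.0e-3, correction=0.92)
print(f"{rs:.2f} ohm/sq; bulk resistivity = R_s * t for thickness t << probe spacing")
```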
Numerical simulation of magnetic interactions in polycrystalline YFeO3
NASA Astrophysics Data System (ADS)
Lima, E.; Martins, T. B.; Rechenberg, H. R.; Goya, G. F.; Cavelius, C.; Rapalaviciute, R.; Hao, S.; Mathur, S.
The magnetic behavior of polycrystalline yttrium orthoferrite was studied from the experimental and theoretical points of view. Magnetization measurements up to 170 kOe were carried out on a single-phase YFeO3 sample synthesized from heterobimetallic alkoxides. The complex interplay between weak-ferromagnetic and antiferromagnetic interactions, observed in the experimental M(H) curves, was successfully simulated by locally minimizing the magnetic energy of two interacting Fe sublattices. The resulting values of the exchange field (H_E = 5590 kOe), anisotropy field (H_A = 0.5 kOe) and Dzyaloshinsky-Moriya antisymmetric field (H_D = 149 kOe) are in good agreement with previous reports on this system.
METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS
Hough, P.V.C.
1962-12-18
This patent relates to a method and means for recognizing a complex pattern in a picture. The picture is divided into framelets, each framelet being sized so that any segment of the complex pattern therewithin is essentially a straight line. Each framelet is scanned to produce an electrical pulse for each point scanned on the segment therewithin. Each of the electrical pulses of each segment is then transformed into a separate straight line to form a plane transform in a pictorial display. Each line in the plane transform of a segment is positioned laterally so that a point on the line midway between the top and the bottom of the pictorial display occurs at a distance from the left edge of the pictorial display equal to the distance of the generating point in the segment from the left edge of the framelet. Each line in the plane transform of a segment is inclined in the pictorial display at an angle to the vertical whose tangent is proportional to the vertical displacement of the generating point in the segment from the center of the framelet. The coordinate position of the point of intersection of the lines in the pictorial display for each segment is determined and recorded. The sum total of said recorded coordinate positions being representative of the complex pattern. (AEC)
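This patent describes what is now known as the Hough transform. A minimal modern sketch of the idea, using the (rho, theta) normal-line parameterization that later practice adopted rather than the patent's framelet slope construction:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Accumulate votes in (rho, theta) space; each point votes for every
    line through it via rho = x*cos(theta) + y*sin(theta)."""
    pts = np.asarray(points, float)
    if rho_max is None:
        rho_max = np.hypot(*pts.max(0))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas

# Collinear points along y = x produce one dominant accumulator peak.
pts = [(i, i) for i in range(50)]
acc, thetas = hough_lines(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), np.degrees(thetas[t]))  # 50 votes near theta = 135 degrees
```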
Symmetric and Asymmetric Tendencies in Stable Complex Systems
Tan, James P. L.
2016-01-01
A commonly used approach to study stability in a complex system is by analyzing the Jacobian matrix at an equilibrium point of a dynamical system. The equilibrium point is stable if all eigenvalues have negative real parts. Here, by obtaining eigenvalue bounds of the Jacobian, we show that stable complex systems will favor mutualistic and competitive relationships that are asymmetrical (non-reciprocative) and trophic relationships that are symmetrical (reciprocative). Additionally, we define a measure called the interdependence diversity that quantifies how distributed the dependencies are between the dynamical variables in the system. We find that increasing interdependence diversity has a destabilizing effect on the equilibrium point, and the effect is greater for trophic relationships than for mutualistic and competitive relationships. These predictions are consistent with empirical observations in ecology. More importantly, our findings suggest stabilization algorithms that can apply very generally to a variety of complex systems. PMID:27545722
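The stability criterion used here reduces to the sign of the real parts of the Jacobian's eigenvalues. A minimal numerical check on a toy random community matrix (an illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Toy community matrix: self-regulation (-I) plus random interactions.
J = -np.eye(n) + 0.3 * rng.standard_normal((n, n))

eigs = np.linalg.eigvals(J)
print("stable:", bool(np.all(eigs.real < 0)),
      "| max Re(lambda) =", round(float(eigs.real.max()), 3))
```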
10. Photocopy of photograph (original photograph in possession of the ...
10. Photocopy of photograph (original photograph in possession of the Ralph M. Parsons Company, Los Angeles California). Photography by the United States Air Force, May 4, 1960. VIEW OF SOUTH FACE OF POINT ARGUELLO LAUNCH COMPLEX 1, PAD 1 (SLC-3) FROM TOP OF CONTROL CENTER (BLDG. 763). ATLAS D BOOSTER FOR THE FIRST SAMOS LAUNCH FROM POINT ARGUELLO LAUNCH COMPLEX 1 (SLC-3) ERECT IN THE SERVICE TOWER. - Vandenberg Air Force Base, Space Launch Complex 3, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
1983-10-01
possibly Midland (Folsom Complex), and a variety of point types such as the Alberta, Plainview, Scotts Bluff, Eden Valley and Hell Gap (Plano Complex). A...Red River Valley near Glyndon, Minnesota (south and slightly east of Grafton) (Michlovic 1979). An apparently early type point of the Plano Complex... incised thunderbird designs and/or raised lizards or salamanders; whelk shell (marine snail) masks/gorgets; "cigar holder-shaped" tubular pipes; and
’Point of Injury’ Sampling Technology for Battlefield Molecular Diagnostics
2012-03-17
Injury" Sampling Technology for Battlefield Molecular Diagnostics March 17,2012 Sponsored by Defense Advanced Research Projects Agency (DOD) Defense...Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract Expiration Date...SBIR PHASE I OPTION REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-1 l-C-0222 (UNCLASSIFIED) P.I
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Haspel, Nurit; Geisbrecht, Brian V; Lambris, John; Kavraki, Lydia
2010-03-01
We present a novel multi-level methodology to explore and characterize the low energy landscape and the thermodynamics of proteins. Traditional conformational search methods typically explore only a small portion of the conformational space of proteins and are hard to apply to large proteins due to the large amount of calculations required. In our multi-scale approach, we first provide an initial characterization of the equilibrium state ensemble of a protein using an efficient computational conformational sampling method. We then enrich the obtained ensemble by performing short Molecular Dynamics (MD) simulations on selected conformations from the ensembles as starting points. To facilitate the analysis of the results, we project the resulting conformations on a low-dimensional landscape to efficiently focus on important interactions and examine low energy regions. This methodology provides a more extensive sampling of the low energy landscape than an MD simulation starting from a single crystal structure as it explores multiple trajectories of the protein. This enables us to obtain a broader view of the dynamics of proteins and it can help in understanding complex binding, improving docking results and more. In this work, we apply the methodology to provide an extensive characterization of the bound complexes of the C3d fragment of human Complement component C3 and one of its powerful bacterial inhibitors, the inhibitory domain of Staphylococcus aureus extra-cellular fibrinogen-binding domain (Efb-C) and two of its mutants. We characterize several important interactions along the binding interface and define low free energy regions in the three complexes. Proteins 2010. (c) 2009 Wiley-Liss, Inc.
Andanda, P A
2008-03-01
There are complex unresolved ethical, legal and social issues related to the use of human tissues obtained in the course of research or diagnostic procedures and retained for further use in research. The question of intellectual property rights over commercially viable products or procedures that are derived from these samples and the suitability or otherwise of participants relinquishing their rights to the samples needs urgent attention. The complexity of these matters lies in the fact that the relationship between intellectual property rights and ownership or rights pertaining to the samples on which the intellectual property right is based may either be overlooked or taken for granted. What equally makes the matter complex is that samples may be obtained from participants in developing countries and exported to developed countries for analysis and research. It is important for research ethics committees to tread carefully when reviewing research protocols that raise such issues for purposes of ensuring that appropriate benefit sharing agreements, particularly with developing countries, are in place. This paper attempts to analyse the key questions related to ownership and intellectual property rights in commercially viable products derived from human tissue samples. Patent law is used as a point of reference as opposed to other forms of intellectual property rights such as industrial designs because it is the right that most inventors apply for in respect of human tissue-related inventions. The key questions are formulated following a systematic analysis of peer reviewed journal articles that have reported original investigations into relevant issues in this field. Most of the cases and reported studies that are referred to in this paper do not directly deal with HIV/AIDS research but the underlying principles are helpful in HIV/AIDS research as well. Pertinent questions, which members of ethics review committees should focus on in this regard are discussed and suggestions on appropriate approaches to the issues are proposed in the form of specific questions that an ethics review committee should consider. Specific recommendations regarding areas for further research and action are equally proposed.
[Cytotoxicity and genotoxicity of drinking water of two networks supplied by surface water].
Pellacani, Claudia; Branchi, Elisa; Buschini, Annamaria; Furlini, Mariangela; Poli, Paola; Rossi, Carlo
2005-01-01
We evaluated the cytotoxic and genotoxic load of drinking water in relation to the source of supply, the disinfection process, and the piping system. Two drinking water treatment/distribution networks were studied: the first (#1) located near the source and the second (#2) near the mouth of the river supplying the plants. Water samples were collected before (F) and after (A) the disinfection process and at two points (R1 and R2) of the piping system. The samples, concentrated on C18, were tested for DNA damage in human leukocytes by the Comet assay and for gene conversion, reversion and mitochondrial mutability in the Saccharomyces cerevisiae D7 strain. The approach used in this study is able to identify genotoxic compounds at low concentrations and to evaluate their antagonism/synergism in complex mixtures. Comet assay results show that raw water quality depends on the sampling point, suggesting that a high input of environmental pollutants occurs as the river flows; they also show that the disinfection process can either detoxify or enhance the biological activity of raw water depending on its quality, and that the piping system does not affect the cytotoxic/genotoxic load of tap water. The yeast tests indicate the presence of some disinfection by-products that act on mitochondrial DNA. The biological assays used in this study proved able to detect low concentrations of toxic/genotoxic compounds and to identify the sources of their origin/production.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
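The core discretization idea, sampling the Brownian trajectory of a log substitution rate at many points along a branch instead of summarizing it by a single branchwise average, can be sketched as below. This illustrates only the path sampling, under assumed parameter names, not the paper's MCMC resampling or integration machinery:

```python
import numpy as np

def branch_average_rate(log_r0, sigma, t, n_points, rng):
    """Sample a fine-grained Brownian path of log substitution rate along a
    branch of length t; return the path and its time-averaged rate."""
    dt = t / n_points
    increments = rng.normal(0.0, sigma * np.sqrt(dt), n_points)
    log_rate = log_r0 + np.cumsum(increments)
    return log_rate, np.exp(log_rate).mean()

rng = np.random.default_rng(42)
path, avg = branch_average_rate(log_r0=0.0, sigma=0.5, t=1.0, n_points=200, rng=rng)
print(round(avg, 4))  # branchwise average rate implied by this one trajectory
```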
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.
Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio
2016-02-29
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
Confocal multispot microscope for fast and deep imaging in semicleared tissues
NASA Astrophysics Data System (ADS)
Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio
2018-02-01
Although perfectly transparent specimens are imaged faster with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy, leveraging its robustness to scattering, albeit at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples such as brain tissue perfectly transparent are often complex, costly, and time intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope type geared toward the imaging of semicleared tissue, combining multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
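Simulating from an inhomogeneous Poisson process, the observation model assumed here, is commonly done by Lewis-Shedler thinning. A minimal one-dimensional sketch (a generic illustration, not the paper's inference code):

```python
import numpy as np

def sample_inhomogeneous_poisson(intensity, t_max, lam_max, rng):
    """Lewis-Shedler thinning: draw candidates from a homogeneous process at
    rate lam_max, keep each with probability intensity(t) / lam_max.
    Requires intensity(t) <= lam_max on [0, t_max]."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

rng = np.random.default_rng(7)
events = sample_inhomogeneous_poisson(lambda t: 5 + 4 * np.sin(t), 20.0, 9.0, rng)
print(len(events))  # expected count = integral of the intensity over [0, 20]
```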
Quantitative proteome analysis using isobaric peptide termini labeling (IPTL).
Arntzen, Magnus O; Koehler, Christian J; Treumann, Achim; Thiede, Bernd
2011-01-01
The quantitative comparison of proteome level changes across biological samples has become an essential feature in proteomics that remains challenging. We have recently introduced isobaric peptide termini labeling (IPTL), a novel strategy for isobaric quantification based on the derivatization of peptide termini with complementary isotopically labeled reagents. Unlike non-isobaric quantification methods, sample complexity at the MS level is not increased, providing improved sensitivity and protein coverage. The distinguishing feature of IPTL when comparing it to more established isobaric labeling methods (iTRAQ and TMT) is the presence of quantification signatures in all sequence-determining ions in MS/MS spectra, not only in the low mass reporter ion region. This makes IPTL a quantification method that is accessible to mass spectrometers with limited capabilities in the low mass range. Also, the presence of several quantification points in each MS/MS spectrum increases the robustness of the quantification procedure.
IFSA: a microfluidic chip-platform for frit-based immunoassay protocols
NASA Astrophysics Data System (ADS)
Hlawatsch, Nadine; Bangert, Michael; Miethe, Peter; Becker, Holger; Gärtner, Claudia
2013-03-01
Point-of-care diagnostics (POC) is one of the key application fields for lab-on-a-chip devices. While in recent years much of the work has concentrated on integrating complex molecular diagnostic assays onto a microfluidic device, there is a need to also put comparatively simple immunoassay-type protocols on a microfluidic platform. In this paper, we present the development of a microfluidic cartridge using an immunofiltration approach. In this method, the sandwich immunoassay takes place in a porous frit on which the antibodies have immobilized. The device is designed to be able to handle three samples in parallel and up to four analytical targets per sample. In order to meet the critical cost targets for the diagnostic market, the microfluidic chip has been designed and manufactured using high-volume manufacturing technologies in mind. Validation experiments show comparable sensitivities in comparison with conventional immunofiltration kits.
Exploring the self-compassion of health-care social workers: How do they fare?
Lianekhammy, Joann; Miller, J Jay; Lee, Jacquelyn; Pope, Natalie; Barnhart, Sheila; Grise-Owens, Erlene
2018-05-03
Indubitably, the challenges facing health-care social workers are becoming increasingly complex. Whilst these problematic professional circumstances compound the need for self-compassion among health-care social workers, few studies, if any, have explicitly examined self-compassion among this practitioner group. This cross-sectional study explored self-compassion among a sample of practitioners (N = 138) in one southeastern state. Results indicate that health-care social workers in this sample engage in self-compassion only moderately. Further, occupational and demographic/life characteristics (e.g., age, years practicing social work, average hours worked per week, health status, and relationship status, among others) are able to predict self-compassion scores. After a terse review of relevant literature, this paper will explicate findings from this study, discuss relevant points derived from said findings, and identify salient implication for health-care social work praxis.
[Lab-on-a-chip systems in the point-of-care diagnostics].
Szabó, Barnabás; Borbíró, András; Fürjes, Péter
2015-12-27
The need in modern medicine for near-patient diagnostics that can accelerate therapeutic decisions and possibly replace laboratory measurements is growing significantly. Reliable and cost-effective bioanalytical measurement systems are required which, acting as a micro-laboratory, contain integrated biomolecular recognition, sensing, signal processing and complex microfluidic sample preparation modules. These micro- and nanofabricated Lab-on-a-chip systems open new perspectives in the diagnostic supply chain, since they are capable even of quantitative, high-precision and immediate analysis of specific disease-related molecular markers, or combinations thereof, from a single drop of sample. Accordingly, crucial requirements for the instruments and the analytical methods are high selectivity, an extremely low detection limit, short response time, and integrability into healthcare information networks. All these features can shorten the hierarchical examination chain and revolutionize laboratory diagnostics, creating a brand new situation for therapeutic intervention.
NASA Astrophysics Data System (ADS)
Tabrizian, P.; Petrasova, A.; Baran, P.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2017-12-01
Viewshed modelling, the process of defining, parsing, and analyzing the structure of landscape visual space within GIS, has been commonly used in applications ranging from landscape planning and ecosystem services assessment to geography and archaeology. However, less effort has been made to understand whether and to what extent these objective analyses predict the actual on-the-ground perception of a human observer. Moreover, viewshed modelling at the human-scale level requires incorporation of fine-grained landscape structure (e.g., vegetation) and patterns (e.g., landcover) that are typically omitted from visibility calculations or unrealistically simulated, leading to significant error in predicting visual attributes. This poster illustrates how photorealistic Immersive Virtual Environments and high-resolution geospatial data can be used to integrate objective and subjective assessments of visual characteristics at the human-scale level. We performed viewshed modelling for a systematically sampled set of viewpoints (N=340) across an urban park using open-source GIS (GRASS GIS). For each point a binary viewshed was computed on a 3D surface model derived from high-density leaf-off LIDAR (QL2) points. The viewshed map was combined with high-resolution landcover (0.5 m) derived through fusion of orthoimagery, lidar vegetation, and vector data. Geostatistics and landscape structure analysis were performed to compute topological and compositional metrics for visual scale (e.g., openness), complexity (pattern, shape and object diversity), and naturalness. Based on the viewshed model output, a sample of 24 viewpoints representing the variation of visual characteristics was selected and geolocated. For each location, 360° imagery was captured using a DSLR camera mounted on a GigaPan robot. We programmed a virtual reality application through which human subjects (N=100) immersively experienced a random presentation of the selected environments via a head-mounted display (Oculus Rift CV1) and rated each location on perceived openness, naturalness and complexity. Regression models were used to correlate model outputs with participants' responses. The results indicated strong, significant correlations for openness and naturalness, and a moderate correlation for complexity.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wild, M.; Rouhani, S.
1995-02-01
A typical site investigation entails extensive sampling and monitoring. In the past, sampling plans have been designed on purely ad hoc bases, leading to significant expenditures and, in some cases, collection of redundant information. In many instances, sampling costs exceed the true worth of the collected data. The US Environmental Protection Agency (EPA) therefore has advocated the use of geostatistics to provide a logical framework for sampling and analysis of environmental data. Geostatistical methodology uses statistical techniques for the spatial analysis of a variety of earth-related data. The use of geostatistics was developed by the mining industry to estimate ore concentrations. The same procedure is effective in quantifying environmental contaminants in soils for risk assessments. Unlike classical statistical techniques, geostatistics offers procedures to incorporate the underlying spatial structure of the investigated field. Sample points spaced close together tend to be more similar than samples spaced further apart. This can guide sampling strategies and help determine complex contaminant distributions. Geostatistical techniques can be used to evaluate site conditions on the basis of regular, irregular, random and even spatially biased samples. In most environmental investigations, it is desirable to concentrate sampling in areas of known or suspected contamination. The rigorous mathematical procedures of geostatistics allow for accurate estimates at unsampled locations, potentially reducing sampling requirements. The use of geostatistics serves as a decision-aiding and planning tool and can significantly reduce short-term site assessment costs and long-term sampling and monitoring needs, as well as lead to more accurate and realistic remedial design criteria.
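The estimation step alluded to here is typically ordinary kriging. A bare-bones sketch with an assumed exponential covariance model and toy data, omitting variogram fitting and the kriging variance:

```python
import numpy as np

def ordinary_krige(xy, y, x0, sill=1.0, rng_=50.0):
    """Ordinary kriging with an exponential covariance C(h) = sill * exp(-h / range).
    Solves the standard kriging system with a Lagrange multiplier enforcing
    the unbiasedness constraint (weights sum to 1)."""
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-h / rng_)
    c0 = sill * np.exp(-np.linalg.norm(xy - x0, axis=1) / rng_)
    n = len(y)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = C; A[-1, -1] = 0.0
    b = np.append(c0, 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ y

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, (30, 2))                          # irregular sample locations
y = np.sin(xy[:, 0] / 20) + 0.1 * rng.standard_normal(30)  # toy concentration values
print(round(float(ordinary_krige(xy, y, np.array([50.0, 50.0]))), 3))
```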
2. INTERIOR VIEW OF ENTRY CONTROL POINT (BLDG. 768) FROM ...
2. INTERIOR VIEW OF ENTRY CONTROL POINT (BLDG. 768) FROM SOUTHWEST CORNER - Vandenberg Air Force Base, Space Launch Complex 3, Entry Control Point, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Robustness of critical points in a complex adaptive system: Effects of hedge behavior
NASA Astrophysics Data System (ADS)
Liang, Yuan; Huang, Ji-Ping
2013-08-01
In our recent papers, we have identified a class of phase transitions in the market-directed resource-allocation game, and found that there exists a critical point at which the phase transitions occur. The critical point is given by a certain resource ratio. Here, by performing computer simulations and theoretical analysis, we report that the critical point is robust against various kinds of human hedge behavior where the numbers of herds and contrarians can be varied widely. This means that the critical point can be independent of the total number of participants composed of normal agents, herds and contrarians, under some conditions. This finding means that the critical points we identified in this complex adaptive system (with adaptive agents) may also be an intensive quantity, similar to those revealed in traditional physical systems (with non-adaptive units).
Geochemical and physical drivers of microbial community structure in hot spring ecosystems
NASA Astrophysics Data System (ADS)
Havig, J. R.; Hamilton, T. L.; Boyd, E. S.; Meyer-Dombard, D. R.; Shock, E.
2012-12-01
Microbial communities in natural systems are typically characterized using samples collected from a single time point, thereby neglecting the temporal dynamics that characterize natural systems. The composition of these communities obtained from single point samples is then related to the geochemistry and physical parameters of the environment. Since most microbial life is adapted to a relatively narrow ecological niche (the multiplicity of physical and chemical parameters that characterize a local habitat), these assessments provide only modest insight into the controls on community composition. Temporal variation in temperature or geochemical composition would be expected to add another dimension to the complexity of the niche space available to support microbial diversity, with systems that experience greater variation supporting greater biodiversity up to the point where the variability becomes too extreme. Hot springs often exhibit significant temporal variation, both in physical and in chemical characteristics, as a result of subsurface processes including boiling, phase separation, and differential mixing of liquid and vapor phase constituents. These characteristics of geothermal systems, which vary significantly over short periods of time, provide ideal natural laboratories for investigating how i) the extent of microbial community biodiversity and ii) the composition of those communities are shaped by temporal fluctuations in geochemistry. Geochemical and molecular samples were collected from 17 temporally variable hot springs across Yellowstone National Park, Wyoming. Temperature measurements using data-logging thermocouples, allowing accurate determination of temperature maxima, minima, and ranges for each collection site, were collected in parallel, along with multiple geochemical characterizations as conditions varied. There were significant variations in temperature maxima (54.5 to 90.5°C), minima (12.5 to 82.5°C), and range (3.5 to 77.5°C) for the hot spring environments, which spanned a range of pH values (2.2 to 9.0) and geochemical compositions. We characterized the abundance, composition, and phylogenetic diversity of bacterial and archaeal 16S rRNA gene assemblages in sediment/biofilm samples collected from each site. 16S data can be used as a proxy for metabolic dissimilarity. We predict that temporally fluctuating environments should provide additional complexity to the system (additional niche space) capable of supporting additional taxa, which should lead to greater 16S rRNA gene diversity; however, systems with too much variability should see that diversity collapse. Thus, one would expect an optimal level of variability with respect to 16S phylogenetic diversity. Community ecology tools were then applied to model the relative influence of physical and chemical characteristics (including temperature dynamics) on local biodiversity. The results reveal unique insight into the role of temporal environmental variation in the development of biodiverse communities and provide a platform for predicting the response of an ecosystem to temperature perturbation.
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
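The importance-sampling step, reweighting draws from a tractable proposal so that they target the distribution of interest, is standard. A self-contained toy example with a standard-normal target and a wider normal proposal (not the paper's lensing model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[x^2] under the target p = N(0, 1) using draws from q = N(0, 2).
f = lambda x: x**2
x = rng.normal(0.0, 2.0, 100_000)                            # draws from proposal q
log_w = (-0.5 * x**2) - (-0.5 * (x / 2.0)**2 - np.log(2.0))  # log p - log q (constants cancel)
w = np.exp(log_w - log_w.max()); w /= w.sum()                # self-normalized weights
print(round(float(np.sum(w * f(x))), 3))                     # ~1.0 = Var of N(0, 1)
```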
NASA Astrophysics Data System (ADS)
Mataloni, G.; Garraza, G. González; Bölter, M.; Convey, P.; Fermani, P.
2010-08-01
Three mineral soil and four ornithogenic soil sites were sampled during summer 2006 at Cierva Point (Antarctic Peninsula) to study their bacterial, microalgal and faunal communities in relation to abiotic and biotic features. Soil moisture, pH, conductivity, organic matter and nutrient contents were consistently lower and more homogeneous in mineral soils. Ornithogenic soils supported larger and more variable bacterial abundances than mineral ones. Algal communities from mineral soils were more diverse than those from ornithogenic soils, although chlorophyll-a concentrations were significantly higher in the latter. This parameter and bacterial abundance were correlated with nutrient and organic matter contents. The meiofauna obtained from mineral soils was homogeneous, with one nematode species dominating all samples. The fauna of ornithogenic soils varied widely in composition and abundance. Tardigrades and rotifers dominated the meiofauna at eutrophic O2, where they supported a large population of the predatory nematode Coomansus gerlachei. At site O3, high bacterial abundance was consistent with high densities of the bacterivorous nematodes Plectus spp. This study provides evidence that Antarctic soils are complex and diverse systems, and suggests that biotic interactions (e.g. competition and predation) may have a stronger and more direct influence on community variability in space and time than previously thought.
Complete p-type activation in vertical-gradient freeze GaAs co-implanted with gallium and carbon
NASA Astrophysics Data System (ADS)
Horng, S. T.; Goorsky, M. S.
1996-03-01
High-resolution triple-axis x-ray diffractometry and Hall-effect measurements were used to characterize damage evolution and electrical activation in gallium arsenide co-implanted with gallium and carbon ions. Complete p-type activation of GaAs co-implanted with 5×10¹⁴ Ga cm⁻² and 5×10¹⁴ C cm⁻² was achieved after rapid thermal annealing at 1100 °C for 10 s. X-ray diffuse scattering was found to increase after rapid thermal annealing at 600-900 °C due to the aggregation of implantation-induced point defects. In this annealing range, there was ∼10%-72% activation. After annealing at higher temperatures, the diffuse scattered intensity decreased drastically; samples that had been annealed at 1000 °C (80% activated) and 1100 °C (∼100% activated) exhibited reciprocal space maps that were indicative of high crystallinity. The hole mobility was about 60 cm²/V·s for all samples annealed at 800 °C and above, indicating that crystal perfection influences dopant activation more strongly than it influences mobility. Since the high-temperature annealing simultaneously increases dopant activation and reduces x-ray diffuse scattering, we conclude that point defect complexes which form at lower annealing temperatures are responsible for both the diffuse scatter and the reduced activation.
Navigating complex sample analysis using national survey data.
Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo
2012-01-01
The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. The goal of national surveys is to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and to provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary data analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides the estimates of variability needed within small groups. Use of weights without complex samples procedures accurately estimates population means and frequencies from the sample after accounting for over- or undersampling of specific groups. Weighting alone, however, leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than from a sample in the data set. The SPSS Complex Samples procedure allows inclusion of all sampling design elements: stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. Sophisticated data analysis techniques optimize the use of these data.
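The design-based computation that such procedures perform can be sketched for the simplest case: a weighted mean with a Taylor-linearized "ultimate cluster" standard error under stratified one-stage cluster sampling. The data below are simulated, and the column names are made up; real NHANES analyses should use the released design variables and validated survey software.

```python
import numpy as np
import pandas as pd

def weighted_mean_se(df, y, w, stratum, psu):
    """Design-weighted mean with a Taylor-linearized (between-PSU)
    standard error for stratified one-stage cluster sampling."""
    W = df[w].sum()
    ybar = (df[w] * df[y]).sum() / W
    df = df.assign(_z=df[w] * (df[y] - ybar))          # linearized residuals
    z = df.groupby([stratum, psu])["_z"].sum().reset_index()
    var = 0.0
    for _, g in z.groupby(stratum):                    # between-PSU variance per stratum
        n_h = len(g)
        if n_h > 1:
            var += n_h / (n_h - 1) * ((g["_z"] - g["_z"].mean()) ** 2).sum()
    return ybar, np.sqrt(var) / W

rng = np.random.default_rng(5)
demo = pd.DataFrame({
    "strat": np.repeat([1, 2], 200),
    "psu": np.tile(np.repeat([1, 2], 100), 2),
    "wt": rng.uniform(0.5, 3.0, 400),
    "bmi": rng.normal(27, 5, 400)})
print(weighted_mean_se(demo, "bmi", "wt", "strat", "psu"))
```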
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with multiple scatterers at different scales, even when the components are close to each other.
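The matrix-multiplication structure of such an indicator can be illustrated with a discretized plane-wave test function g_z and a toy far-field matrix for a Born-type point scatterer. Both the scatterer model and the exact form of the indicator are assumptions for illustration; the paper's functional may differ, but it shares this matrix-product structure.

```python
import numpy as np

n, k, z0 = 64, 10.0, np.array([0.3, -0.2])
ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
dirs = np.stack([np.cos(ang), np.sin(ang)], axis=1)   # observation/incident directions

# Toy far-field matrix of a Born point scatterer at z0 (assumed model):
# u_inf(xhat, d) ~ exp(ik z0.(d - xhat))
U = np.exp(-1j * k * dirs @ z0)[:, None] * np.exp(1j * k * dirs @ z0)[None, :]

def indicator(z):
    g = np.exp(-1j * k * dirs @ z)    # discretized test function g_z
    return abs(np.conj(g) @ U @ g)    # |<F g_z, g_z>| up to a quadrature constant

# Peaks at the scatterer location and decays (Bessel-like) away from it.
print(round(indicator(z0), 1), ">", round(indicator(np.array([1.5, 1.5])), 1))
```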
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory in an algorithm that reduces the multiplicative computational complexity of creating digital holograms. An object is considered as a set of point sources, and the mathematical symmetry properties of both the kernel of the Fresnel integral and the image are exploited, with the image modeled using group theory. The algorithm has multiplicative complexity equal to zero and additive complexity (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
Characterization of Fe-leonardite complexes as novel natural iron fertilizers.
Kovács, Krisztina; Czech, Viktória; Fodor, Ferenc; Solti, Adam; Lucena, Juan J; Santos-Rosell, Sheila; Hernández-Apaolaza, Lourdes
2013-12-18
Water-soluble humic substances (denoted by LN) extracted at alkaline pH from leonardite are proposed to be used as complexing agents to overcome micronutrient deficiencies in plants such as iron chlorosis. LN presents oxidized functional groups that can bind Fe(2+) and Fe(3+). The knowledge of the environment of Fe in the Fe-LN complexes is a key point in the studies on their efficacy as Fe fertilizers. The aim of this work was to study the Fe(2+)/Fe(3+) species formed in Fe-LN complexes with (57)Fe Mössbauer spectroscopy under different experimental conditions in relation to the Fe-complexing capacities, chemical characteristics, and efficiency to provide iron in hydroponics. A high oxidation rate of Fe(2+) to Fe(3+) was found when samples were prepared with Fe(2+), although no well-crystalline magnetically ordered ferric oxide formation could be observed in slightly acidic or neutral media. It seems to be the case that the formation of Fe(3+)-LN compounds is favored over Fe(2+)-LN compounds, although at acidic pH no complex formation between Fe(3+) and LN occurred. The Fe(2+)/Fe(3+) speciation provided by the Mössbauer data showed that Fe(2+)-LN could be efficient in hydroponics while Fe(3+)-LN is suggested to be used more effectively under calcareous soil conditions. However, according to the biological assay, Fe(3+)-LN proved to be effective as a chlorosis corrector applied to iron-deficient cucumber in nutrient solution.
de Vries, R
2004-02-15
Electrostatic complexation of flexible polyanions with the whey proteins alpha-lactalbumin and beta-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that alpha-lactalbumin complexes much more strongly than beta-lactoglobulin. For alpha-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for beta-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches. Copyright 2004 American Institute of Physics
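A minimal sketch of the screened-Coulomb energy such models typically evaluate per Monte Carlo move; the Bjerrum length of water at room temperature is about 0.71 nm, while the inverse Debye length kappa and the configurations are illustrative assumptions, not the paper's parameters.

    import numpy as np

    def dh_energy(q, pos, lB=0.71, kappa=1.0):
        # q: charges in units of e; pos: coordinates in nm; returns energy in kT
        E = 0.0
        for i in range(len(q)):
            for j in range(i + 1, len(q)):
                r = np.linalg.norm(pos[i] - pos[j])
                E += lB * q[i] * q[j] * np.exp(-kappa * r) / r
        return E

A Metropolis step would then accept a trial configuration with probability min(1, exp(-(E_new - E_old))).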
Experimental study of the complex resistivity and dielectric constant of chrome-contaminated soil
NASA Astrophysics Data System (ADS)
Liu, Haorui; Yang, Heli; Yi, Fengyan
2016-08-01
Heavy metals such as arsenic and chromium often contaminate soils near industrialized areas. Soil samples prepared with different water contents and chromate pollutant concentrations are often needed to test soil quality. Because the complex resistivity and complex dielectric characteristics of these samples need to be measured, the relationship between these measurements and chromium concentration as well as water content was studied. Based on the soil sample observations, the amplitude of the sample complex resistivity decreased with increasing contamination concentration and water content. The phase of the complex resistivity tended to first decrease and then increase with increasing contamination concentration and water content. For soil samples with the same resistivity, a higher amplitude of complex resistivity indicated lower water content and higher contamination concentration. The real and imaginary parts of the complex dielectric constant increased with increasing contamination concentration and water content. Note that both resistivity and complex resistivity methods are necessary to adequately evaluate pollution at various sites.
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that had to be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection when setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
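The post-processing correction can be illustrated with a toy Monte Carlo estimate: under the simplifying assumption that history weights are set as the unbiased-to-biased density ratio at the sampled position, the mean history weight w̄ equals the ratio of the two acceptance probabilities of the cell-rejection step. This sketch is an assumption for illustration, not ADVANTG's stratified-sampling algorithm.

    import numpy as np

    def estimate_wbar(in_cell, sample_unbiased, sample_biased, n=100_000):
        # acceptance probability of cell rejection under each spatial sampling
        a_p = np.mean([in_cell(sample_unbiased()) for _ in range(n)])
        a_q = np.mean([in_cell(sample_biased()) for _ in range(n)])
        return a_p / a_q  # estimated mean history weight (w-bar)

Dividing tally results by the estimated w̄ would then restore a fair game in this simplified picture.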
Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker
2016-01-01
Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertz, P.R.
Fluorescence spectroscopy is a highly sensitive and selective tool for the analysis of complex systems. In order to investigate the efficacy of several steady state and dynamic techniques for the analysis of complex systems, this work focuses on two types of complex, multicomponent samples: petrolatums and coal liquids. These studies show that dynamic, fluorescence lifetime-based measurements provide enhanced discrimination between complex petrolatum samples. Additionally, improved quantitative analysis of multicomponent systems is demonstrated via the incorporation of organized media in coal liquid samples. This research provides the first systematic studies of (1) multifrequency phase-resolved fluorescence spectroscopy for dynamic fluorescence spectral fingerprinting of complex samples, and (2) the incorporation of bile salt micellar media to improve accuracy and sensitivity for the characterization of complex systems. In the petroleum studies, phase-resolved fluorescence spectroscopy is used to combine spectral and lifetime information through the measurement of phase-resolved fluorescence intensity. The intensity is collected as a function of excitation and emission wavelengths, angular modulation frequency, and detector phase angle. This multidimensional information enhances the ability to distinguish between complex samples with similar spectral characteristics. Examination of the eigenvalues and eigenvectors from factor analysis of phase-resolved and steady state excitation-emission matrices, using chemometric methods of data analysis, confirms that phase-resolved fluorescence techniques offer improved discrimination between complex samples as compared with conventional steady state methods.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims to solve the problems connected with generating finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
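The voxel idea can be sketched in a few lines: bin the cloud on a regular grid and let each occupied cell become a hexahedral element. The grid spacing h and the binning rule are assumptions for illustration; the paper's section-stacking logic is more elaborate.

    import numpy as np

    def voxelize(points, h):
        # points: (n, 3) array; returns integer indices of occupied voxels
        origin = points.min(axis=0)
        idx = np.floor((points - origin) / h).astype(int)
        return np.unique(idx, axis=0), origin

Each returned index triple maps to a voxel (hexahedral element) whose corner sits at origin + idx * h.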
A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision
NASA Astrophysics Data System (ADS)
Tsai, Yuan-Yu
2016-03-01
Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with advances in 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and that the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
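A minimal sketch of the sharing step as described: the integers encoding one point serve as the coefficients of a polynomial over GF(p), and participant x receives the evaluation at x. Parameter names are illustrative.

    def make_shares(coeffs, n, p):
        # coeffs: integers in [0, p-1] encoding one point of the secret geometry,
        # used directly as the sharing polynomial's coefficients
        return [(x, sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p)
                for x in range(1, n + 1)]

Recovering the point requires as many shares as there are coefficients, via polynomial interpolation modulo p.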
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples, with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Jianshan; Sheng Tianlu; Hu Shengmin
2012-08-15
Using the aminocarboxylate derivatives (S)-N-(4-cyanobenzoic)-glutamic acid (denoted cbg, 1a) and (S)-N-(4-nitrobenzoic)-glutamic acid (denoted nbg, 1b) as chiral ligands, five new homochiral coordination polymers formulated as [Cu(cbg)(H2O)2]n (3), [Cu(cbop)2(4,4′-bipy)(H2O)]n (4) (cbop = (S)-N-(4-cyanobenzoic)-5-oxoproline, 4,4′-bipy = 4,4′-bipyridine), {[Cu(nbop)2(4,4′-bipy)]·4H2O}n (5) (nbop = (S)-N-(4-nitrobenzoic)-5-oxoproline), {[Cd(nbop)2(4,4′-bipy)]·2H2O}n (6), and [Ni(nbop)2(4,4′-bipy)(H2O)2]n (7) have been hydrothermally synthesized and structurally characterized. Single-crystal X-ray diffraction reveals that the original chirality of the aminocarboxylate derivatives is maintained in all these complexes. Complexes 3, 4, and 7 are one-dimensional infinite-chain coordination polymers, while complexes 5 and 6 possess two-dimensional network structures. In situ cyclization of 1a and 1b took place during the formation of complexes 4-7, which may be due to the competition of 4,4′-bipyridine with the chiral ligands during the coordination process. A preliminary investigation of the optical behavior indicates that ligands 1a and 1b and complexes 6 and 7 are nonlinear-optically active. Graphical abstract: Using aminocarboxylate derivatives as chiral ligands, five new homochiral coordination polymers possessing second harmonic generation activity have been hydrothermally synthesized. Highlights: Two new chiral aminocarboxylate derivatives were synthesized for the first time. Five new homochiral metal-organic complexes were obtained hydrothermally from these ligands. Intramolecular amidation took place on the aminocarboxylate derivatives during the formation of these complexes. The in situ amidation may be due to the impact of 4,4′-bipyridine. The homochiral complexes are nonlinear-optically active.
Lehmann, Sylvain; Hoofnagle, Andrew; Hochstrasser, Denis; Brede, Cato; Glueckmann, Matthias; Cocho, José A; Ceglarek, Uta; Lenz, Christof; Vialaret, Jérôme; Scherl, Alexander; Hirtz, Christophe
2013-05-01
Proteomics studies typically aim to exhaustively detect peptides/proteins in a given biological sample. Over the past decade, the number of publications using proteomics methodologies has exploded. This was made possible by the availability of high-quality genomic data and many technological advances in the fields of microfluidics and mass spectrometry. Proteomics in biomedical research was initially used in 'functional' studies for the identification of proteins involved in pathophysiological processes, complexes, and networks. Improved sensitivity of instrumentation facilitated the analysis of even more complex sample types, including human biological fluids. It was at that point that the field of clinical proteomics was born, with the fundamental aim of discovering and (ideally) validating biomarkers for the diagnosis, prognosis, or therapeutic monitoring of disease. Eventually, it was recognized that the technologies used in clinical proteomics studies [particularly liquid chromatography-tandem mass spectrometry (LC-MS/MS)] could represent an alternative to classical immunochemical assays. Before MS can be deployed for the measurement of peptides/proteins in the clinical laboratory, traditional proteomics workflows and data management systems will likely need to adapt to the clinical environment and meet in vitro diagnostic (IVD) regulatory constraints. This defines a new field, as reviewed in this article, that we have termed quantitative Clinical Chemistry Proteomics (qCCP).
The Jeanie Point complex revisited
Dumoulin, Julie A.; Miller, Martha L.
1984-01-01
The so-called Jeanie Point complex is a distinctive package of rocks within the Orca Group, a Tertiary turbidite sequence. The rocks crop out on the southeast coast of Montague Island, Prince William Sound, approximately 3 km northeast of Jeanie Point (loc. 7, fig. 44). These rocks consist dominantly of fine-grained limestone and lesser amounts of siliceous limestone, chert, tuff, mudstone, argillite, and sandstone (fig. 47). The Jeanie Point rocks also differ from those typical of the Orca Group in their fold style: the Orca Group of the area is isoclinally folded on a large scale (tens to hundreds of meters), whereas the Jeanie Point rocks are tightly folded on a 1- to 3-m wavelength scale (differences in rock competency may be responsible for this variation in fold style).
Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.
Power, Stephanie M; Matic, Damir B
2013-03-01
Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represents average outcomes. Secondary objectives are to determine whether outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of the expected 95% confidence intervals around a mean, tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
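The sample-size logic reduces to the usual normal-approximation confidence interval: the expected 95% half-width around a mean rating shrinks as 1.96 × sd / sqrt(n). The rater standard deviation below is an assumed illustrative value, not the study's figure.

    import numpy as np

    def ci_half_width(sd, n):
        # expected 95% CI half-width around the mean of n consecutive cases
        return 1.96 * sd / np.sqrt(n)

    # smallest n whose half-width is within +/- 0.5 point on a 10-point scale,
    # for an assumed rater SD of 1.3 points
    n = next(n for n in range(1, 200) if ci_half_width(1.3, n) <= 0.5)

With this assumed SD the threshold lands in the mid-twenties, the same order as the 27 consecutive cases reported above.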
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forward by the authors. 406 elevation points and 14 enclosed constraint lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constraint lines were used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs were converted to grid-based DEMs (G-DEMs) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method visualizes the terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.
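Bilinear interpolation, used here to resample the CD-TINs onto the grid DEM, combines the four surrounding elevations with weights given by the fractional offsets within a cell; this is the standard formula rather than anything specific to TEIVM.

    def bilinear(z00, z10, z01, z11, fx, fy):
        # (fx, fy) in [0, 1]: fractional offsets from the cell's lower-left corner
        return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy)
                + z01 * (1 - fx) * fy + z11 * fx * fy)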
NASA Astrophysics Data System (ADS)
Yu, Xiaojun; Liu, Xinyu; Chen, Si; Wang, Xianghong; Liu, Linbo
2016-03-01
High-resolution optical coherence tomography (OCT) is of critical importance to disease diagnosis because it is capable of providing detailed microstructural information about biological tissues. However, a compromise usually has to be made between its spatial resolution and sensitivity due to the suboptimal spectral response of the system components, such as the linear camera, the dispersion grating, and the focusing lenses. In this study, we demonstrate an OCT system that achieves both high spatial resolution and enhanced sensitivity by utilizing a spectrally encoded source. The system achieves a lateral resolution of 3.1 μm and an axial resolution of 2.3 μm in air; with a simple dispersive prism placed in the infinity space of the sample arm optics, the illumination beam on the sample is transformed into a line source with a visual angle of 10.3 mrad. Such an extended source allows a ~4 times larger maximum permissible exposure (MPE) than its point source counterpart, which improves the system sensitivity by ~6 dB. In addition, the dispersive prism can be conveniently switched to a reflector. Such flexibility helps increase the penetration depth of the system without increasing the complexity of current point source devices. We conducted experiments to characterize the system's imaging capability using the human fingertip in vivo and the swine eye optic nerve disc ex vivo. The higher penetration depth of such a system over the conventional point source OCT system is also demonstrated in these two tissues.
NASA Astrophysics Data System (ADS)
Goodall, H.; Gregory, L. C.; Wedmore, L.; Roberts, G.; Shanks, R. P.; McCaffrey, K. J. W.; Amey, R.; Hooper, A. J.
2017-12-01
The cosmogenic isotope chlorine-36 (36Cl) is increasingly used as a tool to investigate normal fault slip rates over the last 10-20 thousand years. These slip histories are being used to address complex questions, including investigating slip clustering and understanding local and large-scale fault interaction. Measurements are time-consuming and expensive, and as a result little work has been done validating these 36Cl-derived slip histories. This study aims to investigate whether the results are repeatable and therefore provide reliable estimates of how normal faults have been moving in the past. Our approach is to test whether slip histories derived from 36Cl are the same when measured at different points along the same fault. As normal fault planes are progressively exhumed from the surface, they accumulate 36Cl. Modelling these 36Cl concentrations allows estimation of a slip history. In a previous study, samples were collected from four sites on the Magnola fault in the Italian Apennines. Remodelling of the 36Cl data using a Bayesian approach shows that the sites produced disparate slip histories, which we interpret as being due to variable site geomorphology. In this study, multiple sites have been sampled along the Campo Felice fault in the central Italian Apennines. Initial results show strong agreement between the sites we have processed so far and a previous study. This indicates that if sample sites are selected taking the geomorphology into account, then 36Cl-derived slip histories will be highly similar when sampled at any point along the fault. Our study therefore suggests that 36Cl-derived slip histories are a consistent record of past fault activity.
New constraints on the thermal history of the Miocene Jarando basin (Southern Serbia)
NASA Astrophysics Data System (ADS)
Andrić, Nevena; Životić, Dragana; Fügenschuh, Bernhard; Cvetković, Vladica
2013-04-01
The Jarando basin, located in the internal Dinarides, formed in the course of the Miocene extension affecting the whole Alpine-Carpathian-Dinaride system (Schmid et al., 2008). In the study area, Miocene extension led to the formation of a core complex in the Kopaonik area (Schefer et al., 2011), with the Jarando basin located in the hanging wall of the detachment fault. The Jarando basin is characterized by the presence of bituminous coals, whereas in the other intramontane basins in Serbia coalification did not exceed the subbituminous stage within the same stratigraphic level. Furthermore, the basin hosts boron mineralizations (borates and howlite) and a magnesite deposit, which again implies elevated temperatures. This thermal overprint is possibly due to post-magmatic activity related to the emplacement of the Oligocene I-type Kopaonik and Miocene S-type Polumir granitoids (Schefer et al., 2011). This research project is aimed at providing new information about the thermal history of the Jarando basin. Fifteen core samples from three boreholes and 10 samples from the surrounding outcrops were processed for apatite fission-track analysis. Additionally, vitrinite reflectance (VR) was measured for 11 core samples of shales from one borehole and 5 samples of coal from an underground mine. VR data for the Early to Middle Miocene sediments reveal a strong post-depositional overprint. Values increase with depth from 0.66-0.79% to 0.83-0.90%. Thus the organic matter reached the bituminous stage and experienced temperatures of around 110-120°C (Barker and Pawlewicz, 1994). Fission-track single-grain ages for apatite scatter between 45 and 10 Ma, with a general trend towards younger ages with depth. Both the spread in single-grain ages and the bimodal track-length distribution clearly point to partial annealing of the detrital apatites. With the temperatures given by the VR values, the partial annealing points to a rather short-lived thermal event. This is supported by thermal modelling of our fission-track data, which indicates maximum temperatures of <120°C around 15-12 Ma. We correlate the thermal event with the extension and core-complex formation followed by the syn-extensional intrusion of the Polumir granite. Later cooling from 10 Ma onwards is related to basin inversion and erosion.
Social complexity beliefs predict posttraumatic growth in survivors of a natural disaster.
Nalipay, Ma Jenina N; Bernardo, Allan B I; Mordeno, Imelu G
2016-09-01
Most studies on posttraumatic growth (PTG) have focused on personal characteristics, interpersonal resources, and the immediate environment. There has been less attention to the dynamic internal processes related to the development of PTG and to how these processes are affected by the broader culture. Calhoun and Tedeschi's (2006) model suggests a role for distal culture in PTG development, but empirical investigations on that point are limited. The present study investigated the role of social complexity, the generalized belief about changing social environments and the inconsistency of human behavior, as a predictor of PTG. Social complexity was hypothesized to be associated with problem-solving approaches that are likely to give rise to cognitive processes that promote PTG. A sample of 446 survivors of Typhoon Haiyan, one of the strongest typhoons ever recorded at the time, answered self-report measures of social complexity, cognitive processing of trauma, and PTG. Structural equation modeling indicated a good fit between the data and the hypothesized model; belief in social complexity predicted stronger PTG, mediated by cognitive processing. The results provide evidence for how disaster survivors' beliefs about the changing nature of social environments and their corresponding behavior changes predict PTG, and they suggest a psychological mechanism by which distal culture can influence PTG. Thus, assessing social complexity beliefs during the early phases of a postdisaster psychosocial intervention may provide useful information on who is likely to experience PTG. Trauma workers might consider culture-specific social themes related to social complexity in disaster-affected communities. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Correction for slope in point and transect relascope sampling of downed coarse woody debris
Goran Stahl; Anna Ringvall; Jeffrey H. Gove; Mark J. Ducey
2002-01-01
In this article, the effect of sloping terrain on estimates in point and transect relascope sampling (PRS and TRS, respectively) is studied. With these inventory methods, a wide angle relascope is used either from sample points (PRS) or along survey lines (TRS). Characteristics associated with line-shaped objects on the ground are assessed, e.g., the length or volume...
NASA Astrophysics Data System (ADS)
Chen, W.; Simonetti, A.
2012-12-01
A detailed radiometric investigation is currently underway focusing on the U-bearing accessory minerals apatite, perovskite, and niocalite from the Oka Carbonatite Complex (Canada). One of the main objectives is to obtain a comparative chronology of melt crystallization for the complex. Unlike other minerals commonly adopted for in-situ dating investigations (e.g., zircon, monazite), apatite, perovskite, and niocalite contain relatively high contents of common Pb. Hence, careful assessment of the proportion and composition of the common Pb, and use of appropriate matrix-matched external standards, are imperative. The Madagascar apatite was utilized as the external standard for apatite dating, and the Emerald Lake and Durango apatites were adopted as secondary standards; the latter yield ages of 92.6 ± 1.8 and 32.2 ± 1.1 Ma, respectively, identical to their accepted ages. Pb/U ages for apatite from Oka were obtained for different rock types, including 8 carbonatites, 4 okaites, 3 ijolites, and 3 alnoites; these define a range of ages between ~105 and ~135 Ma, suggesting a protracted crystallization history. In total, 266 individual analyses define two peaks at ~115 and ~125 Ma. For perovskite dating, the Ice River perovskite was utilized as the external standard. The perovskites from one okaite sample yield an age of 112.2 ± 1.9 Ma, which is much younger than the previously reported U-Pb perovskite age of 131 ± 7 Ma. Hence, the combined U-Pb perovskite ages also suggest a rather prolonged period of melt crystallization. Niocalite is a rare accessory silicate mineral that occurs within the carbonatites at Oka. The international zircon standard BR266 was selected as the external standard, and rastering was employed to minimize Pb-U fractionation. Two niocalite samples give young ages of 110.6 ± 1.2 and 115.0 ± 1.9 Ma, identical (within the associated uncertainties) to the apatite ages from the same samples. The niocalite from carbonatite sample Oka153 defines a bimodal age distribution, with weighted average 206Pb/238U ages of 110.1 ± 5.0 and 133.2 ± 6.1 Ma. Apatite from the same sample records a similar bimodal age distribution of 111.4 ± 2.8 and 126.9 ± 1.8 Ma. The combined in-situ U-Pb dating results for apatite, perovskite, and niocalite from Oka clearly support a protracted history of magmatic activity (~30 Myr) for this carbonatite complex. Importantly, the U-Pb results from this study indicate the value of conducting a thorough geochronological investigation rather than defining the age of an alkaline complex solely on the basis of a single radiometric age determination.
Therapeutic drug monitoring of flucytosine in serum using a SERS-active membrane system
NASA Astrophysics Data System (ADS)
Berger, Adam G.; White, Ian M.
2017-02-01
A need exists for near real-time therapeutic drug monitoring (TDM), in particular of antibiotics and antifungals in patient samples at the point of care. To truly fit the point-of-care need, techniques must be rapid and easy to use. Here we report a membrane system utilizing inkjet-fabricated surface enhanced Raman spectroscopy (SERS) sensors that allows sensitive and specific analysis while eliminating sophisticated chromatography equipment, expensive analytical instruments, and other systems relegated to the central lab. We utilize inkjet-fabricated paper SERS sensors as substrates for flucytosine (5FC) detection; the use of paper-based SERS substrates leverages the natural wicking ability and filtering properties of microporous membranes. We investigate the use of microporous membranes in a vertical flow assay to allow separation of the flucytosine from whole blood. The passive vertical flow assay serves as a valuable method for physical separation of target analytes from complex biological matrices. This work further establishes a platform for easy, sensitive, and specific TDM of 5FC from whole blood.
NASA Technical Reports Server (NTRS)
Boriakoff, Valentin
1994-01-01
The goal of this project was a feasibility study of a particular architecture for a digital signal processing machine operating in real time that could compute, in a pipelined fashion, the fast Fourier transform (FFT) of a time-domain-sampled complex digital data stream. The architecture makes use of simple identical processors (called inner-product processors) in a linear organization called a systolic array. Through computer simulation, the new architecture for computing the FFT with systolic arrays was proved viable, computing the FFT correctly and with the predicted characteristics of operation. Integrated circuits to perform the operations expected of the vital node of the systolic architecture were proven feasible and, even with 2-micron VLSI technology, can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).
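A behavioral model of the inner-product organization helps make the pipeline concrete: each cell holds one row of the transform matrix and performs one multiply-accumulate per incoming sample. The sketch below simulates that dataflow for a plain DFT and is an illustrative assumption, not the project's actual FFT pipeline design.

    import numpy as np

    def pipelined_dft(x):
        N = len(x)
        W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
        y = np.zeros(N, dtype=complex)
        for t, s in enumerate(x):   # one systolic beat per input sample
            y += W[:, t] * s        # every cell does one multiply-add
        return y

After N beats, cell j holds the j-th output coefficient, so results stream out at the input rate, which is the property a real-time pipeline needs.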
Eta Carinae: Viewed from Multiple Vantage Points
NASA Technical Reports Server (NTRS)
Gull, Theodore
2007-01-01
The central source of Eta Carinae and its ejecta is a massive binary system buried within a massive interacting wind structure that envelops the two stars. However, the hot, less massive companion blows a small cavity in the very massive primary wind and ionizes a portion of the massive wind just beyond the wind-wind boundary. We gain insight into this complex structure by examining the spatially resolved Space Telescope Imaging Spectrograph (STIS) spectra of the central source (0.1") together with the wind structure, which extends out to nearly an arcsecond (2300 AU), the wind-blown boundaries, and the ejecta of the Little Homunculus. Moreover, the spatially resolved Very Large Telescope/UltraViolet Echelle Spectrograph (VLT/UVES) stellar spectrum (one arcsecond) and spatially sampled spectra across the foreground lobe of the Homunculus provide vantage points from different angles relative to the line of sight. Examples of wind line profiles of Fe II and the highly excited [Fe III], [Ne III], [Ar III], and [S III] lines will be presented.
A marked point process approach for identifying neural correlates of tics in Tourette Syndrome.
Loza, Carlos A; Shute, Jonathan B; Principe, Jose C; Okun, Michael S; Gunduz, Aysegul
2017-07-01
We propose a novel interpretation of local field potentials (LFP) based on a marked point process (MPP) framework that models relevant neuromodulations as shifted, weighted versions of prototypical temporal patterns. In particular, the MPP samples are categorized according to the well-known oscillatory rhythms of the brain in an effort to elucidate spectrally specific behavioral correlates. The result is a transient model for LFP. We exploit data-driven techniques to fully estimate the model parameters, with the added feature of exceptional temporal resolution of the resulting events. We utilize the learned features in the alpha and beta bands to assess correlations to tic events in patients with Tourette Syndrome (TS). The final results show stronger coupling between the tic marks and LFP recorded from the centromedian-parafascicular complex of the thalamus than between the tic marks and electrocorticogram (ECoG) recordings from the hand area of the primary motor cortex (M1), in terms of the area under the receiver operating characteristic (ROC) curve (AUC).
Generation of digitized microfluidic filling flow by vent control.
Yoon, Junghyo; Lee, Eundoo; Kim, Jaehoon; Han, Sewoon; Chung, Seok
2017-06-15
Quantitative microfluidic point-of-care testing has been translated into clinical applications to support prompt decisions on patient treatment. A nanointerstice-driven filling technique has been developed to realize fast and robust filling of microfluidic channels with liquid samples, but it has failed to provide a consistent filling time owing to the wide variation in liquid viscosity, resulting in increased quantification errors. There is a strong demand for simple and quick flow control to ensure accurate quantification, without a serious increase in system complexity. A new control mechanism employing two-beam refraction and one solenoid valve was developed and found to successfully generate digitized filling flow, completely free from errors due to changes in viscosity. The validity of the digitized filling flow was evaluated by immunoassay using liquids with a wide range of viscosities. This digitized microfluidic filling flow is a novel approach that could be applied in conventional microfluidic point-of-care testing. Copyright © 2016 Elsevier B.V. All rights reserved.
Interpolation of longitudinal shape and image data via optimal mass transport
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen
2014-03-01
Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. The ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes or the creation of more samples to improve statistical analysis. In this work, we model the interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition between two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
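In one dimension, the mass-preserving (displacement) interpolation underlying OMT has a closed form: match cumulative mass between the two densities and move each bin a fraction t of the way along the resulting Monge map. This sketch illustrates that special case only; the paper's 2D/3D image setting requires the full OMT machinery.

    import numpy as np

    def omt_interp_1d(p, q, x, t):
        # p, q: nonnegative densities on grid x; 0 <= t <= 1
        Fp = np.cumsum(p) / p.sum()
        Fq = np.cumsum(q) / q.sum()
        T = np.interp(Fp, Fq, x)            # Monge map: mass at x goes to T(x)
        xt = (1 - t) * x + t * T            # partial transport at time t
        rho, _ = np.histogram(xt, bins=len(x), range=(x[0], x[-1]), weights=p)
        return rho * (p.sum() / rho.sum())  # total mass preserved

At t = 0 this returns (a re-binned copy of) p, at t = 1 an approximation of q, and intermediate t gives the mass-preserving transition.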
NASA Astrophysics Data System (ADS)
Sgambitterra, Emanuele; Piccininni, Antonio; Guglielmi, Pasquale; Ambrogio, Giuseppina; Fragomeni, Gionata; Villa, Tomaso; Palumbo, Gianfranco
2018-05-01
Cranial implants are custom prostheses characterized by quite high geometrical complexity and small thickness; at the same time, aesthetic and mechanical requirements have to be met. Titanium alloys are widely adopted for such prostheses, as they can be processed via different manufacturing technologies. In the present work, cranial prostheses were manufactured by Super Plastic Forming (SPF) and Single Point Incremental Forming (SPIF). In order to assess the mechanical performance of the cranial prostheses, drop tests under different load conditions were conducted on flat samples to investigate the effect of blank thickness. Numerical simulations were also run for comparison purposes. The mechanical performance of the cranial implants manufactured by SPF and SPIF could be predicted using the drop test data and information about the thickness evolution of the formed parts: the SPIFed prosthesis showed a lower maximum deflection and a higher maximum force, while the SPFed prostheses showed a lower absorbed energy.
A new paper-based platform technology for point-of-care diagnostics.
Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad
2014-10-21
Currently, lateral flow immunoassays (LFIAs) are not able to perform complex multi-step immunodetection tests because of their inability to introduce multiple reagents to the detection area autonomously and in a controlled manner. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure forming the fluidic network. Unlike existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device that autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase based enzyme-linked immunosorbent assay (ELISA) protocol with rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fM was obtained.
NASA Technical Reports Server (NTRS)
Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.
2007-01-01
Most reported studies of break point distributions on chromosomes damaged by radiation exposure were carried out with the G-banding technique or determined break points based on the relative lengths of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the later-developed multicolor banding in situ hybridization (mBAND) technique that is generally used for the analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory, and high dose rate 600 MeV/u Fe ions at the NASA Space Radiation Laboratory. Detailed analysis of the inversion types revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosomal exchanges. We further compared the break point distribution on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustered break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.
Non-uniform sampling: post-Fourier era of NMR data collection and processing.
Kazimierczuk, Krzysztof; Orekhov, Vladislav
2015-11-01
The invention of multidimensional techniques in the 1970s revolutionized NMR, making it a general tool for the structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimal. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: radial sampling and non-uniform sampling. This mini-review discusses sparse signal sampling and reconstruction techniques from the point of view of the underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. The additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as the so-called point spread function), are shown to be the main differences between the various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
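The point spread function mentioned above is simply the Fourier transform of the 0/1 sampling mask; its sidelobes are the artefacts that a given sparse schedule introduces. A minimal sketch (schedule and grid size are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256
    mask = np.zeros(N)
    mask[rng.choice(N, N // 4, replace=False)] = 1.0   # 25% random NUS schedule
    psf = np.fft.fftshift(np.fft.fft(mask)) / mask.sum()
    # central peak is 1; everything else is aliasing artefact energy
    print(np.abs(psf).max(), np.sort(np.abs(psf))[-2])

Comparing the PSFs of candidate schedules is a quick way to judge how much artefact energy a reconstruction method will have to suppress.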
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and the porous medium make solute transport in the medium more complicated still. An appropriate method to describe these features is essential when studying solute transport and transformation in porous media. Information entropy can measure uncertainty and disorder; we therefore attempted to investigate complexity and to explore the connection between information entropy and the complexity of solute transport in heterogeneous porous media using information entropy theory. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased with the complexity of the solute transport process. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained basically unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increase, which results in increases in the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the entropy for continuous input was higher than for instantaneous input. As the average lithofacies length increased, the continuity of the medium increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the information entropy could reflect changes in the solute distribution. Information entropy thus appears to be a tool for characterizing the spatial and temporal complexity of solute migration, and it provides a reference for future research.
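The entropy measure itself is straightforward: normalize the (non-negative) concentration field to a probability distribution and apply Shannon's formula. The binning and log base below are assumptions for illustration.

    import numpy as np

    def solute_entropy(c):
        # c: concentration field (any shape); returns Shannon entropy in nats
        p = np.asarray(c, dtype=float).ravel()
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))

A compact plume concentrated in a few cells gives low entropy; as dispersion spreads the solute over more cells, the entropy rises, matching the trends described above.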
How PowerPoint Is Killing Education
ERIC Educational Resources Information Center
Isseks, Marc
2011-01-01
Although it is essential to incorporate new technologies into the classroom, says Isseks, one trend has negatively affected instruction--the misuse of PowerPoint presentations. The author describes how poorly designed PowerPoint presentations reduce complex thoughts to bullet points and reduce the act of learning to transferring text from slide to…
Bounds on the sample complexity for private learning and private data release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasiviswanathan, Shiva; Beimel, Amos; Nissim, Kobbi
2009-01-01
Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between the sample complexities of proper and improper private learners (such a separation does not exist for non-private learners), and between the sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on the size of the instance space is essential for private data release.
An Analytical Study on an Orthodontic Index: Index of Complexity, Outcome and Need (ICON)
Torkan, Sepide; Pakshir, Hamid Reza; Fattahi, Hamid Reza; Oshagh, Morteza; Momeni Danaei, Shahla; Salehi, Parisa; Hedayati, Zohreh
2015-01-01
Statement of the Problem: The validity of the Index of Complexity, Outcome and Need (ICON), an orthodontic index developed and introduced in 2000, should be studied in different ethnic groups. Purpose: The aim of this study was to perform an analysis of the ICON and to verify whether this index is valid for assessing both the need for and the complexity of orthodontic treatment in Iran. Materials and Method: Five orthodontists were asked to score the pre-treatment diagnostic records of 100 patients with a uniform distribution of different types of malocclusions, determined by the Dental Health Component of the Index of Orthodontic Treatment Need. A calibrated examiner also assessed the need for orthodontic treatment and the complexity of the cases based on the ICON as well as the Index of Orthodontic Treatment Need (IOTN). Ten days later, 25% of the cases were re-scored by the panel of experts and the calibrated orthodontist. Results: The weighted kappa revealed the inter-examiner reliability of the experts to be 0.63 and 0.51 for the need and complexity components, respectively. A ROC curve was used to assess the validity of the index. A new cut-off point was set at 35 in lieu of the suggested cut-off point of 43. This cut-off point showed the highest levels of sensitivity and specificity in our society for orthodontic treatment need (0.77 and 0.78, respectively), but it failed to define definite ranges for the complexity of treatment. Conclusion: ICON is a valid index for assessing the need for treatment in Iran when the cut-off point is adjusted to 35. As for the complexity of treatment, the index is not validated for our society. It seems that ICON is a well-suited substitute for the IOTN index. PMID:26331142
Mort, Brendan C; Autschbach, Jochen
2006-08-09
Vibrational corrections (zero-point and temperature-dependent) of the H-D spin-spin coupling constant J(HD) for six transition metal hydride and dihydrogen complexes have been computed from a vibrational average of J(HD) as a function of temperature. Effective (vibrationally averaged) H-D distances have also been determined. The very strong temperature dependence of J(HD) for one of the complexes, [Ir(dmpm)Cp*H2]2+ (dmpm = bis(dimethylphosphino)methane), can be modeled simply by the Boltzmann average of the zero-point vibrationally averaged J(HD) of two isomers. For this complex and four others, the vibrational corrections to J(HD) are shown to be highly significant and lead to improved agreement between theory and experiment in most cases. The zero-point vibrational correction is important for all complexes. Depending on the shape of the potential energy and J-coupling surfaces, for some of the complexes higher vibrationally excited states can also contribute to the vibrational corrections at temperatures above 0 K and lead to a temperature dependence. We identify different classes of complexes where a significant temperature dependence of J(HD) may or may not occur for different reasons. A method is outlined by which the temperature dependence of the H-D spin-spin coupling constant can be determined with standard quantum chemistry software. Comparisons are made with experimental data and previously calculated values where applicable. We also discuss an example where a low-order expansion around the minimum of a complicated potential energy surface appears not to be sufficient for reproducing the experimentally observed temperature dependence.
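The two-isomer Boltzmann average mentioned above has a simple closed form; the couplings and the energy gap below are placeholder inputs, not the paper's computed values.

    import numpy as np

    def j_hd_boltzmann(T, J_a, J_b, dE):
        # J_a, J_b: zero-point vibrationally averaged couplings of the two
        # isomers (Hz); dE: energy of isomer b relative to isomer a, kJ/mol
        R = 8.31446e-3                   # gas constant, kJ/(mol K)
        w = np.exp(-dE / (R * T))        # Boltzmann weight of isomer b
        return (J_a + J_b * w) / (1 + w)

As T grows, the higher-energy isomer's weight increases, pulling the observed J(HD) from J_a toward the population-weighted mean, which is the mechanism behind the strong temperature dependence described above.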
Phosphorylation of human INO80 is involved in DNA damage tolerance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Dai; Waki, Mayumi; Umezawa, Masaki
Highlights: Depletion of hINO80 significantly reduced PCNA ubiquitination. Depletion of hINO80 significantly reduced the nuclear dot intensity of RAD18 after UV irradiation. Western blot analyses showed a phosphorylated hINO80 C-terminus. Overexpression of a phosphorylation-mutant hINO80 reduced PCNA ubiquitination. -- Abstract: Double strand breaks (DSBs) are the most serious type of DNA damage. DSBs can be generated directly by exposure to ionizing radiation or indirectly by replication fork collapse. The DNA damage tolerance pathway, which is conserved from bacteria to humans, prevents this collapse by overcoming replication blockages. The INO80 chromatin remodeling complex plays an important role in the DNA damage response. The yeast INO80 complex participates in the DNA damage tolerance pathway. The mechanisms regulating the yINO80 complex are not fully understood, but the yeast INO80 complex is necessary for efficient proliferating cell nuclear antigen (PCNA) ubiquitination and for recruitment of Rad18 to replication forks. In contrast, the function of the mammalian INO80 complex in DNA damage tolerance is less clear. Here, we show that human INO80 was necessary for PCNA ubiquitination and recruitment of Rad18 to DNA damage sites. Moreover, the C-terminal region of human INO80 was phosphorylated, and overexpression of a phosphorylation-deficient mutant of human INO80 resulted in decreased ubiquitination of PCNA during DNA replication. These results suggest that the human INO80 complex, like the yeast complex, is involved in the DNA damage tolerance pathway and that phosphorylation of human INO80 is involved in this pathway. These findings provide new insights into the DNA damage tolerance pathway in mammalian cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwivedi, Nidhi; Mehta, Ashish; Yadav, Abhishek
Arsenicosis, due to contaminated drinking water, is a serious health hazard in terms of morbidity and mortality. Free radicals generated by arsenic are known to cause cellular apoptosis through the mitochondria-driven pathway. In the present study, we investigated the effect of arsenic interactions with various complexes of the electron transport chain and attempted to evaluate whether arsenic has any complex preference that could trigger apoptosis. We also evaluated whether chelation with monoisoamyl dimercaptosuccinic acid (MiADMSA) could reverse these detrimental effects. Our results indicate that arsenic exposure induced free radical generation in rat neuronal cells, which diminished the mitochondrial potential and the enzyme activities of all the complexes of the electron transport chain. Moreover, these complexes showed differential responses towards arsenic. These early events, along with diminished ATP levels, could be correlated with the later events of cytosolic migration of cytochrome c, altered bax/bcl2 ratio, and increased caspase 3 activity. Although MiADMSA could reverse most of these arsenic-induced alterations to various extents, DNA damage remained unaffected. Our study for the first time demonstrates the differential effect of arsenic on the complexes, leading to deficits in bioenergetics and to apoptosis in rat brain. However, more in-depth studies are warranted for a better understanding of arsenic interactions with the mitochondria. Research highlights: Arsenic impairs mitochondrial energy metabolism, leading to neuronal apoptosis. Arsenic differentially affects the mitochondrial complexes, with complexes I, III, and IV being more sensitive than complex II. Arsenic-induced apoptosis initiates through ROS generation or impaired intracellular Ca2+ ([Ca2+]i) homeostasis. MiADMSA reverses arsenic toxicity via intracellular arsenic chelation, antioxidant potential, or both.
Sampling challenges in a study examining refugee resettlement
Sulaiman-Hill, Cheryl Mr; Thompson, Sandra C
2011-01-01
Background: As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. Methods: A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data were used to assess the representativeness of the sample. Results: A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% of Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Conclusions: Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and break down barriers. Personal contact was critical for both recruitment and data quality, and highlighted the importance of interviewer cultural sensitivity. Cross-national comparative studies, particularly relating to refugee resettlement within different policy environments, also need to take into consideration the differing pre-migration experiences and time since arrival of refugee groups, as these can add additional layers of complexity to study design and interpretation. PMID:21406104
Gasco, Jaime; Braun, Jonathan D; McCutcheon, Ian E; Black, Peter M
2011-01-01
To objectively compare the complexity and diversity of the certification process in neurological surgery in member societies of the World Federation of Neurosurgical Societies. This study centers on continental Asia. We provide here an analysis based on the responses to a 13-item survey. The data received were analyzed, and three Regional Complexity Scores (RCS) were designed. To compare national board experience, eligibility requirements for access to the certification process, and the obligatory nature of the examinations, an RCS-Organizational score was created (20 points maximum). To analyze the complexity of the examination, an RCS-Components score was designed (20 points maximum). The sum of both is presented as a Global RCS score. Only those countries that responded to the survey and presented nationwide homogeneity in the conduct of neurosurgery examinations could be included within the scoring system. In addition, a descriptive summary of the certification process per responding society is also provided. On the basis of the data provided by our RCS system, the highest Global RCS was achieved by South Korea and Malaysia (21/40 points), followed by the joint examination of Singapore and Hong Kong (FRCS-Ed) (20/40 points), Japan (17/40 points), the Philippines (15/40 points), and Taiwan (13/40 points). The experience from these leading countries should be of value to all countries within Asia. Copyright © 2011 Elsevier Inc. All rights reserved.
Sampling saddle points on a free energy surface
NASA Astrophysics Data System (ADS)
Samanta, Amit; Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark; E, Weinan
2014-04-01
Many problems in biology, chemistry, and materials science require knowledge of saddle points on free energy surfaces. These saddle points act as transition states and are the bottlenecks for transitions of the system between different metastable states. For simple systems in which the free energy depends on a few variables, the free energy surface can be precomputed, and saddle points can then be found using existing techniques. For complex systems, where the free energy depends on many degrees of freedom, this is not feasible. In this paper, we develop an algorithm for finding the saddle points on a high-dimensional free energy surface "on-the-fly" without requiring a priori knowledge of the free energy function itself. This is done using the general strategy of the heterogeneous multi-scale method: a macro-scale solver, here the gentlest ascent dynamics algorithm, is applied with the needed force and Hessian values computed on-the-fly using a micro-scale model such as molecular dynamics. The algorithm is capable of dealing with problems involving many coarse-grained variables. The utility of the algorithm is illustrated by studying the saddle points associated with (a) the isomerization transition of the alanine dipeptide using two coarse-grained variables, specifically the Ramachandran dihedral angles, and (b) the beta-hairpin structure of the alanine decamer using 20 coarse-grained variables, specifically the full set of Ramachandran angle pairs associated with each residue. For the alanine decamer, we obtain a detailed network showing the connectivity of the minima obtained and the saddle-point structures that connect them, which provides a way to visualize the gross features of the high-dimensional surface.
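The core of gentlest ascent dynamics can be stated compactly: invert the force component along a direction vector n while relaxing all other components, and rotate n toward the softest Hessian mode. Below is a minimal sketch on a two-dimensional toy potential, with the gradient and Hessian taken by finite differences; the potential, step size, and iteration count are illustrative choices, not the paper's setup.

```python
import numpy as np

def grad(x, V, h=1e-5):
    # central-difference gradient of V at x
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (V(x + e) - V(x - e)) / (2 * h)
    return g

def hess(x, V, h=1e-4):
    # finite-difference Hessian, symmetrized
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        H[:, i] = (grad(x + e, V) - grad(x - e, V)) / (2 * h)
    return 0.5 * (H + H.T)

def gad_step(x, n, V, dt=1e-3):
    """One Euler step of GAD: climb along n, descend in all other directions."""
    g, H = grad(x, V), hess(x, V)
    x_new = x + dt * (-g + 2.0 * np.dot(g, n) * n)   # force inverted along n
    n_new = n + dt * (-H @ n + (n @ H @ n) * n)      # rotate n toward softest mode
    return x_new, n_new / np.linalg.norm(n_new)

# Double well in x coupled to a harmonic y: minima at (+-1, 0), saddle at (0, 0).
V = lambda p: (p[0]**2 - 1.0)**2 + 2.0 * p[1]**2
x, n = np.array([0.8, 0.3]), np.array([1.0, 0.0])
for _ in range(20000):
    x, n = gad_step(x, n, V)
print(x)  # converges near the saddle point (0, 0)
```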
Relationship between Composition and Toxicity of Motor Vehicle Emission Samples
McDonald, Jacob D.; Eide, Ingvar; Seagrave, JeanClare; Zielinska, Barbara; Whitney, Kevin; Lawson, Douglas R.; Mauderly, Joe L.
2004-01-01
In this study we investigated the statistical relationship between particle and semivolatile organic chemical constituents in gasoline and diesel vehicle exhaust samples, and toxicity as measured by inflammation and tissue damage in rat lungs and mutagenicity in bacteria. Exhaust samples were collected from “normal” and “high-emitting” gasoline and diesel light-duty vehicles. We employed a combination of principal component analysis (PCA) and partial least-squares regression (PLS; also known as projection to latent structures) to evaluate the relationships between chemical composition of vehicle exhaust and toxicity. The PLS analysis revealed the chemical constituents covarying most strongly with toxicity and produced models predicting the relative toxicity of the samples with good accuracy. The specific nitro-polycyclic aromatic hydrocarbons important for mutagenicity were the same chemicals that have been implicated by decades of bioassay-directed fractionation. These chemicals were not related to lung toxicity, which was associated with organic carbon and select organic compounds that are present in lubricating oil. The results demonstrate the utility of the PCA/PLS approach for evaluating composition–response relationships in complex mixture exposures and also provide a starting point for confirming causality and determining the mechanisms of the lung effects. PMID:15531438
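As a rough illustration of the PCA/PLS strategy described above, the sketch below fits a PLS regression to synthetic data in which many "constituents" covary through a few hidden factors; scikit-learn's PLSRegression stands in for the paper's analysis, and all data are invented stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_chem = 30, 50
latent = rng.normal(size=(n_samples, 2))          # hidden composition factors
X = latent @ rng.normal(size=(2, n_chem)) + 0.1 * rng.normal(size=(n_samples, n_chem))
y = latent @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=n_samples)

pls = PLSRegression(n_components=2).fit(X, y)
print("R^2:", pls.score(X, y))
# Constituents with large absolute weights covary most strongly with the
# response, analogous to flagging the toxicity-relevant chemicals.
print("top constituents:", np.argsort(-np.abs(pls.x_weights_[:, 0]))[:5])
```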
Proximity Operations in Microgravity, a Robotic Solution for Maneuvering about an Asteroid Surface
NASA Astrophysics Data System (ADS)
Indyk, Stephen; Scheidt, David; Moses, Kenneth; Perry, Justin; Mike, Krystal
Asteroids remain some of the most under-investigated bodies in the solar system. Additionally, there is a distinct lack of directly collected information. This is in part due to complex sampling and motion problems that must be overcome before more detailed missions can be formulated. The chief caveat lies in formulating a technique for precision operation in microgravity. Locomotion, in addition to sample collection, involves forces significantly greater than the gravitational force keeping a robot on the surface. The design of a system that can successfully maneuver over unfamiliar surfaces void of natural anchor points is an incredible challenge. This problem was investigated at the Johns Hopkins University Applied Physics Laboratory as part of the 2009 NASA Lunar and Planetary Academy. Examining the problem through a two-dimensional robotic simulation, a swarm robotics approach was applied. In its simplest form, this comprised three grappling robots and one sampling robot. Connected by tethers, the grappling robots traverse a plane and reposition the sampling robot by tensioning the tethers. This presentation provides information on the design of the robotic system, as well as gait analysis and future considerations for a three-dimensional system.
Stoliker, Deborah L.; Kent, Douglas B.; Zachara, John M.
2011-01-01
Uranium adsorption-desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500-1000 h, although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO2^2+ + 2CO3^2- = >SOUO2(CO3HCO3)^2-, provided the best fit to experimental data for each sediment sample, resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions, yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties was observed between one sediment sample and the fines. Estimates of logKc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors.
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
Chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single-sample based. Only a few multiple-sample methods have been proposed, using scan statistics that are computationally intensive and designed for detecting either common or rare change-points. In this paper, we propose a novel multiple-sample method that adaptively combines the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple-sample methods are superior to single-sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
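A minimal sketch of the local diagnostic underlying SaRa follows: each position is scored by the difference of flanking window means, and local maxima above a threshold become candidate change-points. The window half-width h, the threshold, and the data are illustrative and do not follow the paper's notation or tuning.

```python
import numpy as np

def sara_scores(y, h):
    # D[i] = mean(y[i+h:i+2h]) - mean(y[i:i+h]); a change-point in y maps to i+h
    kernel = np.concatenate([np.full(h, -1.0 / h), np.full(h, 1.0 / h)])
    return np.abs(np.convolve(y, kernel[::-1], mode="valid"))

def local_maxima(scores, h, thresh):
    # keep positions that dominate their 2h-neighbourhood and exceed the threshold
    idx = []
    for i in range(len(scores)):
        lo, hi = max(0, i - h), min(len(scores), i + h + 1)
        if scores[i] >= thresh and scores[i] == scores[lo:hi].max():
            idx.append(i + h)   # shift back to coordinates of y
    return idx

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])
s = sara_scores(y, h=30)
print(local_maxima(s, h=30, thresh=1.0))  # expect a candidate near index 200
```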
Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd
2018-01-01
Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile bird species at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called "error due to sampling". Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km2) and the sampling type (random, systematic or stratified), and was greatest on 90 km2 plots. However, random sampling resulted in the lowest error due to sampling within intermediate-sized plots. In addition, sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations for identifying specific influences on error and potential improvements to sampling that consider the context-specific manner in which point counts are laid out on the landscape.
NASA Astrophysics Data System (ADS)
Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.
2017-06-01
A method for calculating off-axis phase-only holograms of a three-dimensional (3D) object using an accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitudes of the object points on the z-axis in the hologram plane are calculated using the Fresnel diffraction formula and are called principal complex amplitudes (PCAs). The complex amplitudes of off-axis object points at the same depth can then be obtained by 2D shifting of the PCAs. To improve the calculation speed of the PB-FDA, holograms are computed by convolution based on the fast Fourier transform (FFT) rather than by point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in reconstructed images, and the optimal recording distance is analyzed to improve the quality of reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by the SLM in optically reconstructed images.
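The speed-up described above rests on the fact that, for all points at a given depth, the Fresnel pattern is a shifted copy of one principal complex amplitude, so summing over the points of that depth layer is a convolution that the FFT evaluates quickly. A minimal sketch of that step follows; the wavelength, pixel pitch, and distance are illustrative values, and sampling constraints are ignored.

```python
import numpy as np

wl, pitch, z = 532e-9, 8e-6, 0.2        # wavelength (m), pixel pitch (m), depth (m)
N = 512
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

# Fresnel impulse response for a point at depth z (the "principal complex
# amplitude" of that depth layer, up to constant prefactors).
h = np.exp(1j * np.pi * (X**2 + Y**2) / (wl * z))

# Binary layer with a few off-axis points at the same depth.
layer = np.zeros((N, N))
layer[200, 180] = layer[300, 330] = layer[256, 256] = 1.0

# FFT-based convolution replaces per-point 2D shifting of h.
field = np.fft.ifft2(np.fft.fft2(layer) * np.fft.fft2(np.fft.ifftshift(h)))
hologram_phase = np.angle(field)         # phase-only hologram for this depth layer
```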
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skala, Vaclav
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. For a convex polygon in E^2, a simple Point-in-Polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E^3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
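A hedged sketch of the preprocessing idea follows: fan the convex polygon into angular sectors around an interior point, so that a query reduces to one sector lookup plus one half-plane test. For brevity the lookup here is a binary search (O(log N)); the paper's uniform space subdivision would replace it with a direct table lookup to reach O(1) run time. Class and variable names are illustrative.

```python
import numpy as np

class ConvexPiP:
    def __init__(self, poly):                    # poly: convex CCW vertices, (N, 2)
        self.poly = np.asarray(poly, float)
        self.c = self.poly.mean(axis=0)          # interior reference point
        d = self.poly - self.c
        ang = np.arctan2(d[:, 1], d[:, 0])
        order = np.argsort(ang)                  # sector boundaries sorted by angle
        self.poly, self.ang = self.poly[order], ang[order]

    def contains(self, p):
        p = np.asarray(p, float)
        a = np.arctan2(p[1] - self.c[1], p[0] - self.c[0])
        i = np.searchsorted(self.ang, a) % len(self.ang)
        v0, v1 = self.poly[i - 1], self.poly[i]  # edge bounding this sector
        e, q = v1 - v0, p - v0
        return e[0] * q[1] - e[1] * q[0] >= 0    # left of (or on) edge v0 -> v1

square = ConvexPiP([(0, 0), (2, 0), (2, 2), (0, 2)])
print(square.contains((1.0, 1.0)), square.contains((3.0, 1.0)))  # True False
```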
Defect states of complexes involving a vacancy on the boron site in boronitrene
NASA Astrophysics Data System (ADS)
Ngwenya, T. B.; Ukpong, A. M.; Chetty, N.
2011-12-01
First principles calculations have been performed to investigate the ground state properties of freestanding monolayer hexagonal boronitrene (h-BN). We have considered monolayers that contain native point defects and their complexes, which form when the point defects bind with the boron vacancy on the nearest-neighbor position. The changes in the electronic structure are analyzed to show the extent of localization of the defect-induced midgap states. The variations in formation energies suggest that defective h-BN monolayers that contain carbon substitutional impurities are the most stable structures, irrespective of the changes in growth conditions. The high energies of formation of the boron vacancy complexes suggest that they are less stable, and their creation by ion bombardment would require high-energy ions compared to point defects. Using the relative positions of the derived midgap levels for the double vacancy complex, it is shown that the quasi-donor-acceptor pair interpretation of optical transitions is consistent with stimulated transitions between electron and hole states in boronitrene.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. Techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. Experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
Wang, Lu; Qu, Haibin
2016-03-01
A method combining solid phase extraction, high performance liquid chromatography, and ultraviolet/evaporative light scattering detection (SPE-HPLC-UV/ELSD) was developed according to Quality by Design (QbD) principles and used to assay nine bioactive compounds within a botanical drug, Shenqi Fuzheng Injection. Risk assessment and a Plackett-Burman design were utilized to evaluate the impact of 11 factors on the resolutions and signal-to-noise ratios of chromatographic peaks. Multiple regression and Pareto ranking analysis indicated that the sorbent mass, sample volume, flow rate, column temperature, evaporator temperature, and gas flow rate were statistically significant (p < 0.05) in this procedure. Furthermore, a Box-Behnken design combined with response surface analysis was employed to study the relationships between the quality of the SPE-HPLC-UV/ELSD analysis and four significant factors, i.e., flow rate, column temperature, evaporator temperature, and gas flow rate. An analytical design space for SPE-HPLC-UV/ELSD was then constructed from Monte Carlo probability calculations. In the presented approach, the operating parameters of sample preparation, chromatographic separation, and compound detection were investigated simultaneously. Eight elements of method validation, i.e., system-suitability tests, method robustness/ruggedness, sensitivity, precision, repeatability, linearity, accuracy, and stability, were assessed at a selected working point. These results revealed that QbD principles are suitable for the development of analytical procedures for samples in complex matrices. Meanwhile, the analytical quality and method robustness were validated across the analytical design space. The presented strategy provides a tutorial on the development of a robust QbD-compliant quantitative method for samples in complex matrices.
Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples.
Thorson, Megan K; Ung, Phuc; Leaver, Franklin M; Corbin, Teresa S; Tuck, Kellie L; Graham, Bim; Barrios, Amy M
2015-10-08
A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil. Copyright © 2015 Elsevier B.V. All rights reserved.
Quantitative proteomics reveals the kinetics of trypsin-catalyzed protein digestion.
Pan, Yanbo; Cheng, Kai; Mao, Jiawei; Liu, Fangjie; Liu, Jing; Ye, Mingliang; Zou, Hanfa
2014-10-01
Trypsin is the most popular protease for digesting proteins into peptides in shotgun proteomics, but few studies have attempted to systematically investigate the kinetics of trypsin-catalyzed protein digestion in proteome samples. In this study, we applied quantitative proteomics via triplex stable isotope dimethyl labeling to investigate the kinetics of trypsin-catalyzed cleavage. It was found that trypsin cleaves C-terminal to lysine (K) and arginine (R) residues, with higher rates for R. Cleavage sites surrounded by neutral residues were cut quickly, while those with neighboring charged residues (D/E/K/R) or a proline residue (P) were cut slowly. In a proteome sample, a huge number of proteins with different physicochemical properties coexist. If any type of protein could be preferentially digested, limited digestion could be applied to reduce the sample complexity. However, we found that protein abundance and other physicochemical properties, such as molecular weight (Mw), grand average of hydropathicity (GRAVY), aliphatic index, and isoelectric point (pI), have no notable correlation with the digestion priority of proteins.
The effect of stoichiometry on Cu-Zn ordering kinetics in Cu2ZnSnS4 thin films
NASA Astrophysics Data System (ADS)
Rudisch, Katharina; Davydova, Alexandra; Platzer-Björkman, Charlotte; Scragg, Jonathan
2018-04-01
Cu-Zn disorder in Cu2ZnSnS4 (CZTS) may be responsible for the large open-circuit voltage deficit in CZTS-based solar cells. In this study, we investigated how composition-dependent defect complexes influence the order-disorder transition. A combinatorial CZTS thin film sample was produced with a cation composition gradient across the sample area. The graded sample was exposed to various temperature treatments, and the degree of order was analyzed with resonant Raman spectroscopy for various compositions ranging from E- and A-type to B-, F-, and C-type CZTS. We observe that the composition has no influence on the critical temperature of the order-disorder transition but strongly affects the activation energy. Reduced activation energy is achieved at compositions with Cu/Sn > 2 or Cu/Sn < 1.8, suggesting an acceleration of cation ordering in the presence of vacancies or interstitials. This is rationalized with reference to the effect of point defects on exchange mechanisms. The implications for reducing disorder in CZTS thin films are discussed in light of the new findings.
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
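The weighted mixture prior described above is easy to state in code: each pixel is zero with some probability and otherwise drawn from a positive exponential. The sketch below draws an image from that prior; the hyperparameter values are illustrative, whereas in the paper they are handled inside the Bayesian hierarchy and the posterior is explored by Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_prior(shape, w=0.05, a=1.0):
    # each pixel: exponential(scale=a) with probability w, exactly zero otherwise
    mask = rng.random(shape) < w
    return np.where(mask, rng.exponential(a, shape), 0.0)

x = sample_prior((64, 64))
print(f"nonzero fraction: {(x > 0).mean():.3f}")  # ~0.05, i.e. a sparse image
```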
Detection of cracks in shafts with the Approximated Entropy algorithm
NASA Astrophysics Data System (ADS)
Sampaio, Diego Luchesi; Nicoletti, Rodrigo
2016-05-01
Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunications for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by fracture mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and crack orientation in the shaft. The results also show that the algorithm can differentiate the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
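For reference, a self-contained Approximate Entropy implementation in the standard Pincus form is sketched below, using the two intrinsic parameters the abstract names: p, the number of data points in a sample vector, and f, the fraction of the standard deviation defining the tolerance. The test signals are illustrative.

```python
import numpy as np

def apen(signal, p=2, f=0.2):
    x = np.asarray(signal, float)
    r = f * x.std()                                # tolerance r = f * std
    def phi(m):
        emb = np.lib.stride_tricks.sliding_window_view(x, m)   # (N-m+1, m)
        # Chebyshev distance between every pair of m-length sample vectors
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).mean(axis=1)                  # fraction of close neighbours
        return np.log(c).mean()
    return phi(p) - phi(p + 1)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
print(apen(np.sin(2 * np.pi * t)))                 # regular signal -> low ApEn
print(apen(rng.normal(size=1000)))                 # noise -> higher ApEn
```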
NASA Astrophysics Data System (ADS)
Huynh, Toan; Daddysman, Matthew K.; Bao, Ying; Selewa, Alan; Kuznetsov, Andrey; Philipson, Louis H.; Scherer, Norbert F.
2017-05-01
Imaging specific regions of interest (ROIs) of nanomaterials or biological samples with different imaging modalities (e.g., light and electron microscopy) or at subsequent time points (e.g., before and after off-microscope procedures) requires relocating the ROIs. Unfortunately, relocation is typically difficult and very time consuming to achieve. Previously developed techniques involve the fabrication of arrays of features, the procedures for which are complex, and the added features can interfere with imaging the ROIs. We report the Fast and Accurate Relocation of Microscopic Experimental Regions (FARMER) method, which only requires determining the coordinates of 3 (or more) conspicuous reference points (REFs) and employs an algorithm based on geometric operators to relocate ROIs in subsequent imaging sessions. The 3 REFs can be quickly added to various regions of a sample using simple tools (e.g., permanent markers or conductive pens) and do not interfere with the ROIs. The coordinates of the REFs and the ROIs are obtained in the first imaging session (on a particular microscope platform) using an accurate and precise encoded motorized stage. In subsequent imaging sessions, the FARMER algorithm finds the new coordinates of the ROIs (on the same or different platforms), using the coordinates of the manually located REFs and the previously recorded coordinates. FARMER is convenient, fast (3-15 min/session, at least 10-fold faster than manual searches), accurate (4.4 μm average error on a microscope with a 100x objective), and precise (almost all errors are <8 μm), even with deliberate rotating and tilting of the sample well beyond normal repositioning accuracy. We demonstrate this versatility by imaging and re-imaging a diverse set of samples and imaging methods: live mammalian cells at different time points; fixed bacterial cells on two microscopes with different imaging modalities; and nanostructures on optical and electron microscopes. FARMER can be readily adapted to any imaging system with an encoded motorized stage and can facilitate multi-session and multi-platform imaging experiments in biology, materials science, photonics, and nanoscience.
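The geometric core of such a relocation scheme can be illustrated with a least-squares (Kabsch-style) rigid fit: from the coordinates of three or more reference marks measured in two sessions, estimate the rotation and translation between sessions and apply it to stored ROI coordinates. This is a hedged sketch under that assumption, not the authors' exact algorithm, and all coordinates are invented.

```python
import numpy as np

def fit_rigid(ref1, ref2):
    # least-squares rotation R and translation t with ref2 ~ R @ ref1 + t
    c1, c2 = ref1.mean(axis=0), ref2.mean(axis=0)
    U, _, Vt = np.linalg.svd((ref1 - c1).T @ (ref2 - c2))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c2 - R @ c1

# Stage coordinates (um) of 3 reference marks in two imaging sessions.
ref1 = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 800.0]])
theta = np.deg2rad(3.0)                      # sample remounted slightly rotated
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
ref2 = ref1 @ R_true.T + np.array([250.0, -40.0])

R, t = fit_rigid(ref1, ref2)
roi_session1 = np.array([400.0, 300.0])
print(R @ roi_session1 + t)                  # where to drive the stage now
```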
Leaps and lulls in the developmental transcriptome of Dictyostelium discoideum.
Rosengarten, Rafael David; Santhanam, Balaji; Fuller, Danny; Katoh-Kurasawa, Mariko; Loomis, William F; Zupan, Blaz; Shaulsky, Gad
2015-04-13
Development of the soil amoeba Dictyostelium discoideum is triggered by starvation. When placed on a solid substrate, the starving solitary amoebae cease growth, communicate via extracellular cAMP, aggregate by tens of thousands and develop into multicellular organisms. Early phases of the developmental program are often studied in cells starved in suspension while cAMP is provided exogenously. Previous studies revealed massive shifts in the transcriptome under both developmental conditions and a close relationship between gene expression and morphogenesis, but were limited by the sampling frequency and the resolution of the methods. Here, we combine the superior depth and specificity of RNA-seq-based analysis of mRNA abundance with high-frequency sampling during filter development and cAMP pulsing in suspension. We found that the developmental transcriptome exhibits mostly gradual changes interspersed by a few instances of large shifts. For each time point we treated the entire transcriptome as a single phenotype, and were able to characterize development as groups of similar time points separated by gaps. The grouped time points represented gradual changes in mRNA abundance, or molecular phenotype, and the gaps represented times during which many genes are differentially expressed rapidly, and thus the phenotype changes dramatically. Comparing developmental experiments revealed that gene expression in filter-developed cells lagged behind that in cells treated with exogenous cAMP in suspension. The high sampling frequency revealed many genes whose regulation is reproducibly more complex than indicated by previous studies. Gene Ontology enrichment analysis suggested that the transition to multicellularity coincided with rapid accumulation of transcripts associated with DNA processes and mitosis. Later development included the up-regulation of organic signaling molecules and co-factor biosynthesis. Our analysis also demonstrated a high level of synchrony among the developing structures throughout development. Our data describe D. discoideum development as a series of coordinated cellular and multicellular activities. Coordination occurred within fields of aggregating cells and among multicellular bodies, such as mounds or migratory slugs, that experience both cell-cell contact and various soluble signaling regimes. These time courses, sampled at the highest temporal resolution to date in this system, provide a comprehensive resource for studies of developmental gene expression.
Three-dimensional distribution of cortical synapses: a replicated point pattern-based analysis
Anton-Sanchez, Laura; Bielza, Concha; Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm3 and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers. PMID:25206325
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular over more than a decade in the Earth Sciences because these methods can generate random fields that reproduce the highly complex spatial features of a conceptual model, the training image, whereas classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) results in a lower-resolution image, and iterating this process builds a pyramid of images depicting fewer details at each level, much as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest resolution level, and subsequently each level up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at every scale of the training image and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited for MPS simulation techniques.
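A minimal sketch of the multi-resolution decomposition follows: repeated Gaussian smoothing and subsampling yields the pyramid of training images, coarse to fine, on which the direct sampling would then run level by level. The kernel width and training image below are illustrative stand-ins, and the MPS simulation step itself is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels, sigma=1.0):
    pyr = [img]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyr[-1], sigma)   # convolve with Gaussian kernel
        pyr.append(smoothed[::2, ::2])               # halve resolution each level
    return pyr                                       # pyr[0] finest ... pyr[-1] coarsest

ti = np.random.default_rng(0).random((256, 256))     # stand-in training image
for level, im in enumerate(gaussian_pyramid(ti, levels=4)):
    print(level, im.shape)
# Simulation then runs coarsest-first, each level conditioned on the previous,
# so large-scale structures constrain the finer ones.
```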
Tribological behaviour and statistical experimental design of sintered iron-copper based composites
NASA Astrophysics Data System (ADS)
Popescu, Ileana Nicoleta; Ghiţă, Constantin; Bratu, Vasile; Palacios Navarro, Guillermo
2013-11-01
Sintered iron-copper based composites for automotive brake pads have a complex composition and should have good physical, mechanical and tribological characteristics. In this paper, we obtained frictional composites by the Powder Metallurgy (P/M) technique and characterized them from a microstructural and tribological point of view. The morphology of the raw powders was determined by SEM, and the surfaces of the obtained sintered friction materials were analyzed by ESEM, EDS elemental and compo-image analyses. One lot of samples was tested on a "pin-on-disc" type wear machine under dry sliding conditions, at applied loads between 3.5 and 11.5 × 10^-1 MPa and relative speeds in the braking point of 12.5-16.9 m/s, at constant temperature. The other lot of samples was tested on an inertial test stand according to a methodology simulating the real conditions of dry friction, at a contact pressure of 2.5-3 MPa, at 300-1200 rpm. The most important characteristic required of sintered friction materials is a high and stable friction coefficient during braking; for high durability in service, they must also have low wear, high corrosion resistance, high thermal conductivity, mechanical resistance and thermal stability at elevated temperature. Because of the importance of the tribological characteristics (wear rate and friction coefficient) of sintered iron-copper based composites, we predicted the tribological behaviour through statistical analysis. For the first lot of samples, the response variables Yi (the wear rate and friction coefficient) were correlated with x1 and x2 (the coded values of applied load and relative speed in the braking point, respectively) using a linear factorial design approach. We obtained brake friction materials with improved wear resistance and high, stable friction coefficients. The experimental data and the fitted linear regression equations show that the wear rate of the sintered composites increases with increasing applied load and relative speed, while under the same conditions the friction coefficients slowly decrease.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
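A compact sketch of plain TSA follows: rank candidate points by the mean relative difference (MRD) between their time series and the grid-mean series, preferring points with MRD near zero and small variability of the relative differences. The data and the combined ranking criterion are illustrative; the paper's stratified variant (STSA) would apply this within strata.

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_times = 40, 120
grid = 0.25 + 0.05 * np.sin(np.linspace(0, 6 * np.pi, n_times))   # wet/dry cycles
theta = (grid + rng.normal(0, 0.02, (n_points, 1))                # per-point bias
              + rng.normal(0, 0.01, (n_points, n_times)))         # measurement noise

grid_mean = theta.mean(axis=0)                       # "true" grid-mean series
rel_diff = (theta - grid_mean) / grid_mean           # (points, times)
mrd, sdrd = rel_diff.mean(axis=1), rel_diff.std(axis=1)

rank = np.argsort(np.abs(mrd) + sdrd)                # simple combined criterion
best = rank[0]
print("most representative point:", best,
      "RMSE vs grid mean:", np.sqrt(((theta[best] - grid_mean) ** 2).mean()))
```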
A form of two-phase sampling utilizing regression analysis
Michael A. Fiery; John R. Brooks
2007-01-01
A two-phase sampling technique was introduced and tested on several horizontal point sampling inventories of hardwood tracts located in northern West Virginia and western Maryland. In this sampling procedure, species and dbh are recorded for all "in-trees" on all sample points. Sawlog merchantable height was recorded on a subsample of intensively measured (second phase...
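A hedged sketch of a two-phase regression estimator of this general kind is shown below: the inexpensive variable is measured on all first-phase units, the expensive one on a subsample, and a fitted slope corrects the subsample mean toward the first-phase mean. The data are synthetic and the estimator is the textbook double-sampling form, not necessarily the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
x_all = rng.uniform(20, 60, 300)                 # phase 1: dbh on all in-trees
y_all = 2.0 + 0.4 * x_all + rng.normal(0, 2, 300)
sub = rng.choice(300, size=60, replace=False)    # phase 2: heights on a subsample
x2, y2 = x_all[sub], y_all[sub]

b = np.polyfit(x2, y2, 1)[0]                     # slope fitted on the subsample
y_reg = y2.mean() + b * (x_all.mean() - x2.mean())   # regression estimator
print(f"regression estimate {y_reg:.2f}, subsample mean {y2.mean():.2f}, "
      f"true mean {y_all.mean():.2f}")
```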
Estimating the carbon in coarse woody debris with perpendicular distance sampling. Chapter 6
Harry T. Valentine; Jeffrey H. Gove; Mark J. Ducey; Timothy G. Gregoire; Michael S. Williams
2008-01-01
Perpendicular distance sampling (PDS) is a design for sampling the population of pieces of coarse woody debris (logs) in a forested tract. In application, logs are selected at sample points with probability proportional to volume. Consequently, aggregate log volume per unit land area can be estimated from tallies of logs at sample points. In this chapter we provide...
Accuracy assessment with complex sampling designs
Raymond L. Czaplewski
2010-01-01
A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...
Vázquez, Cristina; Maier, Marta S; Parera, Sara D; Yacobaccio, Hugo; Solá, Patricia
2008-06-01
Archaeological samples are complex in composition since they generally comprise a mixture of materials subjected to deterioration factors largely dependent on the environmental conditions. Therefore, the integration of analytical tools such as TXRF, FT-IR and GC-MS can maximize the amount of information provided by the sample. Recently, two black rock art samples of camelid figures at Alero Hornillos 2, an archaeological site located near the town of Susques (Jujuy Province, Argentina), were investigated. TXRF, selected for inorganic information, showed the presence of manganese and iron among other elements, consistent with an iron and manganese oxide as the black pigment. Aiming at the detection of any residual organic compounds, the samples were extracted with a chloroform-methanol mixture and the extracts were analyzed by FT-IR, showing the presence of bands attributable to lipids. Analysis by GC-MS of the carboxylic acid methyl esters prepared from the sample extracts indicated that the main organic constituents were saturated (C(16:0) and C(18:0)) fatty acids in relative abundances characteristic of degraded animal fat. The presence of minor C(15:0) and C(17:0) fatty acids and branched-chain iso-C(16:0) pointed to a ruminant animal source.
Sonoda, T; Ona, T; Yokoi, H; Ishida, Y; Ohtani, H; Tsuge, S
2001-11-15
Detailed quantitative analysis of lignin monomer composition, comprising p-coumaryl, coniferyl, and sinapyl alcohols and p-coumaraldehyde, coniferaldehyde, and sinapaldehyde, in plants has not been fully studied, mainly because of artifact formation during the lignin isolation procedure, partial loss of the lignin components inherent in the chemical degradative methods, and difficulty in interpreting the complex spectra generally observed for the lignin components. Here we propose a new method to quantify lignin monomer composition in detail by pyrolysis-gas chromatography (Py-GC) using acetylated lignin samples. The lignin acetylation procedure prevents the secondary formation of cinnamaldehydes from the corresponding alcohol forms during pyrolysis, which is otherwise unavoidable to some extent in the conventional Py-GC process. On the basis of the characteristic peaks in the pyrograms of the acetylated samples, lignin monomer compositions in various dehydrogenative polymers (DHPs) used as lignin model compounds were determined, taking even minor components such as cinnamaldehydes into consideration. The compositions observed by Py-GC were in good agreement with the lignin monomer contents supplied during DHP synthesis. The new Py-GC method combined with sample preacetylation allows accurate quantitative analysis of detailed lignin monomer composition using microgram quantities of extractive-free plant samples.
de Mattos, Ivanildo Luiz; Zagal, José Heraclito
2010-01-01
A flow injection system using an unmodified gold screen-printed electrode was employed for total phenol determination in black and green teas. In order to avoid passivation of the electrode surface due to the redox reaction, pre-oxidation of the sample was carried out with hexacyanoferrate(III), followed by addition of an EDTA solution. The complex formed in the presence of EDTA minimizes or avoids polymerization of the oxidized phenols. The previously filtered tea sample and the hexacyanoferrate(III) reagent were introduced simultaneously into two carrier streams, producing two reproducible zones. At the confluence point, pre-oxidation of the phenolic compounds occurs while this zone flows through the coiled reactor and receives the EDTA solution before phenol detection. The consumption of ferricyanide was monitored at 360 mV versus Ag/AgCl and reflects the total amount of phenolic compounds present in the sample. Results were reported as gallic acid equivalents (GAEs). The proposed system is robust, versatile, and environmentally friendly (since the reagent is used only in the presence of the sample), and allows the analysis of about 35-40 samples per hour with a detection limit of 1 mg/L, without the need for surface cleaning after each measurement. The precise results are in agreement with those obtained by the Folin-Ciocalteu method. PMID:21461407
Explosive genetic evidence for explosive human population growth
Gao, Feng; Keinan, Alon
2016-01-01
The advent of next-generation sequencing technology has allowed the collection of vast amounts of genetic variation data. A recurring discovery from studying larger and larger samples of individuals has been the extreme, previously unexpected excess of very rare genetic variants, which has been shown to be mostly due to the recent explosive growth of human populations. Here, we review recent literature that inferred recent changes in population size in different human populations and with different methodologies, with many studies pointing to recent explosive growth, especially in European populations for which more data have been available. We also review the state-of-the-art methods and software for the inference of historical population size changes that led to these discoveries. Finally, we discuss the implications of recent population growth for personalized genomics, for purifying selection in the non-equilibrium state it entails and, as a consequence, for the genetic architecture underlying complex disease and the performance of mapping methods in discovering rare variants that contribute to complex disease risk. PMID:27710906
Oxide perovskite crystals for HTSC film substrates microwave applications
NASA Technical Reports Server (NTRS)
Bhalla, A. S.; Guo, Ruyan
1995-01-01
The research, focused on generating new substrate materials for the deposition of superconducting yttrium barium cuprate (YBCO), has yielded several new hosts among complex perovskites, modified perovskites, and other structure families. New substrate candidates such as Sr(Al(1/2)Ta(1/2))O3, Sr(Al(1/2)Nb(1/2))O3, and Ba(Mg(1/3)Ta(2/3))O3 in the complex oxide perovskite structure family, and their solid solutions with the ternary perovskites LaAlO3 and NdGaO3, are reported. Conventional ceramic processing techniques were used to fabricate dense ceramic samples. A laser-heated molten zone growth system was utilized for the test growth of these candidate materials in single-crystal fiber form to determine crystallographic structure, melting point, thermal, and dielectric properties, as well as to make positive identification of twin-free systems. Some of these candidate materials present an excellent combination of properties suitable for microwave HTSC substrate applications.
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a "new" computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Moreover, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and by enabling the analysis of the characteristics of those groups. PMID:22984533
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Richard
As defined in the preamble of the final rule, the entire DOE facility on the Oak Ridge Reservation (ORR) must meet the 10 mrem/yr ED standard. In other words, the combined ED from all radiological air emission sources from Y-12 National Security Complex (Y-12 Complex), Oak Ridge National Laboratory (ORNL), East Tennessee Technology Park (ETTP), Oak Ridge Institute for Science and Education (ORISE) and any other DOE operation on the reservation must meet the 10 mrem/yr standard. Compliance with the standard is demonstrated through emission sampling, monitoring, calculations and radiation dose modeling in accordance with approved EPA methodologies and procedures. DOE estimates the ED to many individuals or receptor points in the vicinity of ORR, but it is the dose to the maximally exposed individual (MEI) that determines compliance with the standard.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
Acousto-defect interaction in irradiated and non-irradiated silicon n+-p structures
NASA Astrophysics Data System (ADS)
Olikh, O. Ya.; Gorb, A. M.; Chupryna, R. G.; Pristay-Fenenkov, O. V.
2018-04-01
The influence of ultrasound on current-voltage characteristics of non-irradiated silicon n+-p structures as well as silicon structures exposed to reactor neutrons or 60Co gamma radiation has been investigated experimentally. It has been found that the ultrasound loading of the n+-p structure leads to the reversible change of shunt resistance, carrier lifetime, and ideality factor. Specifically, considerable acoustically induced alteration of the ideality factor and the space charge region lifetime was observed in the irradiated samples. The experimental results were described by using the models of coupled defect level recombination, Shockley-Read-Hall recombination, and dislocation-induced impedance. The experimentally observed phenomena are associated with the increase in the distance between coupled defects as well as the extension of the carrier capture coefficient of complex point defects and dislocations. It has been shown that divacancies and vacancy-interstitial oxygen pairs are effectively modified by ultrasound in contrast to interstitial carbon-interstitial oxygen complexes.
Cloitre, Marylene; Stolbach, Bradley C; Herman, Judith L; van der Kolk, Bessel; Pynoos, Robert; Wang, Jing; Petkova, Eva
2009-10-01
Exposure to multiple traumas, particularly in childhood, has been proposed to result in a complex of symptoms that includes posttraumatic stress disorder (PTSD) as well as a constrained, but variable group of symptoms that highlight self-regulatory disturbances. The relationship between accumulated exposure to different types of traumatic events and total number of different types of symptoms (symptom complexity) was assessed in an adult clinical sample (N = 582) and a child clinical sample (N = 152). Childhood cumulative trauma but not adulthood trauma predicted increasing symptom complexity in adults. Cumulative trauma predicted increasing symptom complexity in the child sample. Results suggest that Complex PTSD symptoms occur in both adult and child samples in a principled, rule-governed way and that childhood experiences significantly influenced adult symptoms. Copyright © 2009 International Society for Traumatic Stress Studies.
Annealing study of poly(etheretherketone)
NASA Technical Reports Server (NTRS)
Cebe, Peggy
1988-01-01
Annealing of PEEK has been studied for two materials cold-crystallized from the rubbery amorphous state. The first material is a low molecular weight PEEK; the second is commercially available neat resin. Differential scanning calorimetry was used to monitor the melting behavior of annealed samples. The effect of thermal history on melting behavior is very complex and depends upon annealing temperature, residence time at the annealing temperature, and subsequent scanning rate. Thermal stability of both materials is improved by annealing, and for an annealing temperature near the melting point, the polymer can be stabilized against reorganization during the scan. Variations of density, degree of crystallinity, and X-ray long period were studied as a function of annealing temperature for the commercial material.
NASA Astrophysics Data System (ADS)
Levit, Creon; Gazis, P.
2006-06-01
The graphics processing units (GPUs) built into all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source, cross-platform (Windows, Linux, Apple OS X) application which leverages some of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application area is the interactive analysis of complex, multivariate space science and astrophysics data sets, with dimensionalities that may surpass 100 and sample sizes that may reach 10^6-10^8.
Fent, János; Bihari, Péter; Vippola, Minnamari; Sarlin, Essi; Lakatos, Susan
2015-08-01
Surface modification of single-walled carbon nanotubes (SWCNTs), such as carboxylation, amidation, hydroxylation and pegylation, is used to reduce nanotube toxicity and render them more suitable for biomedical applications than their pristine counterparts. Toxicity can be manifested in platelet activation, as has been shown for SWCNTs. However, the effect of various surface modifications on the platelet-activating potential of SWCNTs has not yet been tested. In vitro platelet activation (CD62P) as well as platelet-granulocyte complex formation (CD15/CD41 double positivity) in human whole blood were measured by flow cytometry in the presence of 0.1 mg/ml of pristine or variously surface-modified SWCNTs. The effect of the various SWCNTs was also tested by whole blood impedance aggregometry. All tested SWCNTs except the hydroxylated ones activated platelets and promoted platelet-granulocyte complex formation in vitro. Carboxylated, pegylated and pristine SWCNTs induced whole blood aggregation as well. Although pegylation is preferred from a biomedical point of view, among the samples we tested, pegylated SWCNTs induced by far the most prominent activation and a well-detectable aggregation of platelets in whole blood. Copyright © 2015 Elsevier Ltd. All rights reserved.
Screenometer: a device for sampling vegetative screening in forested areas
Victor A. Rudis
1985-01-01
A device for estimating the degree to which vegetation and other obstructions screen forested areas has been adapted to an extensive sampling design for forest surveys. Procedures are recommended to assure that uniform measurements can be made. Examination of sources of sampling variation (observers, points within sampled locations, series of observations within points...
Yang, Jae-Hyuk; Lim, Hong Chul; Bae, Ji Hoon; Fernandez, Harry; Bae, Tae Soo; Wang, Joon Ho
2011-10-01
Descriptive laboratory study. The femoral anatomic insertion site and the optimal isometric point of the popliteus tendon for posterolateral reconstruction are not well known. The purpose of this study was to determine the relationship of the femoral anatomic insertion and the isometric point of the popliteus muscle-tendon complex to the lateral epicondyle of the femur. Thirty unpaired cadaveric knees were dissected to determine the anatomic femoral insertion of the popliteus tendon. The distance and the angle from the lateral epicondyle of the femur to the center of the anatomic insertion of the popliteus tendon were measured using a digital caliper and a goniometer. Eight unpaired fresh cadaveric knees were examined to determine the optimal isometric point of the femoral insertion of the popliteus tendon using a computer-controlled motion capture analysis system (Motion Analysis, CA, USA). Distances from the targeted tibial tunnel for popliteus tendon reconstruction to 35 points marked on the lateral surface of the femur were recorded at 0, 30, 60, 90, and 120° of knee flexion. The point with the least excursion (<2.0 mm) was taken as the isometric point. The center of the anatomic insertion and the optimal isometric point for the main fibers of the popliteus tendon were found to be posterior and distal to the lateral epicondyle of the femur. The distance from the lateral epicondyle of the femur to the center of the anatomic femoral insertion of the popliteus tendon was 11.3 ± 1.2 mm (mean ± SD). The angle between the long axis of the femur and the line from the lateral epicondyle to the anatomic femoral insertion of the popliteus tendon was 31.4 ± 5.3°. The isometric points for the femoral insertion of the popliteus muscle-tendon complex were situated posterior and distal to the lateral epicondyle in all 8 knees. The distance between the least-excursion point and the lateral epicondyle was 10.4 ± 1.7 mm. The angle between the long axis of the femur and the line from the lateral epicondyle to the optimal isometric point of the popliteus tendon was 41.3 ± 14.9°. The optimal isometric point for the femoral insertion of the popliteus muscle-tendon complex is situated posterior and distal to the lateral epicondyle of the femur. The femoral tunnel for the "posterolateral corner sling procedure" should be placed at this point to achieve the least graft excursion during knee motion.
Woodruff, L.G.; Attig, J.W.; Cannon, W.F.
2004-01-01
Geochemical exploration in northern Wisconsin has been problematic because of thick glacial overburden and a complex stratigraphic record of glacial history. To assess till geochemical exploration in an area of thick glacial cover and complex stratigraphy, samples of glacial materials were collected from cores from five rotasonic boreholes near a known massive sulfide deposit, the Bend deposit in north-central Wisconsin. Diamond drilling in the Bend area has defined a long, thin zone of mineralization at least partly intersected at the bedrock surface beneath 30-40 m of unconsolidated glacial sediments. The bedrock surface has remnant regolith and saprolite resulting from pre-Pleistocene weathering. Massive sulfide and mineralized rock collected from diamond drill core from the deposit contain high (10s to 10,000s ppm) concentrations of Ag, As, Au, Bi, Cu, Hg, Se, Te, and Tl. Geochemical properties of the glacial stratigraphic units helped clarify the sequence and source areas of several glacial ice advances preserved in the section. At least two till sheets are recognized. Over the zone of mineralization, saprolite and preglacial alluvial and lacustrine sediments are preserved on the bedrock surface in a paleoriver valley. The overlying till sheet is a gray, silty carbonate till with a source hundreds of kilometers to the northwest of the study area. This gray till is overlain by red, sandy till with a source to the north in Proterozoic rocks of the Lake Superior area. The complex glacial stratigraphy confounds down-ice geochemical till exploration. The presence of remnant saprolite, preglacial sediment, and far-traveled carbonate till minimized glacial erosion of mineralized material. As a result, little evidence of down-ice glacial dispersion of lithologic or mineralogic indicators of Bend massive sulfide mineralization was found in the samples from the rotasonic cores. This study points out the importance of determining glacial stratigraphy and history, and of identifying the favorable lithologies required for geochemical exploration. Drift prospecting in Wisconsin and other areas near the outer limits of the Pleistocene ice sheets may be unsuccessful, in part, because of complex stratigraphic sequences of multiple glaciations where deposition dominates over erosion. © 2004 Elsevier B.V. All rights reserved.
Monte Carlo isotopic inventory analysis for complex nuclear systems
NASA Astrophysics Data System (ADS)
Phruksarojanakun, Phiphat
Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating the isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex, and loop flows, is fully developed. In addition, six variance reduction tools provide unique capabilities of MCise to improve the statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results for isotopes that are produced through rare reaction pathways. Biased Source Sampling increases the frequency with which rare initial isotopes are sampled as starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for the figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope, 1/R_T^2, and the other on the overall detection limit corrected by the relative error, 1/(D_k R_T^2). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both the precision and accuracy of a target result in an efficient manner. Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The results confirm MCise as a reliable tool for inventory analysis of complex nuclear systems.
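Reading the abstract's figures of merit as 1/R_T^2 and 1/(D_k R_T^2), where R_T is the relative error of a target-isotope tally and D_k the overall detection limit (this reading of the garbled notation is an assumption, as are the batch values below), a minimal Python sketch of the two quantities might look like this:

```python
import numpy as np

def relative_error(batch_tallies):
    """Relative error R_T of a target-isotope tally estimated from batch means."""
    x = np.asarray(batch_tallies, dtype=float)
    sem = x.std(ddof=1) / np.sqrt(len(x))  # standard error of the mean
    return sem / x.mean()

# Hypothetical batch tallies for one target isotope (atoms per source atom).
batches = [1.02e-6, 0.97e-6, 1.05e-6, 0.99e-6, 1.01e-6, 0.96e-6]

R_T = relative_error(batches)
D_k = 3.4e-8                      # hypothetical overall detection limit

fom_error = 1.0 / R_T**2          # figure of merit from relative error alone
fom_limit = 1.0 / (D_k * R_T**2)  # detection-limit-corrected figure of merit

print(f"R_T = {R_T:.3f}, 1/R_T^2 = {fom_error:.1f}, 1/(D_k R_T^2) = {fom_limit:.3e}")
```

A variance reduction technique that drives either figure of merit up for fixed effort would, under this reading, be the more efficient one.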
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zheng; Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, 1838 Guangzhou Dadao Bei, Guangzhou 510515; Zhou, Yuning
Highlights:
- Rictor associates with FBXW7 to form an E3 complex.
- Knockdown of rictor decreases ubiquitination of c-Myc and cyclin E.
- Knockdown of rictor increases protein levels of c-Myc and cyclin E.
- Overexpression of rictor induces the degradation of c-Myc and cyclin E proteins.
- Rictor regulation of c-Myc and cyclin E requires FBXW7.
Abstract: Rictor (rapamycin-insensitive companion of mTOR) forms a complex with mTOR and phosphorylates and activates Akt. Activation of Akt induces expression of c-Myc and cyclin E, which are overexpressed in colorectal cancer and play an important role in colorectal cancer cell proliferation. Here, we show that rictor associates with FBXW7 to form an E3 complex participating in the regulation of c-Myc and cyclin E degradation. The rictor-FBXW7 complex is biochemically distinct from the previously reported mTORC2 and can be immunoprecipitated independently of mTORC2. Moreover, knockdown of rictor in serum-deprived colorectal cancer cells results in decreased ubiquitination and increased protein levels of c-Myc and cyclin E, while overexpression of rictor induces the degradation of c-Myc and cyclin E proteins. Genetic knockout of FBXW7 blunts the effects of rictor, suggesting that rictor regulation of c-Myc and cyclin E requires FBXW7. Our findings identify rictor as an important component of the FBXW7 E3 ligase complex participating in the regulation of c-Myc and cyclin E protein ubiquitination and degradation. Importantly, our results suggest that elevated growth factor signaling may contribute to decreased rictor/FBXW7-mediated ubiquitination of c-Myc and cyclin E, thus leading to accumulation of cyclin E and c-Myc in colorectal cancer cells.
Hongyi Xu; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
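The core idea of point-versus-SDF continuous collision detection, that the signed distance bounds how far a moving point may safely advance, admits a compact illustration. The Python sketch below is not the authors' octree-based algorithm; it is a minimal conservative-advancement (sphere-tracing) sketch along the point's motion segment, where `sdf` is assumed to be any callable returning signed distance to the zero level set:

```python
import numpy as np

def segment_sdf_collision(sdf, p0, p1, tol=1e-4, max_steps=64):
    """Advance a point from p0 toward p1; return the normalized time of
    impact in [0, 1], or None if the segment never reaches the surface."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    length = np.linalg.norm(p1 - p0)
    if length == 0.0:
        return None
    direction = (p1 - p0) / length
    t = 0.0
    for _ in range(max_steps):
        d = sdf(p0 + t * direction)
        if d < tol:            # surface reached within tolerance: contact
            return t / length
        t += d                 # safe step: cannot tunnel past the zero level set
        if t > length:
            return None        # segment ends before touching the surface
    return t / length

# Example: unit-sphere SDF; the point moves from (2,0,0) toward the origin.
sphere = lambda x: np.linalg.norm(x) - 1.0
print(segment_sdf_collision(sphere, [2.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # 0.5
```

The octree traversal described in the entry serves the same purpose more efficiently on discretized distance fields; the stepping logic above is only the conceptual core.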
Field programmable gate array-assigned complex-valued computation and its limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com; Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien; Zwick, Wolfgang
We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
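To make the word-length trade-off concrete, here is a minimal Python sketch (not the paper's FPGA implementation) of a complex multiply in two's-complement fixed point with a configurable number of fractional bits, compared against double precision to expose the relative error such a choice introduces:

```python
def to_fixed(x: float, frac_bits: int) -> int:
    """Quantize a real number to a fixed-point integer with frac_bits fraction."""
    return int(round(x * (1 << frac_bits)))

def from_fixed(x: int, frac_bits: int) -> float:
    return x / (1 << frac_bits)

def fixed_cmul(a, b, frac_bits):
    """(ar + i*ai)(br + i*bi) in integer arithmetic; one shift rescales."""
    ar, ai = a
    br, bi = b
    re = (ar * br - ai * bi) >> frac_bits
    im = (ar * bi + ai * br) >> frac_bits
    return re, im

frac_bits = 16
a, b = complex(0.7071, -0.5), complex(-0.25, 0.9)
fa = (to_fixed(a.real, frac_bits), to_fixed(a.imag, frac_bits))
fb = (to_fixed(b.real, frac_bits), to_fixed(b.imag, frac_bits))
re, im = fixed_cmul(fa, fb, frac_bits)
approx = complex(from_fixed(re, frac_bits), from_fixed(im, frac_bits))
print(abs(approx - a * b) / abs(a * b))  # relative error for this word length
```

Sweeping `frac_bits` reproduces, in spirit, the error-versus-precision comparison the entry describes.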
NASA Astrophysics Data System (ADS)
Espinosa-Garcia, J.
Ab initio molecular orbital theory was used to study parts of the reaction between the CH2Br radical and the HBr molecule, and two possibilities were analysed: attack on the hydrogen and attack on the bromine of the HBr molecule. Optimized geometries and harmonic vibrational frequencies were calculated at the second-order Møller-Plesset perturbation theory level, and comparison with available experimental data was favourable. Single-point calculations were then performed at several higher levels of calculation. In the attack on the hydrogen of HBr, two stationary points were located on the direct hydrogen abstraction reaction path: a very weak hydrogen-bonded complex of reactants, C···HBr, close to the reactants, followed by the saddle point (SP). The effects of the level of calculation (method + basis set), spin projection, zero-point energy, thermal corrections (298 K), spin-orbit coupling and basis set superposition error (BSSE) on the energy changes were analysed. Taking the reaction enthalpy (298 K) as reference, agreement with experiment was obtained only when high correlation energy and large basis sets were used. It was concluded that at room temperature (i.e., with zero-point energy and thermal corrections), when the BSSE was included, the complex disappears and the activation enthalpy (298 K) ranges from 0.8 kcal mol-1 to 1.4 kcal mol-1 above the reactants, depending on the level of calculation. It was also concluded that this result is the balance of a complicated interplay of many factors, which are affected by uncertainties in the theoretical calculations. Finally, another possible complex (X complex), which involves the alkyl radical being attracted to the halogen end of HBr (C···BrH), was also explored. It was concluded that this X complex does not exist at room temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Qun; Liu Shuxia, E-mail: liusx@nenu.edu.cn; Liang Dadong
2012-06-15
A series of lanthanide-organic complexes based on polyoxometalates (POMs), [Ln2(DNBA)4(DMF)8][W6O19] (Ln = La (1), Ce (2), Sm (3), Eu (4), Gd (5); DNBA = 3,5-dinitrobenzoate; DMF = N,N-dimethylformamide), has been synthesized. These complexes consist of [W6O19]2- anions and dimeric [Ln2(DNBA)4(DMF)8]2+ cations. The luminescence properties of 4 were measured in the solid state and in different solutions. Notably, the emission intensity increases gradually with increasing solvent permittivity, and this solvent effect can be directly observed by electrospray mass spectrometry (ESI-MS). The ESI-MS analyses show that the eight coordinated DMF units of the dimeric cation are labile: they can move away from the dimeric cations and exchange with solvent molecules. Although the POM anions escape from the 3D supramolecular network, the dimeric structure of [Ln2(DNBA)4]2+ remains unchanged in solution. The conservation of red luminescence is attributed to the maintenance of the aggregated-state structures of the dimeric cations. Graphical abstract: 3D POM-based lanthanide-organic complexes show a solvent effect on the luminescence property. The origin of this solvent effect can be understood and explained on the basis of the existence of coordinatively active sites, as shown by the ESI-MS studies. Highlights:
- The solvent effect on the luminescence property of POM-based lanthanide-organic complexes.
- ESI-MS analyses illuminate the correlation between structure and luminescence property.
- The dimeric cations have eight active sites of solvent coordination.
- The aggregated-state structure of the dimeric cation remains unchanged in solution.
- Luminescence combined with ESI-MS is a new method for investigating the interaction of complex and solvent.
Boiling point measurement of a small amount of brake fluid by thermocouple and its application.
Mogami, Kazunari
2002-09-01
This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles had dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.
NASA Astrophysics Data System (ADS)
Shepard, R.
2008-12-01
Microbial communities are architects of incredibly complex and diverse morphological structures. Each morphology is a snapshot that reflects the complex interactions within the microbial community and between the community and its environment. Characterizing morphology as an emergent property of microbial communities is thus relevant to understanding the evolution of multicellularity and complexity in developmental systems, to the identification of biosignatures, and to furthering our understanding of modern and ancient microbial ecology. Recently discovered cyanobacterial mats in Pavilion Lake, British Columbia construct unusual complex architecture on the scale of decimeters that incorporates significant void space. Fundamental mesoscale morphological elements include terraces, arches, bridges, depressions, domes, and pillars. The mats themselves also exhibit several microscale morphologies, with reticulate structures being the dominant example. The reticulate structures exhibit a diverse spectrum of morphologies with endmembers characterized by either angular or curvilinear ridges. In laboratory studies, aggregation into reticulate structures occurs as a result of the random gliding and colliding among motile cyanobacterial filaments. Likewise, when Pavilion reticulate mats were sampled and brought to the surface, cyanobacteria invariably migrated out of the mat onto surrounding surfaces. Filaments were observed to move rapidly in clumps, preferentially following paths of previous filaments. The migrating filaments organized into new angular and ropey reticulate biofilms within hours of sampling, demonstrating that cell motility is responsible for the reticulate patterns. Because the morphogenesis of reticulate structures can be linked to motility behaviors of filamentous cyanobacteria, the Willow Point mats provide a unique natural laboratory in which to elucidate the connections between a specific microbial behavior and the construction of complex microbial community morphology. To this end, we identified and characterized fundamental building blocks of the mesoscale morphologies, including bridges, anchors, and curved edges. These morphological building blocks were compared with the suite of motility behaviors and patterns observed in reticulate morphogenesis. Results of this comparison suggest that cyanobacterial motility plays a significant and often dominant role in the morphogenesis of the entire suite of morphologies observed in the microbial mats of Pavilion Lake.
SAS procedures for designing and analyzing sample surveys
Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.
2003-01-01
Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
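As a concrete illustration of the stratified strategy mentioned above, here is a short Python sketch (the entry concerns SAS procedures; this is an assumed NumPy analogue with made-up stratum sizes and data) of the design-based stratified estimator of a population mean and its variance with finite population correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: three habitat strata with known sizes N_h and an
# independent simple random sample drawn within each stratum.
strata = {
    "bottomland": {"N": 5000, "sample": rng.normal(12.0, 3.0, 40)},
    "upland":     {"N": 3000, "sample": rng.normal(7.0, 2.0, 30)},
    "wetland":    {"N": 1000, "sample": rng.normal(20.0, 5.0, 20)},
}

N = sum(s["N"] for s in strata.values())

# Stratified estimator: weight each stratum mean by its population share.
mean_st = sum(s["N"] / N * s["sample"].mean() for s in strata.values())

# Its variance, with finite population correction (1 - n_h / N_h) per stratum.
var_st = sum(
    (s["N"] / N) ** 2
    * (1 - len(s["sample"]) / s["N"])
    * s["sample"].var(ddof=1) / len(s["sample"])
    for s in strata.values()
)

print(f"stratified mean = {mean_st:.2f}, SE = {var_st ** 0.5:.2f}")
```

Multistage and adaptive designs replace the within-stratum simple random samples with further stages of selection, but the weighting logic is analogous.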
Mapping of bird distributions from point count surveys
Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.
Donnenwerth, Michael P; Roukis, Thomas S
2013-04-01
Failed total ankle replacement is a complex problem that should only be treated by experienced foot and ankle surgeons. Significant bone loss can preclude revision total ankle replacement and obligate revision through a complex tibio-talo-calcaneal arthrodesis. A systematic review of the world literature reveals a nonunion rate of 24.2%. A weighted mean of the modified American Orthopaedic Foot and Ankle Society Ankle and Hindfoot Scale demonstrated fair patient outcomes of 58.1 points on an 86-point scale (67.6 points on a 100-point scale). Complications were observed in 38 of 62 (62.3%) patients reviewed, with the most common complication being nonunion. Copyright © 2013 Elsevier Inc. All rights reserved.
John C. Brissette; Mark J. Ducey; Jeffrey H. Gove
2003-01-01
We field tested a new method for sampling down coarse woody material (CWM) using an angle gauge and compared it with the more traditional line intersect sampling (LIS) method. Permanent sample locations in stands managed with different silvicultural treatments within the Penobscot Experimental Forest (Maine, USA) were used as the sampling locations. Point relascope...
NASA Astrophysics Data System (ADS)
Gomes, Ana; Boski, Tomasz; Moura, Delminda; Szkornik, Katie; Witkowski, Andrzej; Connor, Simon; Laut, Lazaro; Sobrinho, Frederico; Oliveira, Sónia
2017-04-01
Diatoms are unicellular algae that live in saline, brackish and freshwater environments, either floating in the water column or associated with various substrates (e.g., muddy and sandy sediments). Diatoms are sensitive to changes in environmental variables such as salinity, sediment texture, nutrient availability, light and temperature. This sensitivity, along with their short lifespan, allows diatoms to respond quickly to environmental changes. Since the beginning of the 20th century, diatoms have been widely used to study the Holocene evolution of estuaries worldwide, particularly to reconstruct ecological responses to sea-level and climate changes. However, diatoms have been poorly studied in estuarine intertidal zones, owing to the complexity of these environments, which have both fluvial and marine influences. The aim of this study was to understand diatom diversity and spatial distribution in the intertidal zones of two geomorphologically and hydrologically distinct estuaries. Sediment samples were collected within the intertidal zones along the Arade and Guadiana River estuaries in southern Iberia. The sampling points embraced almost the full tidal and salinity gradients of both estuaries, capturing the highest possible variability of environments and hence of diatom assemblages. At each sampling point, the salinity and pH of the sediment interstitial water were measured. The sediment samples were subdivided for diatom identification, textural analysis and organic matter determination. All sampling points were georeferenced by DGPS and the duration of tidal inundation was calculated for each site. Following diatom identification, the data were analysed statistically (i.e., cluster analysis, PCA, DCA and RDA). The present study revealed a great diatom diversity in both estuaries (418 species), with several species new to science. The most important diatom species (with abundances higher than or equal to 5%) occur in five ecological groups, which are associated with five distinct environments: lower estuary sandflats, lower estuary mudflats, middle to upper estuary mudflats, lower estuary salt marshes and middle estuary salt marshes. This study allowed us to establish modern analogues that are essential for developing transfer functions (quantitative palaeoenvironmental estimates). These methods will enable more accurate Holocene palaeoenvironmental reconstructions on the southern Iberian coast and will improve knowledge about the evolution of estuarine environments globally. The work was supported by the SFRH/BD/62405/2009 fellowship, funded by the Portuguese Foundation for Science and Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornwell, Paris A; Bunn, Jeffrey R; Schmidlin, Joshua E
The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the neutron time used for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction of gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days for relating data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment in the same manner it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface ('point cloud'), specific features, and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique because the SCS can be defined in such a way as to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to a frame of 80/20, and fiducial points are attached to the sample or frame and then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as the mounting hardware. GeoMagic Studio 12 is the software package used to construct the model from the point cloud the scan arm creates. Once the model, fiducial, and measurement files are created, a special program called SScanSS combines the information and, by simulating the sample on the diffractometer, helps plan the experiment before using neutron time. Finally, the sample is mounted on the relevant stress measurement instrument and the fiducial points are measured again. In the HFIR beam room, a laser tracker is used in conjunction with a program called CAM2 to measure the fiducial points in the NRSF2 instrument's sample positioner coordinate system. SScanSS is then used again to perform a coordinate system transformation of the measurement file locations to the sample positioner coordinate system. A procedure file is then written with the coordinates of the desired measurement locations in the sample positioner coordinate system. This file is often called a script or command file and can be further modified using Excel. It is very important to note that this process is not a linear one; rather, it is often iterative. Many of the steps in this guide are interdependent on one another. It is very important to discuss the process as it pertains to the specific sample being measured. What works with one sample may not necessarily work for another. This guide attempts to provide a typical work flow that has been successful in most cases.
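The coordinate-system transformation step described above (the same fiducials measured in both the sample frame and the instrument's positioner frame) is a standard rigid registration problem. The sketch below is an assumed Python analogue of that step, not SScanSS code; it recovers the rotation and translation from paired fiducials with the Kabsch algorithm, using hypothetical coordinates:

```python
import numpy as np

def rigid_transform(A, B):
    """Kabsch: best-fit rotation R and translation t with R @ a + t ~= b."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Hypothetical fiducials: sample frame (ScanArm) and instrument frame (tracker).
scanarm = np.array([[0., 0., 0.], [100., 0., 0.], [0., 80., 0.], [0., 0., 60.]])
th = np.radians(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.],
                   [np.sin(th),  np.cos(th), 0.],
                   [0., 0., 1.]])
tracker = scanarm @ R_true.T + np.array([500., -200., 50.])

R, t = rigid_transform(scanarm, tracker)
measurement_points = np.array([[10., 20., 5.]])  # defined in the sample frame
print(measurement_points @ R.T + t)              # same points, instrument frame
```

With the transform in hand, every planned measurement location can be re-expressed in positioner coordinates for the procedure file.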
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.
Defect interactions in GaAs single crystals
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Lagowski, J.
1984-01-01
The two-sublattice structural configuration of GaAs and deviations from stoichiometry render the generation and interaction of electrically active point defects (and point defect complexes) critically important for device applications, and very complex. Of the defect-induced energy levels, those lying deep in the energy band are very effective lifetime "killers". The level 0.82 eV below the conduction band, commonly referred to as EL2, is a major deep level, particularly in melt-grown GaAs. This level is associated with an antisite defect complex (AsGa-VAs). Possible mechanisms of its formation and its annihilation were further developed.
Kolpin, Dana W.; Schenzel, Judith; Meyer, Michael T.; Phillips, Patrick J.; Hubbard, Laura E.; Scott, Tia-Marie; Bucheli, Thomas D.
2014-01-01
To determine the prevalence of mycotoxins in streams, 116 water samples from 32 streams and three wastewater treatment plant effluents were collected in 2010, providing the broadest investigation of the spatial and temporal occurrence of mycotoxins in streams conducted in the United States to date. Of the 33 target mycotoxins measured, nine were detected at least once during this study. Detections of mycotoxins were nearly ubiquitous even though the basin size spanned four orders of magnitude. At least one mycotoxin was detected in 94% of the 116 samples collected. Deoxynivalenol was the most frequently detected mycotoxin (77%), followed by nivalenol (59%), beauvericin (43%), zearalenone (26%), β-zearalenol (20%), 3-acetyl-deoxynivalenol (16%), α-zearalenol (10%), diacetoxyscirpenol (5%), and verrucarin A (1%). In addition, one or more of the three known estrogenic compounds (i.e., zearalenone, α-zearalenol, and β-zearalenol) were detected in 43% of the samples, with maximum concentrations substantially higher than observed in previous research. While concentrations were generally low (i.e., < 50 ng/L) during this study, concentrations exceeding 1000 ng/L were measured during spring snowmelt conditions in agricultural settings and in wastewater treatment plant effluent. Results of this study suggest that both diffuse (e.g., release from infected plants and manure applications from exposed livestock) and point (e.g., wastewater treatment plants and food processing plants) sources are important environmental pathways for mycotoxin transport to streams. The ecotoxicological impacts of the long-term, low-level exposures to mycotoxins alone or in combination with complex chemical mixtures are unknown.
Dobson, Ian; Carreras, Benjamin A; Lynch, Vickie E; Newman, David E
2007-06-01
We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.
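The power-law statistics referred to above can be estimated directly from blackout-size data. As a hedged illustration (not the authors' analysis), the standard continuous maximum-likelihood estimator of a power-law tail exponent from Clauset, Shalizi and Newman (2009) is a one-liner; `sizes` here would be, for example, megawatts shed per blackout, and `x_min` the tail threshold, both hypothetical below:

```python
import numpy as np

def power_law_alpha(sizes, x_min):
    """MLE of alpha for p(x) proportional to x^(-alpha), x >= x_min."""
    tail = np.asarray([x for x in sizes if x >= x_min], dtype=float)
    return 1.0 + len(tail) / np.log(tail / x_min).sum()

# Hypothetical blackout sizes (MW shed); real values come from outage records.
sizes = [12, 35, 18, 400, 95, 1200, 60, 22, 5300, 150, 75, 30]
print(power_law_alpha(sizes, x_min=20.0))
```

A heavy-tailed exponent estimated this way is what makes the expected cost of rare large blackouts comparable to, or larger than, that of frequent small ones.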
Faridounnia, Maryam; Wienk, Hans; Kovačič, Lidija; Folkers, Gert E.; Jaspers, Nicolaas G. J.; Kaptein, Robert; Hoeijmakers, Jan H. J.; Boelens, Rolf
2015-01-01
The ERCC1-XPF heterodimer, a structure-specific DNA endonuclease, is best known for its function in the nucleotide excision repair (NER) pathway. The ERCC1 point mutation F231L, located at the hydrophobic interaction interface of ERCC1 (excision repair cross-complementation group 1) and XPF (xeroderma pigmentosum complementation group F), leads to severe NER pathway deficiencies. Here, we analyze biophysical properties and report the NMR structure of the complex of the C-terminal tandem helix-hairpin-helix domains of ERCC1-XPF that contains this mutation. The structures of wild type and the F231L mutant are very similar. The F231L mutation results in only a small disturbance of the ERCC1-XPF interface, where, in contrast to Phe231, Leu231 lacks interactions stabilizing the ERCC1-XPF complex. One of the two anchor points is severely distorted, and this results in a more dynamic complex, causing reduced stability and an increased dissociation rate of the mutant complex as compared with wild type. These data provide a biophysical explanation for the severe NER deficiencies caused by this mutation. PMID:26085086
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
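The voxel step at the core of such a pipeline can be illustrated briefly. The Python fragment below is not the authors' implementation; it is a minimal sketch under the assumption that each occupied cubic cell of edge length h becomes one hexahedral element, with a random cloud standing in for a real scan:

```python
import numpy as np

def voxelize(points, h):
    """Map a point cloud to occupied cubic voxels of edge length h.

    Returns the integer (i, j, k) indices of occupied cells plus the grid
    origin; each occupied cell is taken as one hexahedral finite element."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / h).astype(int)
    return np.unique(idx, axis=0), origin

def hex_nodes(cell, origin, h):
    """Eight corner nodes of the hexahedron for one occupied cell (i, j, k)."""
    i, j, k = cell
    return [origin + h * np.array([i + di, j + dj, k + dk])
            for dk in (0, 1) for dj in (0, 1) for di in (0, 1)]

pts = np.random.default_rng(1).random((1000, 3)) * [4.0, 2.0, 8.0]
cells, origin = voxelize(pts, h=0.5)
print(len(cells), "voxel elements; first element nodes:", hex_nodes(cells[0], origin, 0.5)[0])
```

A real implementation additionally merges shared nodes between neighbouring voxels and distinguishes inner from outer surfaces, as the entry describes.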
Software Techniques for Non-Von Neumann Architectures
1990-01-01
Comm. topology: programmable Benes network; hypercubic lattice for QCD. Control: centralized. Assignment: static. Memory: shared. Synchronization: universal. Max CPUs: 566. Processor boards: each = 4 floating-point units, 2 multipliers. CPU size: 32-bit floating-point chips. Performance: 11.4 Gflops. Market: quantum chromodynamics (QCD). ...functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been proposed previously. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also provided to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
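In outline, the calibration combines a point-source efficiency measured at the representative point with a computed self-absorption correction at each energy. A minimal sketch of that combination, with hypothetical numbers purely for illustration:

```python
# Hypothetical values: full-energy-peak efficiencies measured with a point
# source at the representative point, and code-computed self-absorption
# correction factors for the volume sample at the same energies.
energies_keV = [122, 662, 1332]
eff_point = [0.052, 0.012, 0.0065]  # point-source efficiency at rep. point
f_selfabs = [0.78, 0.91, 0.95]      # self-absorption correction factors

# Volume-sample efficiency = point efficiency times correction factor.
for E, e, f in zip(energies_keV, eff_point, f_selfabs):
    print(f"{E} keV: volume-sample efficiency = {e * f:.4f}")
```

The value of the method is that one point-source measurement per geometry replaces the preparation of matrix-matched volume standards.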
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2015-09-07
Photosystem II (PS II) is a protein complex which evolves oxygen and drives charge separation for photosynthesis employing electron and excitation-energy transfer processes over a wide timescale range from picoseconds to milliseconds. While the fluorescence emitted by the antenna pigments of this complex is known as an important indicator of the activity of photosynthesis, its interpretation was difficult because of the complexity of PS II. In this study, an extensive kinetic model which describes the complex and multi-timescale characteristics of PS II is analyzed through the use of the hierarchical coarse-graining method proposed in the authors' earlier work. In this coarse-grained analysis, the reaction center (RC) is described by two states, open and closed RCs, both of which consist of oxidized and neutral special pairs being in quasi-equilibrium states. Besides, the PS II model at millisecond scale with three-state RC, which was studied previously, could be derived by suitably adjusting the kinetic parameters of electron transfer between tyrosine and RC. Our novel coarse-grained model of PS II can appropriately explain the light-intensity dependent change of the characteristic patterns of fluorescence induction kinetics from O-J-I-P, which shows two inflection points, J and I, between initial point O and peak point P, to O-J-D-I-P, which shows a dip D between J and I inflection points. Copyright © 2015 Elsevier Ltd. All rights reserved.
Complex sample survey estimation in static state-space
Raymond L. Czaplewski
2010-01-01
Increased use of remotely sensed data is a key strategy adopted by the Forest Inventory and Analysis Program. However, multiple sensor technologies require complex sampling units and sampling designs. The Recursive Restriction Estimator (RRE) accommodates this complexity. It is a design-consistent Empirical Best Linear Unbiased Prediction for the state-vector, which...
Sampling from complex networks using distributed learning automata
NASA Astrophysics Data System (ADS)
Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza
2014-02-01
A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is considered a graph of real-world phenomena such as biological networks, ecological networks, technological networks, information networks and, particularly, social networks. Recently, major studies have been reported on the characterization of social networks, owing to a growing trend in the analysis of online social networks as dynamic, complex, large-scale graphs. Because real networks are large and access to them is limited, the network model is characterized using an appropriate part of the network obtained by sampling approaches. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. Experimental results are compared with those of several sampling methods in terms of different measures. The experimental results demonstrate the superiority of the proposed algorithm over the others.
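The paper's distributed-learning-automata sampler is not reproduced here, but the flavor of automata-guided sampling can be shown in a few lines of Python: a random walker keeps a linear reward-inaction (L_RI) automaton at each visited node and reinforces neighbor choices that reach not-yet-sampled nodes. The graph, reward rule, and parameters below are illustrative assumptions:

```python
import random

def la_walk_sample(graph, n_samples, alpha=0.1, max_steps=100000, seed=0):
    """Automata-guided random-walk sampling.

    graph: dict mapping node -> list of neighbors. Each visited node keeps
    action probabilities over its neighbors; choosing a neighbor not yet in
    the sample counts as a favourable response and triggers the L_RI update."""
    rng = random.Random(seed)
    probs = {}
    current = rng.choice(list(graph))
    sampled = {current}
    for _ in range(max_steps):
        if len(sampled) >= n_samples:
            break
        nbrs = graph[current]
        p = probs.setdefault(current, [1.0 / len(nbrs)] * len(nbrs))
        k = rng.choices(range(len(nbrs)), weights=p)[0]
        nxt = nbrs[k]
        if nxt not in sampled:               # reward: reinforce action k
            p[k] += alpha * (1.0 - p[k])
            for j in range(len(p)):
                if j != k:
                    p[j] *= 1.0 - alpha
            sampled.add(nxt)
        current = nxt                        # penalty: no update (inaction)
    return sampled

# Toy graph: two triangles joined by a bridge edge.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
print(la_walk_sample(g, n_samples=4))
```

The actual algorithm coordinates many automata in a distributed fashion; this single-walker sketch only conveys how the learning update biases the sample toward unexplored regions.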
Allometry and Ecology of the Bilaterian Gut Microbiome.
Sherrill-Mix, Scott; McCormick, Kevin; Lauder, Abigail; Bailey, Aubrey; Zimmerman, Laurie; Li, Yingying; Django, Jean-Bosco N; Bertolani, Paco; Colin, Christelle; Hart, John A; Hart, Terese B; Georgiev, Alexander V; Sanz, Crickette M; Morgan, David B; Atencia, Rebeca; Cox, Debby; Muller, Martin N; Sommer, Volker; Piel, Alexander K; Stewart, Fiona A; Speede, Sheri; Roman, Joe; Wu, Gary; Taylor, Josh; Bohm, Rudolf; Rose, Heather M; Carlson, John; Mjungu, Deus; Schmidt, Paul; Gaughan, Celeste; Bushman, Joyslin I; Schmidt, Ella; Bittinger, Kyle; Collman, Ronald G; Hahn, Beatrice H; Bushman, Frederic D
2018-03-27
Classical ecology provides principles for construction and function of biological communities, but to what extent these apply to the animal-associated microbiota is just beginning to be assessed. Here, we investigated the influence of several well-known ecological principles on animal-associated microbiota by characterizing gut microbial specimens from bilaterally symmetrical animals (Bilateria) ranging from flies to whales. A rigorously vetted sample set containing 265 specimens from 64 species was assembled. Bacterial lineages were characterized by 16S rRNA gene sequencing. Previously published samples were also compared, allowing analysis of over 1,098 samples in total. A restricted number of bacterial phyla was found to account for the great majority of gut colonists. Gut microbial composition was associated with host phylogeny and diet. We identified numerous gut bacterial 16S rRNA gene sequences that diverged deeply from previously studied taxa, identifying opportunities to discover new bacterial types. The number of bacterial lineages per gut sample was positively associated with animal mass, paralleling known species-area relationships from island biogeography and implicating body size as a determinant of community stability and niche complexity. Samples from larger animals harbored greater numbers of anaerobic communities, specifying a mechanism for generating more-complex microbial environments. Predictions for species/abundance relationships from models of neutral colonization did not match the data set, pointing to alternative mechanisms such as selection of specific colonists by environmental niche. Taken together, the data suggest that niche complexity increases with gut size and that niche selection forces dominate gut community construction. IMPORTANCE The intestinal microbiome of animals is essential for health, contributing to digestion of foods, proper immune development, inhibition of pathogen colonization, and catabolism of xenobiotic compounds. How these communities assemble and persist is just beginning to be investigated. Here we interrogated a set of gut samples from a wide range of animals to investigate the roles of selection and random processes in microbial community construction. We show that the numbers of bacterial species increased with the weight of host organisms, paralleling findings from studies of island biogeography. Communities in larger organisms tended to be more anaerobic, suggesting one mechanism for niche diversification. Nonselective processes enable specific predictions for community structure, but our samples did not match the predictions of the neutral model. Thus, these findings highlight the importance of niche selection in community construction and suggest mechanisms of niche diversification. Copyright © 2018 Sherrill-Mix et al.
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of the organs at risk.
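The sampling sequences compared above are easy to generate. Below is a small Python sketch (illustrative; the control-point counts are made up, not those of the study) of a doubling, RapidArc-like schedule and an equi-length schedule such as E20, expressed as cumulative control-point counts per multiresolution level:

```python
def doubling_sequence(n_start, n_final):
    """Levels that roughly double the number of control points each time."""
    seq, n = [], n_start
    while n < n_final:
        seq.append(n)
        n *= 2
    seq.append(n_final)
    return seq

def equi_length_sequence(n_levels, n_final):
    """n_levels levels, each adding the same number of control points."""
    return [round(n_final * (k + 1) / n_levels) for k in range(n_levels)]

print(doubling_sequence(12, 192))     # e.g. [12, 24, 48, 96, 192]
print(equi_length_sequence(20, 180))  # E20-style: [9, 18, ..., 180]
```

An Otto-like schedule is the limiting case in which each level adds a single control point; the study's finding is that finer-grained schedules such as E20 and E30 can reduce the final cost function relative to a five-level doubling scheme.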
Leherte, Laurence; Vercauteren, Daniel P
2014-02-01
Reduced point charge models of amino acids are designed (i) from the positions of local extrema in charge density distribution functions built by applying the Poisson equation to smoothed molecular electrostatic potential (MEP) functions, and (ii) from the positions of local maxima in promolecular electron density distribution functions. The corresponding charge values are fitted against all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable molecular dynamics trajectories of a ubiquitin-ligand complex (PDB: 1Q0W) under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are treated as virtual sites with null mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; special attention to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions is needed. Results obtained at various temperatures suggest that the use of reduced point charge models allows probing of local potential hypersurface minima that are similar to the all-atom ones but are characterized by lower energy barriers. It enables generation of various conformations of the protein complex more rapidly than the all-atom point charge representation. Copyright © 2013 Elsevier Inc. All rights reserved.