Sample records for optimized field sampling

  1. Numerical study of ultra-low field nuclear magnetic resonance relaxometry utilizing a single axis magnetometer for signal detection.

    PubMed

    Vogel, Michael W; Vegh, Viktor; Reutens, David C

    2013-05-01

    This paper investigates the optimal placement of a localized single-axis magnetometer for ultra-low field (ULF) relaxometry for various sample shapes and sizes. The authors used the finite element method to determine the magnetic field environment of the sample and to evaluate the optimal location of the single-axis magnetometer. For each sample, the authors analysed the magnetic field distribution around it and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that, of the samples studied, a flat-bottomed glass vial with a volume of 10 ml achieves the highest signal. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters on signal generation before designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations, the authors were able to optimize structural parameters, such as sample shape and size and sensor orientation and location, to maximize the measured signal in ultra-low field relaxometry.

  2. Optimal tumor sampling for immunostaining of biomarkers in breast carcinoma

    PubMed Central

    2011-01-01

    Introduction: Biomarkers, such as the estrogen receptor (ER), are used to determine therapy and prognosis in breast carcinoma. Immunostaining assays of biomarker expression have a high rate of inaccuracy; for ER, estimates run as high as 20%. Biomarkers are heterogeneously expressed in breast tumors, and this heterogeneity may contribute to the inaccuracy of immunostaining assays. Currently, no evidence-based standards exist for the amount of tumor that must be sampled to correct for biomarker heterogeneity. The aim of this study was to determine the number of 20X fields necessary to obtain a representative measurement of expression in a whole tissue section for selected biomarkers: ER, HER-2, AKT, ERK, S6K1, GAPDH, cytokeratin, and MAP-Tau. Methods: Two collections of whole tissue sections of breast carcinoma were immunostained for the biomarkers. Expression was quantified using the Automated Quantitative Analysis (AQUA) method of quantitative immunofluorescence. Simulated sampling of various numbers of fields (ranging from one to thirty-five) was performed for each marker. The optimal number was selected for each marker via resampling techniques and minimization of prediction error over an independent test set. Results: The optimal number of 20X fields varied by biomarker, ranging from three to fourteen. More heterogeneous markers, such as MAP-Tau protein, required a larger sample of 20X fields to produce a representative measurement. Conclusions: The number of 20X fields that must be sampled to produce a representative measurement of biomarker expression varies by marker, with more heterogeneous markers requiring more fields. The clinical implication is that breast biopsies comprising a small number of fields may be inadequate to represent whole-tumor biomarker expression for many markers. Additionally, for biomarkers newly introduced into clinical use, especially if therapeutic response is dictated by level of expression, the optimal size of the tissue sample must be determined on a marker-by-marker basis. PMID:21592345
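
The simulated-sampling idea in this record can be sketched as follows, with hypothetical per-field AQUA-like scores and an arbitrary 5% relative-error threshold (none of these numbers come from the study):

```python
import random

def optimal_field_count(field_scores, threshold=0.05, n_sim=2000, seed=0):
    """Smallest number of fields whose mean expression predicts the
    whole-section mean within a relative error threshold, estimated by
    repeated simulated sampling without replacement."""
    rng = random.Random(seed)
    whole = sum(field_scores) / len(field_scores)
    for k in range(1, len(field_scores) + 1):
        errors = []
        for _ in range(n_sim):
            sample = rng.sample(field_scores, k)
            errors.append(abs(sum(sample) / k - whole) / whole)
        if sum(errors) / n_sim <= threshold:
            return k
    return len(field_scores)

# Hypothetical markers: a nearly uniform one and a highly variable one.
homogeneous = [10 + 0.1 * (i % 5) for i in range(35)]
heterogeneous = [5 + 10 * (i % 7) for i in range(35)]
```

With these toy inputs the homogeneous marker is adequately sampled by a single field, while the heterogeneous one needs many more, mirroring the marker-dependent optimum reported above.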

  3. Singular values behaviour optimization in the diagnosis of feed misalignments in radioastronomical reflectors

    NASA Astrophysics Data System (ADS)

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro

    2016-07-01

    This communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and distribution of the far-field sampling points used to retrieve the antenna status in terms of feed misalignments, with the aim of drastically reducing the duration of the measurement process, minimizing the effects of variable environmental conditions, and simplifying the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized by means of a Principal Component Analysis. The number and position of the field samples are then determined by optimizing the singular value behaviour of the relevant operator.

  4. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme makes it possible to reduce the number of sampling points without decreasing, and sometimes even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can effectively direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey was applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD that uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, and a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which can define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of an interpolation grid, allowed optimization of the sampling scheme while distinguishing among areas with different priority levels.
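
As a rough illustration of the MMSD criterion, the following toy sketch spreads sampling points over a unit square by spatial simulated annealing; the grid size, move scale, and cooling schedule are arbitrary choices, not those of MSANOS:

```python
import math
import random

def mmsd(points, grid):
    """Mean of the shortest distances from each grid node to its nearest sample."""
    return sum(min(math.dist(g, p) for p in points) for g in grid) / len(grid)

def anneal_mmsd(n_points=8, n_iter=1500, t0=0.05, cooling=0.998, seed=1):
    """Spread n_points over the unit square by simulated annealing on MMSD."""
    rng = random.Random(seed)
    grid = [(i / 11, j / 11) for i in range(12) for j in range(12)]
    pts = [(rng.random(), rng.random()) for _ in range(n_points)]
    cost = mmsd(pts, grid)
    best, best_cost, t = list(pts), cost, t0
    for _ in range(n_iter):
        # Perturb one randomly chosen sampling point, clipped to the field.
        cand = list(pts)
        k = rng.randrange(n_points)
        cand[k] = (min(1.0, max(0.0, cand[k][0] + rng.gauss(0, 0.1))),
                   min(1.0, max(0.0, cand[k][1] + rng.gauss(0, 0.1))))
        cand_cost = mmsd(cand, grid)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / t):
            pts, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = list(pts), cost
        t *= cooling
    return best, best_cost

pts, cost = anneal_mmsd()
```

The MWMSD variant would simply weight each grid node's shortest distance by the local ECa gradient before averaging.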

  5. Application of an Optimal Search Strategy for the DNAPL Source Identification to a Field Site in Nanjing, China

    NASA Astrophysics Data System (ADS)

    Longting, M.; Ye, S.; Wu, J.

    2014-12-01

    Identifying and removing the DNAPL source in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume estimate is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, the following modifications were made. Random hydraulic conductivity (K) fields are generated after fitting the measured K data to a variogram model. The potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is estimated and later optimized by the simplex method or a genetic algorithm. The whole algorithm then guides optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support from the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.
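
The Kalman-filter plume update mentioned above can be illustrated with a minimal sketch for a plume discretized at a few grid nodes; the prior covariance would in practice come from the Monte Carlo ensemble, and all numbers here are hypothetical:

```python
def kalman_update(mean, cov, node, z, r):
    """Condition a plume estimate (mean vector, covariance matrix) on one
    concentration measurement z taken at grid index `node`, with
    measurement-error variance r."""
    m = len(mean)
    s = cov[node][node] + r                      # innovation variance
    gain = [cov[i][node] / s for i in range(m)]  # Kalman gain column
    innov = z - mean[node]
    new_mean = [mean[i] + gain[i] * innov for i in range(m)]
    new_cov = [[cov[i][k] - gain[i] * cov[node][k] for k in range(m)]
               for i in range(m)]
    return new_mean, new_cov

# Toy prior: three correlated nodes. A measurement at node 1 also shifts
# the estimates (and shrinks the variances) at the neighbouring nodes.
mean, cov = kalman_update([1.0, 2.0, 3.0],
                          [[1.0, 0.5, 0.2],
                           [0.5, 1.0, 0.5],
                           [0.2, 0.5, 1.0]],
                          node=1, z=2.5, r=0.25)
```

Because the prior covariance couples the nodes, a single well sample updates the whole plume estimate, which is what makes each additional sampling location informative.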

  6. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
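
For a concrete reading of the optimality criteria, here is a toy sketch for a single two-parameter logistic (2PL) field-test item: D-optimality maximizes the determinant of the accumulated information matrix, while A-optimality minimizes the trace of its inverse. The paper's posterior-sampling machinery is not reproduced; point ability values stand in for posterior draws, and all numbers are hypothetical:

```python
import math

def item_info(a, b, theta):
    """2PL Fisher information matrix for item parameters (a, b) at ability theta."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    w = p * (1 - p)
    d = theta - b
    return [[w * d * d, -w * a * d], [-w * a * d, w * a * a]]

def madd(m1, m2):
    return [[m1[i][j] + m2[i][j] for j in range(2)] for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def pick_examinee(acc_info, thetas, a, b, criterion="D"):
    """Assign the field-test item to the examinee that most improves the
    accumulated information matrix under the chosen optimality criterion."""
    def score(theta):
        m = madd(acc_info, item_info(a, b, theta))
        # D: maximize determinant; A: minimize trace of the 2x2 inverse.
        return det2(m) if criterion == "D" else -(m[0][0] + m[1][1]) / det2(m)
    return max(range(len(thetas)), key=lambda i: score(thetas[i]))
```

With these toy numbers, `pick_examinee([[1.0, 0.0], [0.0, 1.0]], [0.0, 2.0], 1.0, 0.0, "D")` assigns the item to the second examinee, whose ability is farther from the item difficulty and therefore adds more information about the discrimination parameter.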

  7. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. Finite differences propagate and amplify the errors present in real measurements, so this problem must be minimized to obtain reliable damage localizations. One way to decrease the propagation and amplification of errors is to select an optimal spatial sampling. This paper presents a technique in which an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of the modal rotation fields of a beam with single and multiple damages are obtained with shearography, an optical technique that allows full-field measurements. These measurements are used to test the validity of the optimal sampling technique for improving damage localization in real structures. An investigation of the ability of a model updating technique to quantify the damage is also reported. The model updating technique is driven by the variations in measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.

  8. Hybrid Optimal Design of the Eco-Hydrological Wireless Sensor Network in the Middle Reach of the Heihe River Basin, China

    PubMed Central

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-01-01

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions was developed to optimize the distribution of EHWSN nodes based on geostatistics. The hybrid model incorporates two sub-criteria: one for variogram modeling, to represent the variability, and another for improving the spatial prediction, to evaluate remote sensing products. The reasonableness of the optimized EHWSN was validated with respect to representativeness, variogram modeling, and spatial accuracy using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales were then predicted. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. The validations demonstrate that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variable are unknown. PMID:25317762

  9. Hybrid optimal design of the eco-hydrological wireless sensor network in the middle reach of the Heihe River Basin, China.

    PubMed

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-10-14

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions was developed to optimize the distribution of EHWSN nodes based on geostatistics. The hybrid model incorporates two sub-criteria: one for variogram modeling, to represent the variability, and another for improving the spatial prediction, to evaluate remote sensing products. The reasonableness of the optimized EHWSN was validated with respect to representativeness, variogram modeling, and spatial accuracy using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales were then predicted. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. The validations demonstrate that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variable are unknown.

  10. Optimization of the GBMV2 implicit solvent force field for accurate simulation of protein conformational equilibria.

    PubMed

    Lee, Kuo Hao; Chen, Jianhan

    2017-06-15

    Accurate treatment of the solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as with the generalized Born (GB) class of models, arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage the improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.

  11. Comparison of Adsorbed Mercury Screening Method With Cold-Vapor Atomic Absorption Spectrophotometry for Determination of Mercury in Soil

    NASA Technical Reports Server (NTRS)

    Easterling, Donald F.; Hovanitz, Edward S.; Street, Kenneth W.

    2000-01-01

    A field screening method for the determination of elemental mercury in environmental soil samples involves the thermal desorption of the mercury from the sample onto gold and then the thermal desorption from the gold to a gold-film mercury vapor analyzer. This field screening method contains a large number of conditions that could be optimized for the various types of soils encountered. In this study, the conditions were optimized for the determination of mercury in silty clay materials, and the results were comparable to the cold-vapor atomic absorption spectrophotometric method of determination. This paper discusses the benefits and disadvantages of employing the field screening method and provides the sequence of conditions that must be optimized to employ this method of determination on other soil types.

  12. Serum Dried Samples to Detect Dengue Antibodies: A Field Study.

    PubMed

    Maldonado-Rodríguez, Angelica; Rojas-Montes, Othon; Vazquez-Rosales, Guillermo; Chavez-Negrete, Adolfo; Rojas-Uribe, Magdalena; Posadas-Mondragon, Araceli; Aguilar-Faisal, Leopoldo; Cevallos, Ana Maria; Xoconostle-Cazares, Beatriz; Lira, Rosalia

    2017-01-01

    Dried blood and serum samples are useful resources for detecting antiviral antibodies. The conditions for elution of the sample need to be optimized for each disease. Dengue is a widespread disease in Mexico which requires continuous surveillance. In this study, we standardized and validated a protocol for the specific detection of dengue antibodies from dried serum spots (DSSs). Paired serum and DSS samples from 66 suspected cases of dengue were collected in a clinic in Veracruz, Mexico. Samples were sent to our laboratory, where the conditions for optimal elution of DSSs were established. The presence of anti-dengue antibodies was determined in the paired samples. DSS elution conditions were standardized as follows: 1 h at 4°C in 200 µl of DNase-, RNase-, and protease-free PBS (1x). The optimal volume of DSS eluate to be used in the IgG assay was 40 µl. Sensitivity of 94%, specificity of 93.3%, and kappa concordance of 0.87 were obtained when comparing the anti-dengue reactivity between DSSs and serum samples. DSS samples are useful for detecting anti-dengue IgG antibodies in the field.

  13. Serum Dried Samples to Detect Dengue Antibodies: A Field Study

    PubMed Central

    Maldonado-Rodríguez, Angelica; Rojas-Montes, Othon; Chavez-Negrete, Adolfo; Rojas-Uribe, Magdalena; Posadas-Mondragon, Araceli; Aguilar-Faisal, Leopoldo; Xoconostle-Cazares, Beatriz

    2017-01-01

    Background Dried blood and serum samples are useful resources for detecting antiviral antibodies. The conditions for elution of the sample need to be optimized for each disease. Dengue is a widespread disease in Mexico which requires continuous surveillance. In this study, we standardized and validated a protocol for the specific detection of dengue antibodies from dried serum spots (DSSs). Methods Paired serum and DSS samples from 66 suspected cases of dengue were collected in a clinic in Veracruz, Mexico. Samples were sent to our laboratory, where the conditions for optimal elution of DSSs were established. The presence of anti-dengue antibodies was determined in the paired samples. Results DSS elution conditions were standardized as follows: 1 h at 4°C in 200 µl of DNase-, RNase-, and protease-free PBS (1x). The optimal volume of DSS eluate to be used in the IgG assay was 40 µl. Sensitivity of 94%, specificity of 93.3%, and kappa concordance of 0.87 were obtained when comparing the antidengue reactivity between DSSs and serum samples. Conclusion DSS samples are useful for detecting anti-dengue IgG antibodies in the field. PMID:28630868

  14. Optimizing the multicycle subrotational internal cooling of diatomic molecules

    NASA Astrophysics Data System (ADS)

    Aroch, A.; Kallush, S.; Kosloff, R.

    2018-05-01

    Subrotational cooling of the AlH+ ion to the millikelvin regime, using optimally shaped pulses, is computed. The coherent electromagnetic fields induce purity-conserving transformations and do not change the sample temperature. A decrease in sample temperature, manifested as an increase in purity, is achieved by the complementary uncontrolled spontaneous emission, which changes the entropy of the system. We employ optimal control theory to find a pulse that steers the system into a population configuration that results in cooling upon multicycle excitation-emission steps. The obtained optimal transformation was shown to be capable of cooling molecular ions to the sub-kelvin regime.

  15. Low NOx combustion and SCR flow field optimization in a low volatile coal fired boiler.

    PubMed

    Liu, Xing; Tan, Houzhang; Wang, Yibin; Yang, Fuxin; Mikulčić, Hrvoje; Vujanović, Milan; Duić, Neven

    2018-08-15

    Low-NOx burner redesign and deep air staging have been carried out to improve the poor ignition and reduce the NOx emissions of a low volatile coal fired 330 MWe boiler. Residual swirling flow in the tangentially-fired furnace caused flue gas velocity deviations at the furnace exit, leading to flow field unevenness in the SCR (selective catalytic reduction) system and poor denitrification efficiency. Numerical simulations of the velocity field in the SCR system were carried out to determine the optimal flow deflector arrangement for improving the flow field uniformity of the SCR system. A full-scale experiment was performed to investigate the effect of low-NOx combustion and SCR flow field optimization. Compared with the results before the optimization, the NOx emissions at the furnace exit decreased from 550-650 mg/Nm³ to 330-430 mg/Nm³. The sample standard deviation of the NOx emissions at the outlet section of the SCR decreased from 34.8 mg/Nm³ to 7.8 mg/Nm³. The consumption of liquid ammonia was reduced from 150-200 kg/h to 100-150 kg/h after the optimization. Copyright © 2018. Published by Elsevier Ltd.

  16. Steering Electromagnetic Fields in MRI: Investigating Radiofrequency Field Interactions with Endogenous and External Dielectric Materials for Improved Coil Performance at High Field

    NASA Astrophysics Data System (ADS)

    Vaidya, Manushka

    Although 1.5 and 3 Tesla (T) magnetic resonance (MR) systems remain the clinical standard, the number of 7 T MR systems has increased over the past decade because of the promise of higher signal-to-noise ratio (SNR), which can translate to images with higher resolution, improved image quality and faster acquisition times. However, there are a number of technical challenges that have prevented exploiting the full potential of ultra-high field (≥ 7 T) MR imaging (MRI), such as the inhomogeneous distribution of the radiofrequency (RF) electromagnetic field and specific energy absorption rate (SAR), which can compromise image quality and patient safety. To better understand the origin of these issues, we first investigated the dependence of the spatial distribution of the magnetic field associated with a surface RF coil on the operating frequency and electrical properties of the sample. Our results demonstrated that the asymmetries between the transmit (B1+) and receive (B1-) circularly polarized components of the magnetic field, which are in part responsible for RF inhomogeneity, depend on the electric conductivity of the sample. On the other hand, when sample conductivity is low, a high relative permittivity can result in an inhomogeneous RF field distribution, due to significant constructive and destructive interference patterns between forward and reflected propagating magnetic field within the sample. We then investigated the use of high permittivity materials (HPMs) as a method to alter the field distribution and improve transmit and receive coil performance in MRI. We showed that HPM placed at a distance from an RF loop coil can passively shape the field within the sample. Our results showed improvement in transmit and receive sensitivity overlap, extension of coil field-of-view, and enhancement in transmit/receive efficiency. 
We demonstrated the utility of this concept by employing HPM to improve performance of an existing commercial head coil for the inferior regions of the brain, where the specific coil's imaging efficiency was inherently poor. Results showed a gain in SNR, while the maximum local and head SAR values remained below the prescribed limits. We showed that increasing coil performance with HPM could improve detection of functional MR activation during a motor-based task for whole brain fMRI. Finally, to gain an intuitive understanding of how HPM improves coil performance, we investigated how HPM separately affects signal and noise sensitivity to improve SNR. For this purpose, we employed a theoretical model based on dyadic Green's functions to compare the characteristics of current patterns, i.e. the optimal spatial distribution of coil conductors, that would either maximize SNR (ideal current patterns), maximize signal reception (signal-only optimal current patterns), or minimize sample noise (dark mode current patterns). Our results demonstrated that the presence of a lossless HPM changed the relative balance of signal-only optimal and dark mode current patterns. For a given relative permittivity, increasing the thickness of the HPM altered the magnitude of the currents required to optimize signal sensitivity at the voxel of interest as well as decreased the net electric field in the sample, which is associated, via reciprocity, to the noise received from the sample. Our results also suggested that signal-only current patterns could be used to identify HPM configurations that lead to high SNR gain for RF coil arrays. We anticipate that physical insights from this work could be utilized to build the next generation of high performing RF coils integrated with HPM.

  17. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    PubMed

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500-, and 12,500-cm³ field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
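
The cost-variance trade-off behind this kind of optimization can be sketched with a classical nested-variance model; all variance components, unit costs, and the budget below are hypothetical, not the Los Alamos values:

```python
def variance_of_mean(n, a, c, s2_spatial, s2_aliquot, s2_count):
    """Variance of the mean concentration for n field samples,
    a aliquots per sample, and c counts per aliquot (nested design)."""
    return s2_spatial / n + s2_aliquot / (n * a) + s2_count / (n * a * c)

def best_design(budget, cost_sample, cost_aliquot, cost_count, s2s, s2a, s2c):
    """Exhaustively search small nested designs that fit the labor budget
    and return (variance, n, a, c) for the lowest-variance one."""
    best = None
    for n in range(1, 31):
        for a in range(1, 31):
            for c in range(1, 6):
                cost = n * (cost_sample + a * (cost_aliquot + c * cost_count))
                if cost <= budget:
                    v = variance_of_mean(n, a, c, s2s, s2a, s2c)
                    if best is None or v < best[0]:
                        best = (v, n, a, c)
    return best

# Hypothetical inputs: spatial variance dominates, as in the record above.
best = best_design(100, 3, 1, 1, 10.0, 1.0, 0.5)
```

With a dominant spatial component, the search spends the budget on more field samples and counts each aliquot only once, qualitatively matching the finding above.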

  18. Fast imaging of live organisms with sculpted light sheets

    NASA Astrophysics Data System (ADS)

    Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.

    2015-04-01

    Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible, and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently yet accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrate the practical benefit of this technique by (1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings, and (2) sculpting the light-sheet to trace complex sample shapes within single exposures. The technique proved compatible with confocal line scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view, and acquisition speed.

  19. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
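
The exposure-time reasoning can be sketched with the standard result for averaging a process that decorrelates over an integral time scale T_I: for exposure time T much longer than T_I, the variance of the time average is approximately sigma^2 * 2 * T_I / T. This is a textbook turbulence-statistics relation, not the paper's exact formulation, and the numbers below are hypothetical:

```python
def discharge_variance(sigma2, t_integral, t_exposure):
    """Approximate variance of a time-averaged velocity/discharge estimate:
    var ~ sigma^2 * 2 * T_I / T, valid for exposure time T >> T_I."""
    return sigma2 * 2.0 * t_integral / t_exposure

def required_exposure(sigma2, t_integral, target_var):
    """Exposure time needed to bring the averaging variance down to target_var."""
    return sigma2 * 2.0 * t_integral / target_var

# Hypothetical example: turbulence variance 0.04 m^2/s^2 and integral time
# scale 2 s imply roughly a 160 s exposure to reach a variance of 0.001.
t_needed = required_exposure(0.04, 2.0, 0.001)
```

Halving the target variance doubles the required exposure time, which is the kind of trade-off the paper's sampling-strategy guidance quantifies.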

  20. Vortex pinning properties in Fe-chalcogenides

    NASA Astrophysics Data System (ADS)

    Leo, A.; Grimaldi, G.; Guarino, A.; Avitabile, F.; Nigro, A.; Galluzzi, A.; Mancusi, D.; Polichetti, M.; Pace, S.; Buchkov, K.; Nazarova, E.; Kawale, S.; Bellingeri, E.; Ferdeghini, C.

    2015-12-01

    Among the families of iron-based superconductors, the 11-family is one of the most attractive for high-field applications at low temperatures. Optimization of the fabrication processes for bulk, crystalline and/or thin-film samples is the first step in producing wires and/or tapes for practical high-power conductors. Here we present the results of a comparative study of pinning properties in iron chalcogenides, investigating the flux pinning mechanisms in optimized Fe(Se1-xTex) and FeSe samples by current-voltage characterization, magneto-resistance and magnetization measurements. In particular, from Arrhenius plots in magnetic fields up to 9 T, the activation energy is derived as a function of the magnetic field, U0(H), whereas the activation energy as a function of temperature, U(T), is derived from relaxation magnetization curves. The high pinning energies, high slopes of the upper critical field versus temperature near the critical temperature, and highly isotropic pinning properties make iron-chalcogenide superconductors a technological material that could be a real competitor to cuprate high-temperature superconductors for high-field applications.

  1. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    EPA Science Inventory

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...

  2. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    NASA Astrophysics Data System (ADS)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated-canister methods require an overhead in space, money and time that is often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and the laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  3. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the optimal parameter choice for localized random sampling is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
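
    A minimal sketch of the localized random sampling idea described above, assuming a Gaussian falloff for the distance-dependent measurement probability (the paper's exact falloff profile and parameters may differ):

    ```python
    import random

    def localized_random_mask(height, width, n_centers, n_neighbors, scale, rng=None):
        """Build a boolean sampling mask: for each measurement set, pick a
        random center pixel, then sample nearby pixels with probability
        decaying with distance from the center (Gaussian offsets here)."""
        rng = rng or random.Random(0)
        mask = [[False] * width for _ in range(height)]
        for _ in range(n_centers):
            ci, cj = rng.randrange(height), rng.randrange(width)
            mask[ci][cj] = True
            picked = 0
            while picked < n_neighbors:
                # propose a nearby pixel via a Gaussian offset from the center
                i = ci + round(rng.gauss(0, scale))
                j = cj + round(rng.gauss(0, scale))
                if 0 <= i < height and 0 <= j < width and not mask[i][j]:
                    mask[i][j] = True
                    picked += 1
        return mask

    mask = localized_random_mask(64, 64, n_centers=20, n_neighbors=8, scale=2.0)
    n_sampled = sum(row.count(True) for row in mask)
    ```

    Each measurement set thus clusters around its center, mimicking a receptive field, while the centers themselves remain uniformly random across the image.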

  4. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  5. Intense Terahertz Fields for Fast Energy Release

    DTIC Science & Technology

    2016-11-01

    could allow us to monitor shock propagation in the sample and observe any effects of THz irradiation. In order to optimize the system, we moved a... the response and access for the THz light needed to simultaneously irradiate the sample. Preliminary measurements of sample responses to each of the

  6. Optimal Magnetic Sensor Vests for Cardiac Source Imaging

    PubMed Central

    Lau, Stephan; Petković, Bojana; Haueisen, Jens

    2016-01-01

    Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics. PMID:27231910

  7. An effective parameter optimization technique for vibration flow field characterization of PP melts via LS-SVM combined with SALS in an electromagnetism dynamic extruder

    NASA Astrophysics Data System (ADS)

    Xian, Guangming

    2018-03-01

    A method for predicting the optimal vibration field parameters by least squares support vector machine (LS-SVM) is presented in this paper. One convenient and commonly used technique for characterizing the vibration flow field of polymer melt films is small-angle light scattering (SALS) in a visualized slit die of the electromagnetism dynamic extruder. The optimal values of vibration frequency, vibration amplitude, and the maximum light-intensity projection area can be obtained by using LS-SVM for prediction. To illustrate this method and show its validity, polypropylene (PP) is used as the flowing material and fifteen samples are tested at a screw rotation speed of 36 rpm. This paper first describes the SALS apparatus used to perform the experiments, then gives the theoretical basis of this new method, and details the experimental results for parameter prediction of the vibration flow field. It is demonstrated that the SALS method combined with LS-SVM can provide detailed information on the optimal vibration flow field parameters of PP melts.

  8. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    PubMed Central

    2013-01-01

    Background Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14-hectare field during 16 nights in 2009. Findings The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundances in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach a 90% probability of a true positive catch when sampling just one night, but when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
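
    The hypergeometric calculation behind the probability of falsely detecting absence can be sketched as follows; the parameter names and the `traps_needed` helper are illustrative, not the paper's notation:

    ```python
    from math import comb

    def p_false_absence(N, K, n):
        """Hypergeometric probability that n sampled trap positions out of N
        include none of the K positions where vectors are present."""
        if n > N - K:
            return 0.0
        return comb(N - K, n) / comb(N, n)

    def traps_needed(N, K, target=0.90):
        """Smallest number of traps giving at least `target` probability of a
        true positive catch (illustrative helper)."""
        for n in range(1, N + 1):
            if 1.0 - p_false_absence(N, K, n) >= target:
                return n
        return N
    ```

    For instance, with N=10 candidate positions of which K=2 hold vectors, seven traps are needed for a 90% probability of catching at least one positive position.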

  9. Optimal sampling with prior information of the image geometry in microfluidic MRI.

    PubMed

    Han, S H; Cho, H; Paulsen, J L

    2015-03-01

    Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of the partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that optimal distribution is not robustly determined by maximizing the acquired signal but from interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
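
    One simple instance of a weighted random sampling scheme biased toward the high signal-energy portions of a binarized geometry's k-space representation can be sketched in 1D as below; this is a plain energy-proportional draw, not the authors' near-optimal design:

    ```python
    import cmath
    import random

    def kspace_energy(profile):
        """Magnitude-squared DFT of a binarized 1D image geometry: the
        'signal energy' that weighted random sampling schemes bias toward."""
        n = len(profile)
        energy = []
        for k in range(n):
            s = sum(profile[x] * cmath.exp(-2j * cmath.pi * k * x / n)
                    for x in range(n))
            energy.append(abs(s) ** 2)
        return energy

    def weighted_sample_mask(energy, m, rng=None):
        """Draw m distinct k-space locations with probability proportional
        to signal energy (weighted sampling without replacement)."""
        rng = rng or random.Random(1)
        chosen = set()
        while len(chosen) < m:
            total = sum(w for i, w in enumerate(energy) if i not in chosen)
            r = rng.random() * total
            acc = 0.0
            for i, w in enumerate(energy):
                if i in chosen:
                    continue
                acc += w
                if acc >= r:
                    chosen.add(i)
                    break
        return sorted(chosen)

    energy = kspace_energy([1, 1, 1, 1, 0, 0, 0, 0])  # a 1D "channel" profile
    sample_locations = weighted_sample_mask(energy, 3)
    ```

    As the abstract notes, such energy-biased schemes can be far from optimal at low sub-sampling rates; the paper's contribution is choosing the weights from the marginal change of acquired signal with sub-sampling rate instead.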

  10. The Use of Handheld X-Ray Fluorescence (XRF) Technology in Unraveling the Eruptive History of the San Francisco Volcanic Field, Arizona

    NASA Technical Reports Server (NTRS)

    Young, Kelsey E.; Evans, C. A.; Hodges, K. V.

    2012-01-01

    While traditional geologic mapping includes the examination of structural relationships between rock units in the field, more advanced technology now enables us to simultaneously collect and combine analytical datasets with field observations. Information about tectonomagmatic processes can be gleaned from these combined data products. Historically, construction of multi-layered field maps that include sample data has been accomplished serially (first map and collect samples, analyze samples, combine data, and finally readjust maps and conclusions about geologic history based on the combined data sets). New instruments that can be used in the field, such as a handheld X-ray fluorescence (XRF) unit, are now available. Targeted use of such instruments enables geologists to collect preliminary geochemical data while in the field so that they can optimize scientific data return from each field traverse. Our study tests the application of this technology and projects the benefits gained by real-time geochemical data in the field. The integrated data set produces a richer geologic map and facilitates a stronger contextual picture for field geologists when collecting field observations and samples for future laboratory work. Real-time geochemical data on samples also provide valuable insight regarding sampling decisions by the field geologist.

  11. Focusing light through dynamical samples using fast continuous wavefront optimization.

    PubMed

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous-optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical-system-based spatial light modulator, a fast photodetector, and field-programmable gate array electronics are combined to implement continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system's performance is demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.

  12. Optimization and performance of the Robert Stobie Spectrograph Near-InfraRed detector system

    NASA Astrophysics Data System (ADS)

    Mosby, Gregory; Indahl, Briana; Eggen, Nathan; Wolf, Marsha; Hooper, Eric; Jaehnig, Kurt; Thielman, Don; Burse, Mahesh

    2018-01-01

    At the University of Wisconsin-Madison, we are building and testing the near-infrared (NIR) spectrograph for the Southern African Large Telescope, RSS-NIR. RSS-NIR will be an enclosed, cooled integral field spectrograph. The RSS-NIR detector system uses a HAWAII-2RG (H2RG) HgCdTe detector from Teledyne controlled by the SIDECAR ASIC and an Inter-University Centre for Astronomy and Astrophysics (IUCAA) ISDEC card. We have successfully characterized and optimized the detector system and report on the optimization steps and performance of the system. We have reduced the CDS read noise to ~20 e- for 200 kHz operation by optimizing ASIC settings. We show an additional factor of 3 reduction of read noise using Fowler sampling techniques and a factor of 2 reduction using up-the-ramp group sampling techniques. We also provide calculations to quantify the conditions for sky-limited observations using these sampling techniques.
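
    The read-noise reduction from Fowler sampling, which averages N non-destructive reads at the start and end of an exposure, scales roughly as sqrt(N) for uncorrelated read noise. A small Monte Carlo sketch under that assumption (measured factors on real detectors also depend on correlated noise components):

    ```python
    import random
    import statistics

    def simulate_read_noise(n_fowler, n_trials=20000, sigma_read=20.0, rng=None):
        """Monte Carlo: Fowler-N averages N reads at the start and end of an
        exposure; uncorrelated read noise in the start-end difference then
        drops by ~sqrt(N) relative to CDS (Fowler-1)."""
        rng = rng or random.Random(42)
        diffs = []
        for _ in range(n_trials):
            start = sum(rng.gauss(0, sigma_read) for _ in range(n_fowler)) / n_fowler
            end = sum(rng.gauss(0, sigma_read) for _ in range(n_fowler)) / n_fowler
            diffs.append(end - start)
        return statistics.stdev(diffs)

    cds = simulate_read_noise(1)      # ~20*sqrt(2) e- in the CDS difference
    fowler8 = simulate_read_noise(8)  # ~sqrt(8) lower than CDS
    ```

    With Fowler-8 this idealized model predicts a factor of ~2.8 reduction, consistent in magnitude with the factor of 3 reported above.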

  13. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. The major conclusions from the statistical sampling tests are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
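
    Stratified random sampling with optimal (Neyman) allocation, as recommended in conclusion (3), can be sketched as below; the stratum sizes and standard deviations are invented for illustration:

    ```python
    def neyman_allocation(n_total, strata):
        """Optimal (Neyman) allocation for stratified random sampling:
        allocate the fixed total sample size to strata in proportion to
        N_h * S_h (stratum size times stratum standard deviation).
        Rounding may need a small manual adjustment to hit n_total exactly."""
        weights = [N_h * S_h for N_h, S_h in strata]
        total = sum(weights)
        return [round(n_total * w / total) for w in weights]

    # Illustrative strata (size, std. dev. of soil moisture), not field values
    alloc = neyman_allocation(30, [(100, 4.0), (100, 3.0), (200, 1.0)])
    ```

    More variable strata receive proportionally more samples, which is why stratified allocation outperforms simple random sampling when the moisture profile with depth is the target.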

  14. Information-Theoretic Assessment of Sample Imaging Systems

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Park, Stephen K.; Rahman, Zia-ur

    1999-01-01

    By rigorously extending modern communication theory to the assessment of sampled imaging systems, we develop the formulations that are required to optimize the performance of these systems within the critical constraints of image gathering, data transmission, and image display. The goal of this optimization is to produce images with the best possible visual quality for the wide range of statistical properties of the radiance field of natural scenes that one normally encounters. Extensive computational results are presented to assess the performance of sampled imaging systems in terms of information rate, theoretical minimum data rate, and fidelity. Comparisons of this assessment with perceptual and measurable performance demonstrate that (1) the information rate that a sampled imaging system conveys from the captured radiance field to the observer is closely correlated with the fidelity, sharpness and clarity with which the observed images can be restored and (2) the associated theoretical minimum data rate is closely correlated with the lowest data rate with which the acquired signal can be encoded for efficient transmission.

  15. Active vortex sampling system for remote contactless survey of surfaces by laser-based field asymmetrical ion mobility spectrometer

    NASA Astrophysics Data System (ADS)

    Akmalov, Artem E.; Chistyakov, Alexander A.; Kotkovskii, Gennadii E.; Sychev, Alexei V.

    2017-10-01

    The ways of increasing the distance of non-contact sampling up to 40 cm for a field asymmetric ion mobility (FAIM) spectrometer are formulated and implemented through the use of laser desorption and an active shaper of the vortex flow. Numerical modeling of air sampling flows was performed, and a sampling device for a laser-based FAIM spectrometer, based on a high-speed rotating impeller located coaxially with the ion source, was designed. The dependence of the trinitrotoluene vapor signal on the rotational speed was measured and the value of the sampling flow was optimized. The effective sampling distance is increased up to 28 cm for trinitrotoluene vapor detection by a FAIM spectrometer with a rotating impeller, and up to 40 cm using laser irradiation of traces of explosives. It is shown that efficient desorption of low-volatility explosives is achieved at a laser intensity of 10⁷ W/cm², wavelength λ=266 nm, pulse energy of about 1 mJ and pulse repetition rate of not less than 10 Hz under ambient conditions. Ways of optimizing the internal gas flows of a FAIM spectrometer for work at increased sampling distances are discussed.

  16. Effect of plot and sample size on timing and precision of urban forest assessments

    Treesearch

    David J. Nowak; Jeffrey T. Walton; Jack C. Stevens; Daniel E. Crane; Robert E. Hoehn

    2008-01-01

    Accurate field data can be used to assess ecosystem services from trees and to improve urban forest management, yet little is known about the optimization of field data collection in the urban environment. Various field and Geographic Information System (GIS) tests were performed to help understand how time costs and precision of tree population estimates change with...

  17. Rapid wide-field Mueller matrix polarimetry imaging based on four photoelastic modulators with no moving parts.

    PubMed

    Alali, Sanaz; Gribble, Adam; Vitkin, I Alex

    2016-03-01

    A new polarimetry method is demonstrated to image the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second.

  18. Human breath metabolomics using an optimized noninvasive exhaled breath condensate sampler

    PubMed Central

    Zamuruyev, Konstantin O.; Aksenov, Alexander A.; Pasamontes, Alberto; Brown, Joshua F.; Pettit, Dayna R.; Foutouhi, Soraya; Weimer, Bart C.; Schivo, Michael; Kenyon, Nicholas J.; Delplanque, Jean-Pierre; Davis, Cristina E.

    2017-01-01

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially-available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present a design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including: condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The significance of the biological content of breath samples collected with each device is evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017). PMID:28004639

  20. OPTIMIZED DETERMINATION OF TRACE JET FUEL VOLATILE ORGANIC COMPOUNDS IN HUMAN BLOOD USING IN-FIELD LIQUID-LIQUID EXTRACTION WITH SUBSEQUENT LABORATORY GAS CHROMATOGRAPHIC-MASS SPECTROMETRIC ANALYSIS AND ON-COLUMN LARGE VOLUME INJECTION

    EPA Science Inventory

    A practical and sensitive method to assess volatile organic compounds (VOCs) from JP-8 jet fuel in human whole blood was developed by modifying previously established liquid-liquid extraction procedures, optimizing extraction times, solvent volume, specific sample processing te...

  1. Practical Cost-Optimization of Characterization and Remediation Decisions at DNAPL Sites with Consideration of Prediction Uncertainty

    DTIC Science & Technology

    2011-05-01

    well]; C_GWsamp^TR: sampling and analysis cost per groundwater sample [$K/sample]; C_bore,i^TR: cost per soil boring [$K/boring]; C_SOILsamp^TR: cost per soil sample analyzed [$K/sample]; d: annual discount rate [-]; DNAPL: dense nonaqueous phase liquid; (E0, N0): raw easting and northing field... kg]; f_E: fraction of non-monitoring variable costs attributable to energy use [-]; F_i: total soil and/or groundwater samples divided by pre

  2. Detecting glaucomatous change in visual fields: Analysis with an optimization framework.

    PubMed

    Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2015-12-01

    Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
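
    The post-hoc step, projecting each eye's longitudinal visual fields onto the population-derived progression direction and assessing the trend, can be sketched as follows; the 2-element fields, direction vector, and the slope-based criterion are toy values, not the paper's learned quantities:

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def progression_score(fields, times, direction):
        """Project each visual field onto the population-derived progression
        direction, then return the least-squares slope of the projections
        over time; a strongly negative slope suggests progression."""
        proj = [dot(f, direction) for f in fields]
        n = len(times)
        t_mean = sum(times) / n
        p_mean = sum(proj) / n
        num = sum((t - t_mean) * (p - p_mean) for t, p in zip(times, proj))
        den = sum((t - t_mean) ** 2 for t in times)
        return num / den

    # Toy series: sensitivity at the first field location declines over 3 visits
    fields = [[30.0, 10.0], [28.0, 10.0], [26.0, 10.0]]
    slope = progression_score(fields, [0.0, 1.0, 2.0], [1.0, 0.0])
    ```

    Once the direction vector has been learned from the population, each new eye requires only a projection and a linear fit, which is why the approach is fast compared with retraining a classifier per patient.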

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jun -Sang; Ray, Atish K.; Dawson, Paul R.

    A shrink-fit sample is manufactured from a Ti-8Al-1Mo-1V alloy to introduce a multiaxial residual stress field in the disk of the sample. A set of strain and orientation pole figures is measured at various locations across the disk using synchrotron high-energy X-ray diffraction. Two approaches, the traditional sin²Ψ method and the bi-scale optimization method, are taken to determine the stresses in the disk based on the measured strain and orientation pole figures, to explore the range of solutions that are possible for the stress field within the disk. While the stress components computed using the sin²Ψ method and the bi-scale optimization method have similar trends, their magnitudes are significantly different. Lastly, it is suspected that the local texture variation in the material is the cause of this discrepancy.
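
    The traditional sin²Ψ method mentioned above amounts to a linear fit of lattice strain against sin²Ψ. A minimal sketch under the standard biaxial-stress assumption, with generic Ti-alloy elastic constants assumed for illustration (not the values used in the record):

    ```python
    import math

    def sin2psi_stress(strains, psis, E=115e9, nu=0.32):
        """sin²Ψ analysis sketch: under a biaxial surface stress, lattice
        strain varies linearly with sin²Ψ, and the fitted slope m gives the
        in-plane stress as sigma = m * E / (1 + nu)."""
        x = [math.sin(p) ** 2 for p in psis]
        n = len(x)
        x_mean = sum(x) / n
        y_mean = sum(strains) / n
        num = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, strains))
        den = sum((xi - x_mean) ** 2 for xi in x)
        return (num / den) * E / (1 + nu)
    ```

    The bi-scale optimization method instead solves for the stress field jointly with crystallographic texture, which is why the two can agree in trend but differ in magnitude when local texture varies.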

  4. Finite element design for the HPHT synthesis of diamond

    NASA Astrophysics Data System (ADS)

    Li, Rui; Ding, Mingming; Shi, Tongfei

    2018-06-01

    The finite element method is used to simulate the steady-state temperature field in a diamond synthesis cell. 2D and 3D models of the China-type cubic press with large deformation of the synthesis cell were established successfully and verified by in situ measurements of the synthesis cell. The assembly design, component design and process design for the HPHT synthesis of diamond based on the finite element simulation are presented one by one. The temperature field in a high-pressure synthesis cavity for diamond production is optimized by adjusting the cavity assembly. A series of analyses of the influence of the pressure-medium parameters on the temperature field is carried out by adjusting the model parameters. Furthermore, the formation mechanism of the wasteland is studied in detail. The results indicate that the wasteland inevitably exists in the synthesis sample, that the growth region of hex-octahedral diamond moves from near the heater toward the center of the synthesis sample as the power increases, and that the growth conditions for high-quality diamond are located at the center of the synthesis sample. This work can offer suggestions for the development and optimization of a diamond production process.

  5. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    NASA Astrophysics Data System (ADS)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match field drawdown data, and the calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low-flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.
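
    The drawdown behavior behind points (2)-(3) can be illustrated with the Cooper-Jacob approximation to the Theis solution, a standard textbook result rather than the paper's calibrated model; the transmissivity, storativity, and purge-rate values below are assumed for illustration.

```python
import math

def jacob_drawdown(Q, T, S, r, t):
    """Cooper-Jacob approximation of drawdown s (m) at radius r after time t.

    Q: purge rate (m^3/s), T: transmissivity (m^2/s), S: storativity (-).
    Valid for u = r^2 S / (4 T t) < ~0.05.
    """
    u = r * r * S / (4.0 * T * t)
    assert u < 0.05, "Cooper-Jacob approximation not valid here"
    return Q / (4.0 * math.pi * T) * math.log(2.25 * T * t / (r * r * S))

Q = 3.33e-6                  # 0.2 L/min low-flow purge rate (assumed)
r, S, t = 0.05, 1e-4, 600.0  # well radius (m), storativity, 10 min of purging

s_low_k = jacob_drawdown(Q, T=1e-5, S=S, r=r, t=t)   # low-permeability aquifer
s_high_k = jacob_drawdown(Q, T=1e-3, S=S, r=r, t=t)  # high-permeability aquifer
print(round(s_low_k, 3), round(s_high_k, 4))
```

Under these assumed values, the low-permeability case draws down roughly 0.29 m at a modest purge rate, well above the conventional 0.1 m low-flow guideline, which is consistent with the abstract's point (2).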

  6. Fractionating power and outlet stream polydispersity in asymmetrical flow field-flow fractionation. Part I: isocratic operation.

    PubMed

    Williams, P Stephen

    2016-05-01

    Asymmetrical flow field-flow fractionation (As-FlFFF) has become the most commonly used of the field-flow fractionation techniques. However, because of the interdependence of the channel flow and the cross flow through the accumulation wall, it is the most difficult of the techniques to optimize, particularly for programmed cross flow operation. For the analysis of polydisperse samples, the optimization should ideally be guided by the predicted fractionating power. Many experimentalists, however, neglect fractionating power and rely on light scattering detection simply to confirm apparent selectivity across the breadth of the eluted peak. The size information returned by the light scattering software is assumed to dispense with any reliance on theory to predict retention, and any departure of theoretical predictions from experimental observations is therefore considered of no importance. Separation depends on efficiency as well as selectivity, however, and efficiency can be a strong function of retention. The fractionation of a polydisperse sample by field-flow fractionation never provides a perfectly separated series of monodisperse fractions at the channel outlet. The outlet stream has some residual polydispersity, and it will be shown in this manuscript that the residual polydispersity is inversely related to the fractionating power. Due to the strong dependence of light scattering intensity and its angular distribution on the size of the scattering species, the outlet polydispersity must be minimized if reliable size data are to be obtained from the light scattering detector signal. It is shown that light scattering detection should be used with careful control of fractionating power to obtain optimized analysis of polydisperse samples. Part I is concerned with isocratic operation of As-FlFFF, and part II with programmed operation.
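
    The inverse relation between fractionating power and outlet polydispersity noted above can be made concrete with one common definition of diameter-based fractionating power, F_d = S_d·t_r/(4σ_t); the retention time, peak standard deviation, and selectivity below are illustrative assumptions, not values from the paper.

```python
def fractionating_power(t_r, sigma_t, selectivity):
    """Diameter-based fractionating power, F_d = S_d * t_r / (4 * sigma_t)
    (one common definition; S_d = d ln t_r / d ln d is the size selectivity)."""
    return selectivity * t_r / (4.0 * sigma_t)

def outlet_polydispersity(f_d):
    """Relative diameter spread delta_d / d of species co-eluting within a
    4-sigma time slice; inversely proportional to the fractionating power."""
    return 1.0 / f_d

f_d = fractionating_power(t_r=600.0, sigma_t=5.0, selectivity=1.0)
print(f_d, outlet_polydispersity(f_d))  # F_d = 30, delta_d/d ≈ 3.3%
```

Doubling F_d halves the residual polydispersity of the outlet stream, which is why the paper argues fractionating power must be controlled for reliable light-scattering sizing.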

  7. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes a sample's sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
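
    The neuro-genetic loop described above pairs a trained surrogate model (an MLP in the paper) with a genetic algorithm that searches the conditioning parameters. The sketch below substitutes a made-up smooth surrogate for the MLP, so it shows only the GA side; the normalized parameter ranges and the fitness surface are assumptions.

```python
import math
import random

random.seed(1)

def surrogate_sensitivity(dc_level, freq):
    """Stand-in for the trained MLP: a smooth bump peaking at (0.7, 0.3)
    in normalized (DC level, excitation frequency) coordinates."""
    return math.exp(-((dc_level - 0.7) ** 2 + (freq - 0.3) ** 2) / 0.05)

def genetic_maximize(fitness, pop_size=40, elite=10, generations=60, sigma=0.05):
    """Tiny GA: truncation selection plus Gaussian mutation on [0, 1]^2."""
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        elites = pop[:elite]          # keep the best candidates unchanged
        pop = list(elites)
        while len(pop) < pop_size:    # refill with mutated copies of elites
            x, y = random.choice(elites)
            pop.append((min(1.0, max(0.0, random.gauss(x, sigma))),
                        min(1.0, max(0.0, random.gauss(y, sigma)))))
    return max(pop, key=lambda p: fitness(*p))

best = genetic_maximize(surrogate_sensitivity)
```

In the paper's setting, each fitness evaluation queries the neural network rather than an analytic function, which is what makes the otherwise brute-force search cheap.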

  8. Evaluation and optimization of a commercial blocking ELISA for detecting antibodies to influenza A virus for research and surveillance of mallards.

    PubMed

    Shriner, Susan A; VanDalen, Kaci K; Root, J Jeffrey; Sullivan, Heather J

    2016-02-01

    The availability of a validated commercial assay is an asset for any wildlife investigation. However, commercial products are often developed for use in livestock and are not optimized for wildlife. Consequently, it is incumbent upon researchers and managers to apply commercial products appropriately to optimize program outcomes. We tested more than 800 serum samples from mallards for antibodies to influenza A virus with the IDEXX AI MultiS-Screen Ab test to evaluate assay performance. Applying the test per manufacturer's recommendations resulted in good performance with 84% sensitivity and 100% specificity. However, performance was improved to 98% sensitivity and 98% specificity by increasing the recommended cut-off. Using this alternative threshold for identifying positive and negative samples would greatly improve sample classification, especially for field samples collected months after infection when antibody titers have waned from the initial primary immune response. Furthermore, a threshold that balances sensitivity and specificity reduces estimation bias in seroprevalence estimates. Published by Elsevier B.V.
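
    Cut-off re-optimization of the kind described can be sketched as a scan over candidate thresholds that balances sensitivity and specificity (Youden's J). In the IDEXX blocking format a lower S/N value indicates antibody presence, and the sketch uses that convention; the sample values below are synthetic, not data from the study.

```python
def best_cutoff(positives, negatives):
    """Scan candidate cut-offs; a sample is called positive when its S/N
    value is BELOW the cut-off (blocking-ELISA convention). Returns the
    cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    candidates = sorted(set(positives) | set(negatives))
    best = None
    for c in candidates:
        sens = sum(v < c for v in positives) / len(positives)
        spec = sum(v >= c for v in negatives) / len(negatives)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best

pos = [0.21, 0.28, 0.33, 0.41, 0.47, 0.52]  # known-infected birds (synthetic)
neg = [0.68, 0.74, 0.79, 0.83, 0.88, 0.93]  # known-negative birds (synthetic)
j, cutoff, sens, spec = best_cutoff(pos, neg)
```

With real field sera the two distributions overlap, and the chosen cut-off then trades sensitivity against specificity instead of achieving both perfectly as in this toy example.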

  9. Capillary Electrophoresis Analysis of Organic Amines and Amino Acids in Saline and Acidic Samples Using the Mars Organic Analyzer

    NASA Astrophysics Data System (ADS)

    Stockton, Amanda M.; Chiesl, Thomas N.; Lowenstein, Tim K.; Amashukeli, Xenia; Grunthaner, Frank; Mathies, Richard A.

    2009-11-01

    The Mars Organic Analyzer (MOA) has enabled the sensitive detection of amino acid and amine biomarkers in laboratory standards and in a variety of field sample tests. However, the MOA is challenged when samples are extremely acidic and saline or contain polyvalent cations. Here, we have optimized the MOA analysis, sample labeling, and sample dilution buffers to handle such challenging samples more robustly. Higher ionic strength buffer systems with pKa values near pH 9 were developed to provide better buffering capacity and salt tolerance. The addition of ethylenediaminetetraacetic acid (EDTA) ameliorates the negative effects of multivalent cations. The optimized protocol utilizes a 75 mM borate buffer (pH 9.5) for Pacific Blue labeling of amines and amino acids. After labeling, 50 mM (final concentration) EDTA is added to samples containing divalent cations to ameliorate their effects. This optimized protocol was used to successfully analyze amino acids in a saturated brine sample from Saline Valley, California, and a subcritical water extract of a highly acidic sample from the Río Tinto, Spain. This work expands the analytical capabilities of the MOA and increases its sensitivity and robustness for samples from extraterrestrial environments that may exhibit pH and salt extremes as well as metal ions.

  10. Capillary electrophoresis analysis of organic amines and amino acids in saline and acidic samples using the Mars organic analyzer.

    PubMed

    Stockton, Amanda M; Chiesl, Thomas N; Lowenstein, Tim K; Amashukeli, Xenia; Grunthaner, Frank; Mathies, Richard A

    2009-11-01

    The Mars Organic Analyzer (MOA) has enabled the sensitive detection of amino acid and amine biomarkers in laboratory standards and in a variety of field sample tests. However, the MOA is challenged when samples are extremely acidic and saline or contain polyvalent cations. Here, we have optimized the MOA analysis, sample labeling, and sample dilution buffers to handle such challenging samples more robustly. Higher ionic strength buffer systems with pKa values near pH 9 were developed to provide better buffering capacity and salt tolerance. The addition of ethylenediaminetetraacetic acid (EDTA) ameliorates the negative effects of multivalent cations. The optimized protocol utilizes a 75 mM borate buffer (pH 9.5) for Pacific Blue labeling of amines and amino acids. After labeling, 50 mM (final concentration) EDTA is added to samples containing divalent cations to ameliorate their effects. This optimized protocol was used to successfully analyze amino acids in a saturated brine sample from Saline Valley, California, and a subcritical water extract of a highly acidic sample from the Río Tinto, Spain. This work expands the analytical capabilities of the MOA and increases its sensitivity and robustness for samples from extraterrestrial environments that may exhibit pH and salt extremes as well as metal ions.

  11. Dynamic nuclear polarization using frequency modulation at 3.34 T.

    PubMed

    Hovav, Y; Feintuch, A; Vega, S; Goldfarb, D

    2014-01-01

    During dynamic nuclear polarization (DNP) experiments polarization is transferred from unpaired electrons to their neighboring nuclear spins, resulting in dramatic enhancement of the NMR signals. While in most cases this is achieved by continuous wave (cw) irradiation applied to samples in fixed external magnetic fields, here we show that DNP enhancement of static samples can improve by modulating the microwave (MW) frequency at a constant field of 3.34 T. The efficiency of triangular shaped modulation is explored by monitoring the (1)H signal enhancement in frozen solutions containing different TEMPOL radical concentrations at different temperatures. The optimal modulation parameters are examined experimentally and under the most favorable conditions a threefold enhancement is obtained with respect to constant frequency DNP in samples with low radical concentrations. The results are interpreted using numerical simulations on small spin systems. In particular, it is shown experimentally and explained theoretically that: (i) The optimal modulation frequency is higher than the electron spin-lattice relaxation rate. (ii) The optimal modulation amplitude must be smaller than the nuclear Larmor frequency and the EPR line-width, as expected. (iii) The MW frequencies corresponding to the enhancement maxima and minima are shifted away from one another when using frequency modulation, relative to the constant frequency experiments. Copyright © 2013 Elsevier Inc. All rights reserved.
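
    The triangular modulation explored in this record can be sketched as a waveform whose amplitude must stay below the nuclear Larmor frequency (≈142 MHz for ¹H at 3.34 T), per the abstract's condition (ii). The center frequency, modulation amplitude, and modulation rate below are assumptions for illustration.

```python
GAMMA_H = 42.577e6        # 1H gyromagnetic ratio, Hz/T
B0 = 3.34                 # static field, T
NU_LARMOR = GAMMA_H * B0  # ~142 MHz nuclear Larmor frequency

def triangular_mw_freq(t, f_center, amplitude, f_mod):
    """Instantaneous MW frequency under triangular modulation: sweeps
    linearly f_center - A -> f_center + A -> f_center - A once per
    modulation period 1/f_mod."""
    phase = (t * f_mod) % 1.0
    if phase < 0.5:
        return f_center + amplitude * (4.0 * phase - 1.0)
    return f_center + amplitude * (3.0 - 4.0 * phase)

F_CENTER = 94.0e9   # MW frequency near 3.34 T (assumed, W-band regime)
AMPLITUDE = 50.0e6  # modulation amplitude; must stay below NU_LARMOR
F_MOD = 10.0e3      # modulation rate, chosen faster than 1/T1e (assumed)

assert AMPLITUDE < NU_LARMOR  # condition (ii) from the abstract
```

The abstract's condition (i), that the modulation frequency exceed the electron spin-lattice relaxation rate, constrains F_MOD in the same way.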

  12. Optimizing variable radius plot size and LiDAR resolution to model standing volume in conifer forests

    Treesearch

    Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak

    2016-01-01

    The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....

  13. Impact of magnetic fields on the morphology of hybrid perovskite films for solar cells

    NASA Astrophysics Data System (ADS)

    Corpus-Mendoza, Asiel N.; Moreno-Romero, Paola M.; Hu, Hailin

    2018-05-01

    The impact of magnetic fields on the morphology of hybrid perovskite films is assessed via scanning electron microscopy and X-ray diffraction. Small-grain non-uniform perovskite films are obtained when a large magnetic flux density is applied to the sample during reaction of PbI2 and methylammonium iodide (chloride). Similarly, X-ray diffraction reveals a change of preferential crystalline planes when large magnetic fields are applied. Furthermore, we experimentally demonstrate that the quality of the perovskite film is affected by the magnetic field induced by the magnetic stirring system of the hot plate where the samples are annealed. As a consequence, optimization of the perovskite layer varies with magnetic field and annealing temperature. Finally, we prove that uncontrolled magnetic fields on the environment of preparation can severely influence the reproducibility of results.

  14. Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning

    NASA Astrophysics Data System (ADS)

    Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun

    2017-06-01

    Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications the resolution of hardware imaging has reached its limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for a dictionary-optimized, sparse-learning-based MR super-resolution method. The framework addresses the problem of sample selection for dictionary learning in sparse reconstruction. A textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.
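
    Sparse reconstruction of the kind used in such frameworks is often built on a greedy coder such as Orthogonal Matching Pursuit (OMP). The toy dictionary below (unit vectors plus one flat atom) is an illustrative assumption, not the paper's learned dictionary, but the greedy atom-selection loop is the standard OMP algorithm.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse code of y
    over the columns (atoms) of dictionary D."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol          # re-fit, update residual
    coef = np.zeros(D.shape[1])
    coef[support] = sol
    return coef, support, residual

# Toy dictionary: 8 unit vectors plus one "flat" atom; y is exactly 2-sparse.
D = np.hstack([np.eye(8), np.ones((8, 1)) / np.sqrt(8)])
y = 2.0 * D[:, 2] + 1.0 * D[:, 5]
coef, support, residual = omp(D, y, k=2)
```

Dictionary optimization, as in the record, amounts to choosing the training samples from which D is learned so that codes like `coef` reconstruct unseen patches well.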

  15. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
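
    The background-derived threshold described above follows directly from the Poisson cdf: choose the smallest count that background cells are unlikely to exceed, then indicator-transform the transect counts. The background rate λ and the exceedance level α below are assumed, not the site's values.

```python
import math

def poisson_count_threshold(lam, alpha):
    """Smallest integer c such that P(X > c) <= alpha for X ~ Poisson(lam),
    i.e. counts above c are unlikely to come from the background field."""
    cdf, k, pmf = 0.0, 0, math.exp(-lam)
    while True:
        cdf += pmf
        if 1.0 - cdf <= alpha:
            return k
        k += 1
        pmf *= lam / k  # Poisson pmf recurrence

lam_background = 2.0  # mean background anomaly count per transect cell (assumed)
c = poisson_count_threshold(lam_background, alpha=0.05)
indicator = [1 if n > c else 0 for n in [1, 3, 9, 2, 7]]  # 1 flags a likely cluster
print(c, indicator)  # → 5 [0, 0, 1, 0, 1]
```

In the paper the indicator values then feed indicator kriging, which maps the probability of exceeding the threshold across the site.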

  16. Combining configurational energies and forces for molecular force field optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.

  17. Combining configurational energies and forces for molecular force field optimization

    DOE PAGES

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    2017-07-21

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
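
    The statistical-distance minimization named in these two records can be sketched with Boltzmann weights over a fixed set of configurations: pick the force-field parameter whose configurational distribution is closest to the reference one. The pair potential, inverse temperature, configuration set, and grid search below are illustrative assumptions; the paper's samples come from molecular dynamics and its metric combines energies and forces.

```python
import math

def lj_energy(r, eps, sigma=1.0):
    """Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def boltzmann(energies, beta=1.0):
    """Normalized Boltzmann weights over a fixed configuration set."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def statistical_distance(p, q):
    """Bhattacharyya-angle statistical distance between two distributions."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return math.acos(min(1.0, bc))  # clamp guards against float overshoot

# "Reference" distribution from an assumed true epsilon = 1.5
rs = [1.0, 1.05, 1.1, 1.2, 1.4, 1.8]
p_ref = boltzmann([lj_energy(r, 1.5) for r in rs])

# Grid-search epsilon to minimize the statistical distance to the reference
grid = [0.5 + 0.05 * i for i in range(41)]
best_eps = min(grid,
               key=lambda e: statistical_distance(
                   p_ref, boltzmann([lj_energy(r, e) for r in rs])))
print(round(best_eps, 2))  # → 1.5
```

The distance vanishes only when the model distribution matches the reference, so the grid search recovers the generating parameter.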

  18. Goal-oriented Site Characterization in Hydrogeological Applications: An Overview

    NASA Astrophysics Data System (ADS)

    Nowak, W.; de Barros, F.; Rubin, Y.

    2011-12-01

    In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction, and decision support should be satisfied with efficient and rational field campaigns. In this work, we provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference, and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times) while accounting for parametric and model uncertainty in the geostatistical characterization, in forcing terms, and in measurement error. The appealing aspects of the framework lie in its goal-oriented character and in its direct link to the confidence in a specified decision. We illustrate how these concepts could be applied in a human health risk problem where uncertainties from both hydrogeological and health parameters are accounted for.

  19. Determination of residual stress in a microtextured α titanium component using high-energy synchrotron X-rays

    DOE PAGES

    Park, Jun -Sang; Ray, Atish K.; Dawson, Paul R.; ...

    2016-05-02

    A shrink-fit sample is manufactured with a Ti-8Al-1Mo-1V alloy to introduce a multiaxial residual stress field in the disk of the sample. A set of strain and orientation pole figures are measured at various locations across the disk using synchrotron high-energy X-ray diffraction. Two approaches—the traditional sin²Ψ method and the bi-scale optimization method—are taken to determine the stresses in the disk based on the measured strain and orientation pole figures, to explore the range of solutions that are possible for the stress field within the disk. While the stress components computed using the sin²Ψ method and the bi-scale optimization method have similar trends, their magnitudes are significantly different. Lastly, it is suspected that the local texture variation in the material is the cause of this discrepancy.

  20. Quantitative comparison of bright field and annular bright field imaging modes for characterization of oxygen octahedral tilts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Min; Pennycook, Stephen J.; Borisevich, Albina Y.

    Octahedral tilt behavior is increasingly recognized as an important contributing factor to the physical behavior of perovskite oxide materials and especially their interfaces, necessitating the development of high-resolution methods of tilt mapping. There are currently two major approaches for quantitative imaging of tilts in scanning transmission electron microscopy (STEM), bright field (BF) and annular bright field (ABF). In this study, we show that BF STEM can be reliably used for measurements of oxygen octahedral tilts. While optimal conditions for BF imaging are more restricted with respect to sample thickness and defocus, we find that BF imaging with an aberration-corrected microscope with the accelerating voltage of 300 kV gives us the most accurate quantitative measurement of the oxygen column positions. Using the tilted perovskite structure of BiFeO3 (BFO) as our test sample, we simulate BF and ABF images in a wide range of conditions, identifying the optimal imaging conditions for each mode. Finally, we show that unlike ABF imaging, BF imaging remains directly quantitatively interpretable for a wide range of the specimen mistilt, suggesting that it should be preferable to the ABF STEM imaging for quantitative structure determination.

  1. Quantitative comparison of bright field and annular bright field imaging modes for characterization of oxygen octahedral tilts

    DOE PAGES

    Kim, Young-Min; Pennycook, Stephen J.; Borisevich, Albina Y.

    2017-04-29

    Octahedral tilt behavior is increasingly recognized as an important contributing factor to the physical behavior of perovskite oxide materials and especially their interfaces, necessitating the development of high-resolution methods of tilt mapping. There are currently two major approaches for quantitative imaging of tilts in scanning transmission electron microscopy (STEM), bright field (BF) and annular bright field (ABF). In this study, we show that BF STEM can be reliably used for measurements of oxygen octahedral tilts. While optimal conditions for BF imaging are more restricted with respect to sample thickness and defocus, we find that BF imaging with an aberration-corrected microscope with the accelerating voltage of 300 kV gives us the most accurate quantitative measurement of the oxygen column positions. Using the tilted perovskite structure of BiFeO3 (BFO) as our test sample, we simulate BF and ABF images in a wide range of conditions, identifying the optimal imaging conditions for each mode. Finally, we show that unlike ABF imaging, BF imaging remains directly quantitatively interpretable for a wide range of the specimen mistilt, suggesting that it should be preferable to the ABF STEM imaging for quantitative structure determination.

  2. Chapter 8 optimized test design for identification of the variation of elastic stiffness properties of Loblolly Pine (Pinus taeda) pith to bark

    Treesearch

    David Kretschmann; John Considine; F. Pierron

    2016-01-01

    This article presents the design optimization of an un-notched Iosipescu test specimen whose goal is the characterization of the material elastic stiffnesses of a Loblolly (Pinus taeda) or Lodgepole pine (Pinus contorta) sample in one single test. A series of finite element (FE) and grid simulations were conducted to determine displacement and strain fields for various...

  3. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. © 1988.
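
    A minimal sketch of the multiple-objective idea: trade a global estimation-variance index against sampling cost over candidate grid spacings and pick the spacing with the best weighted score. The variance proxy, cost model, and equal weights below are illustrative assumptions, not the indices derived in the paper.

```python
def normalize(values):
    """Rescale a list of values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def choose_spacing(spacings, domain_len=1000.0, a=300.0, w_var=0.5, w_cost=0.5):
    """Pick the sample spacing minimizing a weighted sum of a (proxy)
    estimation-variance index and a sampling-cost index."""
    var_idx = [h / (h + a) for h in spacings]             # grows with spacing
    cost_idx = [(domain_len / h) ** 2 for h in spacings]  # ~number of samples
    score = [w_var * v + w_cost * c
             for v, c in zip(normalize(var_idx), normalize(cost_idx))]
    return min(zip(score, spacings))[1]

spacings = [100 * i for i in range(1, 11)]  # candidate grid spacings, m
print(choose_spacing(spacings))  # → 200
```

Changing the objective weights shifts the chosen spacing, which is the step-by-step trade-off the decision criteria in the paper formalize.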

  4. Rapid and direct determination of glyphosate, glufosinate, and aminophosphonic acid by online preconcentration CE with contactless conductivity detection.

    PubMed

    See, Hong Heng; Hauser, Peter C; Ibrahim, Wan Aini Wan; Sanagi, Mohd Marsin

    2010-01-01

    Rapid and direct online preconcentration followed by CE with capacitively coupled contactless conductivity detection (CE-C(4)D) is evaluated as a new approach for the determination of glyphosate, glufosinate (GLUF), and aminophosphonic acid (AMPA) in drinking water. Two online preconcentration techniques, namely large volume sample stacking without polarity switching and field-enhanced sample injection, coupled with CE-C(4)D were successfully developed and optimized. Under optimized conditions, LODs in the range of 0.01-0.1 microM (1.7-11.1 microg/L) and sensitivity enhancements of 48- to 53-fold were achieved with the large volume sample stacking-CE-C(4)D method. By performing the field-enhanced sample injection-CE-C(4)D procedure, excellent LODs down to 0.0005-0.02 microM (0.1-2.2 microg/L) as well as sensitivity enhancements of up to 245- to 1002-fold were obtained. Both techniques showed satisfactory reproducibility with RSDs of peak height of better than 10%. The newly established approaches were successfully applied to the analysis of glyphosate, glufosinate, and aminophosphonic acid in spiked tap drinking water.
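
    The enhancement factors reported above arise, to first order, from field amplification: analyte ions in the low-conductivity sample zone experience an electric field (and hence a velocity) boost roughly equal to the conductivity ratio between background electrolyte and sample zone. A back-of-envelope sketch with assumed conductivities:

```python
def field_amplification(kappa_bge, kappa_sample):
    """Ideal field-enhanced stacking factor: the ratio of the electric field
    in the sample zone to that in the background electrolyte (BGE), which
    equals the BGE-to-sample conductivity ratio."""
    return kappa_bge / kappa_sample

# Assumed conductivities in arbitrary consistent units (e.g. uS/cm)
gamma = field_amplification(kappa_bge=500.0, kappa_sample=2.0)
lod_typical = 2.5  # microg/L, hypothetical LOD for a typical injection
print(gamma, lod_typical / gamma)  # → 250.0 0.01
```

A 250-fold ideal factor sits inside the 245- to 1002-fold range reported for the field-enhanced injection mode; real enhancements deviate from the ideal ratio because of dispersion and electroosmotic effects.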

  5. Combination of micelle collapse and field-amplified sample stacking in capillary electrophoresis for determination of trimethoprim and sulfamethoxazole in animal-originated foodstuffs.

    PubMed

    Liu, Lihong; Wan, Qian; Xu, Xiaoying; Duan, Shunshan; Yang, Chunli

    2017-03-15

    An on-line preconcentration method combining micelle to solvent stacking (MSS) with field-amplified sample stacking (FASS) was employed for the analysis of trimethoprim (TMP) and sulfamethoxazole (SMZ) by capillary zone electrophoresis (CZE). The optimized experimental conditions were as follows: (1) sample matrix, 10.0 mM SDS-5% (v/v) methanol; (2) trapping solution (TS), 35 mM H3PO4-60% acetonitrile (CH3CN); (3) running buffer, 30 mM Na2HPO4 (pH 7.3); (4) sample solution volume, 168 nL; TS volume, 168 nL; and (5) 9 kV voltage, 214 nm UV detection. Under the optimized conditions, the limits of detection (LODs) for SMZ and TMP were 7.7 and 8.5 ng/mL, respectively, 301 and 329 times better than with a typical injection. The contents of TMP and SMZ in animal-originated foodstuffs such as dairy products, eggs, and honey were also analyzed. Recoveries of 80-104% were obtained with relative standard deviations of 0.5-5.4%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Gradient Material Strategies for Hydrogel Optimization in Tissue Engineering Applications

    PubMed Central

    2018-01-01

    Although a number of combinatorial/high-throughput approaches have been developed for biomaterial hydrogel optimization, a gradient sample approach is particularly well suited to identify hydrogel property thresholds that alter cellular behavior in response to interacting with the hydrogel due to reduced variation in material preparation and the ability to screen biological response over a range instead of discrete samples each containing only one condition. This review highlights recent work on cell–hydrogel interactions using a gradient material sample approach. Fabrication strategies for composition, material and mechanical property, and bioactive signaling gradient hydrogels that can be used to examine cell–hydrogel interactions will be discussed. The effects of gradients in hydrogel samples on cellular adhesion, migration, proliferation, and differentiation will then be examined, providing an assessment of the current state of the field and the potential of wider use of the gradient sample approach to accelerate our understanding of matrices on cellular behavior. PMID:29485612

  7. Minimum-fuel, 3-dimensional flightpath guidance of transfer jets

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Kreindler, E.

    1984-01-01

    Minimum fuel, three dimensional flightpaths for commercial jet aircraft are discussed. The theoretical development is divided into two sections. In both sections, the necessary conditions of optimal control, including singular arcs and state constraints, are used. One section treats the initial and final portions (below 10,000 ft) of long optimal flightpaths. Here all possible paths can be derived by generating fields of extremals. Another section treats the complete intermediate length, three dimensional terminal area flightpaths. Here only representative sample flightpaths can be computed. Sufficient detail is provided to give the student of optimal control a complex example of a useful application of optimal control theory.

  8. Treatment planning systems for external whole brain radiation therapy: With and without MLC (multi leaf collimator) optimization

    NASA Astrophysics Data System (ADS)

    Budiyono, T.; Budi, W. S.; Hidayanto, E.

    2016-03-01

    Radiation therapy for brain malignancy is delivered as a dose of radiation to the whole volume of the brain (WBRT) followed by a boost at the primary tumor using more advanced techniques. Two external radiation fields are given, from the right and the left side. Because of the shape of the head, there will be unavoidable hotspots with radiation doses greater than 107%. This study aims to optimize radiation therapy planning using a field-in-field multi-leaf collimator (MLC) technique. A study of 15 WBRT samples with CT slices was carried out by adding segments of radiation in each radiation field and delivering appropriate dose weighting using an Elekta PrecisePLAN R 2.15 treatment planning system (TPS). Results showed that this optimization yields more homogeneous radiation in the CTV target volume, a lower dose in healthy tissue, and reduced hotspots in the CTV target volume. Comparison of the field-in-field multi-segmented MLC technique with the standard conventional technique for WBRT shows: a higher average minimum dose (77.25% ± 0.47% vs 60% ± 3.35%); a lower average maximum dose (110.27% ± 0.26% vs 114.53% ± 1.56%); a lower hotspot volume (5.71% vs 27.43%); and a lower dose to the eye lenses (right eye: 9.52% vs 18.20%; left eye: 8.60% vs 16.53%).
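
    The plan-quality numbers quoted above (minimum dose, maximum dose, hotspot volume) are simple summaries of the voxel dose distribution. A minimal sketch with synthetic doses expressed as percent of the prescription dose (not data from the study):

```python
def dose_metrics(doses, hotspot_level=107.0):
    """Return (D_min, D_max, hotspot volume %) from voxel doses given in
    percent of the prescription dose."""
    hot = sum(1 for d in doses if d > hotspot_level)  # voxels above 107%
    return min(doses), max(doses), 100.0 * hot / len(doses)

# Synthetic CTV voxel doses for illustration
doses = [95, 100, 102, 108, 110, 99, 101, 96, 104, 112]
d_min, d_max, hotspot_pct = dose_metrics(doses)
print(d_min, d_max, hotspot_pct)  # → 95 112 30.0
```

The field-in-field segments in the record reduce the weight delivered to over-dosed regions, raising D_min and shrinking the hotspot fraction in exactly these metrics.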

  9. Design and Modelling of a Microfluidic Electro-Lysis Device with Controlling Plates

    NASA Technical Reports Server (NTRS)

    Jenkins, A.; Chen, C. P.; Spearing, S.; Monaco, L. A.; Steele, A.; Flores, G.

    2006-01-01

    Many Lab-on-Chip applications require sample pre-treatment systems. Using electric fields to perform cell-lysis in bio-MEMS systems has provided a powerful tool which can be integrated into Lab-on-a-Chip platforms. The major design considerations for electro-lysis devices include optimal geometry and placement of micro-electrodes, cell concentration, flow rates, optimal electric field (e.g. pulsed DC vs. AC), etc. To avoid electrolysis of the flowing solution at the exposed electrode surfaces, magnitudes and the applied voltages and duration of the DC pulse, or the AC frequency of the AC, have to be optimized for a given configuration. Using simulation tools for calculation of electric fields has proved very useful, for exploring alternative configurations and operating conditions for achieving electro cell-lysis. To alleviate the problem associated with low electric fields within the microfluidics channel and the high voltage demand on the contact electrode strips, two "control plates" are added to the microfluidics configuration. The principle of placing the two controlling plate-electrodes is based on the electric fields generated by a combined insulator/dielectric (glass/water) media. Surface charges are established at the insulator/dielectric interface. This paper discusses the effects of this interface charge on the modification of the electric field of the flowing liquid/cell solution.

  10. Design and Modelling of a Microfluidic Electro-Lysis Device with Controlling Plates

    NASA Astrophysics Data System (ADS)

    Jenkins, A.; Chen, C. P.; Spearing, S.; Monaco, L. A.; Steele, A.; Flores, G.

    2006-04-01

Many Lab-on-a-Chip applications require sample pre-treatment systems. Using electric fields to perform cell lysis in bio-MEMS systems has provided a powerful tool which can be integrated into Lab-on-a-Chip platforms. The major design considerations for electro-lysis devices include the optimal geometry and placement of micro-electrodes, cell concentration, flow rates, the optimal electric field (e.g. pulsed DC vs. AC), etc. To avoid electrolysis of the flowing solution at the exposed electrode surfaces, the magnitudes of the applied voltages and the duration of the DC pulse, or the frequency of the AC, have to be optimized for a given configuration. Using simulation tools to calculate electric fields has proved very useful for exploring alternative configurations and operating conditions for achieving electro cell-lysis. To alleviate the problems of low electric fields within the microfluidic channel and the high voltage demand on the contact electrode strips, two "control plates" are added to the microfluidic configuration. The principle of placing the two controlling plate-electrodes is based on the electric fields generated by a combined insulator/dielectric (glass/water) medium. Surface charges are established at the insulator/dielectric interface. This paper discusses the effects of this interface charge on the modification of the electric field of the flowing liquid/cell solution.

  11. Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert

    2009-05-01

    We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.

  12. Software/hardware optimization for attenuation-based microtomography using SR at PETRA III (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Beckmann, Felix

    2016-10-01

The Helmholtz-Zentrum Geesthacht, Germany, operates the user experiments for microtomography at beamlines P05 and P07, using synchrotron radiation produced in the storage ring PETRA III at DESY, Hamburg, Germany. In recent years, the software pipeline and the sample-changing hardware for performing high-throughput experiments were developed. In this talk the current status of the beamlines will be given. Furthermore, the optimization and automation of scanning techniques will be presented. These are required to scan samples which are larger than the field of view defined by the X-ray beam. The integration into an optimized reconstruction pipeline will be shown.

  13. Optimization, Characterization and Commissioning of a Novel Uniform Scanning Proton Beam Delivery System

    NASA Astrophysics Data System (ADS)

    Mascia, Anthony Edward

Purpose: To develop and characterize the detectors required for uniform scanning optimization and characterization, and to develop the methodology and assess its efficacy for optimizing, characterizing and commissioning a novel proton beam uniform scanning system. Methods and Materials: The Multi Layer Ion Chamber (MLIC), a 1D array of vented parallel plate ion chambers, was developed in-house for measurement of longitudinal profiles. The Matrixx detector (IBA Dosimetry, Germany) and X-Omat V film (Kodak, USA) were characterized for measurement of transverse profiles. The architecture of the uniform scanning system was developed and then optimized and characterized for clinical proton radiotherapy. Results: The MLIC detector significantly increased data collection efficiency without sacrificing data quality. The MLIC was capable of integrating an entire scanned and layer-stacked proton field with one measurement, producing results with the equivalent spatial sampling of 1.0 mm. The Matrixx detector and modified 1D water phantom jig improved data acquisition efficiency and complemented the film measurements. The proximal, central and distal proton field planes were measured using these methods, yielding better than 3% uniformity. The binary range modulator was programmed, optimized and characterized such that the proton field ranges were separated by approximately 5.0 mm in modulation width and delivered with an accuracy of 1.0 mm in water. Several wobbling magnet scan patterns were evaluated and the raster pattern, spot spacing, scan amplitude and overscan margin were optimized for clinical use. Conclusion: Novel detectors and methods are required for clinically efficient optimization and characterization of proton beam scanning systems. Uniform scanning produces proton beam fields that are suited for clinical proton radiotherapy.

  14. Analysis models for the estimation of oceanic fields

    NASA Technical Reports Server (NTRS)

    Carter, E. F.; Robinson, A. R.

    1987-01-01

A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method handles anisotropic fields and treats space and time dependence equivalently. Problems addressed include analysis, i.e., the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimation of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
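The statistically optimal (Gauss-Markov) estimation this record describes can be sketched in one dimension: gridded values are a covariance-weighted combination of irregular, noisy observations. The Gaussian covariance, length scale and noise variance below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def optimal_interpolation(x_obs, y_obs, x_grid, length_scale=1.0, noise_var=0.1):
    """Gauss-Markov (objective analysis) estimate of a scalar field on a grid
    from irregular, noisy observations, using a Gaussian spatial covariance."""
    def cov(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * length_scale ** 2))
    C_oo = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))  # obs-obs covariance
    C_go = cov(x_grid, x_obs)                                  # grid-obs covariance
    weights = np.linalg.solve(C_oo, y_obs)
    return C_go @ weights

# Three irregular observations of a smooth field, estimated on a regular grid.
x_obs = np.array([0.0, 1.0, 2.5])
y_obs = np.sin(x_obs)
x_grid = np.linspace(0.0, 3.0, 7)
field = optimal_interpolation(x_obs, y_obs, x_grid)
```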

  15. Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium

    PubMed Central

    Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter

    2015-01-01

    Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634

  16. QM/MM Geometry Optimization on Extensive Free-Energy Surfaces for Examination of Enzymatic Reactions and Design of Novel Functional Properties of Proteins.

    PubMed

    Hayashi, Shigehiko; Uchida, Yoshihiro; Hasegawa, Taisuke; Higashi, Masahiro; Kosugi, Takahiro; Kamiya, Motoshi

    2017-05-05

    Many remarkable molecular functions of proteins use their characteristic global and slow conformational dynamics through coupling of local chemical states in reaction centers with global conformational changes of proteins. To theoretically examine the functional processes of proteins in atomic detail, a methodology of quantum mechanical/molecular mechanical (QM/MM) free-energy geometry optimization is introduced. In the methodology, a geometry optimization of a local reaction center is performed with a quantum mechanical calculation on a free-energy surface constructed with conformational samples of the surrounding protein environment obtained by a molecular dynamics simulation with a molecular mechanics force field. Geometry optimizations on extensive free-energy surfaces by a QM/MM reweighting free-energy self-consistent field method designed to be variationally consistent and computationally efficient have enabled examinations of the multiscale molecular coupling of local chemical states with global protein conformational changes in functional processes and analysis and design of protein mutants with novel functional properties.

  17. Improvements in pollutant monitoring: optimizing silicone for co-deployment with polyethylene passive sampling devices.

    PubMed

    O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A

    2014-10-01

Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the best-performing silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2-5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring.
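The water-calculated values compared above rest on converting a polymer-phase concentration to a freely dissolved water concentration. A simplified equilibrium-partitioning sketch (field deployments typically also apply performance-reference-compound corrections for non-equilibrium); the function name and units are illustrative assumptions:

```python
def water_concentration(c_polymer_ng_per_g, log_kpw):
    """Estimate the freely dissolved water concentration (ng/L) from the
    concentration measured in an equilibrated passive-sampler polymer,
    via the polymer-water partition coefficient K_pw = 10**log_kpw (L/kg)."""
    kpw_L_per_kg = 10.0 ** log_kpw
    c_polymer_ng_per_kg = c_polymer_ng_per_g * 1000.0  # ng/g -> ng/kg polymer
    return c_polymer_ng_per_kg / kpw_L_per_kg

# e.g. 50 ng/g measured in silicone with log Kpw = 4 gives 5 ng/L in water.
cw = water_concentration(50.0, 4.0)
```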

  18. IMPROVEMENTS IN POLLUTANT MONITORING: OPTIMIZING SILICONE FOR CO-DEPLOYMENT WITH POLYETHYLENE PASSIVE SAMPLING DEVICES

    PubMed Central

    O’Connell, Steven G.; McCartney, Melissa A.; Paulik, L. Blair; Allan, Sarah E.; Tidwell, Lane G.; Wilson, Glenn; Anderson, Kim A.

    2014-01-01

Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the best-performing silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2–5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. PMID:25009960

  19. QM/MM Geometry Optimization on Extensive Free-Energy Surfaces for Examination of Enzymatic Reactions and Design of Novel Functional Properties of Proteins

    NASA Astrophysics Data System (ADS)

    Hayashi, Shigehiko; Uchida, Yoshihiro; Hasegawa, Taisuke; Higashi, Masahiro; Kosugi, Takahiro; Kamiya, Motoshi

    2017-05-01

    Many remarkable molecular functions of proteins use their characteristic global and slow conformational dynamics through coupling of local chemical states in reaction centers with global conformational changes of proteins. To theoretically examine the functional processes of proteins in atomic detail, a methodology of quantum mechanical/molecular mechanical (QM/MM) free-energy geometry optimization is introduced. In the methodology, a geometry optimization of a local reaction center is performed with a quantum mechanical calculation on a free-energy surface constructed with conformational samples of the surrounding protein environment obtained by a molecular dynamics simulation with a molecular mechanics force field. Geometry optimizations on extensive free-energy surfaces by a QM/MM reweighting free-energy self-consistent field method designed to be variationally consistent and computationally efficient have enabled examinations of the multiscale molecular coupling of local chemical states with global protein conformational changes in functional processes and analysis and design of protein mutants with novel functional properties.

  20. Air core notch-coil magnet with variable geometry for fast-field-cycling NMR.

    PubMed

    Kruber, S; Farrher, G D; Anoardo, E

    2015-10-01

In this manuscript we present details of the optimization, construction and performance of a wide-bore (71 mm) α-helical-cut notch-coil magnet with variable geometry for fast-field-cycling NMR. In addition to the usual requirements for this kind of magnet (high field-to-power ratio, good magnetic field homogeneity, low inductance and resistance), a tunable homogeneity and a more uniform heat dissipation along the magnet body are considered. The presented magnet consists of only one machined metallic cylinder combined with two external movable pieces. The optimal configuration is calculated through an evaluation of the magnetic flux density within the entire volume of interest. The magnet has a field-to-current constant of 0.728 mT/A, allowing the field to be switched from zero to 0.125 T in less than 3 ms without energy-storage assistance. For a cylindrical sample volume of 35 cm³ the effective magnet homogeneity is lower than 130 ppm.

  1. Modeling and design of a vibration energy harvester using the magnetic shape memory effect

    NASA Astrophysics Data System (ADS)

    Saren, A.; Musiienko, D.; Smith, A. R.; Tellinen, J.; Ullakko, K.

    2015-09-01

In this study, a vibration energy harvester is investigated which uses a Ni-Mn-Ga sample that is mechanically strained between 130 and 300 Hz while in a constant biasing magnetic field. The crystallographic reorientation of the sample during mechanical actuation changes its magnetic properties due to the magnetic shape memory (MSM) effect. This leads to an oscillation of the magnetic flux in the yoke which generates electrical energy by inducing an alternating current within the pick-up coils. A power of 69.5 mW (with a corresponding power density of 1.37 mW mm-3 relative to the active volume of the MSM element) at 195 Hz was obtained by optimizing the biasing magnetic field, electrical resistance and electrical resonance. The optimization of the electrical resonance increased the energy generated by nearly a factor of four when compared to a circuit with no resonance. These results are strongly supported by a theoretical model and simulation, which reproduce the experimental values to within approximately 20%. This model will be used in the design of future MSM energy harvesters and their optimization for specific frequencies and power outputs.

  2. Micrometer-scale magnetic imaging of geological samples using a quantum diamond microscope

    NASA Astrophysics Data System (ADS)

    Glenn, D. R.; Fu, R. R.; Kehayias, P.; Le Sage, D.; Lima, E. A.; Weiss, B. P.; Walsworth, R. L.

    2017-08-01

Remanent magnetization in geological samples may record the past intensity and direction of planetary magnetic fields. Traditionally, this magnetization is analyzed through measurements of the net magnetic moment of bulk millimeter to centimeter sized samples. However, geological samples are often mineralogically and texturally heterogeneous at submillimeter scales, with only a fraction of the ferromagnetic grains carrying the remanent magnetization of interest. Therefore, characterizing this magnetization in such cases requires a technique capable of imaging magnetic fields at fine spatial scales and with high sensitivity. To address this challenge, we developed a new instrument, based on nitrogen-vacancy centers in diamond, which enables direct imaging of magnetic fields due to both remanent and induced magnetization, as well as optical imaging, of room-temperature geological samples with spatial resolution approaching the optical diffraction limit. We describe the operating principles of this device, which we call the quantum diamond microscope (QDM), and report its optimized image-area-normalized magnetic field sensitivity (20 µT·µm/Hz^1/2), spatial resolution (5 µm), and field of view (4 mm), as well as trade-offs between these parameters. We also perform an absolute magnetic field calibration for the device in different modes of operation, including three-axis (vector) and single-axis (projective) magnetic field imaging. Finally, we use the QDM to obtain magnetic images of several terrestrial and meteoritic rock samples, demonstrating its ability to resolve spatially distinct populations of ferromagnetic carriers.

  3. Advantages of Unfair Quantum Ground-State Sampling.

    PubMed

    Zhang, Brian Hu; Wagenbreth, Gene; Martin-Mayor, Victor; Hen, Itay

    2017-04-21

The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field. Recent technological breakthroughs, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive albeit coveted goal. We examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically by simulating stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a prototypical quantum annealing processor, the ability of quantum annealers to sample the ground states of spin glasses differently than thermal samplers. We demonstrate that (i) quantum annealers sample the ground-state manifolds of spin glasses very differently than thermal optimizers, (ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution, and (iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms.
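The comparison against thermal samplers described above presupposes enumerating a ground-state manifold and sampling it thermally. A toy sketch on a hypothetical 4-spin Ising instance (not one from the study), using a Metropolis chain as the thermal sampler:

```python
import itertools
import math
import random

# Hypothetical 4-spin Ising chain whose ground state is doubly degenerate.
J = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0}

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Exhaustively enumerate the ground-state manifold.
all_states = list(itertools.product([-1, 1], repeat=4))
e_min = min(energy(s) for s in all_states)
ground = [s for s in all_states if energy(s) == e_min]

def metropolis_sample(beta=3.0, sweeps=500, rng=None):
    """One Metropolis chain; returns the final spin configuration."""
    rng = rng or random.Random(0)
    s = [rng.choice([-1, 1]) for _ in range(4)]
    for _ in range(sweeps):
        i = rng.randrange(4)
        e_old = energy(s)
        s[i] *= -1  # propose a single-spin flip
        d_e = energy(s) - e_old
        if rng.random() >= math.exp(-beta * max(0.0, d_e)):
            s[i] *= -1  # reject the flip
    return tuple(s)

# Count how often the thermal sampler lands in each ground state.
rng = random.Random(42)
counts = {g: 0 for g in ground}
for _ in range(200):
    s = metropolis_sample(rng=rng)
    if s in counts:
        counts[s] += 1
```

Comparing `counts` against a quantum annealer's empirical ground-state frequencies is the kind of fairness test the record describes.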

  4. Knowledge-based nonuniform sampling in multidimensional NMR.

    PubMed

    Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C

    2011-07-01

    The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
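An exponentially weighted random (envelope matched) sampling schedule of the kind this record favors can be generated in a few lines. A sketch under assumed values of R2 and spectral width; the schedule-generation details are illustrative, not the authors' implementation:

```python
import numpy as np

def envelope_matched_schedule(n_total, n_samples, r2=20.0, sw=10000.0, seed=0):
    """Draw a nonuniform sampling schedule on a Nyquist grid of n_total points,
    with sampling density matched to the exp(-R2*t) signal envelope."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_total) / sw          # evolution times on the Nyquist grid (s)
    w = np.exp(-r2 * t)
    w /= w.sum()
    # Sample without replacement, weighted toward early (high-signal) times.
    idx = rng.choice(n_total, size=n_samples, replace=False, p=w)
    return np.sort(idx)

# 64 of 256 Nyquist grid points, biased toward short evolution times.
sched = envelope_matched_schedule(256, 64)
```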

  5. Optimal Appearance Model for Visual Tracking

    PubMed Central

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639

  6. Feedback quantum control of molecular electronic population transfer

    NASA Astrophysics Data System (ADS)

    Bardeen, Christopher J.; Yakovlev, Vladislav V.; Wilson, Kent R.; Carpenter, Scott D.; Weber, Peter M.; Warren, Warren S.

    1997-11-01

Feedback quantum control, where the sample "teaches" a computer-controlled arbitrary lightform generator to find the optimal light field, is experimentally demonstrated for a molecular system. Femtosecond pulses tailored by a computer-controlled acousto-optic pulse shaper excite fluorescence from laser dye molecules in solution. Fluorescence and laser power are monitored, and the computer uses the experimental data and a genetic algorithm to optimize population transfer from ground to first excited state. Both efficiency (the ratio of excited state population to laser energy) and effectiveness (total excited state population) are optimized. Potential use as an "automated theory tester" is discussed.
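The feedback loop described (measure, score, let a genetic algorithm propose new pulse shapes) can be sketched with a toy objective standing in for the measured fluorescence-per-energy signal. All names and GA settings below are illustrative assumptions, not the experiment's parameters:

```python
import random

def genetic_optimize(fitness, n_genes=8, pop_size=20, generations=40, seed=1):
    """Minimal genetic algorithm: real-valued 'pulse shape' genes in [0, 1],
    elitist selection, uniform crossover, single-gene Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]          # keep the best half unchanged
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(n_genes)           # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy objective: squared distance to a hypothetical "ideal pulse shape".
target = [0.2, 0.8, 0.5, 0.9, 0.1, 0.4, 0.6, 0.3]
fitness = lambda g: -sum((x - t) ** 2 for x, t in zip(g, target))
best = genetic_optimize(fitness)
```

In the experiment, evaluating `fitness` corresponds to shaping a pulse, exciting the sample, and reading out fluorescence and laser power.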

  7. Optimization of MR fluid Yield stress using Taguchi Method and Response Surface Methodology Techniques

    NASA Astrophysics Data System (ADS)

    Mangal, S. K.; Sharma, Vivek

    2018-02-01

Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, the MR fluid constituents are optimized with on-state yield stress as the response parameter. For this, 18 samples of MR fluid were prepared using an L18 orthogonal array. These samples were experimentally tested on an electromagnet setup developed and fabricated in-house. It was found that the yield stress of an MR fluid depends mainly on the volume fraction of the iron particles and the type of carrier fluid used. The optimal combination of input parameters for the fluid was found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5%, and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gave a predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response was found to match the predicted value well (less than 1% error).
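Taguchi analysis of an orthogonal-array experiment reduces to computing a signal-to-noise ratio per factor level; for a larger-the-better response such as yield stress, S/N = -10·log10(mean(1/y²)). A sketch with hypothetical run groupings (not the paper's data):

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio (dB)."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / n)

# Hypothetical yield-stress results (kPa), grouped by carrier-fluid level.
runs = {
    "mineral_oil": [40.2, 44.1, 48.0],
    "silicone_oil": [30.5, 33.2, 35.8],
}
effects = {level: sn_larger_is_better(vals) for level, vals in runs.items()}
best_level = max(effects, key=effects.get)  # level with the highest mean S/N
```

Repeating this per factor (carrier fluid, iron volume fraction, additives) gives the main-effects ranking from which the optimal combination is read off.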

  8. Exchange Bias Optimization by Controlled Oxidation of Cobalt Nanoparticle Films Prepared by Sputter Gas Aggregation

    PubMed Central

    Antón, Ricardo López; González, Juan A.; Andrés, Juan P.; Normile, Peter S.; Canales-Vázquez, Jesús; Muñiz, Pablo; Riveiro, José M.; De Toro, José A.

    2017-01-01

    Porous films of cobalt nanoparticles have been obtained by sputter gas aggregation and controllably oxidized by air annealing at 100 °C for progressively longer times (up to more than 1400 h). The magnetic properties of the samples were monitored during the process, with a focus on the exchange bias field. Air annealing proves to be a convenient way to control the Co/CoO ratio in the samples, allowing the optimization of the exchange bias field to a value above 6 kOe at 5 K. The occurrence of the maximum in the exchange bias field is understood in terms of the density of CoO uncompensated spins and their degree of pinning, with the former reducing and the latter increasing upon the growth of a progressively thicker CoO shell. Vertical shifts exhibited in the magnetization loops are found to correlate qualitatively with the peak in the exchange bias field, while an increase in vertical shift observed for longer oxidation times may be explained by a growing fraction of almost completely oxidized particles. The presence of a hummingbird-like form in magnetization loops can be understood in terms of a combination of hard (biased) and soft (unbiased) components; however, the precise origin of the soft phase is as yet unresolved. PMID:28336895

  9. Exchange Bias Optimization by Controlled Oxidation of Cobalt Nanoparticle Films Prepared by Sputter Gas Aggregation.

    PubMed

    Antón, Ricardo López; González, Juan A; Andrés, Juan P; Normile, Peter S; Canales-Vázquez, Jesús; Muñiz, Pablo; Riveiro, José M; De Toro, José A

    2017-03-11

    Porous films of cobalt nanoparticles have been obtained by sputter gas aggregation and controllably oxidized by air annealing at 100 °C for progressively longer times (up to more than 1400 h). The magnetic properties of the samples were monitored during the process, with a focus on the exchange bias field. Air annealing proves to be a convenient way to control the Co/CoO ratio in the samples, allowing the optimization of the exchange bias field to a value above 6 kOe at 5 K. The occurrence of the maximum in the exchange bias field is understood in terms of the density of CoO uncompensated spins and their degree of pinning, with the former reducing and the latter increasing upon the growth of a progressively thicker CoO shell. Vertical shifts exhibited in the magnetization loops are found to correlate qualitatively with the peak in the exchange bias field, while an increase in vertical shift observed for longer oxidation times may be explained by a growing fraction of almost completely oxidized particles. The presence of a hummingbird-like form in magnetization loops can be understood in terms of a combination of hard (biased) and soft (unbiased) components; however, the precise origin of the soft phase is as yet unresolved.

  10. The effect of growth temperature on the irreversibility line of MPMG YBCO bulk with Y2O3 layer

    NASA Astrophysics Data System (ADS)

    Kurnaz, Sedat; Çakır, Bakiye; Aydıner, Alev

    2017-07-01

In this study, three kinds of YBCO samples, named Y1040, Y1050 and Y1060, were fabricated by the Melt-Powder-Melt-Growth (MPMG) method without a seed crystal. The samples appear to be single crystals. The compacted powders were placed in a crucible on a buffer layer of Y2O3 to prevent the liquid from spreading on the furnace plate and also to support crystal growth. The YBCO samples were investigated by magnetoresistivity (ρ-T) and magnetization (M-T) measurements in dc magnetic fields (parallel to the c-axis) up to 5 T. Irreversibility fields (Hirr) and upper critical fields (Hc2) were obtained using the 10% and 90% criteria of the normal-state resistivity value from the ρ-T curves. M-T measurements were carried out using zero-field-cooling (ZFC) and field-cooling (FC) processes to obtain the irreversibility temperature (Tirr). Fits of the irreversibility-line results to giant flux creep and vortex glass models are discussed. The results were found to be consistent with those of samples fabricated using a seed crystal. For the fabrication of MPMG YBCO, the optimal temperature for crystal growth was determined to be around 1050-1060 °C.
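The 10%/90% resistivity criteria used to extract Hirr and Hc2 amount to interpolating each ρ-T curve at fixed fractions of the normal-state value; applied at every field, they trace out the irreversibility and upper-critical-field lines. A sketch on a synthetic transition (function name and curve are illustrative):

```python
import numpy as np

def criterion_temperature(T, rho, rho_normal, fraction):
    """Temperature at which rho(T) crosses `fraction` of the normal-state
    resistivity, by linear interpolation inside the transition region."""
    mask = (rho > 0) & (rho < rho_normal)  # keep only the monotonic transition
    return float(np.interp(fraction * rho_normal, rho[mask], T[mask]))

# Synthetic rho(T): a linear transition from 0 to rho_n between 85 K and 92 K.
T = np.linspace(80.0, 100.0, 2001)
rho_n = 1.0
rho = np.clip((T - 85.0) / 7.0, 0.0, 1.0) * rho_n
t_10 = criterion_temperature(T, rho, rho_n, 0.10)  # irreversibility criterion
t_90 = criterion_temperature(T, rho, rho_n, 0.90)  # upper-critical-field criterion
```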

  11. Stimulated Raman spectroscopy and nanoscopy of molecules using near field photon induced forces without resonant electronic enhancement gain

    NASA Astrophysics Data System (ADS)

    Tamma, Venkata Ananth; Huang, Fei; Nowak, Derek; Kumar Wickramasinghe, H.

    2016-06-01

    We report on stimulated Raman spectroscopy and nanoscopy of molecules, excited without resonant electronic enhancement gain, and recorded using near field photon induced forces. Photon-induced interaction forces between the sharp metal coated silicon tip of an Atomic Force Microscope (AFM) and a sample resulting from stimulated Raman excitation were detected. We controlled the tip to sample spacing using the higher order flexural eigenmodes of the AFM cantilever, enabling the tip to come very close to the sample. As a result, the detection sensitivity was increased compared with previous work on Raman force microscopy. Raman vibrational spectra of azobenzene thiol and l-phenylalanine were measured and found to agree well with published results. Near-field force detection eliminates the need for far-field optical spectrometer detection. Recorded images show spatial resolution far below the optical diffraction limit. Further optimization and use of ultrafast pulsed lasers could push the detection sensitivity towards the single molecule limit.

  13. Magnetostriction measurement by four probe method

    NASA Astrophysics Data System (ADS)

    Dange, S. N.; Radha, S.

    2018-04-01

    The present paper describes the design and setup of an indigenously developed magnetostriction (MS) measurement system based on the four-probe method at room temperature. A standard strain gauge is bonded to the sample with a special adhesive, and its change in resistance with applied magnetic field is measured using a Keithley nanovoltmeter and current source. An electromagnet providing fields up to 1.2 tesla is used as the field source. The sample is placed between the magnet poles on a purpose-built wooden probe stand capable of moving in three mutually perpendicular directions. The nanovoltmeter and current source are interfaced with a PC over an RS232 serial link, and software was developed for logging and processing the data. The measurement was optimized in software to reduce noise due to thermal emf and electromagnetic induction. Data acquired for some standard magnetic samples are presented. The sensitivity of the setup is 1 microstrain, with a measurement error of up to 5%.
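The strain-gauge arithmetic behind such a four-probe setup can be sketched as follows; the gauge factor, gauge resistance, and excitation current are typical illustrative values, not figures from the paper.

```python
# Sketch of the strain-gauge readout math assumed by such a setup
# (gauge factor, R0, and excitation current are illustrative, not from the paper).
GAUGE_FACTOR = 2.0      # typical for metal-foil strain gauges
R0 = 120.0              # unstrained gauge resistance, ohms
I_EXC = 1e-3            # excitation current, A

def strain_from_voltage(v_measured):
    """Convert the measured gauge voltage to strain via dR/R = GF * strain."""
    r = v_measured / I_EXC            # four-probe resistance
    return (r - R0) / R0 / GAUGE_FACTOR

# A 10 ppm resistance change corresponds to 5 microstrain at GF = 2.
dv = I_EXC * R0 * (1 + 10e-6)
eps = strain_from_voltage(dv)
```

The quoted 1-microstrain sensitivity then amounts to resolving a relative resistance change of about 2 ppm, which is why thermal-emf suppression matters.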

  14. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. This brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner, or process. This sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which increases the capability to correct for complicated overlay signatures but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also show higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme.
The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model-term monitoring to estimate stability, and ultimately by high-volume manufacturing tests that monitor OPO using densely measured OVL data.
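The bias-variance trade-off described above can be illustrated numerically. The sketch below is my own toy model, not the authors' overlay algorithm: higher-order polynomial models fit a single simulated wafer's signature more closely (lower bias), but their fitted terms vary more from wafer to wafer (higher variance).

```python
# Toy illustration of the bias-variance trade-off in signature modeling.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 25)             # measurement sites across the wafer
true_signature = 0.5 * x - 0.3 * x**2  # underlying overlay signature (arbitrary units)

def fit_stats(order, n_wafers=200, noise=0.1):
    """Mean in-sample residual and variance of the leading model term."""
    residuals, lead_terms = [], []
    for _ in range(n_wafers):
        y = true_signature + rng.normal(0, noise, x.size)  # metrology noise
        coeffs = np.polyfit(x, y, order)
        residuals.append(np.mean((np.polyval(coeffs, x) - y) ** 2))
        lead_terms.append(coeffs[0])
    return np.mean(residuals), np.var(lead_terms)

res_low, var_low = fit_stats(order=2)
res_high, var_high = fit_stats(order=8)
# The high-order model fits each wafer better but its terms are far less stable.
```

This is exactly the tension the abstract describes: more degrees of freedom reduce per-wafer bias while inflating wafer-to-wafer model-term variation.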

  15. Field Detection of Citrus Huanglongbing Associated with 'Candidatus Liberibacter Asiaticus' by Recombinase Polymerase Amplification within 15 min.

    PubMed

    Qian, Wenjuan; Lu, Ying; Meng, Youqing; Ye, Zunzhong; Wang, Liu; Wang, Rui; Zheng, Qiqi; Wu, Hui; Wu, Jian

    2018-06-06

    'Candidatus Liberibacter asiaticus' (Las) is the most prevalent bacterium associated with huanglongbing, which is one of the most destructive diseases of citrus. In this paper, an extremely rapid and simple method for field detection of Las from leaf samples, based on recombinase polymerase amplification (RPA), is described. Three RPA primer pairs were designed and evaluated. RPA amplification was optimized so that it could be accomplished within 10 min. In combination with DNA crude extraction by a 50-fold dilution after 1 min of grinding in 0.5 M sodium hydroxide, and visual detection via fluorescent DNA dye (positive samples display obvious green fluorescence while negative samples remain colorless), the whole detection process can be accomplished within 15 min. The sensitivity and specificity of this RPA-based method were evaluated and proven to be equal to those of real-time PCR. The reliability of this method was also verified by analyzing field samples.

  16. Nanometer-sized alumina packed microcolumn solid-phase extraction combined with field-amplified sample stacking-capillary electrophoresis for the speciation analysis of inorganic selenium in environmental water samples.

    PubMed

    Duan, Jiankuan; Hu, Bin; He, Man

    2012-10-01

    In this paper, a new method combining nanometer-sized alumina packed microcolumn SPE with field-amplified sample stacking (FASS)-CE-UV detection was developed for the speciation analysis of inorganic selenium in environmental water samples. Self-synthesized nanometer-sized alumina was packed in a microcolumn as the SPE adsorbent to retain Se(IV) and Se(VI) simultaneously at pH 6, and the retained inorganic selenium was eluted with concentrated ammonia. The eluent was used for FASS-CE-UV analysis after NH₃ evaporation. The factors affecting the preconcentration of both Se(IV) and Se(VI) by SPE and FASS were studied, and the optimal CE separation conditions for Se(IV) and Se(VI) were established. Under the optimal conditions, LODs of 57 ng L⁻¹ for Se(IV) and 71 ng L⁻¹ for Se(VI) were obtained. The developed method was validated by the analysis of a certified reference material, GBW(E)080395 environmental water, and the determined value was in good agreement with the certified value. It was also successfully applied to the speciation analysis of inorganic selenium in environmental water samples, including Yangtze River water, spring water, and tap water. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Optimizing ultrafast illumination for multiphoton-excited fluorescence imaging

    PubMed Central

    Stoltzfus, Caleb R.; Rebane, Aleksander

    2016-01-01

    We study the optimal conditions for high-throughput two-photon excited fluorescence (2PEF) and three-photon excited fluorescence (3PEF) imaging using femtosecond lasers. We derive relations that allow maximization of the imaging rate depending on the average power, pulse repetition rate, and noise characteristics of the laser, as well as on the size and structure of the sample. We perform our analysis at ~100 MHz, ~1 MHz, and 1 kHz pulse repetition rates, using both a tightly focused illumination beam with diffraction-limited image resolution and a loosely focused illumination beam with relatively low image resolution, where the latter utilizes separate illumination and fluorescence detection beam paths. Our theoretical estimates agree with the experiments, which makes our approach especially useful for optimizing high-throughput imaging of large samples with a field of view up to 10 × 10 cm². PMID:27231620
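The role of repetition rate can be sketched with the standard two-photon scaling relation; this is the textbook result, not a formula quoted from the paper, and the numbers are illustrative.

```python
# For fixed average power P_avg, pulse width tau and repetition rate f,
# the time-averaged 2PEF signal scales as P_avg**2 / (f * tau): fewer,
# more energetic pulses give more two-photon signal per watt.
def rel_2pef_signal(p_avg, rep_rate, pulse_width):
    return p_avg**2 / (rep_rate * pulse_width)

s_100mhz = rel_2pef_signal(p_avg=1.0, rep_rate=100e6, pulse_width=100e-15)
s_1mhz = rel_2pef_signal(p_avg=1.0, rep_rate=1e6, pulse_width=100e-15)
ratio = s_1mhz / s_100mhz   # 100x more signal at the same average power
```

This is why lower repetition rates are attractive for large-field imaging, subject to the saturation, photodamage, and pixel-rate limits the paper analyzes.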

  18. Evaluation of PCR methods for detection of Brucella strains from culture and tissues.

    PubMed

    Çiftci, Alper; İça, Tuba; Savaşan, Serap; Sareyyüpoğlu, Barış; Akan, Mehmet; Diker, Kadir Serdar

    2017-04-01

    The genus Brucella causes significant economic losses due to infertility, abortion, stillbirth or weak calves, and neonatal mortality in livestock. Brucellosis is still a zoonosis of public health importance worldwide. This study aimed to optimize and evaluate PCR assays used for the diagnosis of Brucella infections. To this end, several primers and PCR protocols were tested and compared on Brucella cultures and on biological material inoculated with Brucella. In the PCR assays, genus- or species-specific oligonucleotide primers derived from the 16S rRNA sequences (F4/R2, Ba148/928, IS711, BruP6-P7) and OMPs (JPF/JPR, 31ter/sd) of Brucella were used. All primers except BruP6-P7 detected DNA from reference Brucella strains and field isolates. In spiked blood, milk, and semen samples, F4/R2 primer-based PCR assays detected minimal numbers of Brucella; in spiked serum and fetal stomach content, Ba148/928 primer-based PCR assays did so. Field samples collected from sheep and cattle were examined by bacteriological methods and by the optimized PCR assays. Overall, the sensitivity of the PCR assays was superior to that of conventional bacteriological isolation. Brucella DNA was detected in 35.1, 1.1, 24.8, 5.0, and 8.0% of aborted fetus, blood, milk, semen, and serum samples, respectively. In conclusion, PCR assays under optimized conditions were found to be valuable for the sensitive and specific detection of Brucella infections in animals.

  19. Direct and sensitive detection of foodborne pathogens within fresh produce samples using a field-deployable handheld device.

    PubMed

    You, David J; Geshell, Kenneth J; Yoon, Jeong-Yeol

    2011-10-15

    Direct and sensitive detection of foodborne pathogens from fresh produce samples was accomplished using a handheld lab-on-a-chip device requiring little to no sample processing or enrichment, enabling near-real-time detection with a truly field-deployable device. The detection of Escherichia coli K12 and O157:H7 in iceberg lettuce was achieved utilizing optimized Mie light scatter parameters with a latex particle immunoagglutination assay. The system exhibited good sensitivity, with a limit of detection of 10 CFU mL⁻¹ and an assay time of <6 min. Minimal pretreatment, with no detrimental effects on assay sensitivity and reproducibility, was accomplished with a simple and cost-effective KimWipes filter and disposable syringe. Mie simulations were used to determine the optimal parameters (particle size d, wavelength λ, and scatter angle θ) that maximize the light scatter intensity of agglutinated latex microparticles and minimize the light scatter intensity of iceberg lettuce tissue fragments, and these parameters were experimentally validated. This introduces a powerful method for detecting foodborne pathogens in fresh produce and other potential sample matrices. The integration of a multi-channel microfluidic chip allowed for differential detection of the agglutinated particles in the presence of the antigen, yielding a truly field-deployable detection system with decreased assay time and improved robustness over comparable benchtop systems. Additionally, two sample preparation methods were evaluated through simulated field studies based on overall sensitivity, protocol complexity, and assay time. Preparation of the plant tissue sample by grinding resulted in a two-fold improvement in scatter intensity over washing, accompanied by a significant increase in assay time: ∼5 min (grinding) versus ∼1 min (washing). Specificity studies demonstrated binding of E. coli O157:H7 EDL933 only to O157:H7 antibody-conjugated particles, with no cross-reactivity to K12. This suggests the adaptability of the system for use with a wide variety of pathogens, and the potential to detect in a variety of biological matrices with little to no sample pretreatment. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Dispositional optimism and self-assessed situation awareness in a Norwegian military training exercise.

    PubMed

    Eid, Jarle; Matthews, Michael D; Meland, Nils Tore; Johnsen, Bjørn Helge

    2005-06-01

    The current study examined the relationship between dispositional optimism and situation awareness. A sample of 77 Royal Norwegian Naval Academy and 57 Royal Norwegian Army Academy cadets were administered the Life Orientation Test prior to participating in a field-training exercise involving a series of challenging missions. Following an infantry mission component of the exercise, situation awareness was measured using the Mission Awareness Rating Scale (MARS), a self-assessment tool. The analysis indicated that dispositional optimism correlated negatively with situation awareness under these conditions. The role of intrapersonal variables in mediating situation awareness and decision-making in stressful situations is discussed.

  1. Optimization of the Bridgman crystal growth process

    NASA Astrophysics Data System (ADS)

    Margulies, M.; Witomski, P.; Duffar, T.

    2004-05-01

    A numerical optimization method for the vertical Bridgman growth configuration is presented and developed. It optimizes the furnace temperature field and the pulling rate as functions of time in order to decrease the radial thermal gradients in the sample. Constraints are also included to ensure physically realistic results. The model includes the two classical non-linearities associated with crystal growth processes: radiative thermal exchange and the release of latent heat at the solid-liquid interface. The mathematical analysis and development of the problem are briefly described. Several examples show that the method works satisfactorily; however, the results depend on the numerical parameters. Improvements to the optimization model, from both the physical and numerical points of view, are suggested.

  2. Semi-Autonomous Small Unmanned Aircraft Systems for Sampling Tornadic Supercell Thunderstorms

    NASA Astrophysics Data System (ADS)

    Elston, Jack S.

    This work describes the development of a network-centric unmanned aircraft system (UAS) for in situ sampling of supercell thunderstorms. UAS have been identified as a well-suited platform for meteorological observations given their portability, endurance, and ability to mitigate atmospheric disturbances. They represent a unique tool for performing targeted sampling in regions of a supercell thunderstorm previously unreachable through other methods. Doppler radar can provide unique measurements of the wind field in and around supercell thunderstorms. In order to exploit this capability, a planner was developed that can optimize ingress trajectories for severe storm penetration. The resulting trajectories were examined to determine the feasibility of such a mission, and to optimize ingress in terms of flight time and exposure to precipitation. A network-centric architecture was developed to handle the large amount of distributed data produced during a storm sampling mission. Creation of this architecture was performed through a bottom-up design approach which reflects and enhances the interplay between networked communication and autonomous aircraft operation. The advantages of the approach are demonstrated through several field and hardware-in-the-loop experiments containing different hardware, networking protocols, and objectives. Results are provided from field experiments involving the resulting network-centric architecture. An airmass boundary was sampled in the Collaborative Colorado Nebraska Unmanned Aircraft Experiment (CoCoNUE). Utilizing lessons learned from CoCoNUE, a new concept of operations (CONOPS) and UAS were developed to perform in situ sampling of supercell thunderstorms. Deployment during the Verification of the Origins of Rotation in Tornadoes Experiment 2 (VORTEX2) resulted in the first ever sampling of the airmass associated with the rear flank downdraft of a tornadic supercell thunderstorm by a UAS. 
Hardware-in-the-loop simulation capability was added to the UAS to enable further assessment of the system and CONOPS. The simulation combines a full six degree-of-freedom aircraft dynamic model with wind and precipitation data from simulations of severe convective storms. Interfaces were written to involve as much of the system's field hardware as possible, including the creation of a simulated radar product server. A variety of simulations were conducted to evaluate different aspects of the CONOPS used for the 2010 VORTEX2 field campaign.

  3. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions, or even billions of variations of a proposed strategy and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random-walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. The tool then tests the resulting "optimal" strategy on a second random-walk time series. In most runs of our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
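The experiment is easy to re-create in a few lines. The sketch below is my own toy version, not the SEBO tool itself: it picks the best moving-average crossover window on one random walk, then applies that "optimal" window to a second, independent random walk.

```python
# Toy demonstration of backtest overfitting on random-walk "prices".
import numpy as np

rng = np.random.default_rng(42)

def random_walk(n=2000):
    return np.cumsum(rng.normal(0, 1, n))

def strategy_return(prices, window):
    """Long when price is above its trailing moving average, else flat."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    pos = (prices[window - 1:-1] > ma[:-1]).astype(float)  # yesterday's signal
    return float(np.sum(pos * np.diff(prices[window - 1:])))

train, test = random_walk(), random_walk()
windows = range(2, 50)
best = max(windows, key=lambda w: strategy_return(train, w))
in_sample = strategy_return(train, best)   # flattered by exhaustive selection
out_sample = strategy_return(test, best)   # typically far less impressive
```

Because the walk is unpredictable by construction, the in-sample edge of the selected window is pure selection bias, which is the point of the SEBO demonstration.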

  4. Toward high-resolution NMR spectroscopy of microscopic liquid samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Mark C.; Mehta, Hardeep S.; Chen, Ying

    A longstanding limitation of high-resolution NMR spectroscopy is the requirement for samples to have macroscopic dimensions. Commercial probes, for example, are designed for volumes of at least 5 mL, in spite of decades of work directed toward the goal of miniaturization. Progress in miniaturizing inductive detectors has been limited by a perceived need to meet two technical requirements: (1) minimal separation between the sample and the detector, which is essential for sensitivity, and (2) near-perfect magnetic-field homogeneity at the sample, which is typically needed for spectral resolution. The first of these requirements is real, but the second can be relaxed, as we demonstrate here. By using pulse sequences that yield high-resolution spectra in an inhomogeneous field, we eliminate the need for near-perfect field homogeneity and the accompanying requirement for susceptibility matching of microfabricated detector components. With this requirement removed, typical imperfections in microfabricated components can be tolerated, and detector dimensions can be matched to those of the sample, even for samples of volume << 5 uL. Pulse sequences that are robust to field inhomogeneity thus enable small-volume detection with optimal sensitivity. We illustrate the potential of this approach to miniaturization by presenting spectra acquired with a flat-wire detector that can easily be scaled to subnanoliter volumes. In particular, we report high-resolution NMR spectroscopy of an alanine sample of volume 500 pL.

  5. Comparison of Sample and Detection Quantification Methods for Salmonella Enterica from Produce

    NASA Technical Reports Server (NTRS)

    Hummerick, M. P.; Khodadad, C.; Richards, J. T.; Dixit, A.; Spencer, L. M.; Larson, B.; Parrish, C., II; Birmele, M.; Wheeler, Raymond

    2014-01-01

    The purpose of this study was to identify and optimize fast and reliable sampling and detection methods for the identification of pathogens that may be present on produce grown in small vegetable production units on the International Space Station (ISS), which constitutes a field setting. Microbiological testing is necessary before astronauts are allowed to consume produce grown on the ISS, where two vegetable production units, Lada and Veggie, are currently deployed.

  6. TH-CD-209-10: Scanning Proton Arc Therapy (SPArc) - The First Robust and Delivery-Efficient Spot Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Li, X; Zhang, J

    Purpose: To develop a delivery-efficient proton spot-scanning arc therapy technique with robust plan quality. Methods: We developed a Scanning Proton Arc (SPArc) optimization algorithm integrated with (1) control point re-sampling by splitting control points into adjacent sub-control points; (2) energy layer re-distribution by assigning the original energy layers to the new sub-control points; (3) energy layer filtration by deleting low-MU-weighting energy layers; and (4) energy layer re-sampling by sampling additional layers to ensure the optimal solution. A bilateral head and neck oropharynx case and a non-mobile lung target case were tested. Plan quality and total estimated delivery time were compared to the original robust optimized multi-field step-and-shoot arc plan without SPArc optimization (Arcmulti-field) and to standard robust optimized Intensity Modulated Proton Therapy (IMPT) plans. Dose-Volume Histograms (DVH) of the target and Organs-at-Risk (OARs) were analyzed along with all worst-case scenarios. Total delivery time was calculated based on the assumption of a 360 degree gantry room with 1 RPM rotation speed, 2 ms spot switching time, 1 nA beam current, 0.01 MU minimum spot weighting, and energy-layer-switching-time (ELST) from 0.5 to 4 s. Results: Compared to IMPT, SPArc delivered less integral dose (−14% lung and −8% oropharynx). For the lung case, SPArc reduced the skin max dose by 60%, the rib max dose by 35%, and the mean lung dose by 15%. The Conformity Index improved from 7.6 (IMPT) to 4.0 (SPArc). Compared to Arcmulti-field, SPArc reduced the number of energy layers by 61% (276 layers in lung) and 80% (1008 layers in oropharynx) while maintaining the same robust plan quality. With ELST from 0.5 s to 4 s, it reduced the Arcmulti-field delivery time by 55%-60% for the lung case and 56%-67% for the oropharynx case. Conclusion: SPArc is the first robust and delivery-efficient proton spot-scanning arc therapy technique and could be implemented in routine clinical practice. For a modern proton machine with an ELST close to 0.5 s, SPArc would be an attractive treatment option for both single- and multi-room centers.
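The delivery-time arithmetic can be sketched from the assumptions listed in the abstract. This is my own rough bookkeeping, not the authors' timing model; the spots-per-layer figure is an illustrative guess.

```python
# Rough delivery-time estimate: energy-layer switching time (ELST) dominates,
# so cutting the layer count cuts the arc delivery time almost proportionally.
def arc_delivery_time(n_layers, elst_s, spot_time_s=0.002, spots_per_layer=50):
    # spots_per_layer is an illustrative assumption, not a number from the paper
    return n_layers * (elst_s + spots_per_layer * spot_time_s)

# Abstract's lung-case numbers: SPArc uses 276 layers, 61% fewer than Arcmulti-field.
layers_sparc = 276
layers_arc = round(layers_sparc / (1 - 0.61))
t_sparc = arc_delivery_time(layers_sparc, elst_s=2.0)
t_arc = arc_delivery_time(layers_arc, elst_s=2.0)
saving = 1 - t_sparc / t_arc
```

With per-layer time roughly constant, the fractional time saving tracks the 61% layer reduction, consistent with the 55%-60% quoted for the lung case.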

  7. Adapting geostatistics to analyze spatial and temporal trends in weed populations

    USDA-ARS?s Scientific Manuscript database

    Geostatistics were originally developed in mining to estimate the location, abundance and quality of ore over large areas from soil samples to optimize future mining efforts. Here, some of these methods were adapted to weeds to account for a limited distribution area (i.e., inside a field), variatio...
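The geostatistical workhorse alluded to here is the empirical semivariogram. A minimal one-dimensional version (illustrative, not the manuscript's code) might look like:

```python
# Empirical semivariogram for values sampled along a transect:
# gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs separated by ~h.
import numpy as np

def empirical_semivariogram(positions, values, lags, tol=0.5):
    positions, values = np.asarray(positions), np.asarray(values)
    d = np.abs(positions[:, None] - positions[None, :])     # pairwise distances
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2     # half squared diffs
    gammas = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)

pos = np.arange(0, 20.0)        # sample points every metre along a transect
vals = np.sin(pos / 3.0)        # a smooth, spatially correlated field
gamma = empirical_semivariogram(pos, vals, lags=[1, 2, 4, 8])
# For a spatially correlated field, gamma rises with lag distance.
```

Fitting a model (spherical, exponential, etc.) to such a curve is what then supports kriging interpolation of weed density inside a field.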

  8. Design of access-tube TDR sensor for soil water content: Theory

    USDA-ARS?s Scientific Manuscript database

    The design of a cylindrical access-tube mounted waveguide was developed for in-situ soil water content sensing using time-domain reflectometry (TDR). To optimize the design with respect to sampling volume and losses, we derived the electromagnetic fields produced by a TDR sensor with cylindrical geo...
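Although the abstract is truncated, the conversion from TDR-measured permittivity to water content is standard; a common choice is the empirical Topp et al. (1980) calibration, shown here for context (this equation is not taken from the manuscript itself).

```python
# Topp et al. (1980) calibration: volumetric water content (m^3/m^3)
# from the apparent relative dielectric permittivity measured by TDR.
def topp_water_content(eps_r):
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

theta_dry = topp_water_content(5.0)    # dry-ish soil
theta_wet = topp_water_content(25.0)   # wet soil
```

The design question the abstract raises (sampling volume and losses of the access-tube waveguide) determines which soil region that permittivity actually represents.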

  9. [Efficacy of a rapid test to diagnose Plasmodium vivax in symptomatic patients of Chiapas, Mexico].

    PubMed

    González-Cerón, Lilia; Rodríguez, Mario H; Betanzos, Angel F; Abadía, Acatl

    2005-01-01

    To evaluate, under laboratory conditions, the sensitivity and specificity of a rapid diagnostic test (OptiMAL), based on immunoreactive strips, to detect Plasmodium vivax infection in febrile patients in southern Chiapas, Mexico. The presence of parasites in blood samples of 893 patients was investigated by microscopic examination of Giemsa-stained thick blood smears (gold standard). A blood drop from the same sample was smeared on immunoreactive strips to investigate the presence of the parasite enzyme pLDH. Discordant results were resolved by PCR amplification of the parasite's 18S SSU rRNA to rule out infection. OptiMAL had an overall sensitivity of 93.3% and a specificity of 99.5%. Its positive and negative predictive values were 96.5% and 98.9%, respectively. Signal intensity on the OptiMAL strips correlated well with the parasitemia density of the blood samples (r = 0.601, p = 0.0001). This rapid test had acceptable sensitivity and specificity for detecting P. vivax under laboratory conditions and could be useful for malaria diagnosis in field operations in Mexico.
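The reported figures follow the standard confusion-matrix definitions; the counts below are invented for illustration, and only the formulas reflect the study's metrics.

```python
# Standard diagnostic-test metrics from true/false positive/negative counts.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts chosen to give round numbers, not the study's data.
m = diagnostic_metrics(tp=93, fp=1, tn=199, fn=7)
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on how common infection is among the patients tested, so they transfer to field settings only if prevalence is similar.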

  10. Use of terrestrial field studies in the derivation of bioaccumulation potential of chemicals

    USGS Publications Warehouse

    van den Brink, Nico W.; Arblaster, Jennifer A.; Bowman, Sarah R.; Conder, Jason M.; Elliott, John E.; Johnson, Mark S.; Muir, Derek C.G.; Natal-da-Luz, Tiago; Rattner, Barnett A.; Sample, Bradley E.; Shore, Richard F.

    2016-01-01

    Field-based studies are an essential component of research addressing the behavior of organic chemicals, and a unique line of evidence that can be used to assess bioaccumulation potential in chemical registration programs and aid in development of associated laboratory and modeling efforts. To aid scientific and regulatory discourse on the application of terrestrial field data in this manner, this article provides practical recommendations regarding the generation and interpretation of terrestrial field data. Currently, biota-to-soil-accumulation factors (BSAFs), biomagnification factors (BMFs), and bioaccumulation factors (BAFs) are the most suitable bioaccumulation metrics that are applicable to bioaccumulation assessment evaluations and able to be generated from terrestrial field studies with relatively low uncertainty. Biomagnification factors calculated from field-collected samples of terrestrial carnivores and their prey appear to be particularly robust indicators of bioaccumulation potential. The use of stable isotope ratios for quantification of trophic relationships in terrestrial ecosystems needs to be further developed to resolve uncertainties associated with the calculation of terrestrial trophic magnification factors (TMFs). Sampling efforts for terrestrial field studies should strive for efficiency, and advice on optimization of study sample sizes, practical considerations for obtaining samples, selection of tissues for analysis, and data interpretation is provided. Although there is still much to be learned regarding terrestrial bioaccumulation, these recommendations provide some initial guidance to the present application of terrestrial field data as a line of evidence in the assessment of chemical bioaccumulation potential and a resource to inform laboratory and modeling efforts.
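The bioaccumulation metrics named in the abstract reduce to simple concentration ratios; the values below are invented for illustration.

```python
# Biota-to-soil accumulation factor and biomagnification factor as
# concentration ratios (concentrations in consistent units, e.g. mg/kg).
def bsaf(c_biota, c_soil):
    """Biota-to-soil accumulation factor."""
    return c_biota / c_soil

def bmf(c_predator, c_prey):
    """Biomagnification factor from a field-collected predator-prey pair."""
    return c_predator / c_prey

# e.g. earthworm vs soil, and shrew vs earthworm (hypothetical values)
example_bsaf = bsaf(c_biota=2.0, c_soil=0.5)     # 4.0
example_bmf = bmf(c_predator=6.0, c_prey=2.0)    # 3.0
```

A BMF greater than 1, as in this hypothetical predator-prey pair, is the kind of field evidence the article identifies as a robust indicator of bioaccumulation potential.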

  11. Processing, Fabrication, Characterization and Device Demonstration of High Temperature Superconducting Ceramics

    DTIC Science & Technology

    1994-07-30

    optimize processes for grain alignment in bulk and tape samples; and (4) provide a technology base for utilization of flux-trap magnets. SUMMARY: a) A... of the vortices in the bulk material. Neutron scattering experiments can be performed in a magnetic field range of ~0.05 T up to several teslas, a... uncorrected for demagnetization) was then taken as the field associated with the first nonzero value of the magnetization difference, ΔM; see Figs. 5 & 6

  12. Consensus Classification Using Non-Optimized Classifiers.

    PubMed

    Brownfield, Brett; Lemos, Tony; Kalivas, John H

    2018-04-03

    Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method; that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars (three classes) measured on 13 unique chemical and physical variables. In all cases, fusion of non-optimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
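A bare-bones version of the fusion step mentioned above can be sketched as follows; this illustrates only the combination-by-vote idea, not the Article's non-optimized classification method itself.

```python
# Majority-vote fusion of class labels from several classifiers.
from collections import Counter

def majority_vote(predictions):
    """Fuse class labels from several classifiers into one consensus label."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Three hypothetical classifiers disagree; the consensus is the modal label.
label = majority_vote(["ale", "lager", "ale"])   # -> "ale"
```

In practice the vote can also be weighted, e.g. by each classifier's distance-based similarity of the sample to its predicted class.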

  13. Scanning SQUID microscope with an in-situ magnetization/demagnetization field for geological samples

    NASA Astrophysics Data System (ADS)

    Du, Junwei; Liu, Xiaohong; Qin, Huafeng; Wei, Zhao; Kong, Xiangyang; Liu, Qingsong; Song, Tao

    2018-04-01

    Magnetic properties of rocks are crucial for paleomagnetism, rock magnetism, environmental magnetism, and magnetic material sciences. Conventional rock magnetometers deal with the bulk properties of samples, whereas a scanning microscope can map the distribution of remanent magnetization. In this study, a new scanning microscope based on a low-temperature DC superconducting quantum interference device (SQUID) equipped with an in-situ magnetization/demagnetization device was developed. To combine a sensor as sensitive as a SQUID with high magnetizing/demagnetizing fields, the pick-up coil, the magnetization/demagnetization coils, and the measurement mode of the system were optimized. The new microscope has a field sensitivity of 250 pT/√Hz at a coil-to-sample spacing of ∼350 μm, together with high magnetization (0-1 T) and demagnetization (0-300 mT, 400 Hz) functions. With this microscope, isothermal remanent magnetization (IRM) acquisition curves and the corresponding alternating field (AF) demagnetization curves can be obtained for each point without transferring samples between different procedures, which could otherwise cause position deviations, wasted time, and other interference. The newly designed SQUID microscope can thus be used to investigate the rock magnetic properties of samples at the micro-area scale, and has great potential as an efficient tool in paleomagnetism, rock magnetism, and magnetic material studies.
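An order-of-magnitude sensitivity estimate follows from the two quoted numbers. This is my own sketch, not a calculation from the paper: treat a magnetized micro-area as a point dipole and ask what moment produces 250 pT at the ~350 μm coil-to-sample spacing.

```python
# On-axis field of a magnetic dipole (SI units) and the minimum moment
# detectable at a given field sensitivity and sensor spacing.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def on_axis_dipole_field(moment, z):
    """B = mu0 * 2m / (4*pi*z^3) on the dipole axis."""
    return MU0 * 2 * moment / (4 * math.pi * z**3)

def min_detectable_moment(b_min, z):
    return b_min * 4 * math.pi * z**3 / (2 * MU0)

m_min = min_detectable_moment(b_min=250e-12, z=350e-6)  # ~5e-14 A*m^2
```

This crude estimate ignores pick-up coil geometry and bandwidth, but it shows why reducing the coil-to-sample spacing (the z**3 factor) pays off so strongly in scanning magnetometry.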

  14. Low-field magnetoresistance up to 400 K in double perovskite Sr{sub 2}FeMoO{sub 6} synthesized by a citrate route

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harnagea, L., E-mail: harnagealuminita@gmail.com; Jurca, B.; Physical Chemistry Department, University of Bucharest, 4-12 Bd. Elisabeta, 030018 Bucharest

    2014-03-15

    A wet-chemistry technique, namely the citrate route, has been used to prepare high-quality polycrystalline samples of double perovskite Sr{sub 2}FeMoO{sub 6}. We report on the evolution of magnetic and magnetoresistive properties of the synthesized samples as a function of three parameters: (i) the pH of the starting solution, (ii) the decomposition temperature of the citrate precursors, and (iii) the sintering conditions. The low-field magnetoresistance (LFMR) value of our best samples is as high as 5% at room temperature for an applied magnetic field of 1 kOe. Additionally, the distinguishing feature of these samples is the persistence of LFMR, with a reasonably large value, up to 400 K, which is a crucial parameter for any practical application. Our study indicates that the enhancement of LFMR observed is due to a good compromise between the grain size distribution and their magnetic polarization. -- Graphical abstract: The microstructure (left panel) and corresponding low-field magnetoresistance of one of the Sr{sub 2}FeMoO{sub 6} samples synthesized in the course of this work. Highlights: • Samples of Sr{sub 2}FeMoO{sub 6} are prepared using a citrate route under varying conditions. • Magnetoresistive properties are improved and optimized. • Low-field magnetoresistance values as large as 5% at 300 K/1 kOe are reported. • Persistence of low-field magnetoresistance up to 400 K.

  15. Computer program documentation for a subcritical wing design code using higher order far-field drag minimization

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.; Shu, J. Y.

    1981-01-01

    A wing design program based on subsonic, linearized aerodynamic theory was developed for one or two planforms; it uses a vortex lattice near-field model and a higher order panel method in the far field. The theoretical development of the wake model and its implementation in the vortex lattice design code are summarized and sample results are given. Detailed program usage instructions, sample input and output data, and a program listing are presented in the Appendixes. The far field wake model assumes a wake vortex sheet whose strength varies piecewise linearly in the spanwise direction. From this model, analytical expressions for the lift, induced drag, pitching moment, and bending moment coefficients were developed. From these relationships, a direct optimization scheme determines the optimum wake vorticity distribution for minimum induced drag, subject to constraints on lift and on pitching or bending moment. Spanwise integration yields the bound circulation, which is interpolated in the near field vortex lattice to obtain the design camber surface(s).
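
    The constrained minimization described above has a classical closed-form check: when the spanwise circulation is expanded in a Fourier sine series, lift depends only on the first coefficient while induced drag grows as the weighted sum of squared coefficients, so the optimum zeroes every higher coefficient (elliptic loading). A minimal numerical sketch, with illustrative starting coefficients rather than the program's actual variables:

```python
# Fourier coefficients A_1..A_4 of the spanwise circulation; the lift
# constraint fixes A_1, so only the higher coefficients are free.
A = [1.0, 0.4, -0.3, 0.2]

# Induced drag is proportional to sum(n * A_n**2); minimize it by
# gradient descent on the free coefficients.
for _ in range(200):
    for i in range(1, len(A)):              # skip A_1 (index 0)
        n = i + 1                           # Fourier index of A[i]
        A[i] -= 0.1 * 2 * n * A[i]          # step along d(n*A_n^2)/dA_n

drag = sum((i + 1) * a * a for i, a in enumerate(A))
print(round(drag, 6))  # converges to the elliptic-loading minimum, 1.0
```

    The higher coefficients decay to zero and the drag settles at the value set by the lift constraint alone, reproducing the elliptic-loading result the direct optimization scheme exploits.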

  16. The effect of Nb additions on the thermal stability of melt-spun Nd2Fe14B

    NASA Astrophysics Data System (ADS)

    Lewis, L. H.; Gallagher, K.; Panchanathan, V.

    1999-04-01

    Elevated-temperature superconducting quantum interference device (SQUID) magnetometry was performed on two samples of melt-spun and optimally annealed Nd2Fe14B; one sample contained 2.3 wt % Nb and one was Nb-free. Continuous full hysteresis loops were measured with a SQUID magnetometer at T=630 K, above the Curie temperature of the 2-14-1 phase, as a function of field (1 T⩽H⩽-1 T) and time on powdered samples sealed in quartz tubes at a vacuum of 10^-6 Torr. The measured hysteresis signals were deconstructed into a high-field linear paramagnetic portion and a low-field ferromagnetic signal of unclear origin. While the saturation magnetization of the ferromagnetic signal from both samples grows with time, the signal from the Nb-containing sample is always smaller. The coercivity data are consistent with a constant impurity particle size in the Nb-containing sample and an increasing impurity particle size in the Nb-free sample. The paramagnetic susceptibility signal from the Nd2Fe14B-type phase in the Nb-free sample increases with time, while that from the Nb-containing sample remains constant. It is suggested that the presence of Nb actively suppresses the thermally induced formation of poorly crystallized Fe-rich regions that apparently exist in samples of both compositions.

  17. Application of calendering for improving the electrical characteristics of a printed top-gate, bottom-contact organic thin film transistor

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hoon; Lee, Dong Geun; Jung, Hoeryong; Lee, Sangyoon

    2018-05-01

    The interface between the channel and the gate dielectric of organic thin film transistors (OTFTs) needs to be smoothed in order to improve the electrical characteristics. In this study, an optimized calendering process was proposed to improve the surface roughness of the channel. Top-gate, bottom-contact structural p-type OTFT samples were fabricated using roll-to-roll gravure printing (source/drain, channel), spin coating (gate dielectric), and inkjet printing (gate electrode). The calendering process was optimized using the grey-based Taguchi method. The channel surface roughness and electrical characteristics of calendered and non-calendered samples were measured and compared. As a result, the average improvement in the surface roughness of the calendered samples was 26.61%. The average on-off ratio and field-effect mobility of the calendered samples were 3.574 × 104 and 0.1113 cm2 V‑1 s‑1, respectively, corresponding to improvements of 16.72% and 10.20%, respectively.

  18. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    PubMed

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  19. Complex Permittivity of Planar Building Materials Measured With an Ultra-Wideband Free-Field Antenna Measurement System.

    PubMed

    Davis, Ben; Grosvenor, Chriss; Johnk, Robert; Novotny, David; Baker-Jarvis, James; Janezic, Michael

    2007-01-01

    Building materials are often incorporated into complex, multilayer macrostructures that are simply not amenable to measurements using coax or waveguide sample holders. In response to this, we developed an ultra-wideband (UWB) free-field measurement system. This measurement system uses a ground-plane-based system and two TEM half-horn antennas to transmit and receive the RF signal. The material samples are placed between the antennas, and reflection and transmission measurements are made. Digital signal processing techniques are then applied to minimize environmental and systematic effects. The processed data are compared to a plane-wave model to extract the material properties with optimization software based on genetic algorithms.
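
    The extraction step, fitting a plane-wave model to the measured response, can be sketched with a toy misfit minimization. The lossless normal-incidence slab model, the 2 cm thickness, and the noiseless synthetic "measurement" below are simplifying assumptions, and a coarse grid search stands in for the genetic algorithm:

```python
import cmath, math

C = 3e8                                     # speed of light, m/s

def slab_T(eps_r, d, f):
    """Normal-incidence transmission of a lossless dielectric slab."""
    n = math.sqrt(eps_r)
    k0 = 2 * math.pi * f / C
    r = (1 - n) / (1 + n)                   # air-dielectric Fresnel coefficient
    phase = cmath.exp(-1j * k0 * n * d)
    return (1 - r * r) * phase / (1 - r * r * phase * phase)

d = 0.02                                    # hypothetical 2 cm panel
freqs = [f * 1e9 for f in range(1, 9)]      # 1-8 GHz sweep
measured = [slab_T(4.0, d, f) for f in freqs]   # synthetic "measurement"

def misfit(eps_r):
    return sum(abs(slab_T(eps_r, d, f) - m) ** 2
               for f, m in zip(freqs, measured))

# A coarse grid search stands in for the genetic-algorithm optimizer.
best = min((i / 10 for i in range(10, 101)), key=misfit)
print(best)  # recovers the permittivity used to generate the data
```

    A real extraction would fit complex permittivity (including loss) against calibrated S-parameters; the structure of the search, minimizing model-measurement mismatch over candidate material parameters, is the same.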

  20. Measurement and Visualization of Mass Transport for the Flowing Atmospheric Pressure Afterglow (FAPA) Ambient Mass-Spectrometry Source

    PubMed Central

    Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.

    2014-01-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last nine years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification due to the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass-spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet. PMID:24658804

  1. Measurement and visualization of mass transport for the flowing atmospheric pressure afterglow (FAPA) ambient mass-spectrometry source.

    PubMed

    Pfeuffer, Kevin P; Ray, Steven J; Hieftje, Gary M

    2014-05-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.

  2. Measurement and Visualization of Mass Transport for the Flowing Atmospheric Pressure Afterglow (FAPA) Ambient Mass-Spectrometry Source

    NASA Astrophysics Data System (ADS)

    Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.

    2014-05-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.

  3. Tailored Fano resonance and localized electromagnetic field enhancement in Ag gratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhaozhu; Klopf, J. Michael; Wang, Lei

    Metallic gratings can support Fano resonances when illuminated with EM radiation, and their characteristic reflectivity versus incident angle lineshape can be greatly affected by the surrounding dielectric environment and the grating geometry. By using conformal oblique incidence thin film deposition onto an optical grating substrate, it is possible to increase the grating amplitude due to shadowing effects, thereby enabling tailoring of the damping processes and electromagnetic field couplings of the Fano resonances, hence optimizing the associated localized electric field intensity. To investigate these effects we compare the optical reflectivity under resonance excitation in samples prepared by oblique angle deposition (OAD) and under normal deposition (ND) onto the same patterned surfaces. We observe that by applying the OAD method, the sample exhibits a deeper and narrower reflectivity dip at resonance than that obtained under ND. This can be explained in terms of a lower damping of the Fano resonance on the obliquely deposited sample and leads to a stronger localized electric field. This approach opens a fabrication path for applications where tailoring the electromagnetic field induced by Fano resonance can improve the figure of merit of specific device characteristics, e.g. quantum efficiency (QE) in grating-based metallic photocathodes.

  4. Tailored Fano resonance and localized electromagnetic field enhancement in Ag gratings

    DOE PAGES

    Li, Zhaozhu; Klopf, J. Michael; Wang, Lei; ...

    2017-03-14

    Metallic gratings can support Fano resonances when illuminated with EM radiation, and their characteristic reflectivity versus incident angle lineshape can be greatly affected by the surrounding dielectric environment and the grating geometry. By using conformal oblique incidence thin film deposition onto an optical grating substrate, it is possible to increase the grating amplitude due to shadowing effects, thereby enabling tailoring of the damping processes and electromagnetic field couplings of the Fano resonances, hence optimizing the associated localized electric field intensity. To investigate these effects we compare the optical reflectivity under resonance excitation in samples prepared by oblique angle deposition (OAD) and under normal deposition (ND) onto the same patterned surfaces. We observe that by applying the OAD method, the sample exhibits a deeper and narrower reflectivity dip at resonance than that obtained under ND. This can be explained in terms of a lower damping of the Fano resonance on the obliquely deposited sample and leads to a stronger localized electric field. This approach opens a fabrication path for applications where tailoring the electromagnetic field induced by Fano resonance can improve the figure of merit of specific device characteristics, e.g. quantum efficiency (QE) in grating-based metallic photocathodes.

  5. A Tunable Reentrant Resonator with Transverse Orientation of Electric Field for in Vivo EPR Spectroscopy

    NASA Astrophysics Data System (ADS)

    Chzhan, Michael; Kuppusamy, Periannan; Samouilov, Alexandre; He, Guanglong; Zweier, Jay L.

    1999-04-01

    There has been a need for development of microwave resonator designs optimized to provide high sensitivity and high stability for EPR spectroscopy and imaging measurements of in vivo systems. The design and construction of a novel reentrant resonator with transversely oriented electric field (TERR) and rectangular sample opening cross section for EPR spectroscopy and imaging of in vivo biological samples, such as the whole body of mice and rats, is described. This design with its transversely oriented capacitive element enables wide and simple setting of the center frequency by trimming the dimensions of the capacitive plate over the range 100-900 MHz with unloaded Q values of approximately 1100 at 750 MHz, while the mechanical adjustment mechanism allows smooth continuous frequency tuning in the range ±50 MHz. This orientation of the capacitive element limits the electric-field-based loss of resonator Q observed with large lossy samples, and it facilitates the use of capacitive coupling. Both microwave performance data and EPR measurements of aqueous samples demonstrate high sensitivity and stability of the design, which make it well suited for in vivo applications.

  6. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
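
    The element-based summation mentioned above can be sketched as a sum of simple basis fields whose weights vary over time; the source and vortex elements and the linear keyframe blend below are illustrative assumptions, not the paper's design system:

```python
def source(px, py, cx, cy):
    # Radial (source) basis element centered at (cx, cy).
    dx, dy = px - cx, py - cy
    r2 = dx * dx + dy * dy + 1e-9
    return dx / r2, dy / r2

def vortex(px, py, cx, cy):
    # Rotational (vortex) basis element centered at (cx, cy).
    dx, dy = px - cx, py - cy
    r2 = dx * dx + dy * dy + 1e-9
    return -dy / r2, dx / r2

def field(px, py, t):
    """Blend element weights linearly from a source (t=0) to a vortex (t=1)."""
    w = min(max(t, 0.0), 1.0)
    sx, sy = source(px, py, 0.0, 0.0)
    vx, vy = vortex(px, py, 0.0, 0.0)
    return (1 - w) * sx + w * vx, (1 - w) * sy + w * vy

vx, vy = field(1.0, 0.0, 0.0)   # pure source: flow points outward along +x
print(round(vx, 6), round(vy, 6))
```

    A full design system would fit such element weights to user-specified streamlines, pathlines, and singularity paths via the constrained optimizations the abstract describes; the sketch only shows the summation backbone.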

  7. An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier

    NASA Astrophysics Data System (ADS)

    Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim

    2017-05-01

    The design of optical-electronic systems (OES) involves selecting technical solutions that, under the given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an OES for any purpose, and its most important capability, is threshold detection; the required functional quality of the device or system is achieved on the basis of this property. The initial criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimal selection of expected (predetermined) signals under the specified observation conditions. Thus the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.

  8. Quantification of Cannabinoid Content in Cannabis

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
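
    The band-selection step, correlating reflectance at each waveband with THC content, can be sketched as follows. The reflectance spectra and THC values here are synthetic, with the signal planted at 695 nm to mirror the study's finding; they are not the study's measurements.

```python
import random, statistics

random.seed(0)
bands = list(range(400, 1001, 5))           # wavelengths in nm
thc = [random.uniform(0.2, 12.0) for _ in range(30)]   # 30 toy leaf samples

def reflectance(band, t):
    # Synthetic response: only the 695 nm band tracks THC; the rest is noise.
    signal = -0.02 * t if band == 695 else 0.0
    return 0.5 + signal + random.gauss(0, 0.01)

spectra = {b: [reflectance(b, t) for t in thc] for b in bands}

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

best = max(bands, key=lambda b: abs(pearson(spectra[b], thc)))
print(best)  # the planted signal makes 695 nm the most correlated band
```

    The study's stepwise multivariate regression would then build a predictive model from the top-ranked bands rather than stopping at a single correlation.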

  9. Design and testing of access-tube TDR soil water sensor

    USDA-ARS?s Scientific Manuscript database

    We developed the design of a waveguide on the exterior of an access tube for use in time-domain reflectometry (TDR) for in-situ soil water content sensing. In order to optimize the design with respect to sampling volume and losses, we derived the electromagnetic (EM) fields produced by a TDR sensor...

  10. Optimizing a community-engaged multi-level group intervention to reduce substance use: an application of the multiphase optimization strategy.

    PubMed

    Windsor, Liliane Cambraia; Benoit, Ellen; Smith, Douglas; Pinto, Rogério M; Kugler, Kari C

    2018-04-27

    Rates of alcohol and illicit drug use (AIDU) are consistently similar across racial groups (Windsor and Negi, J Addict Dis 28:258-68, 2009; Keyes et al. Soc Sci Med 124:132-41, 2015). Yet AIDU has significantly higher consequences for residents in distressed communities with concentrations of African Americans (DCAA, i.e., localities with high rates of poverty and crime), who also have considerably less access to effective treatment of substance use disorders (SUD). This project is optimizing Community Wise, an innovative multi-level behavioral-health intervention created in partnership with service providers and residents of distressed communities with histories of SUD and incarceration, to reduce health inequalities related to AIDU. Grounded in critical consciousness theory, community-based participatory research (CBPR) principles, and the multiphase optimization strategy (MOST), this study employs a 2 × 2 × 2 × 2 factorial design to engineer the most efficient, effective, and scalable version of Community Wise that can be delivered for US$250 per person or less. This study is fully powered to detect change in AIDU in a sample of 528 men with histories of SUD and incarceration, residing in Newark, NJ, in the United States. A community collaborative board oversees recruitment using a variety of strategies including indigenous field worker sampling, facility-based sampling, community advertisement through fliers, and street outreach. Participants are randomly assigned to one of 16 conditions that include a combination of the following candidate intervention components: peer or licensed facilitator, group dialogue, personal goal development, and community organizing. All participants receive a core critical-thinking component. Data are collected at baseline plus five post-baseline monthly follow-ups. Once the optimized Community Wise intervention is identified, it will be evaluated against an existing standard of care in a future randomized clinical trial.
This paper describes the protocol of the first-ever study using CBPR and MOST to optimize a substance use intervention targeting a marginalized population. Data from this study will culminate in an optimized Community Wise manual; enhanced methodological strategies to develop multi-component scalable interventions using MOST and CBPR; and a better understanding of the application of critical consciousness theory to the field of health inequalities related to AIDU. ClinicalTrials.gov, NCT02951455. Registered on 1 November 2016.
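
    The 2 × 2 × 2 × 2 factorial design amounts to enumerating all two-level settings of the four candidate components; a minimal sketch, with component labels paraphrased from the abstract:

```python
from itertools import product

# Two-level candidate components (labels paraphrased; e.g. the facilitator
# factor is peer vs. licensed rather than simply on/off).
components = ["facilitator_peer_vs_licensed", "group_dialogue",
              "personal_goal_development", "community_organizing"]

# Every participant also receives the core critical-thinking component,
# so it is not a factor in the design.
conditions = [dict(zip(components, levels))
              for levels in product((0, 1), repeat=4)]

print(len(conditions))  # 16 experimental conditions
```

    MOST then estimates each component's main effect and interactions from these 16 cells to decide which components earn a place in the optimized intervention.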

  11. What do we need to measure, how much, and where? A quantitative assessment of terrestrial data needs across North American biomes through data-model fusion and sampling optimization

    NASA Astrophysics Data System (ADS)

    Dietze, M. C.; Davidson, C. D.; Desai, A. R.; Feng, X.; Kelly, R.; Kooper, R.; LeBauer, D. S.; Mantooth, J.; McHenry, K.; Serbin, S. P.; Wang, D.

    2012-12-01

    Ecosystem models are designed to synthesize our current understanding of how ecosystems function and to predict responses to novel conditions, such as climate change. Reducing uncertainties in such models can thus improve both basic scientific understanding and our predictive capacity, but rarely have the models themselves been employed in the design of field campaigns. In the first part of this paper we provide a synthesis of uncertainty analyses conducted using the Predictive Ecosystem Analyzer (PEcAn) ecoinformatics workflow on the Ecosystem Demography model v2 (ED2). This work spans a number of projects synthesizing trait databases and using Bayesian data assimilation techniques to incorporate field data across temperate forests, grasslands, agriculture, short rotation forestry, boreal forests, and tundra. We report on a number of data needs that span a wide array of diverse biomes, such as the need for better constraint on growth respiration. We also identify other data needs that are biome specific, such as reproductive allocation in tundra, leaf dark respiration in forestry and early-successional trees, and root allocation and turnover in mid- and late-successional trees. Future data collection needs to balance the unequal distribution of past measurements across biomes (temperate biased) and processes (aboveground biased) with the sensitivities of different processes. In the second part we present the development of a power analysis and sampling optimization module for the PEcAn system. This module uses the results of variance decomposition analyses to estimate the further reduction in model predictive uncertainty for different sample sizes of different variables. By assigning a cost to each measurement type, we apply basic economic theory to optimize the reduction in model uncertainty for any total expenditure, or to determine the cost required to reduce uncertainty to a given threshold.
Using this system we find that sampling switches among multiple measurement types but favors those with no prior measurements, due to the need to integrate over prior uncertainty in within- and among-site variability. When starting from scratch in a new system, the optimal design favors initial measurements of SLA due to high sensitivity and low cost. The value of many data types, such as photosynthetic response curves, depends strongly on whether one includes initial equipment costs or just per-sample costs. Similarly, sampling at previously measured locations is favored when infrastructure costs are high; otherwise across-site sampling is favored over intensive sampling, except when within-site variability strongly dominates.
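
    The economic logic described above can be illustrated with a greedy sketch: repeatedly buy the measurement with the largest expected variance reduction per unit cost until the budget is spent. The variance contributions, costs, and halving-per-sample rule below are invented for illustration and are not PEcAn outputs.

```python
# Hypothetical per-measurement contributions to predictive variance and
# per-sample costs; assume each additional sample halves the contribution.
variance = {"SLA": 40.0, "root_turnover": 25.0, "dark_respiration": 18.0}
cost = {"SLA": 1.0, "root_turnover": 5.0, "dark_respiration": 3.0}

def plan(budget):
    """Greedily spend the budget on the best variance reduction per cost."""
    v = dict(variance)
    spent, order = 0.0, []
    while True:
        gain = {k: (v[k] / 2) / cost[k] for k in v}   # reduction per dollar
        best = max(gain, key=gain.get)
        if spent + cost[best] > budget:
            break
        v[best] /= 2
        spent += cost[best]
        order.append(best)
    return order

print(plan(4.0))  # cheap, high-sensitivity SLA dominates early sampling
```

    Even this toy version reproduces the qualitative finding above: with these made-up numbers, the cheap, high-variance measurement is bought first, and the optimal mix shifts as its marginal value is driven down.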

  12. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
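
    The maxi-min objective can be sketched as scoring each candidate FFM design by its minimum detectability over sample locations and keeping the design with the largest minimum. The one-parameter Gaussian fluence surrogate below is a made-up stand-in for the actual d' model and the CMA-ES search over basis-function coefficients:

```python
import math

locations = [0.0, 0.25, 0.5, 0.75, 1.0]   # normalized sample locations

def detectability(ffm_width, loc):
    # Toy surrogate for d': a Gaussian fluence bump centered on the object.
    return math.exp(-((loc - 0.5) ** 2) / (2 * ffm_width ** 2))

def maximin_score(ffm_width):
    # Maxi-min objective: the worst-case detectability across locations.
    return min(detectability(ffm_width, x) for x in locations)

candidates = [0.2, 0.4, 0.8, 1.6]          # hypothetical design parameters
best = max(candidates, key=maximin_score)
print(best)  # the widest bump homogenizes d' and wins the maxi-min score
```

    Maximizing the worst case, rather than the average, is what drives the homogenization of detectability reported above.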

  13. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    NASA Astrophysics Data System (ADS)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.
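
    The virtual-sampling procedure can be sketched as drawing funnel positions within a sub-plot of a given extent from a simulated throughfall field and comparing the sample mean to the field mean. The autocorrelated field below is synthetic, not the authors' simulated fields:

```python
import random, statistics

random.seed(42)
SIZE = 100                                   # field is SIZE x SIZE cells

def make_field():
    # Crude spatial autocorrelation: smooth white noise with a moving average.
    noise = [[random.gauss(100, 30) for _ in range(SIZE)] for _ in range(SIZE)]
    return [[statistics.fmean(noise[i][max(0, j - 5):j + 5])
             for j in range(SIZE)] for i in range(SIZE)]

field = make_field()
true_mean = statistics.fmean(v for row in field for v in row)

def sample_mean(extent, n):
    """Mean of n virtual funnels placed in an extent x extent sub-plot."""
    return statistics.fmean(field[random.randrange(extent)]
                            [random.randrange(extent)] for _ in range(n))

for extent in (20, 100):
    errs = [abs(sample_mean(extent, 25) - true_mean) for _ in range(200)]
    print(extent, round(statistics.fmean(errs), 2))
```

    With an autocorrelated field, a small extent samples only a few correlation lengths, so its plot-scale mean can sit away from the field mean regardless of how many funnels are placed; this is the extent effect the study quantifies.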

  14. Low-Field and High-Field Characterization of THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Ounaies, Z.; Mossi, K.; Smith, R.; Bernd, J.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    THUNDER (THin UNimorph DrivER) actuators are pre-stressed piezoelectric devices developed at NASA Langley Research Center (LaRC) that exhibit enhanced strain capabilities. As a result, they are of interest in a variety of aerospace applications. Characterization of their performance as a function of electric field, temperature and frequency is needed in order to optimize their operation. Towards that end, a number of THUNDER devices were obtained from FACE International Co. with a stainless steel substrate varying in thickness from 1 mil to 20 mils. The various devices were evaluated to determine low-field and high-field displacement as well as the polarization hysteresis loops. The thermal stability of these drivers was evaluated by two different methods. First, the samples were thermally cycled under electric field by systematically increasing the maximum temperature from 25 C to 200 C while the displacement was being measured. Second, the samples were isothermally aged at 0 C, 50 C, 100 C, and 150 C in air, and the isothermal decay of the displacement was measured at room temperature as a function of time.

  15. Adaptive Metropolis Sampling with Product Distributions

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(x). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the {x(t' less than t)} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
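    A minimal sketch of the idea (not the authors' implementation): the proposal is an independent "product" Gaussian whose per-coordinate moments are periodically refit from the walk history; for Gaussian-like targets, moment matching gives a mean-field approximation in the spirit described above. Continual adaptation technically perturbs MH invariance, a detail glossed over here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: a correlated 2-D Gaussian (unnormalized log-density).
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)
log_pi = lambda x: -0.5 * x @ prec @ x

def log_q(x, mu, sig):
    # Log-density of the independent (product) Gaussian proposal.
    return -0.5 * np.sum(((x - mu) / sig) ** 2) - np.sum(np.log(sig))

def adaptive_mh(n_steps=20000, adapt_every=500):
    mu, sig = np.zeros(2), np.ones(2)   # initial product proposal
    x = np.zeros(2)
    chain = []
    for t in range(n_steps):
        xp = mu + sig * rng.normal(size=2)          # independence proposal
        log_a = (log_pi(xp) - log_pi(x)) + (log_q(x, mu, sig) - log_q(xp, mu, sig))
        if np.log(rng.random()) < log_a:            # MH accept/reject
            x = xp
        chain.append(x)
        # Periodically refit the product proposal by moment matching.
        if (t + 1) % adapt_every == 0:
            hist = np.array(chain)
            mu, sig = hist.mean(0), hist.std(0) + 1e-3
    return np.array(chain)

chain = adaptive_mh()
print(chain[5000:].mean(0), chain[5000:].std(0))  # marginals near (0, 1)
```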

  16. Optimization of pulsed electric field pre-treatments to enhance health-promoting glucosinolates in broccoli flowers and stalk.

    PubMed

    Aguiló-Aguayo, Ingrid; Suarez, Manuel; Plaza, Lucia; Hossain, Mohammad B; Brunton, Nigel; Lyng, James G; Rai, Dilip K

    2015-07-01

    The effect of pulsed electric field (PEF) treatment variables (electric field strength and treatment time) on the glucosinolate content of broccoli flowers and stalks was evaluated. Samples were subjected to electric field strengths from 1 to 4 kV cm⁻¹ and treatment times from 50 to 1000 µs at 5 Hz. The data fitted the proposed second-order response functions significantly (P < 0.0014). The results showed that combined PEF treatment conditions of 4 kV cm⁻¹ for 525 and 1000 µs were optimal to maximize glucosinolate levels in broccoli flowers (ranging from 187.1 to 212.5%) and stalks (ranging from 110.6 to 203.0%), respectively. The predicted values from the developed quadratic polynomial equation were in close agreement with the actual experimental values, with low average mean deviations (E%) ranging from 0.59 to 8.80%. The use of PEF processing at moderate conditions could be a suitable method to stimulate production of broccoli with high health-promoting glucosinolate content. © 2014 Society of Chemical Industry.

  17. Monitoring/Verification using DMS: TATP Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephan Weeks, Kevin Kyle, Manuel Manard

    Field-rugged and field-programmable differential mobility spectrometry (DMS) networks provide highly selective, universal monitoring of vapors and aerosols at detectable levels from persons or areas involved with illicit chemical/biological/explosives (CBE) production. CBE sensor motes used in conjunction with automated fast gas chromatography with DMS detection (GC/DMS) verification instrumentation integrated into situational operations-management systems can be readily deployed and optimized for changing application scenarios. The feasibility of developing selective DMS motes for a “smart dust” sampling approach with guided, highly selective, fast GC/DMS verification analysis is a compelling approach to minimize or prevent the illegal use of explosives or chemical and biological materials. DMS is currently one of the foremost emerging technologies for field separation and detection of gas-phase chemical species, owing to trace-level detection limits, high selectivity, and small size. Fast GC is the leading field analytical method for gas-phase separation of chemical species in complex mixtures. Low-thermal-mass GC columns have led to compact, low-power field systems capable of complete analyses in 15–300 seconds. A collaborative effort optimized a handheld, fast GC/DMS, equipped with a non-radioactive ionization source, for peroxide-based explosive measurements.

  18. Landcover Based Optimal Deconvolution of PALS L-band Microwave Brightness Temperature

    NASA Technical Reports Server (NTRS)

    Limaye, Ashutosh S.; Crosson, William L.; Laymon, Charles A.; Njoku, Eni G.

    2004-01-01

    An optimal deconvolution (ODC) technique has been developed to estimate microwave brightness temperatures of agricultural fields using microwave radiometer observations. The technique is applied to airborne measurements taken by the Passive and Active L and S band (PALS) sensor in Iowa during the Soil Moisture Experiments in 2002 (SMEX02). Agricultural fields in the study area were predominantly soybeans and corn. The brightness temperatures of corn and soybeans were observed to be significantly different because of large differences in vegetation biomass. PALS observations have significant over-sampling; observations were made about 100 m apart, while the sensor footprint extends to about 400 m. Conventionally, observations of this type are averaged to produce smooth spatial fields of brightness temperature. However, the conventional approach is at odds with reality, in which brightness temperatures are in fact strongly dependent on landcover, which is characterized by sharp boundaries. In this study, we mathematically deconvolve the observations into brightness temperatures at the field scale (500-800 m) using the sensor antenna response function. The result is a more accurate spatial representation of field-scale brightness temperatures, which may in turn lead to more accurate soil moisture retrieval.
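    The principle can be illustrated in one dimension with a hypothetical Gaussian antenna pattern (not the actual PALS response, and toy temperature values): the oversampled observations are a known linear blur of field-scale brightness temperatures, so a least-squares inversion recovers the sharp landcover contrast that simple averaging smears.

```python
import numpy as np

rng = np.random.default_rng(2)

# True field-scale brightness temperatures (K) for 6 adjacent fields,
# alternating corn / soybean -- sharp landcover boundaries (toy values).
T_field = np.array([270.0, 245.0, 270.0, 245.0, 270.0, 245.0])

# Antenna gain model: observations spaced much closer than the footprint,
# so each observation is a weighted mix of neighbouring fields.
n_obs, n_fields = 24, 6
centers = np.linspace(0, n_fields - 1, n_obs)
A = np.exp(-0.5 * ((np.arange(n_fields) - centers[:, None]) / 0.5) ** 2)
A /= A.sum(axis=1, keepdims=True)  # normalized antenna response rows

T_obs = A @ T_field + rng.normal(0, 0.3, n_obs)  # noisy oversampled data

# Conventional approach: averaging blurs the boundaries.
# Optimal deconvolution: invert the known antenna response in least squares.
T_est, *_ = np.linalg.lstsq(A, T_obs, rcond=None)
print(np.round(T_est, 1))  # recovers the alternating corn/soybean pattern
```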

  19. Design of Interactively Time-Pulsed Microfluidic Mixers in Microchips using Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Fu, Lung-Ming; Tsai, Chien-Hsiung

    2007-01-01

    In this paper, we propose a novel technique in which driving voltages are applied interactively to the respective inlet fluid flows of three configurations of a microfluidic device, namely T-shaped, double-T-shaped, and double-cross-shaped configurations, to induce electroosmotic flow (EOF) velocity variations in such a way as to develop a rapid mixing effect in the microchannel. In these configurations the microfluidic mixer applies only one electrokinetic driving force, which drives the sample fluids and simultaneously produces a periodic switching frequency; no other external driving force is required to induce perturbations to the flow field. The effects of the main applied electric field, the interactive frequency, and the pullback electric field on the mixing performance are thoroughly examined numerically. The optimal interactive frequency range for a given set of micromixer parameters is identified for each type of control mode. The numerical results confirm that micromixers operating at an optimal interactive frequency are capable of delivering a significantly enhanced mixing performance. Furthermore, it is shown that the optimal interactive frequency depends upon the magnitude of the main applied electric field. The interactively pulsed mixers developed in this study have a strong potential for use in lab-on-a-chip systems. They involve a simpler fabrication process than either passive or active on-chip mixers and require less human intervention in operation than their bulky external counterparts.

  20. Microwave absorption in powders of small conducting particles for heating applications.

    PubMed

    Porch, Adrian; Slocombe, Daniel; Edwards, Peter P

    2013-02-28

    In microwave chemistry there is a common misconception that small, highly conducting particles heat profusely when placed in a large microwave electric field. However, this is not the case, with the simple physical explanation that the electric field (which drives the heating) within a highly conducting particle is strongly screened. Instead, it is the magnetic absorption associated with induction that accounts for the large experimental heating rates observed for small metal particles. We present simple principles for the effective heating of particles in microwave fields from calculations of electric and magnetic dipole absorptions for a range of practical values of particle size and conductivity. For highly conducting particles, magnetic absorption dominates electric absorption over a wide range of particle radii, with an optimum absorption set by the ratio of mean particle radius a to the skin depth δ (specifically, by the condition a = 2.41δ). This means that for particles of any conductivity, optimized magnetic absorption (and hence microwave heating by magnetic induction) can be achieved by simple selection of the mean particle size. For weakly conducting samples, electric dipole absorption dominates, and is maximized when the conductivity is σ ≈ 3ωε₀ ≈ 0.4 S m⁻¹, independent of particle radius. Therefore, although electric dipole heating can be as effective as magnetic dipole heating for a powder sample of the same volume, it is harder to obtain optimized conditions at a fixed frequency of microwave field. The absorption of sub-micron particles is ineffective in both magnetic and electric fields. However, if the particles are magnetic, with a lossy part to their complex permeability, then magnetic dipole losses are dramatically enhanced compared to their values for non-magnetic particles. An interesting application of this is the use of very small magnetic particles for the selective microwave heating of biological samples.
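    The a = 2.41δ condition quoted above translates directly into a particle-size prescription once the skin depth is computed; a quick numerical check (conductivity values are textbook approximations):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def skin_depth(sigma, f):
    """Classical skin depth delta = sqrt(2 / (mu0 * sigma * omega))."""
    return np.sqrt(2.0 / (MU0 * sigma * 2 * np.pi * f))

def optimal_radius(sigma, f):
    """Particle radius maximizing magnetic dipole absorption,
    a = 2.41 * delta (the condition quoted in the abstract)."""
    return 2.41 * skin_depth(sigma, f)

f = 2.45e9  # common microwave applicator frequency (Hz)
for name, sigma in [("copper", 5.8e7), ("graphite", 7e4)]:
    d = skin_depth(sigma, f)
    print(f"{name}: delta = {d * 1e6:.2f} um, "
          f"optimal radius = {optimal_radius(sigma, f) * 1e6:.2f} um")
```

    For copper at 2.45 GHz the skin depth is about 1.3 µm, so the optimal particle radius for induction heating is only a few microns, consistent with the claim that sub-micron conducting particles absorb poorly.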

  1. Optimized co-extraction and quantification of DNA from enteric pathogens in surface water samples near produce fields in California

    USDA-ARS?s Scientific Manuscript database

    Pathogen contamination of surface water is a health hazard in agricultural environments primarily due to the potential for contamination of crops. Furthermore, pathogen levels in surface water are often unreported or under reported due to difficulty with culture of the bacteria. The pathogens are of...

  2. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method of modeling and optimization design performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
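    The three-stage pipeline (space-filling design → quadratic response surface → genetic algorithm on the surrogate) can be sketched as follows, with a cheap analytic function standing in for the expensive CFD solver and a deliberately simple GA; all settings are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the expensive CFD evaluation (hypothetical objective):
# minimize f over the design domain [-2, 2]^2.
def cfd(x):
    return (x[0] - 0.5) ** 2 + 2 * (x[1] + 0.3) ** 2 + 0.1 * x[0] * x[1]

# 1) Space-filling design: sample the experimental domain, build a database.
X = np.array([[a, b] for a in np.linspace(-2, 2, 5)
                     for b in np.linspace(-2, 2, 5)])
y = np.array([cfd(x) for x in X])

# 2) Response surface: full quadratic model fit by least squares.
def basis(x):
    return np.array([1, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])
coef, *_ = np.linalg.lstsq(np.array([basis(x) for x in X]), y, rcond=None)
surrogate = lambda x: basis(x) @ coef

# 3) Simple genetic algorithm run on the cheap surrogate.
pop = rng.uniform(-2, 2, (40, 2))
for gen in range(60):
    fit = np.array([surrogate(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]            # truncation selection
    idx = rng.integers(0, 20, (40, 2))
    pop = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])  # blend crossover
    pop += rng.normal(0, 0.05, pop.shape)          # mutation
    pop = np.clip(pop, -2, 2)

best = pop[np.argmin([surrogate(p) for p in pop])]
print(np.round(best, 2))  # close to the true optimum near [0.52, -0.31]
```

    Because all GA evaluations hit the surrogate rather than the solver, the expensive model is queried only for the initial database, which is the point of the surrogate-based design.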

  3. Determination of MLC model parameters for Monaco using commercial diode arrays.

    PubMed

    Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian

    2016-07-08

    Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in the Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters was delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose from an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3%/2 mm gamma criteria. Given the close agreement of the optimized model with both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates ranging from 70.0% to 97.9% for 3%/2 mm gamma criteria. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the in-field dose for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. 
Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors
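    The 3%/2 mm global gamma criterion used throughout this study can be sketched for a 1-D profile (real QA software works on 2-D/3-D dose grids with interpolation; the profiles here are toy data):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=2.0):
    """Global gamma pass rate for 1-D dose profiles on positions x (mm).
    dose_tol is a fraction of the global maximum; dist_tol is in mm."""
    dmax = dose_ref.max()
    passed = 0
    for xi, di in zip(x, dose_ref):
        # gamma = min over evaluated points of the combined dose/distance metric
        g2 = ((dose_eval - di) / (dose_tol * dmax)) ** 2 + ((x - xi) / dist_tol) ** 2
        if np.sqrt(g2.min()) <= 1.0:
            passed += 1
    return passed / len(x)

x = np.arange(0, 100, 1.0)                       # 1 mm grid
ref = 100 * np.exp(-0.5 * ((x - 50) / 15) ** 2)  # measured profile (toy)
calc_good = ref * 1.01                           # uniform 1% dose difference
calc_shift = np.interp(x - 5, x, ref)            # 5 mm positional error

print(gamma_pass_rate(ref, calc_good, x))    # passes everywhere
print(gamma_pass_rate(ref, calc_shift, x))   # fails in the gradient regions
```

    A uniform 1% dose error passes the 3% tolerance at zero distance, while a 5 mm shift exceeds the 2 mm distance-to-agreement in steep-gradient regions, which is exactly the behaviour that makes gamma analysis sensitive to both dose and positional errors.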

  4. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. 
Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
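    The randomized trace estimation mentioned above is typically the Hutchinson estimator, which needs only operator-vector products; here is a dense-matrix sketch (the SPD matrix is a stand-in for the posterior covariance operator, which in the OED setting is only available through PDE solves):

```python
import numpy as np

rng = np.random.default_rng(4)

def randomized_trace(apply_A, n, n_probes=100):
    """Hutchinson estimator: tr(A) ~= (1/k) sum_i z_i^T A z_i with
    Rademacher probe vectors z_i. Only matrix-vector products are needed;
    in the OED setting each product with the posterior covariance costs a
    pair of PDE solves, so the trace is estimated without forming A."""
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ apply_A(z)
    return est / n_probes

# Toy stand-in for the posterior covariance: an SPD matrix we can check against.
n = 200
B = rng.normal(size=(n, n))
A = B @ B.T / n + np.eye(n)

est = randomized_trace(lambda v: A @ v, n, n_probes=200)
print(est, np.trace(A))  # the estimate agrees closely with the exact trace
```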

  5. Optimal Design of River Monitoring Network in Taizihe River by Matter Element Analysis

    PubMed Central

    Wang, Hui; Liu, Zhe; Sun, Lina; Luo, Qing

    2015-01-01

    The objective of this study is to optimize the river monitoring network in the Taizihe River, Northeast China. The state of the network and the water characteristics were studied in this work. Water samples were collected once a month from January 2009 to December 2010 at seventeen sites, and 16 monitoring indices were analyzed in the field and laboratory. The pH value of the surface water samples ranged from 6.83 to 9.31, and the average concentrations of NH4+-N, chemical oxygen demand (COD), volatile phenol and total phosphorus (TP) decreased significantly; the water quality of the river improved from 2009 to 2010. Calculation of data availability and of the correlation between adjacent sections showed that the present monitoring network was inefficient and that optimization was indispensable. To improve the situation, matter element analysis and gravity distance were applied to the optimization of the river monitoring network and proved to be a useful method for optimizing a river quality monitoring network. The number of monitoring sections was cut from 17 to 13, making the monitoring network more cost-effective after optimization. The results of this study could be used in developing effective management strategies to improve the environmental quality of the Taizihe River. They also show that the proposed model can be effectively used for the optimal design of monitoring networks in river systems. PMID:26023785
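    The adjacent-section correlation screening mentioned above can be sketched with synthetic monitoring data (all values hypothetical): a high correlation between neighbouring sections flags a section that adds little information and is a candidate for removal.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monthly water-quality index for 5 monitoring sections over
# 24 months; sections 2 and 3 are made nearly redundant by construction.
base = rng.normal(0, 1, 24)              # shared river-wide signal
sections = np.stack([
    base + rng.normal(0, 1.0, 24),       # section 0
    base + rng.normal(0, 1.0, 24),       # section 1
    base + rng.normal(0, 0.1, 24),       # section 2
    base + rng.normal(0, 0.1, 24),       # section 3: near-duplicate of 2
    rng.normal(0, 1, 24),                # section 4: independent tributary
])

# Correlation between adjacent sections: a value near 1 suggests the
# downstream section is redundant and could be cut from the network.
for i in range(len(sections) - 1):
    r = np.corrcoef(sections[i], sections[i + 1])[0, 1]
    print(f"sections {i}-{i + 1}: r = {r:.2f}")
```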

  6. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Magnetic solid phase extraction of gemfibrozil from human serum and pharmaceutical wastewater samples utilizing a β-cyclodextrin grafted graphene oxide-magnetite nano-hybrid.

    PubMed

    Abdolmohammad-Zadeh, Hossein; Talleb, Zeynab

    2015-03-01

    A magnetic solid phase extraction method based on a β-cyclodextrin (β-CD) grafted graphene oxide (GO)/magnetite (Fe3O4) nano-hybrid as an innovative adsorbent was developed for the separation and pre-concentration of gemfibrozil prior to its determination by spectrofluorometry. The as-prepared β-CD/GO/Fe3O4 nano-hybrid possesses the magnetic properties of Fe3O4 nano-particles, which make it easily manipulated by an external magnetic field. On the other hand, the surface modification of GO by β-CD leads to selective separation of the target analyte from sample matrices. The structure and morphology of the synthesized adsorbent were characterized using powder X-ray diffraction, Fourier transform infrared spectroscopy, and field emission scanning electron microscopy. The experimental factors affecting the extraction/pre-concentration and determination of the analyte were investigated and optimized. Under the optimized experimental conditions, the calibration graph was linear in the range between 10 and 5000 pg mL⁻¹ with a correlation coefficient of 0.9989. The limit of detection and enrichment factor for gemfibrozil were 3 pg mL⁻¹ and 100, respectively. The maximum sorption capacity of the adsorbent for gemfibrozil was 49.8 mg g⁻¹. The method was successfully applied to monitoring gemfibrozil in human serum and pharmaceutical wastewater samples, with recoveries in the range of 96.0-104.0% for the spiked samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Exponentially-Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians

    NASA Technical Reports Server (NTRS)

    Mandra, Salvatore

    2017-01-01

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure fair sampling of the ground-state manifold.
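    The fair-sampling benchmark can be made concrete on a classical toy instance: an antiferromagnetic triangle has a six-fold degenerate ground state, and an unbiased sampler (here low-temperature Metropolis, standing in for an ideal annealer) should visit all six states with near-equal frequency.

```python
import collections
import itertools

import numpy as np

rng = np.random.default_rng(5)

# Frustrated Ising instance with a degenerate ground-state manifold:
# an antiferromagnetic triangle, E(s) = sum over edges of s_i * s_j.
edges = [(0, 1), (1, 2), (0, 2)]
def energy(s):
    return sum(s[i] * s[j] for i, j in edges)

# Exhaustive enumeration of the ground-state manifold.
states = list(itertools.product([-1, 1], repeat=3))
e_min = min(energy(s) for s in states)
ground = [s for s in states if energy(s) == e_min]  # 6 degenerate states

# Fair-sampling benchmark: low-temperature single-flip Metropolis should
# visit every ground state with roughly equal probability.
counts = collections.Counter()
s = list(rng.choice([-1, 1], size=3))
for _ in range(60000):
    i = rng.integers(3)
    dE = energy(s[:i] + [-s[i]] + s[i + 1:]) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE / 0.4):
        s[i] = -s[i]
    counts[tuple(s)] += 1

freqs = [counts[g] / sum(counts[g2] for g2 in ground) for g in ground]
print(len(ground), [round(f, 2) for f in freqs])  # 6 states, each near 1/6
```

    The exponential suppression reported for the annealer corresponds to some of these six frequencies being orders of magnitude below 1/6, which this classical sampler does not exhibit.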

  9. 3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy

    PubMed Central

    Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan

    2017-01-01

    High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm2. The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings. PMID:28819645

  10. Improved visualization of breast cancer features in multifocal carcinoma using phase-contrast and dark-field mammography: an ex vivo study.

    PubMed

    Grandl, Susanne; Scherer, Kai; Sztrókay-Gaul, Anikó; Birnbacher, Lorenz; Willer, Konstantin; Chabior, Michael; Herzen, Julia; Mayr, Doris; Auweter, Sigrid D; Pfeiffer, Franz; Bamberg, Fabian; Hellerhoff, Karin

    2015-12-01

    Conventional X-ray attenuation-based contrast is inherently low for the soft-tissue components of the female breast. To overcome this limitation, we investigate the diagnostic merits arising from dark-field mammography by means of certain tumour structures enclosed within freshly dissected mastectomy samples. We performed grating-based absorption, absolute phase and dark-field mammography of three freshly dissected mastectomy samples containing bi- and multifocal carcinoma using a compact, laboratory Talbot-Lau interferometer. Preoperative in vivo imaging (digital mammography, ultrasound, MRI), postoperative histopathological analysis and ex vivo digital mammograms of all samples were acquired for the diagnostic verification of our results. In the diagnosis of multifocal tumour growth, dark-field mammography seems superior to standard breast imaging modalities, providing a better resolution of small, calcified tumour nodules, demarcation of tumour boundaries with desmoplastic stromal response and spiculated soft-tissue strands extending from an invasive ductal breast cancer. On the basis of selected cases, we demonstrate that dark-field mammography is capable of outperforming conventional mammographic imaging of tumour features in both calcified and non-calcified tumours. Presuming dose optimization, our results encourage further studies on larger patient cohorts to identify those patients that will benefit the most from this promising additional imaging modality. • X-ray dark-field mammography provides significantly improved visualization of tumour features • X-ray dark-field mammography is capable of outperforming conventional mammographic imaging • X-ray dark-field mammography provides imaging sensitivity towards highly dispersed calcium grains.

  11. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  12. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  13. 'Nano-immuno test' for the detection of live Mycobacterium avium subspecies paratuberculosis bacilli in the milk samples using magnetic nano-particles and chromogen.

    PubMed

    Singh, Manju; Singh, Shoor Vir; Gupta, Saurabh; Chaubey, Kundan Kumar; Stephan, Bjorn John; Sohal, Jagdip Singh; Dutta, Manali

    2018-04-26

    Early, rapid detection of Mycobacterium avium subspecies paratuberculosis (MAP) bacilli in milk samples is a major challenge, since the traditional culture method is time-consuming and laboratory-dependent. We report a simple, sensitive and specific nanotechnology-based 'nano-immuno test' capable of detecting viable MAP bacilli in milk samples within 10 h. Viable MAP bacilli were captured by MAP-specific antibody-conjugated magnetic nanoparticles, using resazurin dye as the chromogen. The test was optimized using true culture-positive (10 bovine and 12 goat) and true culture-negative (16 bovine and 25 goat) raw milk samples; domestic livestock species in India are endemically infected with MAP. After optimization, the sensitivity and specificity of the 'nano-immuno test' in goats with respect to milk culture were 91.7% and 96.0%, respectively, and 90.0% (sensitivity) and 92.6% (specificity) with respect to IS900 PCR. In bovine milk samples, the sensitivity and specificity with respect to milk culture were 90.0% and 93.7%, respectively; with respect to IS900 PCR, they were 88.9% and 94.1%. The test was validated with field raw milk samples (258 goat and 138 bovine) collected from domestic livestock to detect live (viable) MAP bacilli. Of 138 bovine raw milk samples screened by six diagnostic tests, 81 (58.7%) were positive for MAP infection in at least one test; of these 81, only 24 (17.4% of the 138) were positive for viable MAP bacilli. Of 258 goat raw milk samples screened by the six tests, 141 (54.6%) were positive in at least one test; of these 141, only 48 (34.0%) were positive for live MAP bacilli. 
The simplicity and efficiency of this novel 'nano-immuno test' make it suitable for wide-scale screening of milk samples in the field. Standardization and validation of the test, and re-usability of the functionalized nanoparticles, were successfully achieved on field samples. The test is highly specific, simple to perform, and easy to read with the naked eye, and it does not require laboratory support. It has the potential to be used as a screening test to estimate the bio-load of MAP in milk samples at the national level.
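
The goat-milk figures above are consistent with simple confusion-matrix arithmetic: 11 of 12 culture-positive and 24 of 25 culture-negative samples correctly classified give exactly 91.7% and 96.0%. A minimal sketch (the individual confusion counts are inferred for illustration, not stated in the abstract):

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of culture-positive samples the test flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of culture-negative samples the test clears."""
    return tn / (tn + fp)

# Goat milk vs. culture: 12 positives, 25 negatives (counts inferred, an assumption).
sens_goat = sensitivity(tp=11, fn=1)   # 11 of 12 positives detected
spec_goat = specificity(tn=24, fp=1)   # 24 of 25 negatives cleared

print(round(100 * sens_goat, 1))  # 91.7
print(round(100 * spec_goat, 1))  # 96.0
```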

  14. Boltzmann sampling from the Ising model using quantum heating of coupled nonlinear oscillators.

    PubMed

    Goto, Hayato; Lin, Zhirong; Nakamura, Yasunobu

    2018-05-08

    A network of Kerr-nonlinear parametric oscillators without dissipation has recently been proposed for solving combinatorial optimization problems via quantum adiabatic evolution through its bifurcation point. Here we investigate the behavior of the quantum bifurcation machine (QbM) in the presence of dissipation. Our numerical study suggests that the output probability distribution of the dissipative QbM is Boltzmann-like, where the energy in the Boltzmann distribution corresponds to the cost function of the optimization problem. We explain the Boltzmann distribution by generalizing the concept of quantum heating in a single nonlinear oscillator to the case of multiple coupled nonlinear oscillators. The present result also suggests that such driven dissipative nonlinear oscillator networks can be applied to Boltzmann sampling, which is used, e.g., for Boltzmann machine learning in the field of artificial intelligence.
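
For readers unfamiliar with Boltzmann sampling of an Ising cost function: a classical Metropolis chain converges to the same Boltzmann distribution that the dissipative QbM is observed to approximate. A minimal sketch (the 3-spin couplings are illustrative, not taken from the paper):

```python
import random, math

J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}  # illustrative Ising couplings

def ising_energy(s):
    """Ising cost function E(s) = -sum_{i<j} J_ij * s_i * s_j."""
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def metropolis_sample(n_spins=3, beta=1.0, steps=20000, seed=0):
    """Single-spin-flip Metropolis chain; visit counts approximate exp(-beta*E)/Z."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n_spins)]
    counts = {}
    for _ in range(steps):
        i = rng.randrange(n_spins)
        dE = ising_energy(s[:i] + [-s[i]] + s[i + 1:]) - ising_energy(s)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts

counts = metropolis_sample()
# At beta = 1, low-energy spin configurations dominate the visit counts.
```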

  15. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a genetic programming methodology was developed that may enable automated discovery of entirely new force-field functional forms while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
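
The parameter-fitting half of the method can be sketched with the Lennard-Jones test problem itself: a Metropolis Monte Carlo walk over (epsilon, sigma) against energies generated from a known target potential. The functional form is fixed here, whereas the paper additionally searches over forms with genetic programming; all settings below are illustrative:

```python
import math, random

def lj(r, eps, sigma):
    """Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Synthetic "training" energies from a known target potential (eps=1, sigma=1).
rs = [0.95 + 0.05 * i for i in range(20)]
target = [lj(r, 1.0, 1.0) for r in rs]

def cost(eps, sigma):
    """Sum of squared errors against the target energies."""
    return sum((lj(r, eps, sigma) - t) ** 2 for r, t in zip(rs, target))

# Metropolis importance sampling over parameter space; beta plays the role of
# an inverse temperature (one rung of a parallel-tempering ladder).
rng = random.Random(1)
eps, sigma = 0.5, 1.3
c = cost(eps, sigma)
beta = 50.0
for _ in range(30000):
    e2 = eps + rng.gauss(0, 0.02)
    s2 = sigma + rng.gauss(0, 0.02)
    if s2 <= 0.5:                      # keep sigma in a sane range
        continue
    c2 = cost(e2, s2)
    if c2 <= c or rng.random() < math.exp(-beta * (c2 - c)):
        eps, sigma, c = e2, s2, c2     # accept move
```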

  16. Hartmann-Hahn 2D-map to optimize the RAMP-CPMAS NMR experiment for pharmaceutical materials.

    PubMed

    Suzuki, Kazuko; Martineau, Charlotte; Fink, Gerhard; Steuernagel, Stefan; Taulelle, Francis

    2012-02-01

    Cross polarization-magic angle spinning (CPMAS) is the most widely used experiment for solid-state NMR measurements in the pharmaceutical industry, with the well-known variant RAMP-CPMAS its dominant implementation. The experimental work presented in this contribution focuses on the entangled effects of the main parameters of such an experiment. The shape of the RAMP-CP pulse has been considered as well as the contact time duration, and particular attention has also been devoted to the radio-frequency (RF) field inhomogeneity. (13)C CPMAS NMR spectra have been recorded with a systematic variation of (13)C and (1)H constant radiofrequency field pair values and represented as a Hartmann-Hahn matching two-dimensional map. Such a map yields a rational overview of the intricate optimal conditions necessary to achieve an efficient CP magnetization transfer. The map also highlights the effects of sweeping the RF by the RAMP-CP pulse on the number of Hartmann-Hahn matches crossed, and how RF field inhomogeneity helps increase the CP efficiency by using a larger fraction of the sample. In light of the results, strategies for optimal RAMP-CPMAS measurements are suggested, which lead to a much higher efficiency than the constant-amplitude CP experiment. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Optimization and validation of a minicolumn method for determining aflatoxins in copra meal.

    PubMed

    Arim, R H; Aguinaldo, A R; Tanaka, T; Yoshizawa, T

    1999-01-01

    A minicolumn (MC) method for determining aflatoxins in copra meal was optimized and validated. The method uses methanol-4% KCl solution as extractant and CuSO4 solution as clarifying agent. The chloroform extract is applied to an MC that incorporates "lahar," an indigenous material, as a substitute for silica gel. The "lahar"-containing MC produces a more distinct and intense blue fluorescence on the Florisil layer than an earlier MC. The method has a detection limit of 15 micrograms total aflatoxins/kg sample. Confirmatory tests using 50% H2SO4 and trifluoroacetic acid in benzene with 25% HNO3 showed that copra meal samples contained aflatoxins and no interfering agents. The MC responses of the copra meal samples were in good agreement with their behavior in thin-layer chromatography. This modified MC method is accurate, giving linearity-valid results; rapid, being completed in 15 min; economical, using low volumes of reagents; relatively safe, posing low chemical-exposure risk to analysts; and simple, making its field application feasible.

  18. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which incurs high computational costs. Variable-fidelity approximation-based design optimization approaches can realize efficient simulation and optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of the sample points of the variable-fidelity approximation, called nested designs, is essential. In this article, a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
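
The two building blocks, a Latin hypercube design and the maximin criterion, can be sketched as follows, with cheap random restarts standing in for the successive local enumeration and harmony search of the paper:

```python
import random

def latin_hypercube(n, d, rng):
    """n points in [0,1]^d with exactly one point per stratum in each dimension."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

def min_pairwise_dist(pts):
    """Maximin criterion: the smallest inter-point distance (to be maximized)."""
    return min(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for i, p in enumerate(pts) for q in pts[i + 1:]
    )

# Crude maximin search: keep the best of many random Latin hypercube designs.
rng = random.Random(0)
best = max((latin_hypercube(10, 2, rng) for _ in range(200)), key=min_pairwise_dist)
```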

  19. Optimizing methods and dodging pitfalls in microbiome research.

    PubMed

    Kim, Dorothy; Hofstaedter, Casey E; Zhao, Chunyu; Mattei, Lisa; Tanes, Ceylan; Clarke, Erik; Lauder, Abigail; Sherrill-Mix, Scott; Chehoud, Christel; Kelsen, Judith; Conrad, Máire; Collman, Ronald G; Baldassano, Robert; Bushman, Frederic D; Bittinger, Kyle

    2017-05-05

    Research on the human microbiome has yielded numerous insights into health and disease, but also has resulted in a wealth of experimental artifacts. Here, we present suggestions for optimizing experimental design and avoiding known pitfalls, organized in the typical order in which studies are carried out. We first review best practices in experimental design and introduce common confounders such as age, diet, antibiotic use, pet ownership, longitudinal instability, and microbial sharing during cohousing in animal studies. Typically, samples will need to be stored, so we provide data on best practices for several sample types. We then discuss design and analysis of positive and negative controls, which should always be run with experimental samples. We introduce a convenient set of non-biological DNA sequences that can be useful as positive controls for high-volume analysis. Careful analysis of negative and positive controls is particularly important in studies of samples with low microbial biomass, where contamination can comprise most or all of a sample. Lastly, we summarize approaches to enhancing experimental robustness by careful control of multiple comparisons and to comparing discovery and validation cohorts. We hope the experimental tactics summarized here will help researchers in this exciting field advance their studies efficiently while avoiding errors.

  20. Advanced sampling techniques for hand-held FT-IR instrumentation

    NASA Astrophysics Data System (ADS)

    Arnó, Josep; Frunzi, Michael; Weber, Chris; Levy, Dustin

    2013-05-01

    FT-IR spectroscopy is the technology of choice to identify solid- and liquid-phase unknown samples. The challenging ConOps in emergency response and military field applications require a significant redesign of the stationary FT-IR bench-top instruments typically used in laboratories. Specifically, field portable units require high levels of resistance against mechanical shock and chemical attack, ease of use in restrictive gear, extreme reliability, quick and easy interpretation of results, and reduced size. In the last 20 years, FT-IR instruments have been re-engineered to fit in small suitcases for field portable use and recently further miniaturized for handheld operation. This article introduces the HazMatID™ Elite, an FT-IR instrument designed to balance the portability advantages of a handheld device with the performance challenges associated with miniaturization. In this paper, special focus will be given to the HazMatID Elite's sampling interfaces optimized to collect and interrogate different types of samples: accumulated material using the on-board ATR press, dispersed powders using the ClearSampler™ tool, and the touch-to-sample sensor for direct liquid sampling. The application of the novel sample swipe accessory (ClearSampler) to collect material from surfaces will be discussed in some detail. The accessory was tested and evaluated for the detection of explosive residues before and after detonation. Experimental results derived from these investigations will be described in an effort to outline the advantages of this technology over existing sampling methods.

  1. Local measurement of thermal conductivity and diffusivity.

    PubMed

    Hurley, David H; Schley, Robert S; Khafizov, Marat; Wendt, Brycen L

    2015-12-01

    Simultaneous measurement of local thermal diffusivity and conductivity is demonstrated on a range of ceramic samples. This was accomplished by measuring the spatial profile of the temperature field of samples excited by an amplitude-modulated continuous wave laser beam. A thin gold film is applied to the samples to ensure strong optical absorption and to establish a second boundary condition that introduces an expression containing the substrate thermal conductivity. The diffusivity and conductivity are obtained by comparing the measured phase profile of the temperature field to a continuum-based model. A sensitivity analysis is used to identify the optimal film thickness for extracting both the substrate conductivity and diffusivity. Proof-of-principle studies were conducted on a range of samples having thermal properties representative of current and advanced accident-tolerant nuclear fuels. It is shown that by including the Kapitza resistance as an additional fitting parameter, the measured conductivity and diffusivity of all the samples considered agreed closely with literature values. A distinguishing feature of this technique is that it does not require a priori knowledge of the optical spot size, which greatly increases measurement reliability and reproducibility.
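
The core relation such a phase profile encodes is the 1-D thermal-wave phase lag, phi(x) = x*sqrt(pi*f/alpha), whose slope yields the diffusivity. A bare sketch on noise-free synthetic data (the paper's actual fit uses a continuum model including the gold film and Kapitza resistance; the numbers below are illustrative):

```python
import math

def phase_lag(x, alpha, f):
    """1-D thermal-wave phase lag: phi(x) = x * sqrt(pi * f / alpha)."""
    return x * math.sqrt(math.pi * f / alpha)

alpha_true = 1.1e-5   # m^2/s, illustrative diffusivity
f = 10.0              # laser modulation frequency, Hz
xs = [i * 1e-4 for i in range(1, 11)]        # offsets from the heating spot, m
phis = [phase_lag(x, alpha_true, f) for x in xs]

# Least-squares slope through the origin, then invert for the diffusivity.
slope = sum(x * p for x, p in zip(xs, phis)) / sum(x * x for x in xs)
alpha_fit = math.pi * f / slope ** 2
```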

  2. PubChem3D: Conformer generation

    PubMed Central

    2011-01-01

    Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force-field variant) upon the accuracy of theoretical conformer models, and determine optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. 
Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
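
The regression described in the Conclusion, RMSD as a linear function of atom count and rotor count, can be sketched with ordinary least squares on synthetic records (the coefficients below are illustrative, not PubChem3D's):

```python
import numpy as np

# Synthetic records: (non-H atom count, effective rotor count, ensemble RMSD).
# The linear trend mirrors the paper's observation; coefficients are made up.
rng = np.random.default_rng(0)
atoms = rng.integers(10, 50, size=200)
rotors = rng.integers(0, 15, size=200)
rmsd = 0.2 + 0.01 * atoms + 0.05 * rotors + rng.normal(0, 0.02, size=200)

# Ordinary least squares: rmsd ~ b0 + b1*atoms + b2*rotors.
X = np.column_stack([np.ones_like(atoms, dtype=float), atoms, rotors])
beta, *_ = np.linalg.lstsq(X, rmsd, rcond=None)

def predict_rmsd(n_atoms, n_rotors):
    """Predicted RMSD sampling value for a structure of given size/flexibility."""
    return beta @ [1.0, n_atoms, n_rotors]
```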

  3. Transport properties of kA class QMG current limiting elements

    NASA Astrophysics Data System (ADS)

    Morita, M.; Miura, O.; Ito, D.

    2001-09-01

    In order to estimate the feasibility of a resistive type fault current limiter made of QMG, transport properties of QMG current limiting elements which can transport about 1 kA continuously in a superconducting state were studied. QMG is a rare earth based bulk superconductor that has high Jc properties and relatively high electrical resistivity in the normal state. Because of these properties, QMG is a promising bulk material for superconducting fault current limiter applications. A bar-shaped sample in which the cross-section and the effective length were 2.2×0.8 mm2 and 30 mm, respectively, was prepared. A bypass resistance of 7 mΩ was connected in parallel with the sample. A field assist mechanism that can apply a magnetic field of about 0.9 T to the sample was installed. A half cycle of AC current up to about 3 kA was applied to the sample at 77 K. When the applied current (I) was less than 1000 A in the self-field, the flux flow voltage was less than 0.5 mV. The n-value was about 6. In the applied field of 0.9 T, a rapid increase of voltage (quench) was observed around I = 1820 A. The quench phenomenon was reproduced without degradation for I > 1820 A. From these results, it was found that QMG fault current limiting elements can endure the thermal shock of the quench through optimization of the bypass resistance and the applied field.

  4. Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image

    PubMed Central

    Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.

    2014-01-01

    Summary It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681

  5. Experimental demonstration of all-optical weak magnetic field detection using beam-deflection of single-mode fiber coated with cobalt-doped nickel ferrite nanoparticles.

    PubMed

    Pradhan, Somarpita; Chaudhuri, Partha Roy

    2015-07-10

    We experimentally demonstrate a single-mode optical-fiber beam-deflection configuration for weak magnetic field detection using an optimized (low coercive-field) composition of cobalt-doped nickel ferrite nanoparticles. Devising a fiber double-slit type experiment, we measure the surrounding magnetic field by precisely measuring the interference fringes, yielding a minimum detectable field of ∼100 mT, and we obtain magnetization data for the sample that agree well with SQUID measurements. To improve sensitivity, we incorporate an etched single-mode fiber in the double-slit arrangement and record a minimum detectable field of ∼30 mT. To improve further, we reconfigure the experiment to modulate fiber-to-fiber light transmission and demonstrate a minimum detectable field of 2.0 mT. The device will be uniquely suited for electrical or otherwise hazardous environments.

  6. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force-field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
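
The sparse-recovery step can be illustrated with a toy compressive-sensing solver: fewer simulation samples than basis terms, with orthogonal matching pursuit standing in for whichever compressive-sensing algorithm the authors used (the random basis matrix and coefficient values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_basis, n_samples = 40, 15                  # more basis terms than samples
A = rng.normal(size=(n_samples, n_basis))    # basis functions at sample points
x_true = np.zeros(n_basis)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]       # sparse dominant gPC coefficients
y = A @ x_true                               # target property at sample points

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns, refit, repeat."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_rec = omp(A, y, k=3)   # sparse coefficient estimate from 15 "simulations"
```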

  7. FDTD simulation of EM wave propagation in 3-D media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T.; Tripp, A.C.

    1996-01-01

    A finite-difference, time-domain solution to Maxwell's equations has been developed for simulating electromagnetic wave propagation in 3-D media. The algorithm allows arbitrary electrical conductivity and permittivity variations within a model. The staggered grid technique of Yee is used to sample the fields. A new optimized second-order difference scheme is designed to approximate the spatial derivatives. Like the conventional fourth-order difference scheme, the optimized second-order scheme needs four discrete values to calculate a single derivative. However, the optimized scheme is accurate over a wider wavenumber range. Compared to the fourth-order scheme, the optimized scheme imposes stricter limitations on the time step sizes but allows coarser grids. The net effect is that the optimized scheme is more efficient in terms of computation time and memory requirement than the fourth-order scheme. The temporal derivatives are approximated by second-order central differences throughout. The Liao transmitting boundary conditions are used to truncate an open problem. A reflection coefficient analysis shows that this transmitting boundary condition works very well. However, it is subject to instability. A method that can be easily implemented is proposed to stabilize the boundary condition. The finite-difference solution is compared to closed-form solutions for conducting and nonconducting whole spaces and to an integral-equation solution for a 3-D body in a homogeneous half-space. In all cases, the finite-difference solutions are in good agreement with the other solutions. Finally, the use of the algorithm is demonstrated with a 3-D model. Numerical results show that both the magnetic field response and electric field response can be useful for shallow-depth and small-scale investigations.
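
The staggered-grid (Yee) leapfrog the algorithm builds on is easiest to see in one dimension. A toy sketch in normalized units (not the paper's 3-D scheme with optimized differences and Liao boundaries):

```python
import math

# 1-D Yee scheme: E lives on integer nodes, H on staggered half-nodes, and the
# two are leapfrogged in time. Normalized units with Courant number c*dt/dx = 1;
# the untouched end nodes E[0] and E[-1] act as perfectly conducting walls.
nx, steps = 200, 150
E = [0.0] * nx
H = [0.0] * (nx - 1)
for n in range(steps):
    for i in range(nx - 1):            # update H from the curl of E
        H[i] += E[i + 1] - E[i]
    for i in range(1, nx - 1):         # update E from the curl of H
        E[i] += H[i] - H[i - 1]
    E[100] += math.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
```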

  8. Decision support tool for soil sampling of heterogeneous pesticide (chlordecone) pollution.

    PubMed

    Clostre, Florence; Lesueur-Jannoyer, Magalie; Achard, Raphaël; Letourmy, Philippe; Cabidoche, Yves-Marie; Cattan, Philippe

    2014-02-01

    When field pollution is heterogeneous due to localized pesticide application, as is the case for chlordecone (CLD), the mean level of pollution is difficult to assess. Our objective was to design a decision support tool to optimize soil sampling. We analyzed the heterogeneity of soil CLD content at 0-30- and 30-60-cm depth, within and between nine plots (0.4 to 1.8 ha) on andosol and ferralsol. We determined that 20 pooled subsamples per plot were a satisfactory compromise with respect to both cost and accuracy. Globally, CLD content was greater for andosols and the upper soil horizon (0-30 cm). Soil organic carbon cannot account for CLD intra-field variability. Cropping systems and tillage practices influence the CLD content and distribution; that is, CLD pollution was higher under intensive banana cropping systems, and while the upper soil horizon was more polluted than the lower one under shallow tillage (<40 cm), deeper tillage led to a homogenization and dilution of the pollution across the soil profile. The decision tool we propose compiles and organizes these results to better assess CLD soil pollution in terms of sampling depth, distance, and unit at the field scale. It accounts for sampling objectives, farming practices (cropping system, tillage), type of soil, and topographical characteristics (slope) to design a relevant sampling plan. This decision support tool is also adaptable to other types of heterogeneous agricultural pollution at the field level.
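
The "20 pooled subsamples" compromise can be illustrated by Monte Carlo: the relative error of a pooled estimate shrinks roughly as 1/sqrt(n). A sketch on a synthetic heterogeneous field (the lognormal content distribution is an assumption for illustration, not the paper's data):

```python
import random, statistics

rng = random.Random(42)
# A heterogeneous "field": skewed, lognormal-ish CLD content per sampling spot.
field = [rng.lognormvariate(0, 1) for _ in range(10000)]
true_mean = statistics.fmean(field)

def pooled_estimate_error(n_sub, trials=2000):
    """Average relative error of the mean of n pooled subsamples."""
    errs = []
    for _ in range(trials):
        est = statistics.fmean(rng.sample(field, n_sub))
        errs.append(abs(est - true_mean) / true_mean)
    return statistics.fmean(errs)

err_5, err_20 = pooled_estimate_error(5), pooled_estimate_error(20)
# Pooling 20 subsamples gives a markedly smaller average error than pooling 5.
```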

  9. Co-optimization of CO 2 -EOR and Storage Processes under Geological Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ampomah, William; Balch, Robert; Will, Robert

    This paper presents an integrated numerical framework to co-optimize EOR and CO 2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO 2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO 2 storage. The uncertainty quantification model comprising the Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO 2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO 2 storage in the FWU. 
The optimization process predicted more than 94% of CO 2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO 2 storage. The Farnsworth CO 2 project will serve as a benchmark for future CO 2–EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.

  10. Co-optimization of CO 2 -EOR and Storage Processes under Geological Uncertainty

    DOE PAGES

    Ampomah, William; Balch, Robert; Will, Robert; ...

    2017-07-01

    This paper presents an integrated numerical framework to co-optimize EOR and CO 2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO 2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO 2 storage. The uncertainty quantification model comprising the Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO 2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO 2 storage in the FWU. 
The optimization process predicted more than 94% of CO 2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO 2 storage. The Farnsworth CO 2 project will serve as a benchmark for future CO 2–EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.

  11. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined with separately supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) were used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves planting in the elongating stage were collected from the field, which was composed of 306 transgenic (positive) samples containing Bt and Bar gene and 150 non-transgenic (negative) samples. A total of 156 samples (negative 50 and positive 106) were randomly selected as the validation set; the remaining samples (negative 100 and positive 200, a total of 300 samples) were used as the modeling set, and then the modeling set was subdivided into calibration (negative 50 and positive 100, a total of 150 samples) and prediction sets (negative 50 and positive 100, a total of 150 samples) for 50 times. The number of SG smoothing points was ex- panded, while some modes of higher derivative were removed because of small absolute value, and a total of 264 smoothing modes were used for screening. The pairwise combinations of first three principal components were used, and then the optimal combination of principal components was selected according to the model effect. Based on all divisions of calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, the model parameters were optimized based on the average prediction effect for all divisions to produce modeling stability. Finally, the model validation was performed by validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA, PCA-HCA were signif- icantly improved. 
For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%; for the optimal SG-PCA-HCA model they were 92.5% and 98.0%, respectively. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
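The SG-PCA-LDA pipeline described in this record can be illustrated with a short sketch. Everything below is synthetic and illustrative: the spectra, the single smoothing mode (window, order), the number of components, and the nearest-centroid classifier standing in for LDA are assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class "leaf spectra": broad peaks at slightly different
# positions plus high-frequency noise that SG smoothing should suppress.
w = np.linspace(0.0, 1.0, 200)
def make_class(center, n):
    return np.exp(-((w - center) ** 2) / 0.01) + 0.05 * rng.standard_normal((n, w.size))

X = np.vstack([make_class(0.45, 60), make_class(0.55, 60)])
y = np.repeat([0, 1], 60)

def savgol_smooth(X, window=11, order=2):
    # Savitzky-Golay smoothing: least-squares fit of a low-order polynomial
    # in each window, evaluated at the window centre; one fixed "mode".
    half = window // 2
    V = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    coeffs = np.linalg.pinv(V)[0]          # symmetric for derivative order 0
    return np.apply_along_axis(lambda r: np.convolve(r, coeffs, mode="same"), 1, X)

X_sg = savgol_smooth(X)

# PCA via SVD, then a nearest-centroid classifier on the first three
# principal components (a simple stand-in for the paper's LDA step).
Xc = X_sg - X_sg.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T
centroids = np.stack([scores[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print(f"training recognition rate: {acc:.2f}")
```

The study's screening over 264 smoothing modes amounts to repeating this pipeline for each (window, order, derivative) choice and keeping the most stable performer.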

  12. An atomistic fingerprint algorithm for learning ab initio molecular force fields

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Zhang, Dongkun; Karniadakis, George Em

    2018-01-01

Molecular fingerprints, i.e., feature vectors describing atomistic neighborhood configurations, are an important abstraction and a key ingredient for data-driven modeling of potential energy surfaces and interatomic forces. In this paper, we present the density-encoded canonically aligned fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior over principal components analysis-based methods especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the "distance" between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which only require discrete sampling at a small number of grid points. We also experiment on the choice of weight functions for constructing the density fields and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems.
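The density-field "distance" idea, a volume integral of the pointwise difference between kernel-superposition fields, can be sketched as follows. This toy version uses a plain regular grid in place of the paper's optimal quadrature rules, omits the canonical alignment step, and all kernel widths and geometries are invented:

```python
import numpy as np

def density(points, grid, sigma=0.5):
    # Superimpose isotropic Gaussian smoothing kernels centered on the atoms.
    diff = grid[:, None, :] - points[None, :, :]       # (G, N, 3)
    r2 = np.sum(diff ** 2, axis=-1)
    return np.exp(-r2 / (2 * sigma ** 2)).sum(axis=1)  # (G,)

def field_distance(pa, pb, n=12, half=3.0):
    # Volume integral of the squared pointwise difference, approximated on a
    # regular grid (a crude stand-in for the optimal quadrature rules).
    axis = np.linspace(-half, half, n)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
    dv = (axis[1] - axis[0]) ** 3
    return np.sum((density(pa, grid) - density(pb, grid)) ** 2) * dv

# Two hypothetical atomic neighborhoods (coordinates are arbitrary).
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [0.0, 1.2, 0.0]])

print(field_distance(a, a))  # identical neighborhoods
print(field_distance(a, b))  # differing neighborhoods
```

The distance vanishes for identical configurations and grows as the neighborhoods diverge; proper quadrature would let the grid be far smaller for the same accuracy.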

  13. Monitoring/Verification Using DMS: TATP Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevin Kyle; Stephan Weeks

Field-rugged and field-programmable differential mobility spectrometry (DMS) networks provide highly selective, universal monitoring of vapors and aerosols at detectable levels from persons or areas involved with illicit chemical/biological/explosives (CBE) production. CBE sensor motes used in conjunction with automated fast gas chromatography with DMS detection (GC/DMS) verification instrumentation integrated into situational operations-management systems can be readily deployed and optimized for changing application scenarios. The feasibility of developing selective DMS motes for a "smart dust" sampling approach with guided, highly selective, fast GC/DMS verification analysis is a compelling approach to minimize or prevent the illegal use of explosives or chemical and biological materials. DMS is currently one of the foremost emerging technologies for field separation and detection of gas-phase chemical species, owing to trace-level detection limits, high selectivity, and small size. GC is the leading analytical method for the separation of chemical species in complex mixtures. Low-thermal-mass GC columns have led to compact, low-power field systems capable of complete analyses in 15-300 seconds. A collaborative effort optimized a handheld, fast GC/DMS, equipped with a non-radioactive ionization source, for peroxide-based explosive measurements.

  14. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    PubMed Central

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. 
The results obtained by running SoDDS on three different scenarios are provided and show that SoDDS, which is currently used at NATO STO Centre for Maritime Research and Experimentation (CMRE), can represent a step forward towards a systematic mission planning of glider fleets, dramatically reducing the efforts of glider operators. PMID:26712763
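The sampling-on-demand idea, annealing glider sampling locations toward a user-supplied objective map rather than minimizing overall uncertainty, can be caricatured in a few lines. The grid, the uncertainty-reduction kernel, the number of sampling locations, and the cooling schedule below are all invented for illustration and have no connection to SoDDS:

```python
import math, random

random.seed(1)
N = 10                      # grid cells per side (invented)
prior = 1.0                 # prior uncertainty everywhere

# Objective map: the user demands low uncertainty (0.2) in the northern
# half of the area and tolerates high uncertainty (0.8) elsewhere.
target = [[0.2 if i < N // 2 else 0.8 for _ in range(N)] for i in range(N)]

def uncertainty(samples, i, j):
    # Crude stand-in for data assimilation: each sampling location
    # multiplicatively reduces uncertainty in its neighbourhood.
    u = prior
    for si, sj in samples:
        u *= 1.0 - 0.9 * math.exp(-((si - i) ** 2 + (sj - j) ** 2) / 4.0)
    return u

def cost(samples):
    # Penalize only uncertainty *above* the objective map: no credit for
    # over-sampling regions whose target is already met.
    return sum(max(0.0, uncertainty(samples, i, j) - target[i][j])
               for i in range(N) for j in range(N))

state = [(random.randrange(N), random.randrange(N)) for _ in range(6)]
init_cost = cur = cost(state)
best, best_cost, temp = list(state), cur, 1.0
for _ in range(3000):
    cand = list(state)
    cand[random.randrange(len(cand))] = (random.randrange(N), random.randrange(N))
    cc = cost(cand)
    if cc < cur or random.random() < math.exp(-(cc - cur) / temp):  # Metropolis
        state, cur = cand, cc
    if cur < best_cost:
        best, best_cost = list(state), cur
    temp *= 0.999

print(f"objective-map violation: {init_cost:.2f} -> {best_cost:.2f}")
```

The real Aη criterion also folds in glider navigation constraints, path geometry, and current-induced reachability, which this sketch ignores entirely.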

  15. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a "lazy" dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(−1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. 
Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
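The cost-to-arrive expansion at the heart of FMT* can be illustrated, in a heavily simplified obstacle-free form, by running a Dijkstra-style recursion over random samples connected within a fixed radius. The sample count, radius, and 2D workspace are invented, and the lazy collision checking that distinguishes FMT* is omitted:

```python
import heapq, math, random

random.seed(2)
n, radius = 200, 0.25
# Samples in the unit square, plus fixed start (0,0) and goal (1,1).
pts = [(0.0, 0.0)] + [(random.random(), random.random()) for _ in range(n)] + [(1.0, 1.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Grow the tree outward in cost-to-arrive space over the r-disc graph.
cost = {0: 0.0}
pq = [(0.0, 0)]
while pq:
    c, i = heapq.heappop(pq)
    if c > cost.get(i, math.inf):
        continue                      # stale queue entry
    for j, q in enumerate(pts):
        if j != i and dist(pts[i], q) <= radius:
            nc = c + dist(pts[i], q)
            if nc < cost.get(j, math.inf):
                cost[j] = nc
                heapq.heappush(pq, (nc, j))

goal = len(pts) - 1
print(f"cost-to-arrive at goal: {cost[goal]:.3f} "
      f"(straight line = {dist(pts[0], pts[-1]):.3f})")
```

As n grows and the radius shrinks at the right rate, the returned cost approaches the straight-line optimum, which is the behaviour the convergence-rate bound quantifies.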

  16. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach.

    PubMed

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-12-26

This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets.
The results obtained by running SoDDS on three different scenarios are provided and show that SoDDS, which is currently used at NATO STO Centre for Maritime Research and Experimentation (CMRE), can represent a step forward towards a systematic mission planning of glider fleets, dramatically reducing the efforts of glider operators.

  17. Analytic Optimization of Near-Field Optical Chirality Enhancement

    PubMed Central

    2017-01-01

    We present an analytic derivation for the enhancement of local optical chirality in the near field of plasmonic nanostructures by tuning the far-field polarization of external light. We illustrate the results by means of simulations with an achiral and a chiral nanostructure assembly and demonstrate that local optical chirality is significantly enhanced with respect to circular polarization in free space. The optimal external far-field polarizations are different from both circular and linear. Symmetry properties of the nanostructure can be exploited to determine whether the optimal far-field polarization is circular. Furthermore, the optimal far-field polarization depends on the frequency, which results in complex-shaped laser pulses for broadband optimization. PMID:28239617

  18. Distortion correction of echo planar images applying the concept of finite rate of innovation to point spread function mapping (FRIP).

    PubMed

    Nunes, Rita G; Hajnal, Joseph V

    2018-06-01

    Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.

  19. Generalized filtering of laser fields in optimal control theory: application to symmetry filtering of quantum gate operations

    NASA Astrophysics Data System (ADS)

    Schröder, Markus; Brown, Alex

    2009-10-01

We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett. 101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.

  20. Strongly enhanced current densities in Sr0.6K0.4Fe2As2 + Sn superconducting tapes.

    PubMed

    Lin, He; Yao, Chao; Zhang, Xianping; Zhang, Haitao; Wang, Dongliang; Zhang, Qianjun; Ma, Yanwei; Awaji, Satoshi; Watanabe, Kazuo

    2014-03-25

Improving transport current has been the primary topic for practical application of superconducting wires and tapes. However, the porous nature of powder-in-tube (PIT) processed iron-based tapes is one of the important reasons for low critical current density (Jc) values. In this work, the superconducting core density of ex-situ Sr0.6K0.4Fe2As2 + Sn tapes, prepared from optimized precursors, was significantly improved by employing a simple hot pressing as an alternative route for final sintering. The resulting samples exhibited optimal critical temperature (Tc), sharp resistive transition, small resistivity and high Vickers hardness (Hv) value. Consequently, the transport Jc reached excellent values of 5.1 × 10^4 A/cm^2 in 10 T and 4.3 × 10^4 A/cm^2 in 14 T at 4.2 K, respectively. Our tapes also exhibited high upper critical field Hc2 and almost field-independent Jc. These results clearly demonstrate that PIT pnictide wire conductors are very promising for high-field magnet applications.

  1. Strongly enhanced current densities in Sr0.6K0.4Fe2As2 + Sn superconducting tapes

    PubMed Central

    Lin, He; Yao, Chao; Zhang, Xianping; Zhang, Haitao; Wang, Dongliang; Zhang, Qianjun; Ma, Yanwei; Awaji, Satoshi; Watanabe, Kazuo

    2014-01-01

Improving transport current has been the primary topic for practical application of superconducting wires and tapes. However, the porous nature of powder-in-tube (PIT) processed iron-based tapes is one of the important reasons for low critical current density (Jc) values. In this work, the superconducting core density of ex-situ Sr0.6K0.4Fe2As2 + Sn tapes, prepared from optimized precursors, was significantly improved by employing a simple hot pressing as an alternative route for final sintering. The resulting samples exhibited optimal critical temperature (Tc), sharp resistive transition, small resistivity and high Vickers hardness (Hv) value. Consequently, the transport Jc reached excellent values of 5.1 × 10^4 A/cm^2 in 10 T and 4.3 × 10^4 A/cm^2 in 14 T at 4.2 K, respectively. Our tapes also exhibited high upper critical field Hc2 and almost field-independent Jc. These results clearly demonstrate that PIT pnictide wire conductors are very promising for high-field magnet applications. PMID:24663054

  2. Strongly enhanced current densities in Sr0.6K0.4Fe2As2 + Sn superconducting tapes

    NASA Astrophysics Data System (ADS)

    Lin, He; Yao, Chao; Zhang, Xianping; Zhang, Haitao; Wang, Dongliang; Zhang, Qianjun; Ma, Yanwei; Awaji, Satoshi; Watanabe, Kazuo

    2014-03-01

Improving transport current has been the primary topic for practical application of superconducting wires and tapes. However, the porous nature of powder-in-tube (PIT) processed iron-based tapes is one of the important reasons for low critical current density (Jc) values. In this work, the superconducting core density of ex-situ Sr0.6K0.4Fe2As2 + Sn tapes, prepared from optimized precursors, was significantly improved by employing a simple hot pressing as an alternative route for final sintering. The resulting samples exhibited optimal critical temperature (Tc), sharp resistive transition, small resistivity and high Vickers hardness (Hv) value. Consequently, the transport Jc reached excellent values of 5.1 × 10^4 A/cm^2 in 10 T and 4.3 × 10^4 A/cm^2 in 14 T at 4.2 K, respectively. Our tapes also exhibited high upper critical field Hc2 and almost field-independent Jc. These results clearly demonstrate that PIT pnictide wire conductors are very promising for high-field magnet applications.

  3. Implications of sampling design and sample size for national carbon accounting systems.

    PubMed

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, as well as population variability, the percent standard error over total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV-systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
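The central trade-off here, imagery cost versus the variance reduction bought by the field/remote-sensing correlation, can be made concrete with a textbook variance comparison. The budget figures, plot costs, and correlation values below are invented for illustration only:

```python
import math

def se_srs(budget, plot_cost, s):
    # Simple random sampling: the whole budget buys field plots.
    n = budget // plot_cost
    return s / math.sqrt(n)

def se_regression(budget, plot_cost, imagery_cost, s, rho):
    # Regression estimator: imagery consumes part of the budget, but the
    # field-vs-remote-sensing correlation rho shrinks the variance.
    n = (budget - imagery_cost) // plot_cost
    return s * math.sqrt(1 - rho ** 2) / math.sqrt(n)

budget, plot_cost, imagery_cost, s = 100_000, 500, 20_000, 40.0
for rho in (0.3, 0.9):
    print(f"rho={rho}: SRS SE={se_srs(budget, plot_cost, s):.2f}, "
          f"regression SE={se_regression(budget, plot_cost, imagery_cost, s, rho):.2f}")
```

With weak correlation the imagery money is better spent on extra plots (SRS wins); with strong correlation the regression design wins despite fewer plots, mirroring the paper's finding that imagery cost is decisive.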

  4. Design of a mobile, homogeneous, and efficient electromagnet with a large field of view for neonatal low-field MRI.

    PubMed

    Lother, Steffen; Schiff, Steven J; Neuberger, Thomas; Jakob, Peter M; Fidler, Florian

    2016-08-01

In this work, a prototype of an effective electromagnet with a field-of-view (FoV) of 140 mm for neonatal head imaging is presented. The efficient implementation succeeded by exploiting the use of steel plates as a housing system. We achieved a compromise between large sample volumes, high homogeneity, high B0 field, low power consumption, light weight, simple fabrication, and conserved mobility without the necessity of a dedicated water cooling system. The entire magnetic resonance imaging (MRI) system (electromagnet, gradient system, transmit/receive coil, control system) is introduced and its unique features discussed. Furthermore, simulations using a numerical optimization algorithm for magnet and gradient system are presented. Functionality and quality of this low-field scanner operating at 23 mT (generated with 500 W) is illustrated using spin-echo imaging (in-plane resolution 1.6 mm × 1.6 mm, slice thickness 5 mm, and signal-to-noise ratio (SNR) of 23 with an acquisition time of 29 min). B0 field-mapping measurements are presented to characterize the homogeneity of the magnet, and the B0 field limitations of 80 mT of the system are fully discussed. The cryogen-free system presented here demonstrates that this electromagnet with a ferromagnetic housing can be optimized for MRI with an enhanced and homogeneous magnetic field. It offers an alternative to prepolarized MRI designs in both readout field strength and power use. There are multiple indications for the clinical medical application of such low-field devices.

  5. Design of a mobile, homogeneous, and efficient electromagnet with a large field of view for neonatal low-field MRI

    PubMed Central

    Schiff, Steven J.; Neuberger, Thomas; Jakob, Peter M.; Fidler, Florian

    2017-01-01

Objective In this work, a prototype of an effective electromagnet with a field-of-view (FoV) of 140 mm for neonatal head imaging is presented. The efficient implementation succeeded by exploiting the use of steel plates as a housing system. We achieved a compromise between large sample volumes, high homogeneity, high B0 field, low power consumption, light weight, simple fabrication, and conserved mobility without the necessity of a dedicated water cooling system. Materials and methods The entire magnetic resonance imaging (MRI) system (electromagnet, gradient system, transmit/receive coil, control system) is introduced and its unique features discussed. Furthermore, simulations using a numerical optimization algorithm for magnet and gradient system are presented. Results Functionality and quality of this low-field scanner operating at 23 mT (generated with 500 W) is illustrated using spin-echo imaging (in-plane resolution 1.6 mm × 1.6 mm, slice thickness 5 mm, and signal-to-noise ratio (SNR) of 23 with an acquisition time of 29 min). B0 field-mapping measurements are presented to characterize the homogeneity of the magnet, and the B0 field limitations of 80 mT of the system are fully discussed. Conclusion The cryogen-free system presented here demonstrates that this electromagnet with a ferromagnetic housing can be optimized for MRI with an enhanced and homogeneous magnetic field. It offers an alternative to pre-polarized MRI designs in both readout field strength and power use. There are multiple indications for the clinical medical application of such low-field devices. PMID:26861046

  6. [Matrix effect and application of field-amplified sample injection in the analysis of four tetracyclines in waters by capillary electrophoresis].

    PubMed

    2014-08-01

The system abilities of two chromatographic techniques, capillary electrophoresis (CE) and high performance liquid chromatography (HPLC), were compared for the analysis of four tetracyclines (tetracycline, chlorotetracycline, oxytetracycline and doxycycline). The pH and concentration of the background electrolyte (BGE) were optimized for the analysis of the standard mixture sample; meanwhile, the effects of separation voltage and water matrix (pH value and hardness) were investigated. In hydrodynamic injection (HDI) mode, a good quantitative linearity and baseline separation within 9.0 min were obtained for the four tetracyclines under the optimal conditions; the analytical time was about half of that of HPLC. The limits of detection (LODs) were in the range of 0.28-0.62 mg/L, and the relative standard deviations (RSDs) (n=6) of migration time and peak area were 0.42%-0.56% and 2.24%-2.95%, respectively. The recoveries spiked in tap water and fishpond water were in the ranges of 96.3%-107.2% and 87.1%-105.2%, respectively. In addition, the stacking method, field-amplified sample injection (FASI), was employed to improve the sensitivity, and the LODs were down to the range of 17.8-35.5 μg/L. With FASI stacking, the RSDs (n=6) of migration time and peak area were 0.85%-0.95% and 1.69%-3.43%, respectively. Due to the advantages of simple sample pretreatment and fast speed, CE is promising for the analysis of antibiotics in environmental water.

  7. Probes for investigating the effect of magnetic field, field orientation, temperature and strain on the critical current density of anisotropic high-temperature superconducting tapes in a split-pair 15 T horizontal magnet.

    PubMed

    Sunwong, P; Higgins, J S; Hampshire, D P

    2014-06-01

We present the designs of probes for making critical current density (Jc) measurements on anisotropic high-temperature superconducting tapes as a function of field, field orientation, temperature and strain in our 40 mm bore, split-pair 15 T horizontal magnet. Emphasis is placed on the design of three components: the vapour-cooled current leads, the variable temperature enclosure, and the springboard-shaped bending beam sample holder. The vapour-cooled brass critical-current leads used superconducting tapes and in operation ran hot with a duty cycle (D) of ~0.2. This work provides formulae for optimising cryogenic consumption and calculating cryogenic boil-off associated with current leads used to make Jc measurements, made by uniformly ramping the current up to a maximum current (I_max) and then reducing the current very quickly to zero. They include consideration of the effects of duty cycle, static helium boil-off from the magnet and Dewar (b'), and the maximum safe temperature for the critical-current leads (T_max). Our optimized critical-current leads have a boil-off that is about 30% less than leads optimized for magnet operation at the same maximum current. Numerical calculations show that the optimum cross-sectional area (A) for each current lead can be parameterized by L·I_max/A = [1.46 D^(-0.18) L^(0.4) (T_max − 300)^(0.25 D^(-0.09)) + 750 (b'/I_max) D^(10^(-3) I_max − 2.87 b')] × 10^6 A m^(-1), where L is the current lead's length and the current lead is operated in liquid helium. An optimum A of 132 mm^2 is obtained when I_max = 1000 A, T_max = 400 K, D = 0.2, b' = 0.3 l h^(-1) and L = 1.0 m. The optimized helium consumption was found to be 0.7 l h^(-1). When the static boil-off is small, optimized leads have a boil-off that can be roughly parameterized by b/I_max ≈ (1.35 × 10^(-3)) D^(0.41) l h^(-1) A^(-1). A split-current-lead design is employed to minimize the rotation of the probes during the high current measurements in our high-field horizontal magnet. 
The variable-temperature system is based on the use of an inverted insulating cup that operates above 4.2 K in liquid helium and above 77.4 K in liquid nitrogen, with a stability of ±80 mK to ±150 mK. Uniaxial strains of -1.4% to 1.0% can be applied to the sample, with a total uncertainty of better than ±0.02%, using a modified bending beam apparatus which includes a copper beryllium springboard-shaped sample holder.
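The quoted parameterization can be checked numerically: plugging the abstract's own worked values into the formula reproduces the stated optimum cross-section of 132 mm^2 and the ~0.7 l/h optimized boil-off.

```python
# Worked values quoted in the abstract: I_max = 1000 A, T_max = 400 K,
# D = 0.2, static boil-off b' = 0.3 l/h, lead length L = 1.0 m.
I_max, T_max, D, b_static, L = 1000.0, 400.0, 0.2, 0.3, 1.0

# L*I_max/A = [1.46 D^-0.18 L^0.4 (T_max-300)^(0.25 D^-0.09)
#              + 750 (b'/I_max) D^(1e-3 I_max - 2.87 b')] * 1e6  [A/m]
term1 = 1.46 * D**-0.18 * L**0.4 * (T_max - 300.0) ** (0.25 * D**-0.09)
term2 = 750.0 * (b_static / I_max) * D ** (1e-3 * I_max - 2.87 * b_static)
ratio = (term1 + term2) * 1e6          # L*I_max/A in A/m

A_mm2 = L * I_max / ratio * 1e6        # optimum lead cross-section in mm^2
boiloff = 1.35e-3 * D**0.41 * I_max    # optimized boil-off in l/h

print(f"A = {A_mm2:.0f} mm^2, boil-off ≈ {boiloff:.1f} l/h")
```

Both numbers agree with the abstract, which confirms the bracket placement in the reconstructed formula.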

  8. Numerical optimization of three-dimensional coils for NSTX-U

    DOE PAGES

    Lazerson, S. A.; Park, J. -K.; Logan, N.; ...

    2015-09-03

A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.

  9. Evaluating Parametrization Protocols for Hydration Free Energy Calculations with the AMOEBA Polarizable Force Field.

    PubMed

    Bradshaw, Richard T; Essex, Jonathan W

    2016-08-09

    Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.

  10. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction.

    PubMed

    Granovsky, Alexander A

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  11. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granovsky, Alexander A., E-mail: alex.granovsky@gmail.com

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  12. Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B.

    2003-12-01

    Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems comes from the hydraulic conductivity values for the aquifer, upon which predictions of contaminant flow and transport depend. For a remediation solution to be reliable in practice, it is important that it is robust to potential errors in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to obtain solutions that are both reliable and Pareto-optimal. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. However, with multiple objectives, the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers. These include a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
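
    As a rough illustration of the simple averaging approach described above, the sketch below evaluates a noisy two-objective fitness several times per design and averages before a Pareto-domination check. The objective functions and noise levels are hypothetical stand-ins, not the paper's groundwater model.

```python
import random

def noisy_objectives(x):
    """Hypothetical remediation objectives (cost, residual contamination),
    perturbed by noise standing in for hydraulic-conductivity uncertainty."""
    cost = x ** 2 + random.gauss(0, 0.5)
    residual = (x - 2) ** 2 + random.gauss(0, 0.5)
    return cost, residual

def averaged_objectives(x, n_samples=25):
    """Simple averaging: evaluate a design n times and average each objective,
    so that domination checks in the multi-objective solver are less noisy."""
    sums = [0.0, 0.0]
    for _ in range(n_samples):
        f = noisy_objectives(x)
        sums[0] += f[0]
        sums[1] += f[1]
    return sums[0] / n_samples, sums[1] / n_samples

def dominates(f_a, f_b):
    """True if solution a Pareto-dominates b (both objectives minimized)."""
    return all(a <= b for a, b in zip(f_a, f_b)) and \
           any(a < b for a, b in zip(f_a, f_b))
```

    With averaging, the estimated objective vector converges to its expectation, so spurious domination relations caused by single noisy draws become rare.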

  13. Design of experiments for amino acid extraction from tobacco leaves and their subsequent determination by capillary zone electrophoresis.

    PubMed

    Hodek, Ondřej; Křížek, Tomáš; Coufal, Pavel; Ryšlavá, Helena

    2017-03-01

    In this study, we optimized a method for the determination of free amino acids in Nicotiana tabacum leaves. Capillary electrophoresis with contactless conductivity detection was used for the separation of 20 proteinogenic amino acids in an acidic background electrolyte. Subsequently, the conditions of extraction with HCl were optimized for the highest extraction yield of the amino acids, because sample treatment of plant materials brings some specific challenges. A central composite face-centered design with fractional factorial design was used to evaluate the significance of selected factors (HCl volume, HCl concentration, sonication, shaking) on the extraction process. In addition, the composite design helped us to find the optimal values for each factor using the response surface method. The limits of detection and limits of quantification for the 20 proteinogenic amino acids were found to be on the order of 10⁻⁵ and 10⁻⁴ mol l⁻¹, respectively. Addition of acetonitrile to the sample was tested as a method commonly used to decrease limits of detection. Ambiguous results of this experiment pointed out some features of plant extract samples, which often require specific approaches. The suitability of the method for metabolomic studies was tested by analysis of a real sample, in which all amino acids except L-methionine and L-cysteine were successfully detected. The optimized extraction process together with the capillary electrophoresis method can be used for the determination of proteinogenic amino acids in plant materials. The resulting inexpensive, simple, and robust method is well suited for various metabolomic studies in plants. As such, the method represents a valuable tool for research and practical application in the fields of biology, biochemistry, and agriculture.

  14. Dispersive liquid-liquid microextraction prior to field-amplified sample injection for the sensitive analysis of 3,4-methylenedioxymethamphetamine, phencyclidine and lysergic acid diethylamide by capillary electrophoresis in human urine.

    PubMed

    Airado-Rodríguez, Diego; Cruces-Blanco, Carmen; García-Campaña, Ana M

    2012-12-07

    A novel capillary zone electrophoresis (CZE) method with ultraviolet detection has been developed and validated for the analysis of 3,4-methylenedioxymethamphetamine (MDMA), lysergic acid diethylamide (LSD) and phencyclidine (PCP) in human urine. The separation of these three analytes has been achieved in less than 8 min in a 72-cm effective length capillary with 50-μm internal diameter. 100 mM NaH(2)PO(4)/Na(2)HPO(4), pH 6.0, has been employed as running buffer, and the separation has been carried out at 20 °C and 25 kV. The three drugs have been detected at 205 nm. Field-amplified sample injection (FASI) has been employed for on-line sample preconcentration. FASI exploits a mismatch between the electrical conductivity of the sample and that of the running buffer, achieved by electrokinetically injecting the sample diluted in a solvent of lower conductivity than the carrier electrolyte. Ultrapure water proved to be the best sample solvent, yielding the greatest enhancement factor. Injection voltage and time have been optimized to 5 kV and 20 s, respectively. The irreproducibility associated with electrokinetic injection has been corrected by using tetracaine as internal standard. Dispersive liquid-liquid microextraction (DLLME) has been employed as sample treatment, using experimental design and response surface methodology for the optimization of critical variables. Linear responses were found for MDMA, PCP and LSD in the presence of urine matrix between approximately 10.0 and 100 ng/mL, and LODs of 1.00, 4.50, and 4.40 ng/mL were calculated for MDMA, PCP and LSD, respectively. The method has been successfully applied to the analysis of the three drugs of interest in human urine with satisfactory recovery percentages. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland schistosomiasis-endemic regions and to increase the precision, efficiency and economy of snail surveys, a 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for simple random sampling, systematic sampling and stratified random sampling were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for snail surveys.
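
    The precision advantage of stratification can be illustrated with a small Monte-Carlo sketch (a toy model, not the survey's actual estimator): it compares the absolute sampling error of simple random sampling against stratified sampling on a synthetic density grid whose rows stand in for altitude strata.

```python
import random
import statistics

def survey_error(density_grid, n, stratified, trials=500, seed=1):
    """Monte-Carlo estimate of the absolute sampling error (mean of
    |sample mean - true mean| over trials) for a snail-density grid,
    comparing simple random sampling with stratified sampling whose
    strata are the grid rows (a stand-in for altitude strata)."""
    rng = random.Random(seed)
    flat = [v for row in density_grid for v in row]
    true_mean = statistics.mean(flat)
    errors = []
    for _ in range(trials):
        if stratified:
            per = n // len(density_grid)  # equal allocation per stratum
            sample = []
            for row in density_grid:
                sample += rng.sample(row, per)
        else:
            sample = rng.sample(flat, n)  # simple random sampling
        errors.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(errors)
```

    When snail density varies strongly between strata but little within them, the stratified estimator's error is markedly smaller for the same sample size, mirroring the 0.0478 versus 0.2217 result above.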

  16. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. 
Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  17. Magnetic field effect on the electrical resistivity of Y1-xNixBa2Cu3O7-δ superconductor

    NASA Astrophysics Data System (ADS)

    Hadi-Sichani, Behnaz; Shakeripour, Hamideh; Salamati, Hadi

    2018-06-01

    Ni-substituted Y1-xNixBa2Cu3O7-δ high-temperature superconducting samples with 0 ≤ x < 0.01 were synthesized by the standard solid-state reaction. The temperature-dependent resistivity of the samples was measured under magnetic fields in the range of zero to 1 T, applied perpendicular to the current direction. The study of magnetoresistance is one of the most important ways to investigate the intergranular nature of superconducting materials. The resistive transition consists of two parts. The first part, near the onset of superconductivity, is unaffected by the applied magnetic field and is due to superconductivity within the grains. The second, a broadened tail, is due to the connectivity of the grains. At temperatures close to Tc0 (ρ = 0), applied magnetic fields affect the weak links; vortices penetrate and move within the intergranular regions, broadening the tail. This broadening, observed in the electrical resistivity ρ(T) and in its derivative dρ/dT, becomes very small or even absent in the Ni-doped samples. For the pure sample, Tc0 was around 90 K; applying a magnetic field of H = 0.3 T shifted it to 40 K. The corresponding shift is from 91.4 K to 80 K for the x = 0.002 sample and from 91.7 K to 85 K for the x = 0.004 sample. We found an optimal Ni doping concentration that improves the coupling of the grains, so that vortices become strongly pinned. These observations suggest that Ni substitution can reduce the weak links and increase the Jc values of these superconductors.

  18. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded a higher detectability index over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  19. Design of planar microcoil-based NMR probe ensuring high SNR

    NASA Astrophysics Data System (ADS)

    Ali, Zishan; Poenar, D. P.; Aditya, Sheel

    2017-09-01

    A microNMR probe for ex vivo applications may consist of at least one microcoil, which can be used as the oscillating magnetic field (MF) generator as well as receiver coil, and a sample holder, with a volume in the range of nanoliters to micro-liters, placed near the microcoil. The Signal-to-Noise ratio (SNR) of such a probe is, however, dependent not only on its design but also on the measurement setup, and the measured sample. This paper introduces a performance factor P independent of both the proton spin density in the sample and the external DC magnetic field, and which can thus assess the performance of the probe alone. First, two of the components of the P factor (inhomogeneity factor K and filling factor η ) are defined and an approach to calculate their values for different probe variants from electromagnetic simulations is devised. A criterion based on dominant component of the magnetic field is then formulated to help designers optimize the sample volume which also affects the performance of the probe, in order to obtain the best SNR for a given planar microcoil. Finally, the P factor values are compared between different planar microcoils with different number of turns and conductor aspect ratios, and planar microcoils are also compared with conventional solenoids. These comparisons highlight which microcoil geometry-sample volume combination will ensure a high SNR under any external setup.

  20. Development and testing of an optimized method for DNA-based identification of jaguar (Panthera onca) and puma (Puma concolor) faecal samples for use in ecological and genetic studies.

    PubMed

    Haag, Taiana; Santos, Anelisie S; De Angelo, Carlos; Srbek-Araujo, Ana Carolina; Sana, Dênis A; Morato, Ronaldo G; Salzano, Francisco M; Eizirik, Eduardo

    2009-07-01

    The elusive nature and endangered status of most carnivore species imply that efficient approaches for their non-invasive sampling are required to allow for genetic and ecological studies. Faecal samples are a major potential source of information, and reliable approaches are needed to foster their application in this field, particularly in areas where few studies have been conducted. A major obstacle to the reliable use of faecal samples is their uncertain species-level identification in the field, an issue that can be addressed with DNA-based assays. In this study we describe a sequence-based approach that efficiently distinguishes jaguar versus puma scats, and that presents several desirable properties: (1) considerably high amplification and sequencing rates; (2) multiple diagnostic sites reliably differentiating the two focal species; (3) high information content that allows for future application in other carnivores; (4) no evidence of amplification of prey DNA; and (5) no evidence of amplification of a nuclear mitochondrial DNA insertion known to occur in the jaguar. We demonstrate the reliability and usefulness of this approach by evaluating 55 field-collected samples from four locations in the highly fragmented Atlantic Forest biome of Brazil and Argentina, and document the presence of one or both of these endangered felids in each of these areas.

  1. Engineering two-wire optical antennas for near field enhancement

    NASA Astrophysics Data System (ADS)

    Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun

    2017-07-01

    We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of extinction cross section to field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases which are antennas on glass substrate and mirror, and find that the optimized side-length also applies to these systems. It is also found that the optimized side-length shows a tendency of increasing with the gap size. Our results could find applications in field-enhanced spectroscopies.

  2. Microwave surface resistance of MgB2

    NASA Astrophysics Data System (ADS)

    Zhukov, A. A.; Purnell, A.; Miyoshi, Y.; Bugoslavsky, Y.; Lockman, Z.; Berenov, A.; Zhai, H. Y.; Christen, H. M.; Paranthaman, M. P.; Lowndes, D. H.; Jo, M. H.; Blamire, M. G.; Hao, Ling; Gallop, J.; MacManus-Driscoll, J. L.; Cohen, L. F.

    2002-04-01

    The microwave power and frequency dependence of the surface resistance of MgB2 films and powder samples were studied. Sample quality is relatively easy to identify by the breakdown in the ω² law for poor-quality samples at all temperatures. The performance of MgB2 at 10 GHz and 21 K was compared directly with that of high-quality YBCO films. The surface resistance of MgB2 was found to be approximately three times higher at low microwave power and showed an onset of nonlinearity at microwave surface fields ten times lower than the YBCO film. It is clear that MgB2 films are not yet optimized for microwave applications.

  3. A uniplanar three-axis gradient set for in vivo magnetic resonance microscopy.

    PubMed

    Demyanenko, Andrey V; Zhao, Lin; Kee, Yun; Nie, Shuyi; Fraser, Scott E; Tyszka, J Michael

    2009-09-01

    We present an optimized uniplanar magnetic resonance gradient design specifically tailored for MR imaging applications in developmental biology and histology. Uniplanar gradient designs sacrifice gradient uniformity for high gradient efficiency and slew rate, and are attractive for surface imaging applications where open access from one side of the sample is required. However, decreasing the size of the uniplanar gradient set presents several unique engineering challenges, particularly for heat dissipation and thermal insulation of the sample from gradient heating. We demonstrate a new three-axis, target-field optimized uniplanar gradient coil design that combines efficient cooling and insulation to significantly reduce sample heating at sample-gradient distances of less than 5 mm. The instrument is designed for microscopy in horizontal bore magnets. Empirical gradient current efficiencies in the prototype coils lie between 3.75 G/cm/A and 4.5 G/cm/A with current and heating-limited maximum gradient strengths between 235 G/cm and 450 G/cm at a 2% duty cycle. The uniplanar gradient prototype is demonstrated with non-linearity corrections for both high-resolution structural imaging of tissue slices and for long time-course imaging of live, developing amphibian embryos in a horizontal bore 7 T magnet.

  4. Low voltage electrophoresis chip with multi-segments synchronized scanning

    NASA Astrophysics Data System (ADS)

    Gu, Wenwen; Wen, Zhiyu; Xu, Yi

    2017-03-01

    Low voltage electrophoresis chips have persistent problems: samples are truncated, peaks are broadened, and separation takes longer. In this paper, a low voltage electrophoresis separation model was established, and the separation conditions were discussed. A new driving mode for applying low voltage, called multi-segment synchronized scanning, was proposed. With this driving mode, the reversed electric field existing between the multiple segments can enrich samples and shorten the sample zone. Low voltage electrophoresis experiments using multi-segment synchronized scanning were carried out on a home-made silicon-PDMS-based chip. Fluorescein isothiocyanate (FITC)-labeled lysine and phenylalanine mixed samples with a concentration of 10⁻⁴ mol/L were successfully separated under the optimal conditions of 10 mmol/L borax buffer (pH = 10.0), a 200 V/cm separation electric field and an electrode switch time of 2.5 s. The separation was completed with a resolution of 2.0, and the peak times for lysine and phenylalanine were 4 min and 6 min, respectively.

  5. Optimization of a novel large field of view distortion phantom for MR-only treatment planning.

    PubMed

    Price, Ryan G; Knight, Robert A; Hwang, Ken-Pin; Bayram, Ersin; Nejad-Davarani, Siamak P; Glide-Hurst, Carri K

    2017-07-01

    MR-only treatment planning requires images of high geometric fidelity, particularly for large fields of view (FOV). However, the availability of large FOV distortion phantoms with analysis software is currently limited. This work sought to optimize a modular distortion phantom to accommodate multiple bore configurations and implement distortion characterization in a widely implementable solution. To determine candidate materials, 1.0 T MR and CT images were acquired of twelve urethane foam samples of various densities and strengths. Samples were precision-machined to accommodate 6 mm diameter paintballs used as landmarks. Final material candidates were selected by balancing strength, machinability, weight, and cost. Bore sizes and minimum aperture width resulting from couch position were tabulated from the literature (14 systems, 5 vendors). Bore geometry and couch position were simulated using MATLAB to generate machine-specific models to optimize the phantom build. Previously developed software for distortion characterization was modified for several magnet geometries (1.0 T, 1.5 T, 3.0 T), compared against previously published 1.0 T results, and integrated into the 3D Slicer application platform. All foam samples provided sufficient MR image contrast with paintball landmarks. Urethane foam (compressive strength ~1000 psi, density ~20 lb/ft³) was selected for its accurate machinability and weight characteristics. For smaller bores, a phantom version with the following parameters was used: 15 foam plates, 55 × 55 × 37.5 cm³ (L×W×H), 5,082 landmarks, and weight ~30 kg. To accommodate > 70 cm wide bores, an extended build used 20 plates spanning 55 × 55 × 50 cm³ with 7,497 landmarks and weight ~44 kg. Distortion characterization software was implemented as an external module into 3D Slicer's plugin framework and results agreed with the literature. 
The design and implementation of a modular, extendable distortion phantom was optimized for several bore configurations. The phantom and analysis software will be available for multi-institutional collaborations and cross-validation trials to support MR-only planning. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  6. Different elution modes and field programming in gravitational field-flow fractionation. III. Field programming by flow-rate gradient generated by a programmable pump.

    PubMed

    Plocková, J; Chmelík, J

    2001-05-25

    Gravitational field-flow fractionation (GFFF) utilizes the Earth's gravitational field as an external force that causes the settlement of particles towards the channel accumulation wall. Hydrodynamic lift forces oppose this action by elevating particles away from the channel accumulation wall. These two counteracting forces enable modulation of the resulting force field acting on particles in GFFF. In this work, force-field programming based on modulating the magnitude of hydrodynamic lift forces was implemented via changes of flow-rate, which was accomplished by a programmable pump. Several flow-rate gradients (step gradients, linear gradients, parabolic, and combined gradients) were tested and evaluated as tools for optimization of the separation of a silica gel particle mixture. The influence of increasing amount of sample injected on the peak resolution under flow-rate gradient conditions was also investigated. This is the first time that flow-rate gradients have been implemented for programming of the resulting force field acting on particles in GFFF.
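
    The gradient shapes mentioned above (step, linear, parabolic) are straightforward to express as flow-rate programs; the sketch below is a generic illustration with arbitrary units, not the interface of the programmable pump used in the study.

```python
def flow_rate(t, t_total, q0, q1, mode="linear"):
    """Flow-rate programming profiles for GFFF, as one might feed to a
    programmable pump: a step, linear, or parabolic gradient from q0 to q1
    over the run time t_total (units arbitrary, e.g. mL/min and min)."""
    frac = min(max(t / t_total, 0.0), 1.0)  # normalized elapsed time
    if mode == "step":
        return q0 if frac < 0.5 else q1     # single mid-run step
    if mode == "linear":
        return q0 + (q1 - q0) * frac        # constant ramp
    if mode == "parabolic":
        return q0 + (q1 - q0) * frac ** 2   # slow start, fast finish
    raise ValueError(f"unknown mode: {mode}")
```

    A combined gradient, as tested in the paper, would simply chain such segments over consecutive time intervals.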

  7. Spectropolarimetry of blood plasma in optimal molecular targeted therapy

    NASA Astrophysics Data System (ADS)

    Voloshynska, Katerina; Ilashchuk, Tetjana; Yermolenko, Sergey

    2015-02-01

    The aim of the study was to establish objective parameters of the field of laser and incoherent radiation in different spectral ranges (UV, visible, IR) as a non-invasive optical method of interaction with different samples of biological tissues and fluids of patients, in order to follow the dynamics of metabolic syndrome and choose the best personalized treatment. The diagnostic methods used were ultraviolet spectrometry of blood plasma samples in the liquid state, mid-range infrared spectroscopy (2.5-25 microns) of the dry residue of plasma, and polarization laser diagnostics of thin histological sections of biological tissues.

  8. Proteomic profiling of an undefined microbial consortium cultured in fermented dairy manure: Methods development.

    PubMed

    Hanson, Andrea J; Paszczynski, Andrzej J; Coats, Erik R

    2016-03-01

    The production of polyhydroxyalkanoates (PHA; bioplastics) from waste or surplus feedstocks using mixed microbial consortia (MMC) and aerobic dynamic feeding (ADF) is a growing field within mixed culture biotechnology. This study aimed to optimize a 2DE workflow to investigate the proteome dynamics of an MMC synthesizing PHA from fermented dairy manure. To mitigate the challenges posed to effective 2DE by this complex sample matrix, the bacterial biomass was purified using Accudenz gradient centrifugation (AGC) before protein extraction. The optimized 2DE method yielded high-quality gels suitable for quantitative comparative analysis and subsequent protein identification by LC-MS/MS. The optimized 2DE method could be adapted to other proteomic investigations involving MMC in complex organic or environmental matrices. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Optimization of the coplanar interdigital capacitive sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola

    2017-02-01

    Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.

  10. Monitoring sea lamprey pheromones and their degradation using rapid stream-side extraction coupled with UPLC-MS/MS

    USGS Publications Warehouse

    Wang, Huiyong; Johnson, Nicholas; Bernardy, Jeffrey; Hubert, Terry; Li, Weiming

    2013-01-01

    Pheromones guide adult sea lamprey (Petromyzon marinus) to suitable spawning streams and mates, and therefore, when quantified, can be used to assess population size and guide management. Here, we present an efficient sample preparation method in which 100 mL of river water was spiked with deuterated pheromone as an internal standard and underwent rapid solid-phase extraction (SPE) and elution in the field. The combination of field extraction with laboratory UPLC-MS/MS reduced the sample consumption from 1 to 0.1 L, decreased the sample processing time from more than 1 h to 10 min, and increased the precision and accuracy. The sensitivity was improved by more than one order of magnitude compared with the previous method. The influences of experimental conditions were assessed to optimize the separation and peak shapes. The analytical method has been validated by studies of stability, selectivity, precision, and linearity and by the determination of the limits of detection and quantification. The method was used to quantify pheromone concentration from five streams tributary to Lake Ontario and to estimate that the environmental half-life of 3kPZS is about 26 h.
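
    A half-life such as the reported ~26 h for 3kPZS can be estimated from time-course concentration data by a log-linear least-squares fit, assuming first-order decay (C = C₀ e^(-kt), so t₁/₂ = ln 2 / k). This sketch illustrates the arithmetic only and is not the authors' actual regression procedure.

```python
import math

def half_life(times, concentrations):
    """Estimate the first-order half-life t1/2 = ln 2 / k, where k is the
    negative slope of an ordinary least-squares fit of ln(C) against time."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    mean_t = sum(times) / n
    mean_l = sum(logs) / n
    slope = sum((t - mean_t) * (l - mean_l) for t, l in zip(times, logs)) / \
            sum((t - mean_t) ** 2 for t in times)
    k = -slope  # first-order decay constant
    return math.log(2) / k
```
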

  11. Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution

    PubMed Central

    Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan

    2017-01-01

    In this paper, an irregular displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the low-resolution (LR) images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the global optimum solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm². PMID:29657866
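
    The core idea of multi-frame pixel super-resolution can be sketched as minimal shift-and-add: each low-resolution frame is placed on a finer grid at its estimated shift, and overlapping samples are averaged. The sketch below assumes known integer shifts in high-resolution pixel units and equal frame sizes, whereas the platform described above recovers sub-pixel shifts via feature-based registration plus continuous optimization.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Minimal pixel super-resolution by shift-and-add: scatter each
    low-resolution frame onto a `factor`-times-finer grid at its (dy, dx)
    offset (in high-resolution pixel units) and average the contributions."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::factor, dx::factor] += frame  # place LR samples on HR grid
        cnt[dy::factor, dx::factor] += 1
    cnt[cnt == 0] = 1  # avoid division by zero where no frame contributed
    return acc / cnt
```

    When the shifts cover every sub-pixel offset, the high-resolution grid is fully sampled and the original scene is recovered exactly in this idealized noise-free setting.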

  12. Multipurpose EPR loop-gap resonator and cylindrical TE011 cavity for aqueous samples at 94 GHz.

    PubMed

    Sidabras, Jason W; Mett, Richard R; Froncisz, Wojciech; Camenisch, Theodore G; Anderson, James R; Hyde, James S

    2007-03-01

    A loop-gap resonator (LGR) and a cylindrical TE(011) cavity resonator for use at W band, 94 GHz, have been designed and characterized using the Ansoft (Pittsburgh, PA) high frequency structure simulator (HFSS; Version 10.0). Field modulation penetration was analyzed using Ansoft MAXWELL 3D (Version 11.0). Optimizing both resonators to the same sample sizes shows that EPR signal intensities of the LGR and TE(011) are similar. The 3 dB bandwidth of the LGR, on the order of 1 GHz, is a new advantage for high frequency experiments. Ultraprecision electric discharge machining (EDM) was used to fabricate the resonators from silver. The TE(011) cavity has slots that are cut into the body to allow penetration of 100 kHz field modulation. The resonator body is embedded in graphite, also cut by EDM techniques, for a combination of reasons that include (i) reduced microwave leakage and improved TE(011) mode purity, (ii) field modulation penetration, (iii) structural support for the cavity body, and (iv) machinability by EDM. Both resonators use a slotted iris. Variable coupling is provided by a three-stub tuning element. A collet system designed to hold sample tubes has been implemented, increasing repeatability of sample placement and reducing sample vibration noise. Initial results include multiquantum experiments up to 9Q using the LGR to examine 1 mM 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) in aqueous solution at room temperature and field modulation experiments using the TE(011) cavity to obtain an EPR spectrum of 1 microM TEMPO.

  13. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    PubMed

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is actualized by seeking the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about 2 times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.

  14. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization the performance is calculated using the ray-tracing tool SolCal, while the costs of the heliostats are calculated with a detailed cost function. A genetic algorithm changes heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts, and a cost-performance ratio is calculated for each configuration. Based on that, the best geometry and field layout can be selected in each optimization step. This step is repeated until no significant improvement in the results is observed.
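
    The iterate-and-select loop described above can be sketched as a minimal genetic algorithm. The performance and cost models below are invented stand-ins for SolCal and the SIJ cost function; only the cost-performance selection logic mirrors the description:

```python
import random

# Joint "genome": facet width (m), facet height (m), row spacing (m).
BOUNDS = [(0.2, 2.0), (0.2, 2.0), (1.0, 10.0)]

def performance(w, h, spacing):
    """Annual optical yield proxy: mirror area with a crude shading penalty."""
    shading = max(0.2, 1.0 - (w * h) / spacing)
    return w * h * shading

def cost(w, h, spacing):
    """Cost proxy: fixed cost plus mirror area plus land use."""
    return 50.0 + 120.0 * w * h + 2.0 * spacing

def fitness(genome):
    """Cost-performance ratio; lower is better."""
    return cost(*genome) / performance(*genome)

def mutate(genome, rng):
    """Gaussian perturbation of each gene, clipped to its bounds."""
    return tuple(
        min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
        for x, (lo, hi) in zip(genome, BOUNDS)
    )

def optimize(generations=100, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in BOUNDS)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 3]             # keep the best third
        pop = elite + [mutate(rng.choice(elite), rng)
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness)
```

    Because the elite survive each generation, the best cost-performance ratio is non-increasing over generations, matching the "repeat until no significant improvement" stopping idea.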

  15. Scanning magnetic tunnel junction microscope for high-resolution imaging of remanent magnetization fields

    NASA Astrophysics Data System (ADS)

    Lima, E. A.; Bruno, A. C.; Carvalho, H. R.; Weiss, B. P.

    2014-10-01

    Scanning magnetic microscopy is a new methodology for mapping magnetic fields with high spatial resolution and field sensitivity. An important goal has been to develop high-performance instruments that do not require cryogenic technology due to its high cost, complexity, and limitation on sensor-to-sample distance. Here we report the development of a low-cost scanning magnetic microscope based on commercial room-temperature magnetic tunnel junction (MTJ) sensors that typically achieves spatial resolution better than 7 µm. By comparing different bias and detection schemes, optimal performance was obtained when biasing the MTJ sensor with a modulated current at 1.0 kHz in a Wheatstone bridge configuration while using a lock-in amplifier in conjunction with a low-noise custom-made preamplifier. A precision horizontal (x-y) scanning stage comprising two coupled nanopositioners controls the position of the sample and a linear actuator adjusts the sensor-to-sample distance. We obtained magnetic field sensitivities better than 150 nT/Hz^(1/2) between 0.1 and 10 Hz, which is a critical frequency range for scanning magnetic microscopy. This corresponds to a magnetic moment sensitivity of 10^-14 A m^2, a factor of 100 better than achievable with typical commercial superconducting moment magnetometers. It also represents an improvement in sensitivity by a factor between 10 and 30 compared to similar scanning MTJ microscopes based on conventional bias-detection schemes. To demonstrate the capabilities of the instrument, two polished thin sections of representative geological samples were scanned along with a synthetic sample containing magnetic microparticles. The instrument is usable for a diversity of applications that require mapping of samples at room temperature to preserve magnetic properties or viability, including paleomagnetism and rock magnetism, nondestructive evaluation of materials, and biological assays.

  16. Search for life on Mars in surface samples: Lessons from the 1999 Marsokhod rover field experiment

    USGS Publications Warehouse

    Newsom, Horton E.; Bishop, J.L.; Cockell, C.; Roush, T.L.; Johnson, J. R.

    2001-01-01

    The Marsokhod 1999 field experiment in the Mojave Desert included a simulation of a rover-based sample selection mission. As part of this mission, a test was made of strategies and analytical techniques for identifying past or present life in environments expected to be present on Mars. A combination of visual clues from high-resolution images and the detection of an important biomolecule (chlorophyll) with visible/near-infrared (NIR) spectroscopy led to the successful identification of a rock with evidence of cryptoendolithic organisms. The sample was identified in high-resolution images (3 times the resolution of the Imager for Mars Pathfinder camera) on the basis of a green tinge and textural information suggesting the presence of a thin, partially missing exfoliating layer revealing the organisms. The presence of chlorophyll bands in similar samples was observed in visible/NIR spectra of samples in the field and later confirmed in the laboratory using the same spectrometer. Raman spectroscopy in the laboratory, simulating a remote measurement technique, also detected evidence of carotenoids in samples from the same area. Laboratory analysis confirmed that the subsurface layer of the rock is inhabited by a community of coccoid Chroococcidiopsis cyanobacteria. The identification of minerals in the field, including carbonates and serpentine, that are associated with aqueous processes was also demonstrated using the visible/NIR spectrometer. Other lessons learned that are applicable to future rover missions include the benefits of web-based programs for target selection and for daily mission planning and the need for involvement of the science team in optimizing image compression schemes based on the retention of visual signature characteristics. Copyright 2000 by the American Geophysical Union.

  17. Systematic parameter inference in stochastic mesoscopic modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Li, Zhen

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
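
    The compressive-sensing step can be illustrated on a toy problem: recover sparse coefficients of a polynomial basis (here one-dimensional Legendre) from fewer samples than basis terms via l1-regularized least squares, solved with plain iterative soft thresholding. The problem sizes and the basis are illustrative assumptions, not the paper's gPC setup:

```python
import numpy as np

def legendre_design(x, order):
    """Design matrix of Legendre polynomials P_0..P_order at points x in [-1, 1]."""
    cols = [np.ones_like(x), x]
    for n in range(1, order):
        # Bonnet recursion: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
        cols.append(((2 * n + 1) * x * cols[-1] - n * cols[-2]) / (n + 1))
    return np.column_stack(cols)

def ista(Psi, y, lam=1e-3, iters=20000):
    """l1-regularized least squares via iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(Psi, 2) ** 2              # Lipschitz constant of the gradient
    c = np.zeros(Psi.shape[1])
    for _ in range(iters):
        z = c - Psi.T @ (Psi @ c - y) / L        # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 15)                   # only 15 "simulation samples"
Psi = legendre_design(x, 20)                     # 21 basis terms: underdetermined
c_true = np.zeros(21)
c_true[[0, 3, 7]] = [1.0, -0.5, 0.25]            # sparse "true" response surface
y = Psi @ c_true
c_hat = ista(Psi, y)
```

    With fewer samples than unknowns an ordinary least-squares fit is ill-posed; the l1 penalty exploits the sparsity prior, which is the same idea the paper applies to the dominant gPC terms.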

  18. Radio frequency coil technology for small-animal MRI.

    PubMed

    Doty, F David; Entzminger, George; Kulkarni, Jatin; Pamarthy, Kranti; Staab, John P

    2007-05-01

    A review of the theory, technology, and use of radio frequency (RF) coils for small-animal MRI is presented. It includes a brief overview of MR signal-to-noise (S/N) analysis and discussions of the various coils commonly used in small-animal MR: surface coils, linear volume coils, birdcages, and their derivatives. The scope is limited to mid-range coils, i.e. coils where the product (fd) of the frequency f and the coil diameter d is in the range 2-30 MHz-m. Common applications include mouse brain and body coils from 125 to 750 MHz, rat body coils up to 500 MHz, and small surface coils at all fields. In this regime, all the sources of loss (coil, capacitor, sample, shield, and transmission lines) are important. All such losses may be accurately captured in some modern full-wave 3D electromagnetics software, and new simulation results are presented for a selection of surface coils using Microwave Studio 2006 by Computer Simulation Technology, showing the dramatic importance of the "lift-off effect". Standard linear circuit simulators have been shown to be useful in optimization of complex coil tuning and matching circuits. There appears to be considerable potential for trading S/N for speed using phased arrays, especially for a larger field of view. Circuit simulators are shown to be useful for optimal mismatching of ultra-low-noise preamps based on the enhancement-mode pseudomorphic high-electron-mobility transistor for optimal coil decoupling in phased arrays. Cryogenically cooled RF coils are shown to offer considerable opportunity for future gains in S/N in smaller samples.

  19. Design of a sensitive grating-based phase contrast mammography prototype (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Arboleda Clavijo, Carolina; Wang, Zhentian; Köhler, Thomas; van Stevendaal, Udo; Martens, Gerhard; Bartels, Matthias; Villanueva-Perez, Pablo; Roessl, Ewald; Stampanoni, Marco

    2017-03-01

    Grating-based phase contrast mammography can help facilitate breast cancer diagnosis, as several research works have demonstrated. To translate this technique to the clinics, it has to be adapted to cover a large field of view within a limited exposure time and with a clinically acceptable radiation dose. This indicates that a straightforward approach would be to install a grating interferometer (GI) into a commercial mammography device. We developed a wave-propagation-based optimization method to select the most convenient GI designs in terms of phase and dark-field sensitivities for the Philips Microdose Mammography (PMM) setup. The phase sensitivity was defined as the minimum detectable breast tissue electron density gradient, whereas the dark-field sensitivity was defined as its corresponding signal-to-noise ratio (SNR). To derive sample-dependent sensitivity metrics, a visibility reduction model for breast tissue was formulated, based on previous research on the dark-field signal and utilizing available ultra-small-angle X-ray scattering (USAXS) data and the outcomes of measurements on formalin-fixed breast tissue specimens carried out in tube-based grating interferometers. The results of this optimization indicate that the optimal scenarios for each metric are different and fundamentally depend on the noise behavior of the signals and on the trend of visibility reduction with respect to the system autocorrelation length. In addition, since the inter-grating distance is constrained by the space available between the breast support and the detector, the most practical way to improve sensitivity is to use a small G2 pitch.

  20. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real time. However, to use current WIM sensors effectively, vehicles must exit the traffic stream and slow down to match the sensors' capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
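
    As a minimal sketch of a speed-dependent sample-rate rule, assume the requirement is to capture at least N points while a tire traverses a strain sensor of effective length L at speed v, giving f >= N*v/L. Both the rule and the numbers below are illustrative, not the study's derived guideline:

```python
def min_sample_rate(speed_kmh, sensor_length_m=0.6, samples_per_pulse=20):
    """Minimum sample rate (Hz) so that a tire crossing the sensor
    yields at least `samples_per_pulse` points: f = N * v / L."""
    v = speed_kmh / 3.6                 # km/h -> m/s
    return samples_per_pulse * v / sensor_length_m

# At 108 km/h (30 m/s) a 0.6 m sensor is crossed in 20 ms,
# so 20 samples per pulse requires a 1 kHz sample rate.
rate = min_sample_rate(108.0)
```

    The linear dependence on speed is the point: a fixed very high rate chosen for the worst case wastes storage at lower speeds, which is the redundancy problem the abstract describes.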

  1. Real-time PCR detection of Plasmodium directly from whole blood and filter paper samples

    PubMed Central

    2011-01-01

    Background Real-time PCR is a sensitive and specific method for the analysis of Plasmodium DNA. However, prior purification of genomic DNA from blood is necessary since PCR inhibitors and quenching of fluorophores from blood prevent efficient amplification and detection of PCR products. Methods Reagents designed to specifically overcome PCR inhibition and quenching of fluorescence were evaluated for real-time PCR amplification of Plasmodium DNA directly from blood. Whole blood from clinical samples and dried blood spots collected in the field in Colombia were tested. Results Amplification and fluorescence detection by real-time PCR were optimal with 40× SYBR® Green dye and 5% blood volume in the PCR reaction. Plasmodium DNA was detected directly from both whole blood and dried blood spots from clinical samples. The sensitivity and specificity ranged from 93-100% compared with PCR performed on purified Plasmodium DNA. Conclusions The methodology described facilitates high-throughput testing of blood samples collected in the field by fluorescence-based real-time PCR. This method can be applied to a broad range of clinical studies with the advantages of immediate sample testing, lower experimental costs and time-savings. PMID:21851640

  2. Local measurement of thermal conductivity and diffusivity

    DOE PAGES

    Hurley, David H.; Schley, Robert S.; Khafizov, Marat; ...

    2015-12-01

    Simultaneous measurement of local thermal diffusivity and conductivity is demonstrated on a range of ceramic samples. This was accomplished by measuring the spatial profile of the temperature field of samples excited by an amplitude-modulated continuous wave laser beam. A thin gold film is applied to the samples to ensure strong optical absorption and to establish a second boundary condition that introduces an expression containing the substrate thermal conductivity. The diffusivity and conductivity are obtained by comparing the measured phase profile of the temperature field to a continuum based model. A sensitivity analysis is used to identify the optimal film thickness for extracting both the substrate conductivity and diffusivity. Proof of principle studies were conducted on a range of samples having thermal properties that are representative of current and advanced accident tolerant nuclear fuels. It is shown that by including the Kapitza resistance as an additional fitting parameter, the measured conductivity and diffusivity of all the samples considered agree closely with literature values. Lastly, a distinguishing feature of this technique is that it does not require a priori knowledge of the optical spot size, which greatly increases measurement reliability and reproducibility.
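
    The phase-profile analysis can be illustrated with the simplest thermal-wave model: for harmonic heating the phase lag grows linearly with offset r as phi(r) = r*sqrt(omega/(2*alpha)), so the fitted slope yields the diffusivity alpha. This one-term model and the synthetic data below are illustrative; the paper fits a fuller continuum model that also yields conductivity:

```python
import numpy as np

alpha_true = 5e-6                       # m^2/s, ceramic-like diffusivity
f_mod = 1000.0                          # laser modulation frequency (Hz)
omega = 2 * np.pi * f_mod

# Synthetic phase profile: linear in offset with an arbitrary constant offset.
r = np.linspace(20e-6, 200e-6, 30)      # distances from the heating spot (m)
phase = r * np.sqrt(omega / (2 * alpha_true)) + 0.3

# Fit the slope s of phase vs distance; then alpha = omega / (2 s^2).
slope = np.polyfit(r, phase, 1)[0]
alpha_est = omega / (2 * slope**2)
```

    Note that only the slope matters, not the absolute phase or the spot size, which echoes the abstract's point that the method needs no a priori knowledge of the optical spot size.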

  3. Local measurement of thermal conductivity and diffusivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurley, David H.; Schley, Robert S.; Khafizov, Marat

    2015-12-15

    Simultaneous measurement of local thermal diffusivity and conductivity is demonstrated on a range of ceramic samples. This was accomplished by measuring the spatial profile of the temperature field of samples excited by an amplitude-modulated continuous wave laser beam. A thin gold film is applied to the samples to ensure strong optical absorption and to establish a second boundary condition that introduces an expression containing the substrate thermal conductivity. The diffusivity and conductivity are obtained by comparing the measured phase profile of the temperature field to a continuum based model. A sensitivity analysis is used to identify the optimal film thickness for extracting both the substrate conductivity and diffusivity. Proof of principle studies were conducted on a range of samples having thermal properties that are representative of current and advanced accident tolerant nuclear fuels. It is shown that by including the Kapitza resistance as an additional fitting parameter, the measured conductivity and diffusivity of all the samples considered agreed closely with the literature values. A distinguishing feature of this technique is that it does not require a priori knowledge of the optical spot size, which greatly increases measurement reliability and reproducibility.

  4. Tunable dynamic response of magnetic gels: Impact of structural properties and magnetic fields

    NASA Astrophysics Data System (ADS)

    Tarama, Mitsusuke; Cremer, Peet; Borin, Dmitry Y.; Odenbach, Stefan; Löwen, Hartmut; Menzel, Andreas M.

    2014-10-01

    Ferrogels and magnetic elastomers feature mechanical properties that can be reversibly tuned from outside through magnetic fields. Here we concentrate on the question of how their dynamic response can be adjusted. The influence of three factors on the dynamic behavior is demonstrated using appropriate minimal models: first, the orientational memory imprinted into one class of the materials during their synthesis; second, the structural arrangement of the magnetic particles in the materials; and third, the strength of an external magnetic field. To illustrate the latter point, structural data are extracted from a real experimental sample and analyzed. Understanding how internal structural properties and external influences impact the dominant dynamical properties helps to design materials that optimize the requested behavior.

  5. Patterned growth of carbon nanotubes over vertically aligned silicon nanowire bundles for achieving uniform field emission.

    PubMed

    Hung, Yung-Jr; Huang, Yung-Jui; Chang, Hsuan-Chen; Lee, Kuei-Yi; Lee, San-Liang

    2014-01-01

    A fabrication strategy is proposed to enable precise coverage of as-grown carbon nanotube (CNT) mats atop vertically aligned silicon nanowire (VA-SiNW) bundles in order to realize a uniform bundle array of CNT-SiNW heterojunctions over a large sample area. No obvious electrical degradation of as-fabricated SiNWs is observed according to the measured current-voltage characteristic of a two-terminal single-nanowire device. Bundle arrangement of CNT-SiNW heterojunctions is optimized to relax the electrostatic screening effect and to maximize the field enhancement factor. As a result, superior field emission performance and relatively stable emission current over 12 h is obtained. A bright and uniform fluorescent radiation is observed from CNT-SiNW-based field emitters regardless of its bundle periodicity, verifying the existence of high-density and efficient field emitters on the proposed CNT-SiNW bundle arrays.

  6. Topology optimized permanent magnet systems

    NASA Astrophysics Data System (ADS)

    Bjørk, R.; Bahl, C. R. H.; Insinga, A. R.

    2017-09-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  7. Field development planning using simulated annealing - optimal economic well scheduling and placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckner, B.L.; Xong, X.

    1995-12-31

    A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, non-uniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
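
    The travelling-salesman framing can be sketched as simulated annealing over the drilling order of a fixed well set, maximizing a toy discounted NPV. The rates, discounting, and swap move are invented for illustration and far simpler than PEGASUS coupled to an economics package:

```python
import math
import random

RATES = [120.0, 80.0, 60.0, 150.0, 95.0, 70.0]   # per-well production proxies
DISCOUNT = 0.10                                  # annual discount rate

def npv(order):
    """A well drilled in slot t starts producing at year t; value is discounted."""
    return sum(RATES[w] / (1 + DISCOUNT) ** t for t, w in enumerate(order))

def anneal(seed=0, steps=20000, t0=50.0, cooling=0.9995):
    rng = random.Random(seed)
    order = list(range(len(RATES)))
    rng.shuffle(order)
    best = order[:]
    T = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)   # propose a swap (TSP-style move)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        dv = npv(cand) - npv(order)
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if dv > 0 or rng.random() < math.exp(dv / T):
            order = cand
            if npv(order) > npv(best):
                best = order[:]
        T *= cooling                              # geometric cooling schedule
    return best
```

    In this toy model the optimum is simply "drill the highest-rate wells first" (a rearrangement-inequality argument), which makes it easy to verify that the annealer reaches the global optimum; the real problem adds interference and variable reservoir properties, which is where annealing earns its keep.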

  8. Subsurface Noble Gas Sampling Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrigan, C. R.; Sun, Y.

    2017-09-18

    The intent of this document is to provide information about best available approaches for performing subsurface soil gas sampling during an On Site Inspection, or OSI. This information is based on field sampling experiments, computer simulations, and data from the NA-22 Noble Gas Signature Experiment Test Bed at the Nevada Nuclear Security Site (NNSS). The approaches should optimize the gas concentration from the subsurface cavity or chimney regime while simultaneously minimizing the potential for atmospheric radioxenon and near-surface Argon-37 contamination. Where possible, we quantitatively assess differences in sampling practices for the same sets of environmental conditions. We recognize that all sampling scenarios cannot be addressed. However, if this document helps to inform the intuition of the reader about addressing the challenges resulting from the inevitable deviations from the scenario assumed here, it will have achieved its goal.

  9. Numerical optimization of perturbative coils for tokamaks

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team

    2014-10-01

    Numerical optimization of coils which apply three dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectra which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.

  10. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. First, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimension of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved to take in vivo conditions into account and be extensively validated.
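
    The thermal dose formula referred to above is commonly the CEM43 expression (cumulative equivalent minutes at 43 degC); a minimal sketch, assuming the standard R values, is:

```python
import numpy as np

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 degC for a temperature trace.

    Each time step contributes dt * R**(43 - T) equivalent minutes,
    with R = 0.5 at or above 43 degC and R = 0.25 below.
    """
    dt_min = dt_s / 60.0
    R = np.where(temps_c >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * R ** (43.0 - temps_c)))

# 30 s held at 50 degC: 0.5 min of exposure weighted by 0.5**(-7) = 128,
# i.e. 64 equivalent minutes at 43 degC.
dose = cem43(np.full(30, 50.0), dt_s=1.0)
```

    The steep exponential weighting is what makes the focal trajectory matter: small temperature non-uniformities translate into large dose non-uniformities, which the optimized trajectories are designed to even out.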

  11. Compressed Sensing for fMRI: Feasibility Study on the Acceleration of Non-EPI fMRI at 9.4T

    PubMed Central

    Kim, Seong-Gi; Ye, Jong Chul

    2015-01-01

    Conventional functional magnetic resonance imaging (fMRI) technique known as gradient-recalled echo (GRE) echo-planar imaging (EPI) is sensitive to image distortion and degradation caused by local magnetic field inhomogeneity at high magnetic fields. Non-EPI sequences such as spoiled gradient echo and balanced steady-state free precession (bSSFP) have been proposed as an alternative high-resolution fMRI technique; however, the temporal resolution of these sequences is lower than the typically used GRE-EPI fMRI. One potential approach to improve the temporal resolution is to use compressed sensing (CS). In this study, we tested the feasibility of k-t FOCUSS—one of the high performance CS algorithms for dynamic MRI—for non-EPI fMRI at 9.4T using the model of rat somatosensory stimulation. To optimize the performance of CS reconstruction, different sampling patterns and k-t FOCUSS variations were investigated. Experimental results show that an optimized k-t FOCUSS algorithm with acceleration by a factor of 4 works well for non-EPI fMRI at high field under various statistical criteria, which confirms that a combination of CS and a non-EPI sequence may be a good solution for high-resolution fMRI at high fields. PMID:26413503

  12. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly when solving the training procedure of CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
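
    The OGM update described (current gradient plus historical-gradient momentum, with the step size set by the Lipschitz constant) can be sketched on an l1-regularized least-squares surrogate. This is a Nesterov-style accelerated proximal gradient on synthetic data, not the CRF likelihood itself:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_l1(A, b, lam, iters=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 with acceleration."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    theta = 1.0
    for _ in range(iters):
        # Gradient step at the extrapolated point, then prox of the l1 term.
        x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
        theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))
        # Momentum: combine the current iterate with the previous one.
        z = x_new + ((theta - 1.0) / theta_new) * (x_new - x)
        x, theta = x_new, theta_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[[2, 9]] = [1.0, -2.0]                      # only two relevant "features"
b = A @ x_true
x_hat = accelerated_l1(A, b, lam=1e-3)
```

    The momentum term is what lifts the O(1/k) rate of plain proximal gradient to the optimal O(1/k^2) for this problem class, and the l1 penalty zeroes out the irrelevant coordinates, mirroring the feature-selection behavior claimed for RCRFs.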

  13. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    USGS Publications Warehouse

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling capture primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low hydraulic conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water, or screen storage) will be captured during this initial period, until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by the in-well vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that the computed time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.
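
    In the simplest plug-flow sketch, the in-well time-of-travel idea reduces to dividing the bore volume between the inflow zone and the pump intake by the pumping rate. The geometry and numbers below are illustrative, not the paper's analytical model:

```python
import math

def arrival_time_min(well_diameter_m, separation_m, rate_l_min):
    """Plug-flow estimate of when formation water first reaches the intake.

    Water entering the screen at the high-conductivity zone must travel
    vertically through the bore to the intake, so the first arrival is
    roughly (bore volume between inflow zone and intake) / (pumping rate).
    """
    area = math.pi * (well_diameter_m / 2.0) ** 2   # bore cross-section (m^2)
    volume_l = area * separation_m * 1000.0         # m^3 -> litres
    return volume_l / rate_l_min

# 5 cm diameter well, intake 1 m from the inflow zone, pumping at 0.3 L/min:
# about 6.5 minutes of purging before formation water even arrives.
t = arrival_time_min(0.05, 1.0, 0.3)
```

    Even this crude estimate shows why a fixed purge time can be inadequate: doubling the intake-to-inflow separation or halving the pumping rate doubles the minimum purge duration.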

  14. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

... an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample ...
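The snippet above concerns sample average approximation. A minimal sketch of the idea, using a newsvendor-style problem that I introduce purely for illustration (the problem, demand distribution, and grid search are my assumptions, not the report's): replace the expectation in the stochastic objective with an average over a fixed sample, then hand the resulting deterministic problem to any standard optimizer.

```python
import random

def saa_newsvendor(cost=1.0, price=2.0, n_samples=1000, seed=42):
    """Sample average approximation (SAA) for a toy newsvendor problem:
    minimize E[cost*q - price*min(q, D)] over order quantity q, with random
    demand D ~ Uniform(0, 100) replaced by a fixed sample of n_samples draws.
    The problem and numbers are illustrative, not from the report."""
    rng = random.Random(seed)
    demands = [rng.uniform(0.0, 100.0) for _ in range(n_samples)]

    def sample_average(q):
        # Deterministic surrogate objective built from the fixed sample;
        # any standard optimizer could minimize it (a grid suffices here).
        return sum(cost * q - price * min(q, d) for d in demands) / n_samples

    candidates = [q / 2.0 for q in range(0, 201)]  # q in [0, 100], step 0.5
    return min(candidates, key=sample_average)
```

With these parameters the true optimum is the 50th demand percentile (q* = 50), and the SAA estimate approaches it as the sample grows; the report's question is how to split a fixed computing budget between the sample size and the effort spent by the optimizer.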

  15. Determination of Aniline and Its Derivatives in Environmental Water by Capillary Electrophoresis with On-Line Concentration

    PubMed Central

    Liu, Shuhui; Wang, Wenjun; Chen, Jie; Sun, Jianzhi

    2012-01-01

    This paper describes a simple, sensitive and environmentally benign method for the direct determination of aniline and its derivatives in environmental water samples by capillary zone electrophoresis (CZE) with field-enhanced sample injection. The parameters that influenced the enhancement and separation efficiencies were investigated. Surprisingly, under the optimized conditions, two linear ranges for the calibration plot, 1–50 ng/mL and 50–1000 ng/mL (R > 0.998), were obtained. The detection limit was in the range of 0.29–0.43 ng/mL. To eliminate the effect of the real sample matrix on the stacking efficiency, the standard addition method was applied to the analysis of water samples from local rivers. PMID:22837668

  16. Rapid Automated Sample Preparation for Biological Assays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shusteff, M

Our technology utilizes acoustic, thermal, and electric fields to separate out contaminants such as debris or pollen from environmental samples, lyse open cells, and extract the DNA from the lysate. The objective of the project is to optimize the system described for a forensic sample, and demonstrate its performance for integration with downstream assay platforms (e.g. MIT-LL's ANDE). We intend to increase the quantity of DNA recovered from the sample beyond the current ~80% achieved using solid phase extraction methods. Task 1: Develop and test an acoustic filter for cell extraction. Task 2: Develop and test a lysis chip. Task 3: Develop and test a DNA extraction chip. All chips have been fabricated based on the designs laid out in last month's report.

  17. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump had already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. To limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulations based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model are run in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
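The core ranking step inside NSGA-II is non-dominated sorting: a candidate design survives if no other candidate is at least as good in every objective and strictly better in one. A minimal sketch of that filter (minimization in every component; this is not the paper's full optimizer, which also uses a surrogate model and crowding-distance selection):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    assuming every objective is to be minimized. This is the first-front
    computation used by NSGA-II-style algorithms."""
    front = []
    for p in points:
        # p is dominated if some other point q is <= p in all objectives
        # and differs from p in at least one.
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, `pareto_front([(1, 5), (2, 2), (5, 1), (3, 3)])` keeps the first three points and discards (3, 3); an optimizer such as NSGA-II then searches for designs spread along such a front, trading one pump objective against the other.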

  18. A doping dependent study of interplay between magnetic and superconducting properties in BaFe2-x Co x As2 single crystals

    NASA Astrophysics Data System (ADS)

    Bag, Biplab; Shaw, Gorky; Banerjee, S. S.; Vinod, K.; Bharathi, A.

    2018-03-01

We show a strong interplay between magnetic and superconducting order in three BaFe2-xCoxAs2 single crystals with different x. Our study reveals the coexistence of magnetic fluctuations with superconducting order in our samples; both the strength of the magnetic fluctuations and the pinning properties are found to be strongest for the optimally doped sample and weakest for the overdoped sample. Using local magnetization measurements, we show that applying an external magnetic field to our samples suppresses the magnetic fluctuations and enhances the diamagnetic response. Further, we show the presence of unusual superconducting fluctuations above Tc in our samples, which we find depends strongly on the strength of the magnetic fluctuations. We believe our data suggest a possible role of magnetic fluctuations in mediating superconducting fluctuations above Tc in our samples.

  19. Sampling Simulations for Assessing the Accuracy of U.S. Agricultural Crop Mapping from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Dwyer, Linnea; Yadav, Kamini; Congalton, Russell G.

    2017-04-01

Providing adequate food and water for a growing global population continues to be a major challenge. Mapping and monitoring crops are useful tools for estimating the extent of crop productivity. GFSAD30 (Global Food Security-Support Analysis Data at 30 m) is a NASA-funded program that is producing global cropland maps using field measurements and remote sensing images. The program studies 8 major crop types and includes information on cropland area/extent, whether crops are irrigated or rainfed, and cropping intensities. Using results from the US and the extensive reference data available from the USDA Cropland Data Layer (CDL), we will experiment with various sampling simulations to determine optimal sampling for thematic map accuracy assessment. These simulations will vary the sampling unit, the sampling strategy, and the sample number. Their results will allow us to recommend assessment approaches for handling different cropping scenarios.
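The sampling simulations described above can be sketched as repeated random draws of reference pixels: for each candidate sample size, estimate the map's overall accuracy many times and watch how tightly the estimates cluster. The data below are synthetic stand-ins, not CDL values.

```python
import random

def accuracy_estimates(map_labels, ref_labels, sample_size, n_trials=200, seed=1):
    """Simulated sampling for thematic-map accuracy assessment: repeatedly
    draw random pixel samples, compute overall accuracy against the
    reference labels, and report the mean and spread of the estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        picks = rng.sample(range(len(map_labels)), sample_size)
        hits = sum(map_labels[i] == ref_labels[i] for i in picks)
        estimates.append(hits / sample_size)
    mean = sum(estimates) / n_trials
    return mean, max(estimates) - min(estimates)
```

Running this with increasing sample_size shows the estimate tightening around the true accuracy; choosing the smallest size whose spread is acceptable is the kind of recommendation such simulations support.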

  20. Evaluation of sample holders designed for long-lasting X-ray micro-tomographic scans of ex-vivo soft tissue samples

    NASA Astrophysics Data System (ADS)

    Dudak, J.; Zemlicka, J.; Krejci, F.; Karch, J.; Patzelt, M.; Zach, P.; Sykora, V.; Mrzilkova, J.

    2016-03-01

X-ray microradiography and microtomography are imaging techniques with increasing applicability in the field of biomedical and preclinical research. Application of the hybrid pixel detector Timepix makes it possible to obtain very high contrast for low-attenuating materials such as soft biological tissue. However, X-ray imaging of ex-vivo soft tissue samples is a difficult task because of their structural instability. Ex-vivo biological tissue is prone to rapid drying-out, which is associated with undesired changes in sample size and shape that later produce artefacts in the tomographic reconstruction. In this work we present the optimization of our Timepix-equipped micro-CT system, aiming to keep soft tissue samples in stable condition. With the suggested approach, higher contrast of tomographic reconstructions can be achieved, and large samples that require detector scanning can also be easily measured.

  1. Sub-nanometer Resolution Imaging with Amplitude-modulation Atomic Force Microscopy in Liquid

    PubMed Central

    Farokh Payam, Amir; Piantanida, Luca; Cafolla, Clodomiro; Voïtchovsky, Kislon

    2016-01-01

Atomic force microscopy (AFM) has become a well-established technique for nanoscale imaging of samples in air and in liquid. Recent studies have shown that when operated in amplitude-modulation (tapping) mode, atomic- or molecular-level resolution images can be achieved over a wide range of soft and hard samples in liquid. In these situations, small oscillation amplitudes (SAM-AFM) enhance the resolution by exploiting the solvated liquid at the surface of the sample. Although the technique has been successfully applied across fields as diverse as materials science, biology, biophysics, and surface chemistry, obtaining high-resolution images in liquid can still be challenging for novice users. This is partly due to the large number of variables to control and optimize, such as the choice of cantilever, the sample preparation, and the correct manipulation of the imaging parameters. Here, we present a protocol for achieving high-resolution images of hard and soft samples in fluid using SAM-AFM on a commercial instrument. Our goal is to provide a step-by-step practical guide to achieving high-resolution images, including the cleaning and preparation of the apparatus and the sample, the choice of cantilever, and the optimization of the imaging parameters. For each step, we explain the scientific rationale behind our choices to facilitate the adaptation of the methodology to each user's specific system. PMID:28060262

  2. Research on inverse, hybrid and optimization problems in engineering sciences with emphasis on turbomachine aerodynamics: Review of Chinese advances

    NASA Technical Reports Server (NTRS)

    Liu, Gao-Lian

    1991-01-01

    Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.

  3. Pulsed-voltage atom probe tomography of low conductivity and insulator materials by application of ultrathin metallic coating on nanoscale specimen geometry.

    PubMed

    Adineh, Vahid R; Marceau, Ross K W; Chen, Yu; Si, Kae J; Velkov, Tony; Cheng, Wenlong; Li, Jian; Fu, Jing

    2017-10-01

We present a novel approach for the analysis of low-conductivity and insulating materials with conventional pulsed-voltage atom probe tomography (APT), incorporating an ultrathin metallic coating on focused ion beam prepared needle-shaped specimens. Finite element electrostatic simulations of coated atom probe specimens were performed, which suggest a remarkable improvement in uniform voltage distribution and subsequent field evaporation of the insulated samples with a metallic coating of approximately 10 nm thickness. Using a design-of-experiments technique, an experimental investigation was performed to study physical vapor deposition coating of needle specimens with end tip radii less than 100 nm. The final geometries of the coated APT specimens were characterized with high-resolution scanning electron microscopy and transmission electron microscopy, and an empirical model was proposed to determine the optimal coating thickness for a given specimen size. The optimal coating strategy was applied to APT specimens of resin-embedded Au nanospheres. The results demonstrate that this strategy enables unique pulsed-voltage atom probe analysis and 3D imaging of biological and insulated samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Magnetomotive room temperature dicationic ionic liquid: a new concept toward centrifuge-less dispersive liquid-liquid microextraction.

    PubMed

    Beiraghi, Asadollah; Shokri, Masood; Seidi, Shahram; Godajdar, Bijan Mombani

    2015-01-09

A new centrifuge-less dispersive liquid-liquid microextraction technique, based on the application of a magnetomotive room temperature dicationic ionic liquid followed by electrothermal atomic absorption spectrometry (ETAAS), was developed for the first time for the preconcentration and determination of trace amounts of gold and silver in water and ore samples. Magnetic ionic liquids not only have the excellent properties of ionic liquids but also respond strongly to an external magnetic field. These properties give magnetic ionic liquids more advantages and broader application prospects than conventional ionic liquids in extraction processes. In this work, thio-Michler's ketone (TMK) was used as the chelating agent to form Ag/Au-TMK complexes. Several important factors affecting extraction efficiency, including extraction time, vortex agitator rate, sample solution pH, chelating agent concentration, and ionic liquid volume, as well as the effects of interfering species, were investigated and optimized. Under the optimal conditions, the limits of detection (LOD) were 3.2 and 7.3 ng L(-1), with preconcentration factors of 245 and 240 for Au and Ag, respectively. The precision values (RSD%, n=7) were 5.3% and 5.8% at a concentration level of 0.05 μg L(-1) for Au and Ag, respectively. The relative recoveries for the spiked samples were in the acceptable range of 96-104.5%. The results demonstrated that, except for Hg(2+), no remarkable interference in the determination of Au and Ag was created by various other ions; the tolerance limits (WIon/WAu or Ag) of major cations and anions were in the range of 250-1000. The validated method was successfully applied to the analysis of Au and Ag in some water and ore samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Sample distribution in peak mode isotachophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il

We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes which better represent real buffer systems and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments, and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model for studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective-on rate which depends on reactants distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.

  6. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine the optimal sample size, optimal sample times, and number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
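One common optimal design criterion mentioned in such reviews is D-optimality: choose sampling times that maximize the determinant of the Fisher information for the model parameters, so each blood draw is maximally informative. The toy below applies this to a one-compartment IV bolus model; the model, parameter values, and candidate times are my illustrative assumptions, not examples from the review.

```python
import math
from itertools import combinations

def d_optimal_pair(dose=100.0, vol=10.0, k=0.2,
                   candidates=(0.5, 1, 2, 4, 8, 12, 24)):
    """Toy D-optimal design for a one-compartment IV bolus model
    C(t) = (dose/vol) * exp(-k*t): choose the pair of sampling times (h)
    that maximizes the determinant of the 2x2 Fisher information matrix
    for the parameters (vol, k). All values are illustrative."""
    def sensitivities(t):
        c = (dose / vol) * math.exp(-k * t)
        return (-c / vol, -c * t)  # dC/dvol, dC/dk

    def det_fim(times):
        # Sum of rank-one outer products g*g^T over the chosen times.
        m = [[0.0, 0.0], [0.0, 0.0]]
        for t in times:
            g = sensitivities(t)
            for i in range(2):
                for j in range(2):
                    m[i][j] += g[i] * g[j]
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    return max(combinations(candidates, 2), key=det_fim)
```

Here the criterion selects one early and one mid-elimination time rather than two late ones; with more parameters and a residual-error model, the same machinery yields the optimal number of samples per patient that the review discusses.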

  7. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    PubMed

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology, previously evaluated by [1], consists of pulling air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can be treated like a groundwater sample to perform routine CSIA, by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity for TCE and benzene, efficiently dissolving the compounds during their transit through the solvent. The method detection limit for TCE (5±1 μg/m³) and benzene (1.7±0.5 μg/m³) is lower when using TGDE than when using methanol, which was used previously (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ13C analysis. Owing to a different analytical procedure, the method detection limit associated with δ37Cl analysis was found to be 156±6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent), and the final TCE concentration in the solvent, the gas-phase TCE concentration prevailing during the sampling event can be determined. Moreover, the possibility of analysing the TCE concentration in the solvent after sampling (or that of other targeted VOCs) allows field deployment of the sampling method without the need to determine the initial gas-phase TCE concentration. The simplified field deployment of the solvent-based dissolution method, combined with the conventional analytical procedure used for groundwater samples, substantially facilitates the application of CSIA to gas-phase studies. Copyright © 2017 Elsevier B.V. All rights reserved.
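The back-calculation described above can be sketched with a simple mass balance. Assuming a well-mixed solvent that exchanges with the sampled air as dM/dt = Q·(C_gas − C_solvent/K), the final solvent concentration determines the time-averaged gas-phase concentration; the mixing model and the numbers in the test are my assumptions, not the paper's calibration.

```python
import math

def gas_phase_concentration(c_solvent, partition_k, flow_l_min, minutes, v_solvent_l):
    """Back-calculate the time-averaged gas-phase VOC concentration from the
    final solvent concentration, for a well-mixed solvent absorber.
    c_solvent   : final VOC concentration in the solvent
    partition_k : solvent/air partition coefficient K (C_solvent/C_gas at eq.)
    flow_l_min  : air sampling rate Q (L/min)
    minutes     : sampling duration t (min)
    v_solvent_l : solvent volume V (L)
    Solves C_solvent(t) = K*C_gas*(1 - exp(-Q*t/(K*V))) for C_gas."""
    saturation = 1.0 - math.exp(-flow_l_min * minutes / (partition_k * v_solvent_l))
    return c_solvent / (partition_k * saturation)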
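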

  8. The X-IFU end-to-end simulations performed for the TES array optimization exercise

    NASA Astrophysics Data System (ADS)

    Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.

    2015-09-01

The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes, as the baseline, an array of ~4000 single-size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could, however, be considered, combining TES of different properties (e.g. size). In an attempt to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternatives to the baseline TES array configuration have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated so that the results can be scientifically assessed and compared. In this contribution, we describe the simulation set-up for the various array configurations and highlight some results of the simulated test cases.

  9. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  10. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

Nowadays, swarm intelligence optimization has become an important optimization tool that is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solution obtained by the algorithm for a practical problem; this greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance, dividing the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using standard statistical techniques, the evaluation result is obtained. To validate the proposed method, several intelligent algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
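The "ordinal performance" viewpoint asks how likely a search is to land in a "good enough" set rather than how close it gets to the optimal value. A standard baseline from ordinal optimization (shown here to illustrate the idea, not the paper's clustering-based estimator) is the alignment probability of blind uniform sampling:

```python
def alignment_probability(n_samples, good_fraction):
    """Probability that at least one of n independent uniform samples of the
    solution space lands in the top 'good enough' fraction. This is the
    blind-sampling baseline of ordinal optimization, against which smarter
    algorithms such as ACO or PSO can be compared."""
    return 1.0 - (1.0 - good_fraction) ** n_samples
```

Ten random tours have under a 10% chance of hitting the best 1% of solutions, while a thousand make it near-certain; the paper's method refines this picture by using the clustered structure of the actual search space and the behavior of the algorithm.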

  11. Recent developments on field gas extraction and sample preparation methods for radiokrypton dating of groundwater

    NASA Astrophysics Data System (ADS)

    Yokochi, Reika

    2016-09-01

Current and foreseen population growth will lead to increased demand for freshwater, large quantities of which are stored as groundwater. The ventilation age is crucial to the assessment of groundwater resources, complementing the hydrological model approach based on hydrogeological parameters. The ultra-trace radioactive isotopes of Kr (81Kr and 85Kr) possess ideal physical and chemical properties for groundwater dating. The recent advent of atom trap trace analysis (ATTA) has enabled determination of ultra-trace noble gas radioisotope abundances using 5-10 μL of pure Kr. Anticipated developments will enable ATTA to analyze radiokrypton isotope abundances at high sample throughput, which necessitates simple and efficient sample preparation techniques that are adaptable to various sample chemistries. Recent developments at the University of Chicago in field gas extraction devices and a simple, rapid Kr separation method are presented herein. Two field gas extraction devices optimized for different sampling conditions were recently designed and constructed, aiming at operational simplicity and portability. A newly developed Kr purification system enriches Kr by flowing a sample gas through a moderately cooled (138 K) activated charcoal column, followed by gentle fractionating desorption. This simple process uses a single adsorbent and separates 99% of the bulk atmospheric gases from Kr without significant loss. Two subsequent stages of gas chromatographic separation and a hot Ti sponge getter further purify the Kr-enriched gas. Abundant CH4 necessitates multiple passages through one of the gas chromatographic separation columns. The presented Kr separation system has a demonstrated capability of extracting Kr with >90% yield and 99% purity within 75 min from 1.2 to 26.8 L STP of atmospheric air with various concentrations of CH4. The apparatuses have been successfully deployed for sampling in the field and for purification of groundwater samples.

  12. Microwave Nondestructive Evaluation of Dielectric Materials with a Metamaterial Lens

    NASA Technical Reports Server (NTRS)

    Shreiber, Daniel; Gupta, Mool; Cravey, Robin L.

    2008-01-01

A novel microwave Nondestructive Evaluation (NDE) sensor was developed in an attempt to increase the sensitivity of the microwave NDE method for the detection of defects that are small relative to a wavelength. The sensor was designed on the basis of a negative index material (NIM) lens. Characterization of the lens was performed to determine its resonant frequency, index of refraction, focus spot size, and optimal focusing length (for proper sample location). A sub-wavelength spot size (3 dB) of 0.48λ was obtained. The proof of concept for the sensor was achieved when a fiberglass sample with a 3 mm diameter through hole (perpendicular to the propagation direction of the wave) was tested. The hole was successfully detected with an 8.2 cm wavelength electromagnetic wave; the method is thus able to detect a defect that is 0.037λ across. This method has certain advantages over other far-field and near-field microwave NDE methods currently in use.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xing; Ibrahim, Yehia M.; Chen, Tsung-Chi

We report the first evaluation of a platform coupling a high-speed field asymmetric ion mobility spectrometry microchip (µFAIMS) with drift tube ion mobility and mass spectrometry (IMS-MS). The µFAIMS/IMS-MS platform was used to analyze biological samples and simultaneously acquire multidimensional information on detected features from the measured FAIMS compensation fields and IMS drift times, while also obtaining accurate ion masses. These separations increase the overall separation power, resulting in increased information content, and provide more complete characterization of complex samples. The separation conditions were optimized for sensitivity and resolving power through the selection of gas compositions and pressures in the FAIMS and IMS separation stages. The resulting performance provided three-dimensional separations, benefitting both broad complex-mixture studies and targeted analyses by, e.g., improving isomeric separations and allowing detection of species obscured by "chemical noise" and other interfering peaks.

  14. Canine parvovirus in vaccinated dogs: a field study.

    PubMed

    Miranda, C; Thompson, G

    2016-04-16

The authors report a field study that investigated the canine parvovirus (CPV) strains present in dogs that developed the disease after being vaccinated. Faecal samples from 78 dogs that had been vaccinated against CPV and later presented with clinical signs suggestive of parvovirus infection were used. Fifty (64.1 per cent) samples tested positive for CPV by PCR. No CPV vaccine type was detected. Disease caused by CPV-2b occurred in older and female dogs compared with that caused by CPV-2c. The clinical signs presented by infected dogs were similar whichever of the two variants was involved. In most cases of disease, the resulting infection by field variants occurred shortly after CPV vaccination. Two dogs that had completed a full vaccination schedule and presented with clinical signs 10 days after vaccination were associated with the CPV-2c variant. Phylogenetic studies showed a close relationship between the isolates from vaccinated dogs and European field strains. Despite the limited sample size of this study, the findings point to the significance of continuous molecular typing of the virus as a tool to monitor the prevalent circulating CPV strains and to assess the efficacy of current vaccines. Adjustments to the vaccine types used may have to be re-evaluated according to each epidemiological situation in order to achieve dogs' optimal immune protection against CPV.

  15. Persistent Organic Pollutant Determination in Killer Whale Scat Samples: Optimization of a Gas Chromatography/Mass Spectrometry Method and Application to Field Samples.

    PubMed

    Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K

    2016-01-01

    Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions.
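The matrix-matched validation above rests on correlating paired scat and blubber measurements per whale. A minimal sketch of that statistic (the Pearson correlation coefficient; the paired values in the test are synthetic, not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences of
    paired measurements, e.g. a toxicant's concentration in scat vs blubber
    for the same individuals."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A value near +1 across target analytes, with a small p-value from the corresponding t-test, is the kind of evidence the study reports for 22 whales with measurements in both matrices.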

  16. Design optimization of superconducting coils based on asymmetrical characteristics of REBCO tapes

    NASA Astrophysics Data System (ADS)

    Hong, Zhiyong; Li, Wenrong; Chen, Yanjun; Gömöry, Fedor; Frolek, Lubomír; Zhang, Min; Sheng, Jie

    2018-07-01

    The angular dependence Ic(B,θ) of a superconducting tape is a crucial parameter for calculating the influence of magnetic field during the design of superconducting applications. This paper focuses on the asymmetrical characteristics found in REBCO tapes and on further applications based on this phenomenon. It starts with angle dependence measurements of different HTS tapes, in which asymmetrical characteristics are found in some of the test samples. On the basis of this property, the optimization of superconducting coils in a superconducting motor, transformer, and insert magnet is discussed by simulation. Simplified experiments representing the structure of an insert magnet were carried out to prove the validity of the numerical studies. The conclusions obtained in this paper show that the asymmetrical property of superconducting tape is quite important in the design of superconducting applications, and that an optimized winding technique based on this property can be used to improve the performance of superconducting devices.

  17. Fire assisted pastoralism vs. sustainable forestry--the implications of missing markets for carbon in determining optimal land use in the wet-dry tropics of Australia.

    PubMed

    Ockwell, David; Lovett, Jon C

    2005-04-01

    Using Cape York Peninsula, Queensland, Australia as a case study, this paper combines field sampling of woody vegetation with cost-benefit analysis to compare the social optimality of fire-assisted pastoralism with sustainable forestry. Carbon sequestration is estimated to be significantly higher in the absence of fire. Integration of carbon sequestration benefits for mitigating future costs of climate change into cost-benefit analysis demonstrates that sustainable forestry is a more socially optimal land use than fire-assisted pastoralism. Missing markets for carbon, however, imply that fire-assisted pastoralism will continue to be pursued in the absence of policy intervention. Creation of markets for carbon represents a policy solution that has the potential to drive land use away from fire-assisted pastoralism towards sustainable forestry and environmental conservation.

  18. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele

    Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc{sub multi-field}) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for the oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, it was evident that the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc{sub multi-field} plans. 
Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be implemented into routine clinical practice.

  19. Design and performance of an ultra-high vacuum spin-polarized scanning tunneling microscope operating at 30 mK and in a vector magnetic field

    NASA Astrophysics Data System (ADS)

    von Allwörden, Henning; Eich, Andreas; Knol, Elze J.; Hermenau, Jan; Sonntag, Andreas; Gerritsen, Jan W.; Wegner, Daniel; Khajetoorians, Alexander A.

    2018-03-01

    We describe the design and performance of a scanning tunneling microscope (STM) that operates at a base temperature of 30 mK in a vector magnetic field. The cryogenics is based on an ultra-high vacuum (UHV) top-loading wet dilution refrigerator that contains a vector magnet allowing for fields up to 9 T perpendicular and 4 T parallel to the sample. The STM is placed in a multi-chamber UHV system, which allows in situ preparation and exchange of samples and tips. The entire system rests on a 150-ton concrete block suspended by pneumatic isolators, which is housed in an acoustically isolated and electromagnetically shielded laboratory optimized for extremely low noise scanning probe measurements. We demonstrate the overall performance by illustrating atomic resolution and quasiparticle interference imaging and detail the vibrational noise of both the laboratory and microscope. We also determine the electron temperature via measurement of the superconducting gap of Re(0001) and illustrate magnetic field-dependent measurements of the spin excitations of individual Fe atoms on Pt(111). Finally, we demonstrate spin resolution by imaging the magnetic structure of the Fe double layer on W(110).

  20. Design and performance of an ultra-high vacuum spin-polarized scanning tunneling microscope operating at 30 mK and in a vector magnetic field.

    PubMed

    von Allwörden, Henning; Eich, Andreas; Knol, Elze J; Hermenau, Jan; Sonntag, Andreas; Gerritsen, Jan W; Wegner, Daniel; Khajetoorians, Alexander A

    2018-03-01

    We describe the design and performance of a scanning tunneling microscope (STM) that operates at a base temperature of 30 mK in a vector magnetic field. The cryogenics is based on an ultra-high vacuum (UHV) top-loading wet dilution refrigerator that contains a vector magnet allowing for fields up to 9 T perpendicular and 4 T parallel to the sample. The STM is placed in a multi-chamber UHV system, which allows in situ preparation and exchange of samples and tips. The entire system rests on a 150-ton concrete block suspended by pneumatic isolators, which is housed in an acoustically isolated and electromagnetically shielded laboratory optimized for extremely low noise scanning probe measurements. We demonstrate the overall performance by illustrating atomic resolution and quasiparticle interference imaging and detail the vibrational noise of both the laboratory and microscope. We also determine the electron temperature via measurement of the superconducting gap of Re(0001) and illustrate magnetic field-dependent measurements of the spin excitations of individual Fe atoms on Pt(111). Finally, we demonstrate spin resolution by imaging the magnetic structure of the Fe double layer on W(110).

  1. Comparative molecular field analysis of artemisinin derivatives: Ab initio versus semiempirical optimized structures

    NASA Astrophysics Data System (ADS)

    Tonmunphean, Somsak; Kokpol, Sirirat; Parasuk, Vudhichai; Wolschann, Peter; Winger, Rudolf H.; Liedl, Klaus R.; Rode, Bernd M.

    1998-07-01

    Based on the belief that structural optimization methods producing structures closer to the experimental ones should give better, i.e. more relevant, steric fields and hence more predictive CoMFA models, comparative molecular field analyses of artemisinin derivatives were performed based on semiempirical AM1 and HF/3-21G optimized geometries. Using these optimized geometries, the CoMFA results derived from the HF/3-21G method are found to be usually, but not drastically, better than those from AM1. Additional calculations were performed to investigate the electrostatic field difference using the Gasteiger and Marsili charges, the electrostatic potential fit charges at the AM1 level, and the natural population analysis charges at the HF/3-21G level of theory. For the HF/3-21G optimized structures no difference in predictability was observed, whereas for AM1 optimized structures such differences were found. Interestingly, when ionic compounds are omitted, differences between the various HF/3-21G optimized structure models using these electrostatic fields were found.

  2. Earth as a Tool for Astrobiology—A European Perspective

    NASA Astrophysics Data System (ADS)

    Martins, Zita; Cottin, Hervé; Kotler, Julia Michelle; Carrasco, Nathalie; Cockell, Charles S.; de la Torre Noetzel, Rosa; Demets, René; de Vera, Jean-Pierre; d'Hendecourt, Louis; Ehrenfreund, Pascale; Elsaesser, Andreas; Foing, Bernard; Onofri, Silvano; Quinn, Richard; Rabbow, Elke; Rettberg, Petra; Ricco, Antonio J.; Slenzka, Klaus; Stalport, Fabien; ten Kate, Inge L.; van Loon, Jack J. W. A.; Westall, Frances

    2017-07-01

    Scientists use the Earth as a tool for astrobiology by analyzing planetary field analogues (i.e. terrestrial samples and field sites that resemble planetary bodies in our Solar System). In addition, they expose the selected planetary field analogues in simulation chambers to conditions that mimic the ones of planets, moons and Low Earth Orbit (LEO) space conditions, as well as the chemistry occurring in interstellar and cometary ices. This paper reviews the ways the Earth is used by astrobiologists: (i) by conducting planetary field analogue studies to investigate extant life from extreme environments, its metabolisms, adaptation strategies and modern biosignatures; (ii) by conducting planetary field analogue studies to investigate extinct life from the oldest rocks on our planet and its biosignatures; (iii) by exposing terrestrial samples to simulated space or planetary environments and producing a sample analogue to investigate changes in minerals, biosignatures and microorganisms. The European Space Agency (ESA) created a topical team in 2011 to investigate recent activities using the Earth as a tool for astrobiology and to formulate recommendations and scientific needs to improve ground-based astrobiological research. Space is an important tool for astrobiology (see Horneck et al. in Astrobiology, 16:201-243, 2016; Cottin et al., 2017), but access to space is limited. Complementing research on Earth provides fast access, more replications and higher sample throughput. The major conclusions of the topical team and suggestions for the future include more scientifically qualified calls for field campaigns with planetary analogy, and a centralized point of contact at ESA or the EU for the organization of a survey of such expeditions. 
An improvement of the coordinated logistics, infrastructures and funding system supporting the combination of field work with planetary simulation investigations, as well as an optimization of the scientific return and data processing, data storage and data distribution is also needed. Finally, a coordinated EU or ESA education and outreach program would improve the participation of the public in the astrobiological activities.

  3. An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.

    PubMed

    Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang

    2017-11-01

    This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a modular groundwater simulation-optimization (SO) framework for the optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions throughout the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, increasing the diversity of the search space. Moreover, neighborhood solutions are generated uniformly using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of EMOTS, we consider a synthetic groundwater remediation example. The problem formulation consists of two objective functions with continuous decision variables (pumping rates) while meeting water quality requirements. In particular, a sensitivity analysis is carried out on the synthetic case to determine the optimal combination of heuristic parameters. Furthermore, EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. For both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in the performance metrics of optimality and diversity of nondominated frontiers, with desirable stability and robustness. © 2017, National Ground Water Association.
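    The LHS-based neighborhood move described in the abstract can be sketched in a few lines of pure Python. This is an illustrative reconstruction, not the authors' code; the two-well pumping-rate bounds, the seed solution, and the move radius below are hypothetical.

```python
import random

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sample: n_samples points inside the given bounds.

    bounds is a list of (low, high) pairs, one per decision variable
    (e.g. per-well pumping rates).  Each dimension is split into
    n_samples equal strata; one uniform draw is taken per stratum and
    the strata are shuffled independently per dimension, so the points
    cover every stratum of every variable exactly once.
    """
    dims = len(bounds)
    columns = []
    for _ in range(dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [[lo + columns[d][i] * (hi - lo)
             for d, (lo, hi) in enumerate(bounds)]
            for i in range(n_samples)]

def neighborhood(seed, radius, global_bounds, n_samples, rng=None):
    """Uniform LHS neighborhood in a bounded box around a seed solution,
    clipped to the global variable bounds (the move rule sketched from
    the abstract; radius and bounds are hypothetical)."""
    rng = rng or random.Random(0)
    local = [(max(lo, s - radius), min(hi, s + radius))
             for s, (lo, hi) in zip(seed, global_bounds)]
    return latin_hypercube(n_samples, local, rng)

# Example: 5 neighbors of a two-well pumping-rate solution (m^3/day)
pts = neighborhood([40.0, 60.0], 10.0, [(0.0, 100.0), (0.0, 100.0)], 5)
```

    Compared with plain uniform draws, the stratification guarantees that a small neighborhood sample still spans the full range of every pumping rate, which is what makes LHS attractive when each candidate costs a MODFLOW/MT3DMS run.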

  4. Optimization design of energy deposition on single expansion ramp nozzle

    NASA Astrophysics Data System (ADS)

    Ju, Shengjun; Yan, Chao; Wang, Xiaoyong; Qin, Yupei; Ye, Zhifei

    2017-11-01

    Optimization design has been widely used in the aerodynamic design process of scramjets. The single expansion ramp nozzle is an important component of a scramjet, producing most of its thrust. A new concept of improving the aerodynamics of the scramjet nozzle with energy deposition is presented. The essence of the method is to create a heated region in the inner flow field of the scramjet nozzle. In the current study, the two-dimensional coupled implicit compressible Reynolds-averaged Navier-Stokes equations and Menter's shear stress transport turbulence model have been applied to numerically simulate the flow fields of the single expansion ramp nozzle with and without energy deposition. The numerical results show that energy deposition can be an effective method to improve the force characteristics of the scramjet nozzle: the thrust coefficient CT increases by 6.94% and the lift coefficient CN decreases by 26.89%. Further, the non-dominated sorting genetic algorithm coupled with a Radial Basis Function neural network surrogate model has been employed to determine the optimum location and density of the energy deposition. The thrust coefficient CT and lift coefficient CN are selected as objective functions, and the sampling points are obtained numerically using a Latin hypercube design method. The optimized thrust coefficient CT further increases by 1.94%, while the optimized lift coefficient CN further decreases by 15.02%. At the same time, the optimized performances are in good and reasonable agreement with the numerical predictions. The findings suggest that scramjet nozzle design and performance can benefit from the application of energy deposition.
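    The surrogate step can be illustrated with a minimal Gaussian radial-basis-function interpolant fitted to sampled design points. This is a sketch of the generic RBF technique only (the paper uses an RBF neural network inside NSGA-II, which is not reproduced here), and the one-dimensional sample values are hypothetical.

```python
import math

def rbf_surrogate(points, values, eps=1.0):
    """Fit a Gaussian RBF interpolant f(x) ~ sum_j w_j * phi(|x - x_j|).

    points: list of sampled design locations (lists of coordinates),
    values: the objective sampled there (e.g. thrust coefficients from
    CFD runs).  Returns a callable surrogate model; the interpolant is
    exact at the sample points.
    """
    def phi(r):
        return math.exp(-(eps * r) ** 2)

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    n = len(points)
    # interpolation matrix A[i][j] = phi(|x_i - x_j|), augmented with rhs
    A = [[phi(dist(points[i], points[j])) for j in range(n)] + [values[i]]
         for i in range(n)]
    # Gaussian elimination with partial pivoting, then back substitution
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]

    def model(x):
        return sum(w_j * phi(dist(x, p_j)) for w_j, p_j in zip(w, points))

    return model

# Hypothetical 1-D samples of an objective vs. deposition location
model = rbf_surrogate([[0.0], [1.0], [2.0]], [1.0, 3.0, 2.0])
```

    The optimizer then evaluates `model(x)` instead of a full CFD run for each candidate, re-running the expensive simulation only to verify promising designs.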

  5. Quantifying the motion of magnetic particles in excised tissue: Effect of particle properties and applied magnetic field

    NASA Astrophysics Data System (ADS)

    Kulkarni, Sandip; Ramaswamy, Bharath; Horton, Emily; Gangapuram, Sruthi; Nacev, Alek; Depireux, Didier; Shimoji, Mika; Shapiro, Benjamin

    2015-11-01

    This article presents a method to investigate how magnetic particle characteristics affect their motion inside tissues under the influence of an applied magnetic field. Particles are placed on top of freshly excised tissue samples, a calibrated magnetic field is applied by a magnet underneath each tissue sample, and we image and quantify particle penetration depth by quantitative metrics to assess how particle sizes, their surface coatings, and tissue resistance affect particle motion. Using this method, we tested available fluorescent particles from Chemicell of four sizes (100 nm, 300 nm, 500 nm, and 1 μm diameter) with four different coatings (starch, chitosan, lipid, and PEG/P) and quantified their motion through freshly excised rat liver, kidney, and brain tissues. In broad terms, we found that the applied magnetic field moved chitosan particles most effectively through all three tissue types (as compared to starch, lipid, and PEG/P coated particles). However, the relationship between particle properties and their resulting motion was found to be complex. Hence, it will likely require substantial further study to elucidate the nuances of transport mechanisms and to select and engineer optimal particle properties to enable the most effective transport through various tissue types under applied magnetic fields.

  6. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    NASA Astrophysics Data System (ADS)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes, and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF-6, Xe127 and Ar37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.

  7. Optimization of the magnetic dynamo.

    PubMed

    Willis, Ashley P

    2012-12-21

    In stars and planets, magnetic fields are believed to originate from the motion of electrically conducting fluids in their interior, through a process known as the dynamo mechanism. In this Letter, an optimization procedure is used to simultaneously address two fundamental questions of dynamo theory: "Which velocity field leads to the most magnetic energy growth?" and "How large does the velocity need to be relative to magnetic diffusion?" In general, this requires optimization over the full space of continuous solenoidal velocity fields possible within the geometry. Here the case of a periodic box is considered. Measuring the strength of the flow with the root-mean-square amplitude, an optimal velocity field is shown to exist, but without limitation on the strain rate, optimization is prone to divergence. Measuring the flow in terms of its associated dissipation leads to the identification of a single optimal at the critical magnetic Reynolds number necessary for a dynamo. This magnetic Reynolds number is found to be only 15% higher than that necessary for transient growth of the magnetic field.

  8. In situ semi-quantitative analysis of polluted soils by laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Ismaël, Amina; Bousquet, Bruno; Michel-Le Pierrès, Karine; Travaillé, Grégoire; Canioni, Lionel; Roy, Stéphane

    2011-05-01

    Time-saving, low-cost analyses of soil contamination are required to ensure fast and efficient pollution removal and remedial operations. In this work, laser-induced breakdown spectroscopy (LIBS) has been successfully applied to in situ analyses of polluted soils, providing direct semi-quantitative information about the extent of pollution. A field campaign has been carried out in Brittany (France) on a site presenting high levels of heavy metal concentrations. Results on iron as a major component as well as on lead and copper as minor components are reported. Soil samples were dried and prepared as pressed pellets to minimize the effects of moisture and density on the results. LIBS analyses were performed with a Nd:YAG laser operating at 1064 nm, 60 mJ per 10 ns pulse, at a repetition rate of 10 Hz with a diameter of 500 μm on the sample surface. Good correlations were obtained between the LIBS signals and the values of concentrations deduced from inductively coupled plasma atomic emission spectroscopy (ICP-AES). This result proves that LIBS is an efficient method for optimizing sampling operations. Indeed, "LIBS maps" were established directly on-site, providing valuable assistance in optimizing the selection of the most relevant samples for future expensive and time-consuming laboratory analysis and avoiding useless analyses of very similar samples. Finally, it is emphasized that in situ LIBS is not described here as an alternative quantitative analytical method to the usual laboratory measurements but simply as an efficient time-saving tool to optimize sampling operations and to drastically reduce the number of soil samples to be analyzed, thus reducing costs. The detection limits of 200 ppm for lead and 80 ppm for copper reported here are compatible with the thresholds of toxicity; thus, this in situ LIBS campaign was fully validated for these two elements. 
Consequently, further experiments are planned to extend this study to other chemical elements and other matrices of soils.
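    The "good correlations" between LIBS signals and ICP-AES reference values can be checked with a standard Pearson coefficient. A minimal sketch; the calibration pairs below are hypothetical, not the campaign's data.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical example: LIBS line intensity (a.u.) vs ICP-AES Pb (ppm)
libs = [0.12, 0.45, 0.80, 1.60, 3.10]
icp = [210.0, 750.0, 1400.0, 2800.0, 5600.0]
r = pearson_r(libs, icp)
```

    A coefficient near 1 over the pellet set is what justifies using on-site "LIBS maps" as a semi-quantitative screen before selecting samples for laboratory analysis.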

  9. Temperature effects on sinking velocity of different Emiliania huxleyi strains.

    PubMed

    Rosas-Navarro, Anaid; Langer, Gerald; Ziveri, Patrizia

    2018-01-01

    The sinking properties of three strains of Emiliania huxleyi in response to temperature changes were examined. We used a recently proposed approach to calculate sinking velocities from coccosphere architecture, which has the advantage of being applicable not only to culture samples but also to field samples, including fossil material. Our data show that temperature in the sub-optimal range impacts the sinking velocity of E. huxleyi. This response is widespread among strains isolated in different locations and, moreover, comparatively predictable, as indicated by the similar slopes of the linear regressions. Sinking velocity was positively correlated with temperature as well as with individual cell PIC/POC over the sub-optimum to optimum temperature range in all strains. In the context of climate change, our data point to an important influence of global warming on sinking velocities. It has recently been shown that seawater acidification has no effect on the sinking velocity of a Mediterranean E. huxleyi strain, while nutrient limitation seems to have a small negative effect on sinking velocity. Given that warming, acidification, and lowered nutrient availability will occur simultaneously under climate change scenarios, the question is what the net effect of these different factors will be. For example, will the effects of warming and nutrient limitation cancel? This question cannot be answered conclusively, but analyses of field samples in addition to laboratory culture studies will improve predictions, because in field samples multi-factor influences and even evolutionary changes are not excluded. As mentioned above, the approach to determining sinking rate followed here is applicable to field samples. Future studies could use it to analyse not only seasonal and geographic patterns but also changes in sinking velocity over geological time scales.
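    Calculating sinking velocity from coccosphere architecture ultimately rests on Stokes-type settling: velocity scales with the square of the radius and with the excess density over seawater, which is why cell PIC/POC (calcification) tracks sinking speed. A generic sketch under the standard Stokes'-law assumptions (small sphere, low Reynolds number); the seawater and coccosphere values are illustrative, not the paper's parameterization.

```python
def stokes_sinking_velocity(radius_m, rho_particle, rho_fluid=1027.0,
                            mu=1.4e-3, g=9.81):
    """Stokes' law settling velocity (m/s) of a small sphere:
    v = 2 (rho_p - rho_f) g r^2 / (9 mu).

    radius_m: particle radius (m); densities in kg/m^3; mu: dynamic
    viscosity of seawater (Pa s).  Defaults are typical seawater
    values, used here purely for illustration.
    """
    return 2.0 * (rho_particle - rho_fluid) * g * radius_m ** 2 / (9.0 * mu)

# A calcified coccosphere is denser than seawater, so it sinks;
# hypothetical 5-um-diameter sphere with bulk density 1200 kg/m^3
v = stokes_sinking_velocity(2.5e-6, 1200.0)
```

    Doubling the radius quadruples the settling velocity, and a denser (more calcified) coccosphere sinks proportionally faster, so temperature effects on cell size and PIC/POC feed directly into sinking speed.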

  10. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach to determining optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator's constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be kept between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.
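    Because the electrostatic field in such models scales linearly with the applied voltage, the smallest sufficient pulse amplitude can be read off directly once a single FEM solve gives the field per volt at the selected tumor points. The sketch below illustrates only that linearity argument with hypothetical numbers; it is not the authors' optimization code or their field model.

```python
def minimal_pulse_amplitude(field_per_volt, threshold, u_max=None):
    """Smallest pulse amplitude (V) exposing every selected tumor point
    to a field above the permeabilizing threshold.

    field_per_volt: field magnitude (V/cm) produced at each selected
    point per volt of applied pulse, precomputed from one linear FEM
    solve.  The required amplitude is set by the worst-covered point.
    Returns None when that amplitude exceeds the generator limit
    u_max, i.e. full tumor coverage is infeasible.
    """
    u_req = max(threshold / f for f in field_per_volt)
    if u_max is not None and u_req > u_max:
        return None
    return u_req

def coverage_fraction(field_per_volt, threshold, amplitude):
    """Fraction of the selected points permeabilized at a given amplitude."""
    n_covered = sum(1 for f in field_per_volt if amplitude * f >= threshold)
    return n_covered / len(field_per_volt)

# Hypothetical per-volt fields at three tumor points, 400 V/cm threshold
fpv = [0.8, 0.5, 1.2]
u = minimal_pulse_amplitude(fpv, 400.0)  # 800 V, set by the 0.5 point
```

    When the generator constraint is hit, as in the paper, `coverage_fraction` at the maximum deliverable amplitude quantifies the maximal extent of permeabilization instead.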

  11. Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples

    PubMed Central

    2012-01-01

    Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466
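    The replicate-consistency figures quoted above (13% for SAMBED vs 16% manual) are coefficients of variation. For reference, a minimal sketch with hypothetical replicate measurements:

```python
def coefficient_of_variation(samples):
    """CV (%) = 100 * sample standard deviation / mean.

    Uses the n-1 (sample) variance; samples would be, e.g., metabolite
    peak areas from replicate extractions of the same tissue.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# Hypothetical peak areas from three replicate extractions
cv = coefficient_of_variation([9.0, 10.0, 11.0])
```

    A lower CV across replicates means more consistent sample preparation, which is the comparison made between the automated and manual workflows.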

  12. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is usually limited by the spatial sampling step needed to calculate Fresnel-type convolution integrals. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the integral convolution calculations and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.

  13. Magnetic resonance imaging with an optical atomic magnetometer

    PubMed Central

    Xu, Shoujun; Yashchuk, Valeriy V.; Donaldson, Marcus H.; Rochester, Simon M.; Budker, Dmitry; Pines, Alexander

    2006-01-01

    We report an approach for the detection of magnetic resonance imaging without superconducting magnets and cryogenics: optical atomic magnetometry. This technique possesses a high sensitivity independent of the strength of the static magnetic field, extending the applicability of magnetic resonance imaging to low magnetic fields and eliminating imaging artifacts associated with high fields. By coupling with a remote-detection scheme, thereby improving the filling factor of the sample, we obtained time-resolved flow images of water with a temporal resolution of 0.1 s and spatial resolutions of 1.6 mm perpendicular to the flow and 4.5 mm along the flow. Potentially inexpensive, compact, and mobile, our technique provides a viable alternative for MRI detection with substantially enhanced sensitivity and time resolution for various situations where traditional MRI is not optimal. PMID:16885210

  14. High-quality imaging in environmental scanning electron microscopy--optimizing the pressure limiting system and the secondary electron detection of a commercially available ESEM.

    PubMed

    Fitzek, H; Schroettner, H; Wagner, J; Hofer, F; Rattenberger, J

    2016-04-01

    In environmental scanning electron microscopy, applications in the kPa regime are of increasing interest for the investigation of wet and biological samples, because neither sample preparation nor extensive cooling is necessary. Unfortunately, such applications are limited by poor image quality. In this work the image quality at high pressures of a FEI Quanta 600 (field emission gun) and a FEI Quanta 200 (thermionic gun) is greatly improved by optimizing the pressure limiting system and the secondary electron (SE) detection system. The scattering of the primary electron beam increases strongly with pressure, degrading the image quality. The key to high image quality at high pressures is to reduce scattering as far as possible while maintaining ideal operating conditions for the SE detector. The amount of scattering is reduced by reducing both the additional stagnation gas thickness (aSGT) and the environmental distance (ED). A new aperture holder is presented that significantly reduces the aSGT while maintaining the same field of view (FOV) as the original design. With this aperture holder it is also possible to make the aSGT even smaller at the expense of a smaller FOV. A new blade-shaped SE detector is presented, yielding better image quality than the usual flat SE detectors. The electrode of the new SE detector is positioned on the sample table, which allows the SE detector to operate under ideal conditions regardless of pressure and ED. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  15. Structured illumination diffuse optical tomography for noninvasive functional neuroimaging in mice.

    PubMed

    Reisman, Matthew D; Markow, Zachary E; Bauer, Adam Q; Culver, Joseph P

    2017-04-01

    Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source-detector sampling that combines the high-density spatial sampling (0.4 mm) detection of a scientific complementary metal-oxide-semiconductor camera with the rapid (2 Hz) imaging of a few ([Formula: see text]) structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system's flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source-detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.

  16. An accurate bacterial DNA quantification assay for HTS library preparation of human biological samples.

    PubMed

    Seashols-Williams, Sarah; Green, Raquel; Wohlfahrt, Denise; Brand, Angela; Tan-Torres, Antonio Limjuco; Nogales, Francy; Brooks, J Paul; Singh, Baneshwar

    2018-05-17

    Sequencing and classification of microbial taxa within forensically relevant biological fluids has the potential for applications in the forensic science and biomedical fields. The quantity of bacterial DNA from human samples is currently estimated based on quantity of total DNA isolated. This method can miscalculate bacterial DNA quantity due to the mixed nature of the sample, and consequently library preparation is often unreliable. We developed an assay that can accurately and specifically quantify bacterial DNA within a mixed sample for reliable 16S ribosomal DNA (16S rDNA) library preparation and high throughput sequencing (HTS). A qPCR method was optimized using universal 16S rDNA primers, and a commercially available bacterial community DNA standard was used to develop a precise standard curve. Following qPCR optimization, 16S rDNA libraries from saliva, vaginal and menstrual secretions, urine, and fecal matter were amplified and evaluated at various DNA concentrations; successful HTS data were generated with as low as 20 pg of bacterial DNA. Changes in bacterial DNA quantity did not impact observed relative abundances of major bacterial taxa, but relative abundance changes of minor taxa were observed. Accurate quantification of microbial DNA resulted in consistent, successful library preparations for HTS analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
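A qPCR standard-curve quantification of the kind described above can be sketched as follows; the dilution series, Ct values, and efficiency interpretation are illustrative, not data from the study:

```python
import numpy as np

# Hypothetical standard-curve data: known bacterial DNA inputs (pg) from a
# dilution series and their measured Ct values (illustrative numbers).
std_quantity_pg = np.array([2e4, 2e3, 2e2, 2e1, 2e0])
std_ct = np.array([14.1, 17.5, 20.9, 24.3, 27.7])

# Linear fit of Ct against log10(quantity): Ct = slope * log10(q) + intercept.
slope, intercept = np.polyfit(np.log10(std_quantity_pg), std_ct, 1)

# Amplification efficiency; 100% corresponds to a slope of about -3.32.
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def quantify(ct):
    """Invert the standard curve to estimate bacterial DNA quantity (pg)."""
    return 10.0 ** ((ct - intercept) / slope)
```

An unknown sample's Ct is then converted to a bacterial DNA quantity with `quantify(ct)`, which is what allows library input to be normalized to, e.g., the 20 pg minimum reported above.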

  17. Separation and quantification of monoclonal-antibody aggregates by hollow-fiber-flow field-flow fractionation.

    PubMed

    Fukuda, Jun; Iwura, Takafumi; Yanagihara, Shigehiro; Kano, Kenji

    2014-10-01

    Hollow-fiber-flow field-flow fractionation (HF5) separates protein molecules on the basis of the difference in the diffusion coefficient, and can evaluate the aggregation ratio of proteins. However, HF5 is still a minor technique because information on the separation conditions is limited. We examined in detail the effect of different settings, including the main-flow rate, the cross-flow rate, the focus point, the injection amount, and the ionic strength of the mobile phase, on fractographic characteristics. On the basis of the results, we proposed optimized conditions of the HF5 method for quantification of monoclonal antibody in sample solutions. The HF5 method was qualified regarding the precision, accuracy, linearity of the main peak, and quantitation limit. In addition, the HF5 method was applied to non-heated Mab A and heat-induced-antibody-aggregate-containing samples to evaluate the aggregation ratio and the distribution extent. The separation performance was comparable with or better than that of conventional methods including analytical ultracentrifugation-sedimentation velocity and asymmetric-flow field-flow fractionation.
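The physical basis of the separation, the difference in diffusion coefficient between monomer and aggregate, can be illustrated with the Stokes-Einstein relation. The radii below are hypothetical stand-ins for an antibody monomer and a larger aggregate:

```python
import math

def stokes_einstein_d(radius_m, temperature_k=298.0, viscosity_pa_s=8.9e-4):
    """Diffusion coefficient (m^2/s) of a sphere in water, via Stokes-Einstein:
    D = kT / (6 * pi * eta * r)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temperature_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# Illustrative hydrodynamic radii: a monomeric antibody vs. a larger aggregate.
d_monomer = stokes_einstein_d(5.5e-9)
d_aggregate = stokes_einstein_d(12e-9)
# The aggregate diffuses more slowly, so it is retained longer in the
# hollow fiber and elutes after the monomer.
```

Because retention in flow field-flow fractionation scales inversely with D, the slower-diffusing aggregates elute later, which is what makes the aggregation ratio measurable from the fractogram.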

  18. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
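The core idea of non-uniform particle sampling, drawing more MC histories from high-intensity pencil-beam spots, can be sketched as below. This is a simplified illustration of the allocation step only (the actual APS scheme adapts during optimization), with made-up spot intensities:

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_histories(spot_intensities, total_histories):
    """Distribute a fixed budget of MC histories across pencil-beam spots
    in proportion to spot intensity (illustrative sketch of the
    intensity-weighted sampling idea; not the paper's full APS scheme)."""
    w = np.asarray(spot_intensities, dtype=float)
    p = w / w.sum()
    # Multinomial draw: expected histories per spot proportional to intensity.
    return rng.multinomial(total_histories, p)

counts = allocate_histories([10.0, 1.0, 0.1], 10_000)
# High-intensity spots receive most of the simulated particles, so the
# expensive MC effort concentrates where it most affects the plan.
```

The design choice is simply variance reduction: spots with near-zero intensity contribute little to the optimized dose, so spending equal histories on every spot wastes computation.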

  19. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  20. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  1. Optimization of classical nonpolarizable force fields for OH(-) and H3O(+).

    PubMed

    Bonthuis, Douwe Jan; Mamatkulov, Shavkat I; Netz, Roland R

    2016-03-14

    We optimize force fields for H3O(+) and OH(-) that reproduce the experimental solvation free energies and the activities of H3O(+) Cl(-) and Na(+) OH(-) solutions up to concentrations of 1.5 mol/l. The force fields are optimized with respect to the partial charge on the hydrogen atoms and the Lennard-Jones parameters of the oxygen atoms. Remarkably, the partial charge on the hydrogen atom of the optimized H3O(+) force field is 0.8 ± 0.1|e|, significantly higher than the value typically used for nonpolarizable water models and H3O(+) force fields. In contrast, the optimal partial charge on the hydrogen atom of OH(-) turns out to be zero. Standard combination rules can be used for H3O(+) Cl(-) solutions, while for Na(+) OH(-) solutions, we need to significantly increase the effective anion-cation Lennard-Jones radius. While highlighting the importance of intramolecular electrostatics, our results show that it is possible to generate thermodynamically consistent force fields without using atomic polarizability.

  2. Field Exploration and Life Detection Sampling for Planetary Analogue Research (FELDSPAR): Variability and Correlation in Biomarker and Mineralogy Measurements from Icelandic Mars Analogues

    NASA Technical Reports Server (NTRS)

    Gentry, D.; Amador, E.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Jacobsen, M.; Kirby, J.; McCaig, H.; hide

    2018-01-01

    In situ exploration of planetary environments allows biochemical analysis of sub-centimeter-scale samples; however, landing sites are selected a priori based on measurable meter- to kilometer-scale geological features. Optimizing life detection mission science return requires both understanding the expected biomarker distributions across sample sites at different scales and efficiently using first-stage in situ geochemical instruments to justify later-stage biological or chemical analysis. Icelandic volcanic regions have an extensive history as Mars analogue sites due to desiccation, low nutrient availability, and temperature extremes, in addition to the advantages of geological youth and isolation from anthropogenic contamination. Many Icelandic analogue sites are also rugged and remote enough to create the same type of instrumentation and sampling constraints typically faced by robotic exploration.

  3. High-pressure freezing for scanning transmission electron tomography analysis of cellular organelles.

    PubMed

    Walther, Paul; Schmid, Eberhard; Höhn, Katharina

    2013-01-01

    Using an electron microscope's scanning transmission mode (STEM) for collection of tomographic datasets is advantageous compared to bright-field transmission electron microscopy (TEM). Inelastic scattering does not cause chromatic aberration in image formation, since in STEM mode no image-forming lenses are used after the beam has passed the sample, in contrast to regular TEM. Therefore, thicker samples can be imaged. It has been experimentally demonstrated that STEM is superior to TEM and energy-filtered TEM for tomography of samples as thick as 1 μm. Even when using the best electron microscope, adequate sample preparation is the key to interpretable results. We adapted protocols for high-pressure freezing of cultivated cells from a physiological state. In this chapter, we describe optimized high-pressure freezing and freeze substitution protocols for STEM tomography in order to obtain high membrane contrast.

  4. SU-F-T-387: A Novel Optimization Technique for Field in Field (FIF) Chestwall Radiation Therapy Using a Single Plan to Improve Delivery Safety and Treatment Planning Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabibian, A; Kim, A; Rose, J

    Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. The dosimetry was compared to the currently used optimization. Methods: The prior five patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. Novelly, tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distribution simultaneously. The prescription was split into 14 fractions for both bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatment. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was less than 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals, but using only the non-bolus calculation for FIF optimization. The non-bolus fields were then copied and bolus was applied. The same segments and monitor units were used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, was on average 51.8% for the plans that optimized bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized to the non-bolus treatments (NB). Chestwall coverage by 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was 1.9% for SB and 0.8% for NB plans on average. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage in comparison to optimizing only to non-bolus treatments.

  5. Determination of nanomolar chromate in drinking water with solid phase extraction and a portable spectrophotometer.

    PubMed

    Ma, Jian; Yang, Bo; Byrne, Robert H

    2012-06-15

    Determination of chromate at low concentration levels in drinking water is an important analytical objective for both human health and environmental science. Here we report the use of solid phase extraction (SPE) in combination with a custom-made portable light-emitting diode (LED) spectrophotometer to achieve detection of chromate in the field at nanomolar levels. The measurement chemistry is based on a highly selective reaction between 1,5-diphenylcarbazide (DPC) and chromate under acidic conditions. The Cr-DPC complex formed in the reaction can be extracted on a commercial C18 SPE cartridge. Concentrated Cr-DPC is subsequently eluted with methanol and detected by spectrophotometry. Optimization of analytical conditions involved investigation of reagent compositions and concentrations, eluent type, flow rate (sample loading), sample volume, and stability of the SPE cartridge. Under optimized conditions, detection limits are on the order of 3 nM. Only 50 mL of sample is required for an analysis, and total analysis time is around 10 min. The targeted analytical range of 0-500 nM can be easily extended by changing the sample volume. Compared to previous SPE-based spectrophotometric methods, this analytical procedure offers the benefits of improved sensitivity, reduced sample consumption, shorter analysis time, greater operational convenience, and lower cost. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Three-dimensional full-field X-ray orientation microscopy

    PubMed Central

    Viganò, Nicola; Tanguy, Alexandre; Hallais, Simon; Dimanov, Alexandre; Bornert, Michel; Batenburg, Kees Joost; Ludwig, Wolfgang

    2016-01-01

    A previously introduced mathematical framework for full-field X-ray orientation microscopy is for the first time applied to experimental near-field diffraction data acquired from a polycrystalline sample. Grain by grain tomographic reconstructions using convex optimization and prior knowledge are carried out in a six-dimensional representation of position-orientation space, used for modelling the inverse problem of X-ray orientation imaging. From the 6D reconstruction output we derive 3D orientation maps, which are then assembled into a common sample volume. The obtained 3D orientation map is compared to an EBSD surface map and local misorientations, as well as remaining discrepancies in grain boundary positions are quantified. The new approach replaces the single orientation reconstruction scheme behind X-ray diffraction contrast tomography and extends the applicability of this diffraction imaging technique to material micro-structures exhibiting sub-grains and/or intra-granular orientation spreads of up to a few degrees. As demonstrated on textured sub-regions of the sample, the new framework can be extended to operate on experimental raw data, thereby bypassing the concept of orientation indexation based on diffraction spot peak positions. This new method enables fast, three-dimensional characterization with isotropic spatial resolution, suitable for time-lapse observations of grain microstructures evolving as a function of applied strain or temperature. PMID:26868303

  7. Pulse shape optimization for electron-positron production in rotating fields

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, François; Hebenstreit, Florian; Gagnon, Denis; MacLean, Steve

    2017-07-01

    We optimize the pulse shape and polarization of time-dependent electric fields to maximize the production of electron-positron pairs via strong field quantum electrodynamics processes. The pulse is parametrized in Fourier space by a B -spline polynomial basis, which results in a relatively low-dimensional parameter space while still allowing for a large number of electric field modes. The optimization is performed by using a parallel implementation of the differential evolution, one of the most efficient metaheuristic algorithms. The computational performance of the numerical method and the results on pair production are compared with a local multistart optimization algorithm. These techniques allow us to determine the pulse shape and field polarization that maximize the number of produced pairs in computationally accessible regimes.
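The differential-evolution search described above can be sketched in miniature. This is a generic DE/rand/1/bin optimizer applied to a toy two-parameter stand-in for the pulse-shape objective, not the parallel implementation or QED objective from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(objective, bounds, pop=20, gens=60, f=0.7, cr=0.9):
    """Minimal DE/rand/1/bin minimizer (illustrative sketch)."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    x = lo + rng.random((pop, dim)) * (hi - lo)
    fit = np.array([objective(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < cr                   # binomial crossover
            trial = np.where(cross, mutant, x[i])
            ft = objective(trial)
            if ft < fit[i]:                                # greedy selection
                x[i], fit[i] = trial, ft
    return x[fit.argmin()], fit.min()

# Toy stand-in for maximizing pair yield: minimize the negative of a smooth
# function of two hypothetical "pulse shape" parameters.
best, val = differential_evolution(
    lambda v: (v[0] - 1.0) ** 2 + (v[1] + 0.5) ** 2,
    bounds=[(-2.0, 2.0), (-2.0, 2.0)])
```

In the paper's setting, each parameter would be a B-spline coefficient of the field in Fourier space and the objective an expensive pair-production computation, which is why a derivative-free, parallelizable metaheuristic like DE is attractive.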

  8. Influences of misprediction costs on solar flare prediction

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Wang, HuaNing; Dai, XingHua

    2012-10-01

    The mispredictive costs of flaring and non-flaring samples differ across applications of solar flare prediction. Hence, solar flare prediction is considered a cost-sensitive problem. A cost-sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate with an exhaustive search strategy is used to determine the optimal combination of magnetic field parameters in an active region. These selected parameters are applied as the inputs of the solar flare prediction model. The performance of the cost-sensitive solar flare prediction model is evaluated for different thresholds of solar flares. It is found that as the cost of wrongly predicting flaring samples as non-flaring increases, more flaring samples are correctly predicted and more non-flaring samples are wrongly predicted; a larger cost of this type is required for higher solar flare thresholds. This can serve as a guideline for choosing a proper cost to meet the requirements of different applications.
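The cost-sensitivity trade-off described above can be sketched with a simple threshold search that minimizes total misprediction cost. The forecast probabilities, labels, and cost values are hypothetical; this illustrates the principle, not the paper's decision-tree modification:

```python
import numpy as np

def best_threshold(prob_flare, y_true, cost_fn=5.0, cost_fp=1.0):
    """Pick the probability threshold minimizing total misprediction cost:
    a missed flare (false negative) costs `cost_fn`, a false alarm
    (false positive) costs `cost_fp`. Illustrative sketch only."""
    p = np.asarray(prob_flare, dtype=float)
    y = np.asarray(y_true, dtype=bool)
    thresholds = np.linspace(0.0, 1.0, 101)
    costs = [cost_fn * np.sum(y & (p < t)) + cost_fp * np.sum(~y & (p >= t))
             for t in thresholds]
    return thresholds[int(np.argmin(costs))]

# Hypothetical forecast probabilities and true flare labels.
probs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
labels = [0, 0, 1, 0, 1, 1]
t_costly_miss = best_threshold(probs, labels, cost_fn=10.0, cost_fp=1.0)
t_costly_alarm = best_threshold(probs, labels, cost_fn=1.0, cost_fp=10.0)
# Raising the miss cost pushes the threshold down: the model predicts
# "flare" more often, trading extra false alarms for fewer misses.
```

This mirrors the abstract's finding: as the cost of missing a flare grows, the classifier is driven to label more samples as flaring, correctly catching more flares at the price of more false alarms.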

  9. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, to index those concentrations, and to incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  10. Leveraging Crystal Anisotropy for Deterministic Growth of InAs Quantum Dots with Narrow Optical Linewidths

    DTIC Science & Technology

    2013-08-29

    ...similar layer thicknesses. This offset indicates that the electric field profile of our Schottky diode is different than for unpatterned samples, implying... sacrificing uniformity by further optimizing the substrate... Figure 3. (a) Schematic of the Schottky diode heterostructure, indicating the patterned substrate... and negative (X−) trions are indicated. (c) Distribution of linewidths for 80 PL lines from dots grown in high-density arrays such as those in Figure 2b.

  11. Controlling Bottom Hole Flowing Pressure Within a Specific Range for Efficient Coalbed Methane Drainage

    NASA Astrophysics Data System (ADS)

    Zhao, Bin; Wang, Zhi-Yin; Hu, Ai-Mei; Zhai, Yu-Yang

    2013-11-01

    The stress state of coal surrounding a coalbed methane (CBM) production well is affected by the bottom hole flowing pressure (BHFP). The permeability of coal shows a marked change under compression. The BHFP must be restricted to a specific range to favor higher permeability in the surrounding coal and thus higher productivity of the well. A new method to determine this specific range is proposed in this paper. Coal has a rather low tensile strength, which induces tensile failure and rock disintegration. The deformation of coal samples under compression has four main stages: compaction, elastic deformation, strain hardening, and strain softening. Permeability is optimal when the coal samples are in the strain softening stage. The three critical values of BHFP, namely p_wmin, p_wmid, and p_wupper, which correspond to the occurrence of tensile failure, the start of strain softening, and the beginning of plastic deformation, respectively, are derived from theoretical principles. The permeability of coal is in an optimal state when the BHFP is between p_wmin and p_wmid. The BHFP should be confined to this specific range for the efficient drainage of CBM wells. This method was applied to field operations in three wells in the Hancheng CBM field in China. A comprehensive analysis of drainage data and of the BHFP indicates that the new method is effective and offers significant improvement to current practices.

  12. LAMPhimerus: A novel LAMP assay for detecting Amphimerus sp. DNA in human stool samples

    PubMed Central

    Calvopiña, Manuel; Fontecha-Cuenca, Cristina; Sugiyama, Hiromu; Sato, Megumi; López Abán, Julio; Vicente, Belén; Muro, Antonio

    2017-01-01

    Background Amphimeriasis is a fish-borne disease caused by the liver fluke Amphimerus spp. that has recently been reported as endemic in the tropical Pacific side of Ecuador with a high prevalence in humans and domestic animals. Diagnosis is based on stool examination to identify parasite eggs, but this lacks sensitivity. Additionally, the morphology of the eggs may be confounded with that of other liver and intestinal flukes. No immunological or molecular methods have been developed to date. New diagnostic techniques for specific and sensitive detection of Amphimerus spp. DNA in clinical samples are needed. Methodology/Principal findings A LAMP targeting a sequence of the Amphimerus sp. internal transcribed spacer 2 region was designed. Amphimerus sp. DNA was obtained from adult worms recovered from animals and used to optimize the molecular assays. Conventional PCR was performed using outer primers F3-B3 to verify the proper amplification of the Amphimerus sp. DNA target sequence. LAMP was optimized using different reaction mixtures and temperatures, and it was finally set up as LAMPhimerus. The specificity and sensitivity of both PCR and LAMP were evaluated. The detection limit was 1 pg of genomic DNA. Field testing was done using 44 human stool samples collected from localities where the fluke is endemic. Twenty-five samples were microscopy-positive for Amphimerus sp. eggs. In molecular testing, PCR F3-B3 was ineffective when DNA from fecal samples was used. When testing all human stool samples included in our study, the sensitivity and specificity of our LAMPhimerus assay were 76.67% and 80.77%, respectively. Conclusions/Significance We have developed and evaluated, for the first time, a specific and sensitive LAMP assay for detecting Amphimerus sp. in human stool samples. 
The procedure has been named LAMPhimerus method and has the potential to be adapted for field diagnosis and disease surveillance in amphimeriasis-endemic areas. Future large-scale studies will assess the applicability of this novel LAMP assay. PMID:28628614

  13. Searching for quantum optimal controls under severe constraints

    DOE PAGES

    Riviello, Gregory; Tibbetts, Katharine Moore; Brif, Constantin; ...

    2015-04-06

    The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still prevent successful optimization of the objective. Using optimal control simulations, we show that the most severe field constraints are those that limit essential control resources, such as the number of control variables, the control duration, and the field strength. Proper management of these resources is an issue of great practical importance for optimization in the laboratory. For each resource, we show that constraints exceeding quantifiable limits can introduce artificial traps to the control landscape and prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization.

  14. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
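The basic least-squares PCE workflow under review can be sketched in one dimension with a Legendre basis and plain Monte Carlo sampling (the simplest of the strategies listed); the target function and oversampling ratio are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def legendre_design(x, order):
    """Design matrix of Legendre polynomials P_0..P_order at sample points x."""
    return np.polynomial.legendre.legvander(x, order)

# Model to surrogate: a smooth function of a uniform input on [-1, 1].
f = lambda x: np.exp(0.5 * x) * np.sin(2.0 * x)

order = 6
oversampling = 5                       # samples per unknown coefficient
n = oversampling * (order + 1)
x = rng.uniform(-1.0, 1.0, n)          # Monte Carlo sampling of the input
coeffs, *_ = np.linalg.lstsq(legendre_design(x, order), f(x), rcond=None)

# Evaluate the PCE surrogate on a test grid and measure its worst-case error.
xt = np.linspace(-1.0, 1.0, 201)
approx = np.polynomial.legendre.legval(xt, coeffs)
err = np.max(np.abs(approx - f(xt)))
```

The sampling strategies compared in the review differ in how the points `x` are chosen (Latin hypercube, coherence-optimal, etc.); better designs reduce the error and the number of samples needed for a stable least-squares solve, particularly at low oversampling ratios.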

  15. Superconducting Quantum Interferometers for Nondestructive Evaluation

    PubMed Central

    Kostyurina, E. A.; Kalashnikov, K. V.; Maslennikov, Yu. V.; Koshelets, V. P.

    2017-01-01

    We review stationary and mobile systems that are used for the nondestructive evaluation of room temperature objects and are based on superconducting quantum interference devices (SQUIDs). The systems are optimized for samples whose dimensions are between 10 micrometers and several meters. Stray magnetic fields from small samples (10 µm–10 cm) are studied using a SQUID microscope equipped with a magnetic flux antenna, which is fed through the walls of a liquid nitrogen cryostat and a hole in the SQUID’s pick-up loop, and returned sidewards from the SQUID back to the sample. The SQUID microscope does not disturb the magnetization of the sample during image recording due to the decoupling of the magnetic flux antenna from the modulation and feedback coil. For larger samples, we use a hand-held mobile liquid nitrogen minicryostat with a first-order planar gradiometric SQUID sensor. Low-Tc DC SQUID systems that are designed for NDE measurements of bio-objects are able to operate with sufficient resolution in a magnetically unshielded environment. High-Tc DC SQUID magnetometers that are operated in a magnetic shield demonstrate a magnetic field resolution of ~4 fT/√Hz at 77 K. This sensitivity is improved to ~2 fT/√Hz at 77 K by using a soft magnetic flux antenna. PMID:29210980

  16. Optimal background matching camouflage.

    PubMed

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.
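
    The "most probable sample" prediction can be sketched numerically: draw prey-sized patches from a background, estimate the density of a simple feature, and take the mode. The code below is a toy illustration using a one-dimensional luminance feature and a histogram density, not the paper's full low-level vision model:

```python
import numpy as np

def most_probable_patch(background, patch, rng, n_patches=5000):
    """Sample prey-sized patches from a background image, estimate the
    density of a simple feature (mean luminance) with a histogram, and
    return the feature value at the mode -- the predicted best match."""
    h, w = background.shape
    ph, pw = patch
    means = np.empty(n_patches)
    for i in range(n_patches):
        r = rng.integers(0, h - ph)
        c = rng.integers(0, w - pw)
        means[i] = background[r:r + ph, c:c + pw].mean()
    counts, edges = np.histogram(means, bins=40)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])    # bin centre of the mode

rng = np.random.default_rng(0)
# Toy background: luminance clustered around 0.6 with local texture.
background = np.clip(rng.normal(0.6, 0.1, size=(200, 200)), 0, 1)
best = most_probable_patch(background, patch=(10, 10), rng=rng)
```

    Because the density is estimated at the scale of the prey-sized patch, rare but conspicuous background samples are automatically down-weighted, which is the statistical intuition behind the prediction tested in the field experiment.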

  17. A Rapid Protocol of Crude RNA/DNA Extraction for RT-qPCR Detection and Quantification of 'Candidatus Phytoplasma prunorum'

    PubMed Central

    Minguzzi, Stefano; Terlizzi, Federica; Lanzoni, Chiara; Poggi Pollini, Carlo; Ratti, Claudio

    2016-01-01

    Many efforts have been made to develop a rapid and sensitive method for phytoplasma and virus detection. Taking our cue from previous works, different rapid sample preparation methods have been tested and applied to Candidatus Phytoplasma prunorum (‘Ca. P. prunorum’) detection by RT-qPCR. A duplex RT-qPCR has been optimized using the crude sap as a template to simultaneously amplify a fragment of 16S rRNA of the pathogen and 18S rRNA of the host plant. The specific plant 18S rRNA internal control allows comparison and relative quantification of samples. A comparison between DNA and RNA contribution to qPCR detection is provided, showing higher contribution of the latter. The method presented here has been validated on more than a hundred samples of apricot, plum and peach trees. Since 2013, this method has been successfully applied to monitor ‘Ca. P. prunorum’ infections in field and nursery. A triplex RT-qPCR assay has also been optimized to simultaneously detect ‘Ca. P. prunorum’ and Plum pox virus (PPV) in Prunus. PMID:26742106

  18. Enhanced Uranium Ore Concentrate Analysis by Handheld Raman Sensor: FY15 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan, Samuel A.; Johnson, Timothy J.; Orton, Christopher R.

    2015-11-11

    High-purity uranium ore concentrates (UOC) represent a potential proliferation concern. A cost-effective, “point and shoot” in-field analysis capability to identify ore types, phases of materials present, and impurities, as well as estimate the overall purity would be prudent. Handheld, Raman-based sensor systems are capable of identifying chemical properties of liquid and solid materials. While handheld Raman systems have been extensively applied to many other applications, they have not been broadly studied for application to UOC, nor have they been optimized for this class of chemical compounds. PNNL was tasked in Fiscal Year 2015 by the Office of International Safeguards (NA-241) to explore the use of Raman for UOC analysis and characterization. This report summarizes the activities in FY15 related to this project. The following tasks were included: creation of an expanded library of Raman spectra of a UOC sample set, creation of optimal chemometric analysis methods to classify UOC samples by their type and level of impurities, and exploration of the various Raman wavelengths to identify the ideal instrument settings for UOC sample interrogation.

  19. Fast, Safe, Propellant-Efficient Spacecraft Motion Planning Under Clohessy-Wiltshire-Hill Dynamics

    NASA Technical Reports Server (NTRS)

    Starek, Joseph A.; Schmerling, Edward; Maher, Gabriel D.; Barbee, Brent W.; Pavone, Marco

    2016-01-01

    This paper presents a sampling-based motion planning algorithm for real-time and propellant-optimized autonomous spacecraft trajectory generation in near-circular orbits. Specifically, this paper applies recent algorithmic advances from the field of robot motion planning to the problem of impulsively actuated, propellant-optimized rendezvous and proximity operations under the Clohessy-Wiltshire-Hill dynamics model. The approach calls upon a modified version of the FMT* algorithm to grow a set of feasible trajectories over a deterministic, low-dispersion set of sample points covering the free state space. To enforce safety, the tree is only grown over the subset of actively safe samples, from which there exists a feasible one-burn collision-avoidance maneuver that can safely circularize the spacecraft orbit along its coasting arc under a given set of potential thruster failures. Key features of the proposed algorithm include 1) theoretical guarantees in terms of trajectory safety and performance, 2) amenability to real-time implementation, and 3) generality, in the sense that a large class of constraints can be handled directly. As a result, the proposed algorithm offers the potential for widespread application, ranging from on-orbit satellite servicing to orbital debris removal and autonomous inspection missions.
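
    Coasting arcs under this dynamics model can be propagated with the closed-form Clohessy-Wiltshire-Hill solution, which is what makes such planners fast. The sketch below implements the standard in-plane state transition (the function name and example orbit are ours, not from the paper):

```python
import numpy as np

def cwh_propagate(state, t, n):
    """Closed-form in-plane Clohessy-Wiltshire-Hill propagation.
    state = [x, y, vx, vy]: radial/along-track position and velocity
    relative to a circular reference orbit; n is the mean motion [rad/s]."""
    x0, y0, vx0, vy0 = state
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 \
        + ((4 * s - 3 * n * t) / n) * vy0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    return np.array([x, y, vx, vy])

# Example: free drift of a deputy starting 100 m radially above an
# ISS-like reference orbit (n ~ 0.00113 rad/s, ~92-minute period).
n = 0.00113
state0 = np.array([100.0, 0.0, 0.0, 0.0])
after_10min = cwh_propagate(state0, 600.0, n)
```

    After one full orbital period the radial position and both velocity components return to their initial values while the along-track position drifts, the well-known secular behavior that collision-avoidance circularization maneuvers exploit.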

  20. Physics and material science of ultra-high quality factor superconducting resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vostrikov, Alexander

    2015-08-01

    The subject of this work is nitrogen doping of the walls of niobium superconducting radio frequency (SRF) cavities, which aims to improve the fundamental-mode quality factor. A quantitative model of nitrogen diffusion into niobium, which calculates the concentration profile, was developed. The model estimates were confirmed by secondary ion mass spectrometry measurements. The model enabled optimization of a controlled nitrogen doping recipe; as a result, a robust, reproducible recipe for treating SRF cavity walls with nitrogen doping was developed. Cavities produced with the optimized recipe met the LCLS-II requirement of a quality factor of 2.7·10¹⁰ at an accelerating field of 16 MV/m. The microscopic effects of nitrogen doping on the properties of superconducting niobium were studied with the low energy muon spin rotation technique and magnetometer measurements. No significant effect of nitrogen was found on the electron mean free path, the magnetic field penetration depth, or the upper and surface critical magnetic fields. However, magnetic flux was found to start penetrating nitrogen-doped niobium samples at a lower external magnetic field than low-temperature-baked niobium, which explains the lower quench field of SRF cavities treated with nitrogen. The improved fundamental-mode quality factor motivated an analysis of the impact of higher-order modes (HOMs) on particle beam dynamics. Both the resonant and cumulative effects, caused by monopole and dipole HOMs respectively, are found to be negligible within the LCLS-II requirements.

  1. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
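
    The fragility that motivates these adaptive designs is easy to see in the underlying fixed-design calculation. The sketch below is a plain two-sample z-test approximation (not the paper's optimality criterion): it sizes a trial at an optimistic effect and shows how power degrades when the true effect is smaller:

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist().inv_cdf  # standard normal quantile function

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Two-sample z-test sample size per arm for standardized effect delta."""
    return ceil(2 * (Z(1 - alpha / 2) + Z(power)) ** 2 / delta ** 2)

def achieved_power(n, delta, alpha=0.05):
    """Power of the two-sample z-test with n subjects per arm."""
    return NormalDist().cdf(delta * sqrt(n / 2) - Z(1 - alpha / 2))

# Planning at an optimistic delta = 0.5 gives 63 per arm, but the
# nominal 80% power is not robust if the true effect is smaller.
n = n_per_arm(0.5)
for true_delta in (0.5, 0.4, 0.3):
    print(true_delta, round(achieved_power(n, true_delta), 2))
```

    Evaluating `achieved_power` across the plausible effect-size range, rather than at a single planning value, is the kind of robustness criterion against which candidate adaptive designs can be compared.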

  2. Design considerations for highly effective fluorescence excitation and detection optical systems for molecular diagnostics

    NASA Astrophysics Data System (ADS)

    Kasper, Axel; Van Hille, Herbert; Kuk, Sola

    2018-02-01

    Modern instruments for molecular diagnostics are continuously optimized for diagnostic accuracy, versatility and throughput. The latest progress in LED technology, together with tailored optics solutions, allows the development of highly efficient photonics engines adapted to the sample under test. Super-bright chip-on-board LED light sources are a key component of such instruments, providing maximum luminous intensities in a multitude of narrow spectral bands. In particular, combining white LEDs with other narrow-band LEDs achieves optimum efficiency, outperforming traditional Xenon light sources in terms of energy consumption, heat dissipation in the system, and switching time between spectral channels. Maximum sensitivity of the diagnostic system can only be achieved with an optimized optics system for the illumination and imaging of the sample. The illumination beam path must be designed for optimum homogeneity across the field while precisely limiting the angular distribution of the excitation light. This is a necessity for avoiding spill-over to the detection beam path and guaranteeing the efficiency of the spectral filtering. The imaging optics must combine high spatial resolution, high light collection efficiency and optimized suppression of excitation light for good signal-to-noise ratio. In order to achieve minimum cross-talk between individual wells in the sample, the optics design must also consider the generation of stray light and the formation of ghost images. We discuss what parameters and limitations have to be considered in an integrated system design approach covering the full path from the light source to the detector.

  3. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
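
    The "best worst" rule can be sketched as a minimax search: accept a design change only if it reduces the error of the currently worst field point, iterating until a different field point becomes the (smaller) worst. The toy example below uses hypothetical merit functions standing in for the image-quality metrics of a real lens design:

```python
import numpy as np

def best_worst_optimize(params, field_metrics, step=0.05, iters=500, rng=None):
    """Minimize the worst field point's error with random perturbations:
    a trial is kept only if it lowers max_j field_metrics[j](params)."""
    if rng is None:
        rng = np.random.default_rng(0)
    worst = max(m(params) for m in field_metrics)
    for _ in range(iters):
        trial = params + rng.normal(0.0, step, size=params.shape)
        w = max(m(trial) for m in field_metrics)
        if w < worst:                 # keep only if the worst point improves
            params, worst = trial, w
    return params, worst

# Toy stand-in: three field points whose "spot size" depends on two
# hypothetical lens parameters; the minimax optimum balances all three.
metrics = [
    lambda p: (p[0] - 1.0) ** 2 + 0.1 * p[1] ** 2,    # on-axis field point
    lambda p: (p[0] + 0.5) ** 2 + (p[1] - 1.0) ** 2,  # mid-field point
    lambda p: p[0] ** 2 + (p[1] + 1.0) ** 2,          # edge field point
]
p_opt, worst = best_worst_optimize(np.zeros(2), metrics)
```

    Because only the maximum over field points is ever compared, no relative weighting between field points is needed, which is the practical appeal the abstract describes; the same loop can drive a focus-position search by treating defocus as the parameter.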

  4. Stabilized Acoustic Levitation of Dense Materials Using a High-Powered Siren

    NASA Technical Reports Server (NTRS)

    Gammell, P. M.; Croonquist, A.; Wang, T. G.

    1982-01-01

    Stabilized acoustic levitation and manipulation of dense (e.g., steel) objects of 1 cm diameter, using a high powered siren, was demonstrated in trials that investigated the harmonic content and spatial distribution of the acoustic field, as well as the effect of sample position and reflector geometries on the acoustic field. Although further optimization is possible, the most stable operation achieved is expected to be adequate for most containerless processing applications. Best stability was obtained with an open reflector system, using a flat lower reflector and a slightly concave upper one. Operation slightly below resonance enhances stability as this minimizes the second harmonic, which is suspected of being a particularly destabilizing influence.

  5. Doping- and irradiation-controlled pinning of vortices in BaFe 2 (As 1 - x P x ) 2 single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, L.; Jia, Y.; Schlueter, J. A.

    We report on the systematic evolution of vortex pinning behavior in isovalent doped single crystals of BaFe 2 (As 1 - x P x ) 2 . Proceeding from optimal doped to overdoped samples, we find a clear transformation of the magnetization hysteresis from a fishtail behavior to a distinct peak effect, followed by a reversible magnetization and Bean-Livingston surface barriers. Strong point pinning dominates the vortex behavior at low fields whereas weak collective pinning determines the behavior at higher fields. In addition to doping effects, we show that particle irradiation by energetic protons can tune vortex pinning in these materials.

  7. Brain refractive index measured in vivo with high-NA defocus-corrected full-field OCT and consequences for two-photon microscopy.

    PubMed

    Binding, Jonas; Ben Arous, Juliette; Léger, Jean-François; Gigan, Sylvain; Boccara, Claude; Bourdieu, Laurent

    2011-03-14

    Two-photon laser scanning microscopy (2PLSM) is an important tool for in vivo tissue imaging with sub-cellular resolution, but the penetration depth of current systems is potentially limited by sample-induced optical aberrations. To quantify these, we measured the refractive index n' in the somatosensory cortex of 7 rats in vivo using defocus optimization in full-field optical coherence tomography (ff-OCT). We found n' to be independent of imaging depth or rat age. From these measurements, we calculated that two-photon imaging beyond 200 µm into the cortex is limited by spherical aberration, indicating that adaptive optics will improve imaging depth.

  8. [Research on optimization of mathematical model of flow injection-hydride generation-atomic fluorescence spectrometry].

    PubMed

    Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li

    2014-01-01

    Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological and metallurgical fields owing to its high sensitivity, wide measurement range and fast analysis. However, the method is difficult to optimize because many parameters affect its sensitivity and broadening, and the optimal conditions are generally sought through repeated experiments. The present paper proposes a mathematical model relating the operating parameters to the sensitivity and broadening coefficients, built from the law of conservation of mass according to the characteristics of the hydride chemical reaction and the composition of the system. The model proved accurate when theoretical simulations were compared with experimental results for an arsanilic acid standard solution. Finally, the paper presents a relation map between the parameters and the sensitivity/broadening coefficients, showing that GLS volume, carrier solution flow rate and sample loop volume are the main factors affecting sensitivity and broadening. Optimizing these three factors with the relation map improved the relative sensitivity 2.9-fold and reduced the relative broadening to 0.76 of its original value. The model can provide theoretical guidance for optimizing the experimental conditions.

  9. Optimal Protocols and Optimal Transport in Stochastic Thermodynamics

    NASA Astrophysics Data System (ADS)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-01

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  10. Optimal protocols and optimal transport in stochastic thermodynamics.

    PubMed

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-24

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  11. Minerva: An Integrated Geospatial/Temporal Toolset for Real-time Science Decision Making and Data Collection

    NASA Astrophysics Data System (ADS)

    Lees, D. S.; Cohen, T.; Deans, M. C.; Lim, D. S. S.; Marquez, J.; Heldmann, J. L.; Hoffman, J.; Norheim, J.; Vadhavk, N.

    2016-12-01

    Minerva integrates three capabilities that are critical to the success of NASA analogs. It combines NASA's Exploration Ground Data Systems (xGDS) and Playbook software, and MIT's Surface Exploration Traverse Analysis and Navigation Tool (SEXTANT). Together, they help to plan, optimize, and monitor traverses; schedule and track activity; assist with science decision-making and document sample and data collection. Pre-mission, Minerva supports planning with a priori map data (e.g., UAV and satellite imagery) and activity scheduling. During missions, xGDS records and broadcasts live data to a distributed team who take geolocated notes and catalogue samples. Playbook provides live schedule updates and multi-media chat. Post-mission, xGDS supports data search and visualization for replanning and analysis. NASA's BASALT (Biologic Analog Science Associated with Lava Terrains) and FINESSE (Field Investigations to Enable Solar System Science and Exploration) projects use Minerva to conduct field science under simulated Mars mission conditions including 5 and 15 minute one-way communication delays. During the recent BASALT-FINESSE mission, two field scientists (EVA team) executed traverses across volcanic terrain to characterize and sample basalts. They wore backpacks with communications and imaging capabilities, and carried field portable spectrometers. The Science Team was 40 km away in a simulated mission control center. The Science Team monitored imaging (video and still), spectral, voice, location and physiological data from the EVA team via the network from the field, under communication delays. Minerva provided the Science Team with a unified context of operations at the field site, so they could make meaningful remote contributions to the collection of 10's of geotagged samples. Minerva's mission architecture will be presented with technical details and capabilities. 
Through the development, testing and application of Minerva, we are defining requirements for the design of future capabilities to support human and human-robotic missions to deep space and Mars.

  12. Search for X-ray Emission from AGB Stars in the Coronal Graveyard

    NASA Astrophysics Data System (ADS)

    Montez, Rodolfo

    2013-10-01

    Maser observations demonstrate the existence of magnetic fields in the circumstellar envelopes of AGB stars. However, thus far, only 2-3 AGB stars have exhibited evidence for coronal X-ray emission. We have demonstrated that only the sensitivity of modern X-ray telescopes can detect magnetically-induced coronal emission and have identified a sample of AGB stars which are ideal candidates to search for such emission. Specifically, we have selected a sample of AGB stars with SiO maser emission, UV emission in at least one of the GALEX bandpasses, and low mass loss rates. The four selected AGB stars provide a pilot sample that optimally probes for coronal activity beyond the giant phase and that provides valuable tests for the launching and shaping of AGB mass loss.

  13. Numerical optimization of three-dimensional coils for NSTX-U

    NASA Astrophysics Data System (ADS)

    Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.

    2015-10-01

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  14. A traditional and a less-invasive robust design: choices in optimizing effort allocation for seabird population studies

    USGS Publications Warehouse

    Converse, S.J.; Kendall, W.L.; Doherty, P.F.; Naughton, M.B.; Hines, J.E.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    For many animal populations, one or more life stages are not accessible to sampling, and therefore an unobservable state is created. For colonially-breeding populations, this unobservable state could represent the subset of adult breeders that have foregone breeding in a given year. This situation applies to many seabird populations, notably albatrosses, where skipped breeders are either absent from the colony, or are present but difficult to capture or correctly assign to breeding state. Kendall et al. have proposed design strategies for investigations of seabird demography where such temporary emigration occurs, suggesting the use of the robust design to permit the estimation of time-dependent parameters and to increase the precision of estimates from multi-state models. A traditional robust design, where animals are subject to capture multiple times in a sampling season, is feasible in many cases. However, due to concerns that multiple captures per season could cause undue disturbance to animals, Kendall et al. developed a less-invasive robust design (LIRD), where initial captures are followed by an assessment of the ratio of marked-to-unmarked birds in the population or sampled plot. This approach has recently been applied in the Northwestern Hawaiian Islands to populations of Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. In this paper, we outline the LIRD and its application to seabird population studies. We then describe an approach to determining optimal allocation of sampling effort in which we consider a non-robust design option (nRD), and variations of both the traditional robust design (RD), and the LIRD. Variations we considered included the number of secondary sampling occasions for the RD and the amount of total effort allocated to the marked-to-unmarked ratio assessment for the LIRD. We used simulations, informed by early data from the Hawaiian study, to address optimal study design for our example cases. 
We found that the LIRD performed as well or nearly as well as certain variations of the RD in terms of root mean square error, especially when relatively little of the total effort was allocated to the assessment of the marked-to-unmarked ratio versus to initial captures. For the RD, we found no clear benefit of using 2, 4, or 6 secondary sampling occasions per year, though this result will depend on the relative effort costs of captures versus recaptures and on the length of the study. We also found that field-readable bands, which may be affixed to birds in addition to standard metal bands, will be beneficial in longer-term studies of albatrosses in the Northwestern Hawaiian Islands. Field-readable bands reduce the effort cost of recapturing individuals, and in the long-term this cost reduction can offset the additional effort expended in affixing the bands. Finally, our approach to determining optimal study design can be generally applied by researchers, with little seed data, to design their studies at the outset.

  15. A Rapid Method for Optimizing Running Temperature of Electrophoresis through Repetitive On-Chip CE Operations

    PubMed Central

    Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo

    2011-01-01

    In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077

  16. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
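
    The discrete multilocation search can be sketched with a simple genetic algorithm. The example below is a toy stand-in, not the PEST-based method of the paper: the objective is A-optimality (trace of the posterior parameter covariance) for a random linear model, and the search uses basic subset crossover and mutation:

```python
import numpy as np

def a_optimality(X, idx, prior=1e-2):
    """Toy data-worth measure: trace of the posterior parameter covariance
    for a linear model observed only at the selected sensor rows."""
    Xs = X[np.asarray(idx)]
    return np.trace(np.linalg.inv(Xs.T @ Xs + prior * np.eye(X.shape[1])))

def ga_select(X, k, pop_size=40, gens=60, rng=None):
    """Genetic search over k-subsets of candidate sensor locations."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[0]
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    best, best_f = None, np.inf
    for _ in range(gens):
        fits = np.array([a_optimality(X, ind) for ind in pop])
        order = np.argsort(fits)
        if fits[order[0]] < best_f:                  # archive the best design
            best, best_f = pop[order[0]].copy(), fits[order[0]]
        parents = [pop[i] for i in order[: pop_size // 2]]
        children = []
        for _ in range(pop_size):
            i, j = rng.choice(len(parents), size=2, replace=False)
            pool = np.union1d(parents[i], parents[j])       # subset crossover
            child = rng.choice(pool, size=k, replace=False)
            if rng.random() < 0.3:                          # mutation
                outside = np.setdiff1d(np.arange(n), child)
                child[rng.integers(k)] = rng.choice(outside)
            children.append(child)
        pop = children
    return best

X = np.random.default_rng(3).standard_normal((30, 4))  # 30 candidate sites
best_sites = ga_select(X, k=5)
```

    Treating a sensor's type as extra candidate rows for the same location would extend this toy to the simultaneous location-and-type search described in the abstract.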

  17. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

An adaptive inertia weight particle swarm algorithm is proposed in this study to overcome the local-optimum problem of traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional algorithm. The inertia weight was adjusted adaptively based on this indicator to keep the swarm searching globally and to prevent it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the corrected image had lower entropy and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that of the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
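The adaptive-inertia idea can be sketched as follows. The abstract does not specify the premature-convergence indicator, so the sketch below uses the swarm's fitness spread as an assumed indicator: when the spread collapses, the inertia weight is raised to restore exploration. The function `adaptive_pso` and all parameter values are illustrative, not the paper's implementation.

```python
import random

def adaptive_pso(f, dim, lo, hi, n=20, iters=200, seed=0):
    """Particle swarm minimization of f over [lo, hi]^dim.  The inertia
    weight w is raised when the swarm's fitness spread collapses -- a
    simple premature-convergence indicator -- to restore exploration."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]              # personal best positions
    pf = [f(x) for x in X]             # personal best values
    G = min(P, key=f)[:]               # global best position
    for _ in range(iters):
        fits = [f(x) for x in X]
        spread = max(fits) - min(fits)
        w = 0.95 if spread < 1e-6 else 0.73   # adaptive inertia weight
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + 1.49 * r1 * (P[i][d] - X[i][d])
                           + 1.49 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < f(G):
                    G = X[i][:]
    return G, f(G)
```

In the paper's application, `f` would measure the entropy of the image after dividing out a Legendre-polynomial bias field whose coefficients are the particle coordinates.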

  18. Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.

    2010-05-01

Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economical and efficient tools for exploring and sampling the ocean for data assimilation; however, there is an energy limitation on such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast, including the current speed, its gradient, and vorticity, in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including an Autonomous Surface Craft (ASC), were allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and from field experiments conducted in January 2009.

  19. Merging parallel tempering with sequential geostatistical resampling for improved posterior exploration of high-dimensional subsurface categorical fields

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire

    2016-04-01

The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error-corrupted) data from steady-state flow and transport experiments in categorical 7575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in a local optimum, while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources, SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).
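The parallel tempering mechanism itself is compact. The sketch below shows the standard replica-exchange acceptance rule and one sweep of a toy 1D sampler on a bimodal (double-well) energy; the functions `swap_prob` and `pt_step`, the temperature ladder, and the demo energy are illustrative and much simpler than the categorical-field proposals SGR actually uses.

```python
import math
import random

def swap_prob(Ti, Tj, Ei, Ej):
    """Metropolis acceptance probability for exchanging the states of two
    replicas at temperatures Ti < Tj with current energies Ei and Ej."""
    return min(1.0, math.exp((1.0 / Ti - 1.0 / Tj) * (Ei - Ej)))

def pt_step(states, temps, energy, rng, step=0.5):
    """One sweep: a Metropolis move per replica, then one neighbor swap."""
    for k, T in enumerate(temps):
        x = states[k] + rng.gauss(0.0, step)
        if rng.random() < math.exp(min(0.0, (energy(states[k]) - energy(x)) / T)):
            states[k] = x
    k = rng.randrange(len(temps) - 1)
    if rng.random() < swap_prob(temps[k], temps[k + 1],
                                energy(states[k]), energy(states[k + 1])):
        states[k], states[k + 1] = states[k + 1], states[k]
    return states

# demo: sample a double-well energy with a four-temperature ladder
energy = lambda x: 8.0 * (x * x - 1.0) ** 2
rng = random.Random(0)
states = [0.0] * 4
for _ in range(500):
    pt_step(states, [1.0, 2.0, 4.0, 8.0], energy, rng)
```

Hot replicas cross the energy barrier easily and hand well-mixed states down the ladder, which is why the cold chain can visit both posterior modes while a single-temperature chain stays trapped.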

  20. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
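The ensemble statistics step can be illustrated with a deliberately simplified model: a straight leg traversed against an uncertain along-track current, with one travel time per ensemble member. This is a stand-in only; the paper computes each realization's minimum-time path by solving a boundary value problem based on the Pontryagin maximum principle, which is not reproduced here. The function name and all numbers are hypothetical.

```python
import random
import statistics

def ensemble_travel_times(L, speed, current_samples):
    """Travel time over a straight leg of length L (m) for each realization
    of an uncertain along-track current (m/s): t = L / (speed + u)."""
    return [L / (speed + u) for u in current_samples]

# toy ensemble standing in for an ocean-model forecast ensemble
rng = random.Random(42)
samples = [rng.gauss(0.0, 0.3) for _ in range(200)]
times = ensemble_travel_times(1000.0, 2.0, samples)  # 1 km leg at 2 m/s
mean_t, sd_t = statistics.mean(times), statistics.pstdev(times)
```

Even this toy shows the point of the statistical analysis: travel time is a convex function of the opposing current, so the ensemble mean exceeds the deterministic time, and the spread quantifies planning risk.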

  1. Improved explosive collection and detection with rationally assembled surface sampling materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chouyyok, Wilaiwan; Bays, J. Timothy; Gerasimenko, Aleksandr A.

Sampling and detection of trace explosives is a key analytical process in modern transportation safety. In this work we explored some of the fundamental analytical processes for the collection and detection of trace-level explosives on surfaces with the most widely utilized system, thermal desorption ion mobility spectrometry (IMS). The performance of the standard muslin swipe material was compared with chemically modified fiberglass cloth whose surface was functionalized with phenyl groups. Compared to standard muslin, the phenyl-functionalized fiberglass sampling material showed better analyte release as well as improved response and repeatability over multiple uses of the same swipe. The improved sample release of the functionalized fiberglass swipes resulted in a significant increase in sensitivity. Various physical and chemical properties were systematically explored to determine optimal performance. These results have relevance to improving the detection of other explosive compounds and potentially to a wide range of other chemical sampling and field detection challenges.

  2. Determining optimal parameters of the self-referent encoding task: A large-scale examination of self-referent cognition and depression.

    PubMed

    Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G

    2018-06-07

    Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
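The model-selection procedure described above (best subsets regression evaluated on an independent test sample) can be sketched generically. The `best_subset` function below is a hypothetical illustration using ordinary least squares on synthetic data, not the study's actual code or its depression-severity outcome.

```python
import itertools
import numpy as np

def best_subset(X_train, y_train, X_test, y_test, max_size=3):
    """Exhaustive best-subsets OLS: fit every predictor subset up to
    max_size on the training split and keep the subset with the lowest
    squared error on the independent test split."""
    n_feat = X_train.shape[1]
    best = (np.inf, ())
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(range(n_feat), k):
            cols = list(subset)
            coef, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
            err = float(np.mean((X_test[:, cols] @ coef - y_test) ** 2))
            if err < best[0]:
                best = (err, subset)
    return best
```

Scoring on a held-out split, as the study did across its three samples, is what lets the procedure discard metrics (e.g., recall or raw RT) that fit the training data but do not generalize.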

  3. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumann, K; Weber, U; Simeonov, Y

Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility, consisting of the beam tube, two quadrupole magnets, and a beam monitor system, was calculated with the help of Matlab, using matrices that solve the equation of motion of a charged particle in a magnetic field and in a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA, and the transport of 80 MeV/u C12 ions through this ion-optic system was calculated using a user routine to implement magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user routine was successful. Analysis of the fluence pattern along the beam axis reproduced the characteristic focusing and de-focusing effects of the quadrupole magnets. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
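The matrix formalism the Methods section refers to is standard beam optics: each element maps the transverse state (x, x') by a 2×2 transfer matrix, and the system matrix is the product of the element matrices. The sketch below uses thin-lens quadrupoles and drifts with arbitrary illustrative lengths and focal lengths, not the facility's actual ion-optic parameters.

```python
def drift(L):
    """2x2 transfer matrix of a field-free drift of length L (m)."""
    return [[1.0, L], [0.0, 1.0]]

def thin_quad(f):
    """Thin-lens quadrupole: focusing for f > 0, defocusing for f < 0."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def track(M, x, xp):
    """Propagate the transverse state (x, x') through transfer matrix M."""
    return (M[0][0] * x + M[0][1] * xp, M[1][0] * x + M[1][1] * xp)

# doublet: drift -> focusing quad -> drift -> defocusing quad -> drift
M = drift(1.0)
for elem in (thin_quad(0.5), drift(0.3), thin_quad(-0.5), drift(1.0)):
    M = matmul(elem, M)   # later elements multiply from the left
```

An optimizer like the one described in the abstract would adjust the quadrupole strengths (here, 1/f) until the spot-size element M[0][1] acting on the beam divergence yields a thin, circular spot at the iso-center.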

  4. High-field asymmetric waveform ion mobility spectrometry for mass spectrometry-based proteomics.

    PubMed

    Swearingen, Kristian E; Moritz, Robert L

    2012-10-01

    High-field asymmetric waveform ion mobility spectrometry (FAIMS) is an atmospheric pressure ion mobility technique that separates gas-phase ions by their behavior in strong and weak electric fields. FAIMS is easily interfaced with electrospray ionization and has been implemented as an additional separation mode between liquid chromatography (LC) and mass spectrometry (MS) in proteomic studies. FAIMS separation is orthogonal to both LC and MS and is used as a means of on-line fractionation to improve the detection of peptides in complex samples. FAIMS improves dynamic range and concomitantly the detection limits of ions by filtering out chemical noise. FAIMS can also be used to remove interfering ion species and to select peptide charge states optimal for identification by tandem MS. Here, the authors review recent developments in LC-FAIMS-MS and its application to MS-based proteomics.

  5. A New Optical Design for Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Thompson, K. L.

    2002-05-01

We present an optical design concept for imaging spectroscopy with some advantages over current systems. The system projects monochromatic images onto the 2-D array detector(s). Faint-object and crowded-field spectroscopy can be reduced by first applying image processing techniques and then building the spectrum, unlike integral field units, where one must first extract the spectra, build data cubes from these, and then reconstruct the target's integrated spectral flux. Like integral field units, all photons are detected simultaneously, unlike tunable filters, which must be scanned through the wavelength range of interest and therefore pay a sensitivity penalty. Several sample designs are presented, including an instrument optimized for measuring intermediate-redshift galaxy cluster velocity dispersions, one designed for near-infrared ground-based adaptive optics, and one intended for space-based rapid follow-up of transient point sources such as supernovae and gamma-ray bursts.

  6. Optimization study on the magnetic field of superconducting Halbach Array magnet

    NASA Astrophysics Data System (ADS)

    Shen, Boyang; Geng, Jianzhao; Li, Chao; Zhang, Xiuchang; Fu, Lin; Zhang, Heng; Ma, Jun; Coombs, T. A.

    2017-07-01

This paper presents the optimization of the strength and homogeneity of the magnetic field from a superconducting Halbach Array magnet. A conventional Halbach Array uses a special arrangement of permanent magnets to generate a homogeneous magnetic field. A superconducting Halbach Array utilizes High Temperature Superconductor (HTS) coils operated below their critical temperature to construct an electromagnet that performs equivalently to the permanent-magnet-based Halbach Array. Simulations of the superconducting Halbach Array were carried out using the H-formulation with a B-dependent critical current density and a bulk approximation, on the FEM platform COMSOL Multiphysics. The optimization focused on the coils' locations, as well as the geometry and number of coils, while keeping the total amount of superconductor fixed. Results show that the Halbach-Array-based superconducting magnet can generate a magnetic field with intensity over 1 Tesla and improved homogeneity using proper optimization methods. A mathematical relation between these optimization parameters and the intensity and homogeneity of the magnetic field was developed.

  7. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove all or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
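Weighted space-time averaging of scattered satellite samples can be sketched generically as below. The Gaussian kernel shape is an assumption for illustration; only the 510 km and 72 h window scales echo the abstract, and the function name `st_interpolate` is hypothetical.

```python
import math

def st_interpolate(samples, x0, t0, Ls=510.0, Lt=72.0):
    """Gaussian-weighted space-time average of scattered samples.

    samples: iterable of (x_km, t_h, value).  Ls (km) and Lt (h) echo the
    510 km / 72 h windows suggested in the abstract; the Gaussian kernel
    itself is an illustrative assumption, not the authors' interpolator."""
    wsum = vsum = 0.0
    for x, t, v in samples:
        w = math.exp(-((x - x0) / Ls) ** 2 - ((t - t0) / Lt) ** 2)
        wsum += w
        vsum += w * v
    return vsum / wsum
```

Choosing Ls and Lt jointly, rather than independently, is the core of the method: the two ranges are tied together by matching observed variability across scales.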

  8. Enzymatic Activity Detection via Electrochemistry for Enceladus

    NASA Technical Reports Server (NTRS)

    Studemeister, Lucy; Koehne, Jessica; Quinn, Richard

    2017-01-01

    Electrochemical detection of biological molecules is a pertinent topic and application in many fields such as medicine, environmental spills, and life detection in space. Proteases, a class of molecules of interest in the search for life, catalyze the hydrolysis of peptides. Trypsin, a specific protease, was chosen to investigate an optimized enzyme detection system using electrochemistry. This study aims at providing the ideal functionalization of an electrode that can reliably detect a signal indicative of an enzymatic reaction from an Enceladus sample.

  9. A compact CCD-monitored atomic force microscope with optical vision and improved performances.

    PubMed

    Mingyue, Liu; Haijun, Zhang; Dongxian, Zhang

    2013-09-01

    A novel CCD-monitored atomic force microscope (AFM) with optical vision and improved performances has been developed. Compact optical paths are specifically devised for both tip-sample microscopic monitoring and cantilever's deflection detecting with minimized volume and optimal light-amplifying ratio. The ingeniously designed AFM probe with such optical paths enables quick and safe tip-sample approaching, convenient and effective tip-sample positioning, and high quality image scanning. An image stitching method is also developed to build a wider-range AFM image under monitoring. Experiments show that this AFM system can offer real-time optical vision for tip-sample monitoring with wide visual field and/or high lateral optical resolution by simply switching the objective; meanwhile, it has the elegant performances of nanometer resolution, high stability, and high scan speed. Furthermore, it is capable of conducting wider-range image measurement while keeping nanometer resolution. Copyright © 2013 Wiley Periodicals, Inc.
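The image-stitching step mentioned above needs the relative offset between overlapping scans. A common way to estimate an integer translation is phase correlation; the sketch below is a generic illustration (the function name and the 2D FFT approach are assumptions, not the authors' stitching method).

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking image a onto image b via
    the peak of the normalized cross-power spectrum (phase correlation)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12          # keep only the phase
    corr = np.real(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks past the midpoint back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Once the offset is known, neighboring scans can be pasted into a common canvas to build the wider-range image while preserving the nanometer-scale resolution of each tile.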

  10. High-Speed Terahertz Waveform Measurement for Intense Terahertz Light Using 100-kHz Yb-Doped Fiber Laser.

    PubMed

    Tsubouchi, Masaaki; Nagashima, Keisuke

    2018-06-14

    We demonstrate a high-speed terahertz (THz) waveform measurement system for intense THz light with a scan rate of 100 Hz. To realize the high scan rate, a loudspeaker vibrating at 50 Hz is employed to scan the delay time between THz light and electro-optic sampling light. Because the fast scan system requires a high data sampling rate, we develop an Yb-doped fiber laser with a repetition rate of 100 kHz optimized for effective THz light generation with the output electric field of 1 kV/cm. The present system drastically reduces the measurement time of the THz waveform from several minutes to 10 ms.

  11. Development of a simple and rapid method for the specific identification of organism causing anthrax by slide latex agglutination.

    PubMed

    Sumithra, T G; Chaturvedi, V K; Gupta, P K; Sunita, S C; Rai, A K; Kutty, M V H; Laxmi, U; Murugan, M S

    2014-05-01

A specific latex agglutination test (LAT) based on anti-PA (protective antigen) antibodies, with a detection limit of 5 × 10⁴ formalin-treated Bacillus anthracis cells or 110 ng of PA, was optimized in this study. The optimized LAT could detect anthrax toxin in whole blood as well as in serum from animal models of anthrax infection. The protocol is a simple and promising method for the specific detection of the bacteria causing anthrax under routine laboratory as well as field conditions, without any special equipment or expertise. The article presents the first report of a latex agglutination test for the specific identification of cultures of the bacteria causing anthrax. Because the test targets one of the anthrax toxin proteins (PA), it can also be used to determine the virulence of suspected organisms. At the same time, the same LAT can be used directly on whole blood or serum samples under field conditions for the specific diagnosis of anthrax. © 2013 The Society for Applied Microbiology.

  12. Exploiting Size-Dependent Drag and Magnetic Forces for Size-Specific Separation of Magnetic Nanoparticles

    PubMed Central

    Rogers, Hunter B.; Anani, Tareq; Choi, Young Suk; Beyers, Ronald J.; David, Allan E.

    2015-01-01

Realizing the full potential of magnetic nanoparticles (MNPs) in nanomedicine requires the optimization of their physical and chemical properties. Elucidation of the effects of these properties on clinical diagnostic or therapeutic performance, however, requires the synthesis or purification of homogeneous samples, which has proved to be difficult. While initial simulations indicated that size-selective separation could be achieved by flowing magnetic nanoparticles through a magnetic field, subsequent in vitro experiments were unable to reproduce the predicted results. Magnetic field-flow fractionation, however, was found to be an effective method for the separation of polydisperse suspensions of iron oxide nanoparticles with diameters greater than 20 nm. While similar methods have been used to separate magnetic nanoparticles before, no previous work has been done with magnetic nanoparticles between 20 and 200 nm. Both transmission electron microscopy (TEM) and dynamic light scattering (DLS) analyses were used to confirm the size of the MNPs. Further development of this work could lead to MNPs with the narrow size distributions necessary for their in vitro and in vivo optimization. PMID:26307980
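The size dependence named in the title follows from a simple force balance: the magnetic force on a sphere scales with its volume (r³) while Stokes drag scales with r, so the field-induced drift velocity scales as r². The sketch below is a back-of-envelope illustration with generic magnetophoresis formulas; the function name, the susceptibility contrast `d_chi`, and all parameter values are illustrative assumptions, not the paper's model.

```python
import math

def magnetic_drift_velocity(radius_m, grad_B2, d_chi=0.2, eta=1e-3,
                            mu0=4e-7 * math.pi):
    """Terminal magnetophoretic velocity of a sphere in Stokes flow:
    F_mag = V * d_chi * grad(B^2) / (2 * mu0) balanced against
    F_drag = 6 * pi * eta * r * v.  Parameter values are illustrative."""
    V = (4.0 / 3.0) * math.pi * radius_m ** 3   # particle volume
    F = V * d_chi * grad_B2 / (2.0 * mu0)       # volume-scaled magnetic force
    return F / (6.0 * math.pi * eta * radius_m) # Stokes drag balance
```

Doubling the radius quadruples the drift velocity, which is exactly the size contrast a field-flow fractionation channel exploits to separate a polydisperse suspension.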

  13. The Optimization of Electrophoresis on a Glass Microfluidic Chip and its Application in Forensic Science.

    PubMed

    Han, Jun P; Sun, Jing; Wang, Le; Liu, Peng; Zhuang, Bin; Zhao, Lei; Liu, Yao; Li, Cai X

    2017-11-01

    Microfluidic chips offer significant speed, cost, and sensitivity advantages, but numerous parameters must be optimized to provide microchip electrophoresis detection. Experiments were conducted to study the factors, including sieving matrices (the concentration and type), surface modification, analysis temperature, and electric field strengths, which all impact the effectiveness of microchip electrophoresis detection of DNA samples. Our results showed that the best resolution for ssDNA was observed using 4.5% w/v (7 M urea) lab-fabricated LPA gel, dynamic wall coating of the microchannel, electrophoresis temperatures between 55 and 60°C, and electrical fields between 350 and 450 V/cm on the microchip-based capillary electrophoresis (μCE) system. One base-pair resolution could be achieved in the 19-cm-length microchannel. Furthermore, both 9947A standard genomic DNA and DNA extracted from blood spots were demonstrated to be successfully separated with well-resolved DNA peaks in 8 min. Therefore, the microchip electrophoresis system demonstrated good potential for rapid forensic DNA analysis. © 2017 American Academy of Forensic Sciences.

  14. [Simulation on remediation of benzene contaminated groundwater by air sparging].

    PubMed

    Fan, Yan-Ling; Jiang, Lin; Zhang, Dan; Zhong, Mao-Sheng; Jia, Xiao-Yang

    2012-11-01

Air sparging (AS) is an in situ remedial technology used in groundwater remediation for pollution with volatile organic compounds (VOCs). At present, the field design of air sparging systems is based mainly on experience, owing to the lack of field data. To obtain rational design parameters, the TMVOC module in the Petrasim software package, combined with field test results from a coking plant in Beijing, was used to optimize the design parameters and simulate the remediation process. The pilot test showed that the optimal injection rate was 23.2 m³·h⁻¹ and the optimal radius of influence (ROI) was 5 m. The simulated pressure response matched the field test results well, indicating a good representation by the simulation. The optimization results indicated that the optimal injection location was at the bottom of the aquifer. Simulated at the optimized injection location, the optimal injection rate was 20 m³·h⁻¹, in accordance with the field test result. The optimal ROI was 3 m, less than the field test result, mainly because the field test reflected the flow behavior in the upper part of the groundwater and the unsaturated zone, where the width of the flow increased rapidly and became larger than the actual one. With the above optimized operation parameters, together with the hydro-geological parameters measured on site, the model simulation showed that 90 days were needed to remediate benzene from 371,000 µg·L⁻¹ to 1 µg·L⁻¹ at the site, and that an operation mode in which injection wells were progressively turned off once the groundwater around them was "clean" was better than keeping all wells operating throughout the remediation process.

  15. Assessment and Optimization of Lidar Measurement Availability for Wind Turbine Control: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davoust, S.; Jehu, A.; Bouillet, M.

    2014-05-01

Turbine-mounted lidars provide preview measurements of the incoming wind field. By reducing loads on critical components and increasing the potential power extracted from the wind, the performance of wind turbine controllers can be improved [2]. As a result, integrating a light detection and ranging (lidar) system has the potential to lower the cost of wind energy. This paper presents an evaluation of turbine-mounted lidar availability. Availability is a metric that measures the proportion of time the lidar is producing controller-usable data, and it is essential when a wind turbine controller relies on a lidar. To accomplish this, researchers from Avent Lidar Technology and the National Renewable Energy Laboratory first assessed and modeled the effect of extreme atmospheric events, showing how a multirange lidar delivers measurements for a wide variety of conditions. Second, by using a theoretical approach and conducting an analysis of field feedback, we investigated the effects of the lidar setup on the wind turbine. This helps determine the optimal lidar mounting position at the back of the nacelle and establishes a relationship between availability, turbine rpm, and lidar sampling time. Lastly, we considered the role of the wind field reconstruction strategies and the turbine controller in the definition and performance of a lidar's measurement availability.

  16. Ultrasound-assisted vapor generation of mercury.

    PubMed

    Ribeiro, Anderson S; Vieira, Mariana A; Willie, Scott; Sturgeon, Ralph E

    2007-06-01

Cold vapor generation arising from the reduction of both Hg²⁺ and CH₃Hg⁺ occurs under ultrasonic (US) fields of sufficient density to achieve both localized heating and radical-based attack in solutions of formic and acetic acids and tetramethylammonium hydroxide (TMAH). A batch sonoreactor utilizing an ultrasonic probe as the energy source and a flow-through system based on a US bath were optimized for this purpose. Reduction of CH₃Hg⁺ to Hg⁰ occurs only at relatively high US field density (>10 W cm⁻³ of sample solution) and is thus not observed when a conventional US bath is used for cold vapor generation. Speciation of mercury is thus possible by altering the power density during the measurement process. Thermal reduction of Hg²⁺ is efficient in formic acid and TMAH at 70 °C and occurs in the absence of the US field. Room-temperature studies with the batch sonoreactor reveal a slow reduction process, producing temporally broad signals with an efficiency of approximately 68% of that obtained with a conventional SnCl₂ reduction system. Molecular species of mercury are generated at high concentrations of formic and acetic acid. Factors affecting the generation of Hg⁰ were optimized, and the batch sonoreactor was used for the determination of total mercury in the SLRS-4 river water reference material.

  17. Influence of a magnetic field during directional solidification of MAR-M 246 + Hf superalloy

    NASA Technical Reports Server (NTRS)

    Andrews, J. Barry; Alter, Wendy; Schmidt, Dianne

    1991-01-01

An area that has been almost totally overlooked in the optimization of properties in directionally solidified superalloys is the control of microstructural features through the application of a magnetic field during solidification. The influence of a magnetic field on the microstructural features of a nickel-base superalloy is investigated. Studies were performed on the dendritic MAR-M 246+Hf alloy, which was solidified both under a 5 kG magnetic field and under no-applied-field conditions. The possible influences of the magnetic field on the solidification process were observed by studying variations in microstructural features, including the volume fraction, surface area, number, and shape of the carbide particles. Stereological factors analyzed also included primary and secondary dendrite arm spacings and the volume fraction of the interdendritic eutectic constituent. Microprobe analysis was performed to determine the chemistry of the carbides, dendrites, and interdendritic constituents, and how it varied between field and no-field solidification samples. Experiments involving periodic application and removal of the magnetic field were also performed to permit a comparison with structural variations observed in a MAR-M 246+Hf alloy solidified during KC-135 high-g, low-g maneuvers.

  18. Effects of anodizing conditions and annealing temperature on the morphology and crystalline structure of anodic oxide layers grown on iron

    NASA Astrophysics Data System (ADS)

    Pawlik, Anna; Hnida, Katarzyna; Socha, Robert P.; Wiercigroch, Ewelina; Małek, Kamilla; Sulka, Grzegorz D.

    2017-12-01

Anodic iron oxide layers were formed by anodization of iron foil in an ethylene glycol-based electrolyte containing 0.2 M NH4F and 0.5 M H2O at 40 V for 1 h. The anodizing conditions, such as electrolyte composition and applied potential, were optimized. To examine the influence of electrolyte stirring and an applied magnetic field, the anodic samples were prepared under dynamic and static conditions, in the presence or absence of a magnetic field. It was shown that ordered iron oxide nanopore arrays could be obtained at lower anodizing temperatures (10 and 20 °C) under static conditions without the magnetic field, or under dynamic conditions with the applied magnetic field. Since the as-prepared anodic layers are amorphous in nature, the samples were annealed in air at different temperatures (200-500 °C) for a fixed time (1 h). The morphology and crystal phases developed after anodization and subsequent annealing were characterized using field-emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and Raman spectroscopy. The results proved that the annealing process transforms the amorphous layer into magnetite and hematite phases. In addition, the heat treatment results in a substantial decrease in the fluorine content and an increase in the oxygen content.

  19. Monitoring the interfacial electric field in pure and doped SrTiO3 surfaces by means of phase-resolved optical second harmonic generation

    NASA Astrophysics Data System (ADS)

    Rubano, Andrea; Mou, Sen; Paparo, Domenico

    2018-05-01

    Oxides and new functional materials such as oxide-based heterostructures are strong candidates for next-generation electronics. One of the main features that rules the electronic behavior of these compounds is the interfacial electric field, which confines the charge carriers to a quasi-two-dimensional space region. The sign of the confined charge clearly depends on the electric field direction, which is, however, a very elusive quantity, as most techniques can only detect its absolute value. Even more valuable would be access to the sign of the interfacial electric field directly during sample growth, making it possible to optimize the growth conditions while monitoring the feature of interest. For this aim, solid and reliable sensors are needed to monitor thin films as they are grown. We have recently proposed optical second harmonic generation as a tool for non-invasive, non-destructive, real-time, in-situ imaging of oxide epitaxial film growth. The spatial resolution of this technique has been exploited to obtain real-time images of the sample under investigation. Here we propose to exploit another very important physical property of the second harmonic wave: its phase, which is directly coupled to the electric field direction, as shown by our measurements.

  20. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    NASA Astrophysics Data System (ADS)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoring a high-resolution image from several degraded versions of the same scene (deconvolution) has received attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point by point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise for overcoming the limitations of image sensor technology in many fields, such as forensic, medical, satellite, or scientific imaging.

  1. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in northeastern Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity, and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  2. Statistical Searches for Microlensing Events in Large, Non-uniformly Sampled Time-Domain Surveys: A Test Using Palomar Transient Factory Data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Agüeros, Marcel A.; Fournier, Amanda P.; Street, Rachel; Ofek, Eran O.; Covey, Kevin R.; Levitan, David; Laher, Russ R.; Sesar, Branimir; Surace, Jason

    2014-01-01

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ~20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ~40 times in the R band, ~2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
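
    The von Neumann ratio used above compares successive differences to the overall variance: a smooth, correlated bump (as microlensing produces) yields a low ratio, while uncorrelated noise yields a value near 2. A minimal sketch, with invented light curves and no error weighting (unlike the actual PTF selection):

```python
def von_neumann_ratio(mags):
    """Mean squared successive difference divided by the sample variance.

    Pure white noise gives ~2; values well below 2 indicate smoothly
    correlated variability such as a microlensing bump.
    """
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / (n - 1)
    delta2 = sum((mags[i + 1] - mags[i]) ** 2 for i in range(n - 1)) / (n - 1)
    return delta2 / var

# A smooth bump (microlensing-like) vs. an alternating (noise-like) series:
bump = [0, 1, 2, 3, 4, 3, 2, 1, 0]
noise = [0, 1, 0, 1, 0, 1, 0, 1, 0]
assert von_neumann_ratio(bump) < 1.0
assert von_neumann_ratio(noise) > 2.0
```

    In practice the statistic would be computed on magnitudes with measurement errors, and the selection threshold calibrated on simulated events injected into real light curves.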

  3. Data on the synthesis processes optimization of novel β-NiS film modified CdS nanoflowers heterostructure nanocomposite for photocatalytic hydrogen evolution.

    PubMed

    Zhang, Yu; Peng, Zhijian; Guan, Shundong; Fu, Xiuli

    2018-02-01

    The data presented in this article are related to a research article entitled 'Novel β-NiS film modified CdS nanoflowers heterostructure nanocomposite: extraordinarily highly efficient photocatalysts for hydrogen evolution' (Zhang et al., 2018) [1]. In this article, we report original data on the optimization of the synthesis processes for the proposed nanocomposite on the basis of its optimum photocatalytic performance, together with comparisons against literature results and comparative experiments. The composition, microstructure, morphology, photocatalytic hydrogen evolution, and photocatalytic stability of the corresponding samples are included in this report. The data are presented in this format in order to facilitate comparison with data from other researchers in the field and understanding of the mechanism of similar catalysts.

  4. VizieR Online Data Catalog: Photometry of 3 open clusters (Cignoni+ 2011)

    NASA Astrophysics Data System (ADS)

    Cignoni, M.; Beccari, G.; Bragaglia, A.; Tosi, M.

    2012-02-01

    The three clusters were observed in service mode at the Large Binocular Telescope (LBT) on Mt Graham (Arizona) with the Large Binocular Camera (LBC) on 2008-Dec-02, and with the Device Optimized for the LOw RESolution (DOLORES) at the Italian Telescopio Nazionale Galileo (TNG) on 2009-Jan-03. There are two LBCs, one optimized for the UV-blue filters and one for the red-IR ones, mounted at each prime focus of the LBT. Each LBC uses four EEV chips (2048x4608 pixels), placed three in a row with the fourth rotated 90° with respect to the others. The field of view of the LBC is equivalent to 23x23 arcmin², with a pixel sampling of 0.23 arcsec. (3 data files).

  5. Mean-Field Games for Marriage

    PubMed Central

    Bauso, Dario; Dia, Ben Mansour; Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2014-01-01

    This article examines mean-field games for marriage. The results support the argument that optimizing the long-term well-being through effort and social feeling state distribution (mean-field) will help to stabilize marriage. However, if the cost of effort is very high, the couple fluctuates in a bad feeling state or the marriage breaks down. We then examine the influence of society on a couple using mean-field sentimental games. We show that, in mean-field equilibrium, the optimal effort is always higher than the one-shot optimal effort. We illustrate numerically the influence of the couple’s network on their feeling states and their well-being. PMID:24804835

  6. Mean-field games for marriage.

    PubMed

    Bauso, Dario; Dia, Ben Mansour; Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2014-01-01

    This article examines mean-field games for marriage. The results support the argument that optimizing the long-term well-being through effort and social feeling state distribution (mean-field) will help to stabilize marriage. However, if the cost of effort is very high, the couple fluctuates in a bad feeling state or the marriage breaks down. We then examine the influence of society on a couple using mean-field sentimental games. We show that, in mean-field equilibrium, the optimal effort is always higher than the one-shot optimal effort. We illustrate numerically the influence of the couple's network on their feeling states and their well-being.

  7. Optimal control of the strong-field ionization of silver clusters in helium droplets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truong, N. X.; Goede, S.; Przystawik, A.

    Optimal control techniques combined with femtosecond laser pulse shaping are applied to steer and enhance the strong-field induced emission of highly charged atomic ions from silver clusters embedded in helium nanodroplets. With light fields shaped in amplitude and phase we observe a substantial increase of the Ag{sup q+} yield for q>10 when compared to bandwidth-limited and optimally stretched pulses. A remarkably simple double-pulse structure, containing a low-intensity prepulse and a stronger main pulse, turns out to produce the highest atomic charge states up to Ag{sup 20+}. A negative chirp during the main pulse hints at dynamic frequency locking to the cluster plasmon. A numerical optimal control study on pure silver clusters with a nanoplasma model converges to a similar pulse structure and corroborates that the optimal light field adapts to the resonant excitation of cluster surface plasmons for efficient ionization.

  8. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools for investigating the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
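
    The least-squares side of such a calibration can be illustrated with a toy stand-in for the real target function: fitting the melting temperature of a two-state (van't Hoff) excess heat-capacity curve to "experimental" data by grid search. The model, parameter values, and grid below are invented for illustration and are not the NARES-2P procedure:

```python
import math

def heat_capacity(T, Tm, dH):
    """Two-state (van't Hoff) excess heat capacity, peaked at Tm."""
    R = 8.314  # J/(mol K)
    K = math.exp(-dH / R * (1.0 / T - 1.0 / Tm))
    f = K / (1 + K)                      # folded fraction
    return dH ** 2 / (R * T ** 2) * f * (1 - f)

# "experimental" curve generated with known parameters Tm=330 K, dH=200 kJ/mol
T_exp = [300 + 2 * i for i in range(31)]
C_exp = [heat_capacity(T, 330.0, 2.0e5) for T in T_exp]

def sse(Tm, dH):
    """Sum of squared residuals between model and experimental curves."""
    return sum((heat_capacity(T, Tm, dH) - C) ** 2 for T, C in zip(T_exp, C_exp))

# coarse grid search over the melting temperature, dH held fixed
best_Tm = min((310 + i for i in range(41)), key=lambda Tm: sse(Tm, 2.0e5))
assert best_Tm == 330
```

    A real force-field optimization would vary many coupled parameters and combine this least-squares term with the maximum-likelihood ensemble term, but the structure of the objective is the same.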

  9. Application of support vector regression for optimization of vibration flow field of high-density polyethylene melts characterized by small angle light scattering

    NASA Astrophysics Data System (ADS)

    Xian, Guangming

    2018-03-01

    In this paper, the vibration flow field parameters of polymer melts in a visual slit die are optimized using an intelligent algorithm. Experimental small angle light scattering (SALS) patterns are shown to characterize the processing. To capture the scattered light, a polarizer and an analyzer are placed before and after the polymer melts. The results reported in this study are obtained using high-density polyethylene (HDPE) at a rotation speed of 28 rpm. In addition, the support vector regression (SVR) analytical method is introduced for optimizing the parameters of the vibration flow field. This work establishes the general applicability of SVR for predicting the optimal parameters of the vibration flow field.
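
    The abstract uses support vector regression; as a self-contained stand-in, the sketch below fits a tiny RBF kernel *ridge* regressor (a close cousin of SVR that swaps the ε-insensitive loss for squared loss) mapping a single flow-field parameter to a response. The data, kernel width, and regularization are all invented for illustration:

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def fit_predict(xs, ys, x_new, lam=1e-6):
    """Solve (K + lam*I) alpha = y by Gaussian elimination, then predict."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    a = [row[:] + [y] for row, y in zip(K, ys)]   # augmented matrix [K | y]
    for c in range(n):                            # forward elimination
        p = max(range(c, n), key=lambda r: abs(a[r][c]))  # partial pivoting
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n + 1):
                a[r][k] -= f * a[c][k]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):                # back substitution
        alpha[r] = (a[r][n] - sum(a[r][k] * alpha[k]
                                  for k in range(r + 1, n))) / a[r][r]
    return sum(al * rbf(x, x_new) for al, x in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0]      # e.g. vibration frequencies (arbitrary units)
ys = [0.0, 1.0, 4.0, 9.0]      # responses: y = x^2 at the training points
assert abs(fit_predict(xs, ys, 2.0) - 4.0) < 0.01   # interpolates training data
assert 1.0 < fit_predict(xs, ys, 1.5) < 4.0          # sensible in between
```

    A production study would use a real SVR implementation with cross-validated hyperparameters; this sketch only shows the shape of the kernel-regression machinery behind such predictions.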

  10. Asian Tracer Experiment and Atmospheric Modeling (TEAM) Project: Draft Field Work Plan for the Asian Long-Range Tracer Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allwine, K Jerry; Flaherty, Julia E.

    2007-08-01

    This report provides an experimental plan for a proposed Asian long-range tracer study as part of the international Tracer Experiment and Atmospheric Modeling (TEAM) Project. The TEAM partners are China, Japan, South Korea, and the United States. Optimal times of year to conduct the study, meteorological measurements needed, proposed tracer release locations, proposed tracer sampling locations, and the proposed durations of tracer releases and subsequent sampling are given. Also given are the activities necessary to prepare for the study and the schedule for completing the preparation activities leading to the actual field operations. This report is intended to provide the TEAM members with the information necessary for planning and conducting the Asian long-range tracer study. The experimental plan is proposed, at this time, to describe the efforts necessary to conduct the Asian long-range tracer study, and the plan will undoubtedly be revised and refined as planning goes forward over the next year.

  11. Magnetic field sensing with nitrogen-vacancy color centers in diamond

    NASA Astrophysics Data System (ADS)

    Pham, Linh My

    In recent years, the nitrogen-vacancy (NV) center has emerged as a promising magnetic sensor capable of measuring magnetic fields with high sensitivity and spatial resolution under ambient conditions. This combination of characteristics allows NV magnetometers to probe magnetic structures and systems that were previously inaccessible with alternative magnetic sensing technologies. This dissertation presents and discusses a number of the initial efforts to demonstrate and improve NV magnetometry. In particular, a wide-field CCD-based NV magnetic field imager capable of micron-scale spatial resolution is demonstrated; and magnetic field alignment, preferential NV orientation, and multipulse dynamical decoupling techniques are explored for enhancing magnetic sensitivity. The further application of dynamical decoupling control sequences as a spectral probe to extract information about the dynamics of the NV spin environment is also discussed; such information may be useful for determining optimal diamond sample parameters for different applications. Finally, several proposed and recently demonstrated applications that take advantage of NV magnetometers' sensitivity and spatial resolution at room temperature are presented, with particular focus on bio-magnetic field imaging.

  12. Development and optimization of the Suna trap as a tool for mosquito monitoring and control

    PubMed Central

    2014-01-01

    Background Monitoring of malaria vector populations provides information about disease transmission risk, as well as measures of the effectiveness of vector control. The Suna trap is introduced and evaluated with regard to its potential as a new, standardized, odour-baited tool for mosquito monitoring and control. Methods Dual-choice experiments with female Anopheles gambiae sensu lato in a laboratory room and a semi-field enclosure were used to compare catch rates of odour-baited Suna traps and MM-X traps. The relative performance of the Suna trap, CDC light trap and MM-X trap as monitoring tools was assessed inside a human-occupied experimental hut in a semi-field enclosure. Use of the Suna trap as a tool to prevent mosquito house entry was also evaluated in the semi-field enclosure. The optimal hanging height of Suna traps was determined by placing traps at heights ranging from 15 to 105 cm above ground outside houses in western Kenya. Results In the laboratory, the mean proportion of An. gambiae s.l. caught in the Suna trap was 3.2 times greater than in the MM-X trap (P < 0.001), but the traps performed equally in semi-field conditions (P = 0.615). As a monitoring tool, the Suna trap outperformed an unlit CDC light trap (P < 0.001), but trap performance was equal when the CDC light trap was illuminated (P = 0.127). Suspending a Suna trap outside an experimental hut reduced entry rates by 32.8% (P < 0.001). Under field conditions, suspending the trap at 30 cm above ground resulted in the greatest catch sizes (mean 25.8 An. gambiae s.l. per trap night). Conclusions The performance of the Suna trap equals that of the CDC light trap and MM-X trap when used to sample An. gambiae inside a human-occupied house under semi-field conditions. The trap is effective in sampling mosquitoes outside houses in the field, and the use of a synthetic blend of attractants negates the requirement of a human bait. Hanging a Suna trap outside a house can reduce An. gambiae house entry, and its use as a novel tool for reducing malaria transmission risk will be evaluated in peri-domestic settings in sub-Saharan Africa. PMID:24998771

  13. Chasing the peak: optimal statistics for weak shear analyses

    NASA Astrophysics Data System (ADS)

    Smit, Merijn; Kuijken, Konrad

    2018-01-01

    Context. Weak gravitational lensing analyses are fundamentally limited by the intrinsic distribution of galaxy shapes. It is well known that this distribution of galaxy ellipticity is non-Gaussian, and the traditional estimation methods, explicitly or implicitly assuming Gaussianity, are not necessarily optimal. Aims: We aim to explore alternative statistics for samples of ellipticity measurements. An optimal estimator needs to be asymptotically unbiased, efficient, and robust in retaining these properties for various possible sample distributions. We take the non-linear mapping of gravitational shear and the effect of noise into account. We then discuss how the distribution of individual galaxy shapes in the observed field of view can be modeled by fitting Fourier modes to the shear pattern directly. This allows scientific analyses using statistical information from the whole field of view, instead of locally sparse and poorly constrained estimates. Methods: We simulated samples of galaxy ellipticities, using both theoretical distributions and data for ellipticities and noise. We determined the possible bias Δe, the efficiency η, and the robustness of the least absolute deviations, the biweight, and the convex hull peeling (CHP) estimators, compared to the canonical weighted mean. Using these statistics for regression, we have shown the applicability of direct Fourier mode fitting. Results: We find an improved performance of all estimators when iteratively reducing the residuals after de-shearing the ellipticity samples by the estimated shear, which removes the asymmetry in the ellipticity distributions. We show that these estimators are then unbiased in the absence of noise, and decrease noise bias by more than 30%. Our results show that the CHP estimator distribution is skewed, but still centered around the underlying shear, and its bias is least affected by noise. We find the least absolute deviations estimator to be the most efficient estimator in almost all cases, except in the Gaussian case, where it is still competitive (0.83 < η < 5.1) and therefore robust. These results hold when fitting Fourier modes, where amplitudes of variation in ellipticity are determined to the order of 10⁻³. Conclusions: The peak of the ellipticity distribution is a direct tracer of the underlying shear and unaffected by noise, and we have shown that estimators that are sensitive to a central cusp perform more efficiently, potentially reducing uncertainties and significantly decreasing noise bias. These results become increasingly important as survey sizes increase and systematic issues in shape measurements decrease.
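
    A minimal illustration (not the authors' pipeline) of why a cusp-sensitive robust estimator can beat the mean: the median, which is the least-absolute-deviations location estimate, is insensitive to a heavy outlier tail in the shape distribution. The sample values below are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    """Least-absolute-deviations location estimate."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# ellipticity-like sample centered on a true "shear" of 0.05,
# contaminated with a couple of large intrinsic-shape outliers
sample = [0.04, 0.05, 0.06, 0.05, 0.05, 0.04, 0.06, 0.9, 0.8]
true_shear = 0.05
assert abs(median(sample) - true_shear) < abs(mean(sample) - true_shear)
```

    The biweight and convex hull peeling estimators studied in the paper refine the same idea, down-weighting or peeling away the tails before locating the central peak.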

  14. Electromembrane extraction of gonadotropin-releasing hormone agonists from plasma and wastewater samples.

    PubMed

    Nojavan, Saeed; Bidarmanesh, Tina; Mohammadi, Ali; Yaripour, Saeid

    2016-03-01

    In the present study, for the first time, electromembrane extraction followed by high performance liquid chromatography with ultraviolet detection was optimized and validated for quantification of four gonadotropin-releasing hormone agonist anticancer peptides (alarelin, leuprolide, buserelin and triptorelin) in biological and aqueous samples. The parameters influencing electromigration were investigated and optimized. The membrane consisted of 95% 1-octanol and 5% di-(2-ethylhexyl) phosphate immobilized in the pores of a hollow fiber. A 20 V electrical field was applied to make the analytes migrate from the sample solution at pH 7.0, through the supported liquid membrane, into an acidic acceptor solution at pH 1.0 located inside the lumen of the hollow fiber. Extraction recoveries between 49 and 71% within a 15 min extraction time were obtained in different biological matrices, resulting in preconcentration factors in the range of 82-118 and satisfactory repeatability (7.1 < RSD% < 19.8). The method offers good linearity (2.0-1000 ng/mL) with regression coefficients higher than 0.998. The procedure allows very low detection and quantitation limits of 0.2 and 0.6 ng/mL, respectively. Finally, it was applied to the determination and quantification of the peptides in human plasma and wastewater samples and yielded satisfactory results. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
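
    A hedged back-of-envelope for how recovery relates to preconcentration in such an extraction: the preconcentration factor is the recovery scaled by the donor-to-acceptor volume ratio. The volumes below are assumptions chosen for illustration; the abstract does not state them:

```python
def preconcentration_factor(recovery, v_donor_ul, v_acceptor_ul):
    """PF = recovery * (donor volume / acceptor volume)."""
    return recovery * v_donor_ul / v_acceptor_ul

# assumed volumes: 4 mL donor sample, 24 uL acceptor inside the fiber lumen
pf_low = preconcentration_factor(0.49, 4000, 24)   # lowest reported recovery
pf_high = preconcentration_factor(0.71, 4000, 24)  # highest reported recovery
assert 80 < pf_low < 85     # consistent with the reported PF range 82-118
assert 115 < pf_high < 120
```

    With these assumed volumes, the reported recoveries of 49-71% reproduce preconcentration factors close to the reported 82-118 range, which is the internal consistency the formula expresses.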

  15. Optimization and validation of sample preparation for metagenomic sequencing of viruses in clinical samples.

    PubMed

    Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael

    2017-08-08

    Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing have allowed virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies, providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.

  16. Refinement of NMR structures using implicit solvent and advanced sampling techniques.

    PubMed

    Chen, Jianhan; Im, Wonpil; Brooks, Charles L

    2004-12-15

    NMR biomolecular structure calculations exploit simulated annealing methods for conformational sampling and require a relatively high level of redundancy in the experimental restraints to determine quality three-dimensional structures. Recent advances in generalized Born (GB) implicit solvent models should make it possible to combine information from both experimental measurements and accurate empirical force fields to improve the quality of NMR-derived structures. In this paper, we study the influence of implicit solvent on the refinement of protein NMR structures and identify an optimal protocol of utilizing these improved force fields. To do so, we carry out structure refinement experiments for model proteins with published NMR structures using full NMR restraints and subsets of them. We also investigate the application of advanced sampling techniques to NMR structure refinement. Similar to the observations of Xia et al. (J. Biomol. NMR 2002, 22, 317-331), we find that the impact of implicit solvent is rather small when there is a sufficient number of experimental restraints (such as in the final stage of NMR structure determination), whether implicit solvent is used throughout the calculation or only in the final refinement step. The application of advanced sampling techniques also seems to have minimal impact in this case. However, when the experimental data are limited, we demonstrate that refinement with implicit solvent can substantially improve the quality of the structures. In particular, when combined with an advanced sampling technique, the replica exchange (REX) method, near-native structures can be rapidly moved toward the native basin. The REX method provides both enhanced sampling and automatic selection of the most native-like (lowest energy) structures. 
An optimal protocol based on our studies first generates an ensemble of initial structures that maximally satisfy the available experimental data with conventional NMR software using a simplified force field and then refines these structures with implicit solvent using the REX method. We systematically examine the reliability and efficacy of this protocol using four proteins of various sizes ranging from the 56-residue B1 domain of Streptococcal protein G to the 370-residue Maltose-binding protein. Significant improvement in the structures was observed in all cases when refinement was based on low-redundancy restraint data. The proposed protocol is anticipated to be particularly useful in early stages of NMR structure determination where a reliable estimate of the native fold from limited data can significantly expedite the overall process. This refinement procedure is also expected to be useful when redundant experimental data are not readily available, such as for large multidomain biomolecules and in solid-state NMR structure determination.
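
    The REX method referred to above exchanges configurations between replicas run at neighboring temperatures, accepting a swap with the standard Metropolis criterion. A minimal sketch of that criterion (the energies and inverse temperatures below are illustrative, not values from the paper):

```python
import math

def swap_probability(beta_i, beta_j, e_i, e_j):
    """Metropolis acceptance probability for exchanging replicas i and j.

    Standard REX criterion: accept with min(1, exp[(beta_i - beta_j)(e_i - e_j)]).
    """
    delta = (beta_i - beta_j) * (e_i - e_j)
    return min(1.0, math.exp(delta))

# A cold replica (high beta) stuck at a higher energy always swaps with a
# hot replica that happens to hold a lower-energy configuration:
assert swap_probability(1.0, 0.5, -50.0, -80.0) == 1.0
# The reverse arrangement is accepted only with small probability:
assert swap_probability(1.0, 0.5, -80.0, -50.0) < 1e-3
```

    Repeated swaps like this let near-native, low-energy configurations migrate to the coldest replica, which is what provides the "automatic selection of the most native-like structures" described above.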

  17. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
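
    A toy sketch of comparing sampling-time designs with the Fisher Information Matrix, in the spirit of D-optimality (maximize det(FIM)). For the simplest model y = a + b*t with unit noise, the FIM is the sum over sample times of [[1, t], [t, t²]]; the designs below are invented and much cruder than the distributional framework of the paper:

```python
def fim_det(times):
    """det(FIM) for the linear model y = a + b*t with unit noise."""
    n = len(times)
    s1 = sum(times)
    s2 = sum(t * t for t in times)
    # FIM = [[n, s1], [s1, s2]]
    return n * s2 - s1 * s1

clustered = [4.9, 5.0, 5.1, 5.2]   # sample times bunched together
spread = [0.0, 3.0, 7.0, 10.0]     # sample times spread over [0, 10]

# D-optimality prefers the design with the larger det(FIM): spreading the
# sampling times makes the slope and intercept far better constrained.
assert fim_det(spread) > fim_det(clustered)
```

    For nonlinear models such as the Verhulst-Pearl logistic, the same comparison is made with sensitivity derivatives in place of the columns [1, t], and the FIM then depends on the parameter values themselves.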

  18. High-Throughput Analysis and Automation for Glycomics Studies.

    PubMed

    Shubhakar, Archana; Reiding, Karli R; Gardner, Richard A; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing use of glycomics in Quality by Design studies to help optimize glycan profiles of drugs with a view to improving their clinical performance. Glycomics is also used in comparability studies to ensure consistency of glycosylation both throughout product development and between biosimilars and innovator drugs. In clinical studies there is also expanding interest in the use of glycomics (for example, in Genome Wide Association Studies) to follow changes in glycosylation patterns of biological tissues and fluids with the progress of certain diseases. These include cancers, neurodegenerative disorders and inflammatory conditions. Despite rising activity in this field, there are significant challenges in performing large-scale glycomics studies. The requirement is accurate identification and quantitation of individual glycan structures. However, glycoconjugate samples are often very complex and heterogeneous and contain many diverse branched glycan structures. In this article we cover HTP sample preparation and derivatization methods, sample purification, robotization, optimized glycan profiling by UHPLC, MS and multiplexed CE, as well as hyphenated techniques and automated data analysis tools. Throughout, we summarize the advantages and challenges of each of these technologies. The issues considered include reliability of the methods for glycan identification and quantitation, sample throughput, labor intensity, and affordability for large sample numbers.

  19. Non-destructive geometric and refractive index characterization of single and multi-element lenses using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Tao, Yuankai K.

    2018-02-01

    Design of optical imaging systems requires careful balancing of lens aberrations to optimize the point-spread function (PSF) and minimize field distortions. Aberrations and distortions are a result of both lens geometry and glass material. While most lens manufacturers provide optical models to facilitate system-level simulation, these models are often not reflective of true system performance because of manufacturing tolerances. Optical design can be further confounded when achromatic or proprietary lenses are employed. Achromats are ubiquitous in systems that utilize broadband sources due to their superior performance in balancing chromatic aberrations. Similarly, proprietary lenses may be custom-designed for optimal performance, but lens models are generally not available. Optical coherence tomography (OCT) provides non-contact, depth-resolved imaging with high axial resolution and sensitivity. OCT has been previously used to measure the refractive index of unknown materials. In a homogeneous sample, the group refractive index is obtained as the ratio between the measured optical and geometric thicknesses of the sample. In heterogeneous samples, a method called focus-tracking (FT) quantifies the effect of the focal shift introduced by the sample. This enables simultaneous measurement of the thickness and refractive index of intermediate sample layers. Here, we extend the mathematical framework of FT to spherical surfaces, and describe a method based on OCT and FT for full characterization of lens geometry and refractive index. Finally, we validate our characterization method on commercially available singlet and doublet lenses.
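    The homogeneous-sample relation stated above is simple enough to sketch: the group refractive index is the ratio of the optically measured thickness to the true geometric thickness. A minimal Python sketch (the function name and the example values are hypothetical, not taken from the paper):

    ```python
    def group_index(optical_thickness_um: float, geometric_thickness_um: float) -> float:
        """Group refractive index of a homogeneous layer from OCT:
        n_g = measured optical thickness / geometric thickness."""
        if geometric_thickness_um <= 0:
            raise ValueError("geometric thickness must be positive")
        return optical_thickness_um / geometric_thickness_um

    # Hypothetical example: a slab 1000 um thick that appears 1517 um deep
    # in the OCT B-scan has a group index of about 1.517.
    print(round(group_index(1517.0, 1000.0), 3))
    ```

    For layered (heterogeneous) samples, focus-tracking adds an independent measurement of the focal shift, which is what allows the thickness and index of each layer to be disentangled.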

  20. Composite Magnetic Nanoparticles (CuFe₂O₄) as a New Microsorbent for Extraction of Rhodamine B from Water Samples.

    PubMed

    Roostaie, Ali; Allahnoori, Farzad; Ehteshami, Shokooh

    2017-09-01

    In this work, novel composite magnetic nanoparticles (CuFe2O4) were synthesized by sol-gel combustion in the laboratory. A simple production method was then optimized for the preparation of the copper nanoferrites (CuFe2O4), which are stable in water, magnetically active, and have a high specific surface area, and which were used as sorbent material for extraction of organic dye from aqueous solution. CuFe2O4 nanopowders were characterized by field-emission scanning electron microscopy (SEM), FTIR spectroscopy, and energy-dispersive X-ray spectroscopy. The size range of the nanoparticles obtained under these conditions was estimated from SEM images to be 35-45 nm. The parameters influencing extraction with the CuFe2O4 nanoparticles, such as desorption solvent, amount of sorbent, desorption time, sample pH, ionic strength, and extraction time, were investigated and optimized. Under the optimum conditions, a linear calibration curve in the range of 0.75-5.00 μg/L with R2 = 0.9996 was obtained. The LOQ (10Sb) and LOD (3Sb) of the method were 0.75 and 0.25 μg/L (n = 3), respectively. The RSD for a water sample spiked with 1 μg/L rhodamine B was 3% (n = 5). The method was applied to the determination of rhodamine B in tap water, dishwashing foam, dishwashing liquid, and shampoo samples. The relative recovery percentages for these samples were in the range of 95-99%.
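    The LOD (3Sb) and LOQ (10Sb) notation above refers to the usual blank-noise convention: the limits are three and ten times the standard deviation of blank replicates, divided by the slope of the calibration curve. A minimal sketch with illustrative numbers (not the paper's data):

    ```python
    def lod_loq(blank_sd: float, slope: float):
        """Detection and quantitation limits from blank noise and calibration slope:
        LOD = 3*Sb/m, LOQ = 10*Sb/m (Sb: standard deviation of blank replicates,
        m: slope of the linear calibration curve)."""
        if slope == 0:
            raise ValueError("calibration slope must be nonzero")
        return 3 * blank_sd / slope, 10 * blank_sd / slope

    # Illustrative values only: Sb = 1.5 signal units, slope = 10 units per ug/L
    lod, loq = lod_loq(blank_sd=1.5, slope=10.0)
    print(lod, loq)
    ```

    Note that the LOQ obtained this way conventionally sets the lower end of the usable linear range, as in the 0.75 μg/L figure reported above.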

  1. The effect of resolidification on preform optimized infiltration growth processed (Y, Nd, Sm, Gd)BCO, multi-grain bulk superconductor

    NASA Astrophysics Data System (ADS)

    Pavan Kumar Naik, S.; Seshu Bai, V.

    2017-01-01

    Controlling the microstructure of superconductors by incorporating flux pinning centers and reducing macro-defects to improve high-field performance is a topic of current research. Here, the preform optimized infiltration growth (POIG) processed YBa2Cu3O7-δ (YBCO) system, with the Y site substituted by three mixed RE (Nd, Sm, Gd) elements, is investigated. 20 wt.% of (Nd, Sm, Gd)2BaCuO5 was mixed with Y2BaCuO5 and POIG processed in a reduced oxygen atmosphere to obtain the YNSG superconductor. No seed was employed for crystal growth; hence the processed samples are multi-grained. Microstructural and compositional investigations of YNSG revealed the presence of different phases in the matrix as well as in precipitates, which range from submicron to 4 μm in size. A large fraction of macro-defects (∼6% porosity) was observed in the YNSG sample. To reduce the unwanted macro-defects and refine the non-superconducting precipitates, the processed YNSG sample was pressed and resolidified (by infiltrating the liquid phases once again) in an argon atmosphere, and the structural, microstructural, elemental and superconducting properties were compared with the YNSG and undoped samples. Despite the spatial scatter in superconducting critical temperatures caused by the distribution of different REBCO unit cells in YBCO, the superconducting transition curve is sharp in YNSG, whereas the resolidified sample showed a broad transition due to solidified liquid phases.

  2. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  3. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  4. Optimally resolving Lambertian surface orientation

    NASA Astrophysics Data System (ADS)

    Bertsatos, Ioannis; Makris, Nicholas C.

    2003-10-01

    Sonar images of remote surfaces are typically corrupted by signal-dependent noise known as speckle. Relative motion between source, surface, and receiver causes the received field to fluctuate over time with circular complex Gaussian random (CCGR) statistics. In many cases of practical importance, Lambert's law is appropriate to model radiant intensity from the surface. In a previous paper, maximum likelihood estimators (MLE) for Lambertian surface orientation have been derived based on CCGR measurements [N. C. Makris, SACLANT Conference Proceedings Series CP-45, 1997, pp. 339-346]. A Lambertian surface needs to be observed from more than one illumination direction for its orientation to be properly constrained. It is found, however, that MLE performance varies significantly with illumination direction due to the inherently nonlinear nature of this problem. It is shown that a large number of samples is often required to optimally resolve surface orientation using the optimality criteria of the MLE derived in Naftali and Makris [J. Acoust. Soc. Am. 110, 1917-1930 (2001)].

  5. Evaluating the soil physical quality under long-term field experiments in Southern Italy

    NASA Astrophysics Data System (ADS)

    Castellini, Mirko; Stellacci, Anna Maria; Iovino, Massimo; Rinaldi, Michele; Ventrella, Domenico

    2017-04-01

    Long-term field experiments performed on experimental farms are important research tools to assess soil physical quality (SPQ), given that relatively stable conditions can be expected in these soils. However, different SPQ indicators may sometimes provide redundant or conflicting results, making SPQ evaluation difficult (Castellini et al., 2014). As a consequence, it is necessary to apply appropriate statistical procedures to obtain a minimum set of key indicators. The study was carried out at the Experimental Farm of CREA-SCA (Foggia) in two long-term field experiments on durum wheat. The first long-term experiment aims at evaluating the effects of two residue management systems (burning, B, or soil incorporation of crop residues, I), while the second compares the effect of tillage (conventional tillage, CT) and sod-seeding (direct drilling, DD). In order to take into account both optimal and non-optimal soil conditions, five SPQ indicators were monitored at 5-6 sampling dates during the crop season (i.e., between November and June): soil bulk density (BD), macroporosity (PMAC), air capacity (AC), plant available water capacity (PAWC) and relative field capacity (RFC). Two additional data sets, collected on the DD plot in different cropping seasons and in Sicilian soils differing in texture, depth and land use (N=140), were also used to check the correlation among indicators. The impact of soil management was assessed by comparing SPQ evaluated under the different management systems with optimal reference values reported in the literature. Two multivariate analysis techniques (principal component analysis, PCA, and stepwise discriminant analysis, SDA) were applied to select the most suitable indicators to facilitate the judgment on SPQ. Regardless of the considered management system, sampling date or auxiliary data set, correlation matrices always showed significant negative relationships between RFC and AC. 
Decreasing RFC with increasing AC is expected, as both indicators depend on soil water content at saturation and field capacity. Our results reinforce the suggestion that one of the two indicators can be neglected (Cullotta et al., 2016), even if further investigations are necessary to choose the most accurate and/or widely applicable indicator, since different optimal ranges have been suggested in the literature. A positive significant correlation was also generally found between PMAC and AC. PCA identified RFC and AC as the main indicators that explain most of the data variation. When the data collected at the different sampling dates were pooled together, in both experiments the first principal component explained the highest proportion of total variance (67.9% and 81.5%, respectively, for residue management and tillage) and RFC showed the highest loadings, followed by AC and PMAC. SDA provided consistent results, and RFC was selected as the main variable to assess the effects of tillage. Conversely, residue management had no effect on SPQ, as indicated by negligible differences between indicators. Finally, our results suggest that RFC always reached optimal and steady values between April and June. *The work was supported by the projects "STRATEGA, Sperimentazione e TRAsferimento di TEcniche innovative di aGricoltura conservativA", financed by Regione Puglia - Servizio Agricoltura, and "DESERT, Low-cost water desalination and sensor technology compact module", financed by ERANET-WATERWORKS 2014. References Castellini, M., M. Niedda, M. Pirastru, and D. Ventrella. 2014. Temporal changes of soil physical quality under two residue management systems. Soil Use Management. 30:423-434. doi:10.1111/sum.12137 Cullotta, S., V. Bagarello, G. Baiamonte, G. Gugliuzza, M. Iovino, D.S. La Mela Veca, F. Maetzke, V. Palmeri, and S. Sferlazza. 2016. Comparing Different Methods to Determine Soil Physical Quality in a Mediterranean Forest and Pasture Land. Soil Sci. Soc. Am. J. 
80:1038-1056. doi:10.2136/sssaj2015.12.0447
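    The PCA-based indicator selection described above can be sketched numerically: center the indicator matrix, compute principal components, and inspect which indicator carries the largest loading on the first component. The data below are synthetic, with the last column constructed to be negatively tied to the third (mirroring the reported RFC-AC anticorrelation); column order and values are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic indicator matrix (rows: samples; columns standing in for
    # BD, PMAC, AC, PAWC, RFC -- order and values are illustrative only)
    X = rng.standard_normal((40, 5))
    X[:, 4] = -X[:, 2] + 0.1 * rng.standard_normal(40)  # "RFC" negatively tied to "AC"

    Xc = X - X.mean(axis=0)            # center each indicator
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)    # fraction of variance per component
    loadings = Vt[0]                   # loadings of the first principal component

    # The correlated AC/RFC pair dominates the first component,
    # so one of those two columns carries the largest absolute loading.
    print(int(np.argmax(np.abs(loadings))))
    ```

    The same logic underlies the study's conclusion that RFC (highest loading on a first component explaining most of the variance) is the preferred single indicator.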

  6. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. 
This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
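    The key invariant behind Random Mixing, that mixing weights lying on the unit circle (or hypersphere) preserve the covariance structure, can be checked numerically. A minimal sketch in which two independent white-noise fields stand in for conditional random fields (a simplification; the real method mixes conditional fields honoring observed conductivities):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    f1 = rng.standard_normal(10_000)  # stand-ins for two random fields
    f2 = rng.standard_normal(10_000)  # with the same (here: white) covariance

    # Weights on the unit circle preserve the covariance structure:
    # Var[cos(t)*f1 + sin(t)*f2] = Var[f1] for any angle t, since cos^2 + sin^2 = 1
    # and the fields are independent with equal variance.
    for t in np.linspace(0.0, np.pi, 5):
        mix = np.cos(t) * f1 + np.sin(t) * f2
        assert abs(mix.var() - 1.0) < 0.1  # variance stays ~1 for every mixture
    ```

    The modification described above exploits exactly this parameterization: evaluating the forward model at a few equally spaced angles t and interpolating the solutions around the circle gives cheap surrogates for all intermediate mixtures.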

  7. Computational design optimization for microfluidic magnetophoresis

    PubMed Central

    Plouffe, Brian D.; Lewis, Laura H.; Murthy, Shashi K.

    2011-01-01

    Current macro- and microfluidic approaches for the isolation of mammalian cells are limited in both efficiency and purity. In order to design a robust platform for the enumeration of a target cell population, high collection efficiencies are required. Additionally, the ability to isolate pure populations with minimal biological perturbation and efficient off-chip recovery will enable subcellular analyses of these cells for applications in personalized medicine. Here, a rational design approach for a simple and efficient device that isolates target cell populations via magnetic tagging is presented. In this work, two magnetophoretic microfluidic device designs are described, with optimized dimensions and operating conditions determined from a force balance equation that considers two dominant and opposing driving forces exerted on a magnetic-particle-tagged cell, namely, magnetic and viscous drag. Quantitative design criteria for an electromagnetic field displacement-based approach are presented, wherein target cells labeled with commercial magnetic microparticles flowing in a central sample stream are shifted laterally into a collection stream. Furthermore, the final device design is constrained to fit on a standard rectangular glass coverslip (60 (L)×24 (W)×0.15 (H) mm3) to accommodate small sample volumes and point-of-care design considerations. The anticipated performance of the device is examined via a parametric analysis of several key variables within the model. It is observed that minimal currents (<500 mA) are required to generate magnetic fields sufficient to separate cells from sample streams flowing at rates as high as 7 ml/h, comparable to the performance of state-of-the-art magnet-activated cell sorting systems currently used in clinical settings. 
Experimental validation of the presented model illustrates that a device designed according to the derived rational optimization can effectively isolate (∼100%) a magnetic-particle-tagged cell population from a homogeneous suspension, even at low abundance. Overall, this design analysis provides a rational basis to select the operating conditions, including chamber and wire geometry, flow rates, and applied currents, for a magnetic-microfluidic cell separation device. PMID:21526007
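    The force balance at the heart of the design, magnetic force against Stokes (viscous) drag on a tagged cell, can be sketched as follows. The parameter values are hypothetical order-of-magnitude choices, not taken from the paper:

    ```python
    import math

    def lateral_velocity(f_mag_newton: float, radius_m: float, viscosity_pa_s: float) -> float:
        """Terminal lateral velocity of a magnetically tagged cell when the
        magnetic force balances Stokes drag: F_mag = 6*pi*eta*r*v."""
        return f_mag_newton / (6 * math.pi * viscosity_pa_s * radius_m)

    # Hypothetical inputs: ~1 pN magnetic force on a 5-um-radius cell in water
    # (eta ~ 1 mPa*s) gives a lateral drift of roughly 10 um/s.
    v = lateral_velocity(1e-12, 5e-6, 1e-3)
    print(f"{v * 1e6:.1f} um/s")
    ```

    Setting this drift velocity against the residence time implied by the channel length and flow rate is what fixes the device dimensions and coil currents in a design of this kind.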

  8. Nanodosimetry-Based Plan Optimization for Particle Therapy

    PubMed Central

    Schulte, Reinhard W.

    2015-01-01

    Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202

  9. Field-design optimization with triangular heliostat pods

    NASA Astrophysics Data System (ADS)

    Domínguez-Bravo, Carmen-Ana; Bode, Sebastian-James; Heiming, Gregor; Richter, Pascal; Carrizosa, Emilio; Fernández-Cara, Enrique; Frank, Martin; Gauché, Paul

    2016-05-01

    In this paper the optimization of a heliostat field with triangular heliostat pods is addressed. The use of structures which allow the combination of several heliostats into a common pod system aims to reduce the high costs associated with the heliostat field and therefore reduces the Levelized Cost of Electricity. A pattern-based algorithm and two pattern-free algorithms are adapted to handle the field layout problem with triangular heliostat pods. Under the Helio100 project in South Africa, a new small-scale Solar Power Tower plant has recently been constructed. The Helio100 plant has 20 triangular pods (each with 6 heliostats) whose positions follow a linear pattern. The field layouts obtained after optimization are compared against the reference Helio100 field.

  10. AlGaN Nanostructures with Extremely High Room-Temperature Internal Quantum Efficiency of Emission Below 300 nm

    NASA Astrophysics Data System (ADS)

    Toropov, A. A.; Shevchenko, E. A.; Shubina, T. V.; Jmerik, V. N.; Nechaev, D. V.; Evropeytsev, E. A.; Kaibyshev, V. Kh.; Pozina, G.; Rouvimov, S.; Ivanov, S. V.

    2017-07-01

    We present theoretical optimization of the design of a quantum well (QW) heterostructure based on AlGaN alloys, aimed at achieving the maximum possible internal quantum efficiency of emission in the mid-ultraviolet spectral range below 300 nm at room temperature. A sample with optimized parameters was fabricated by plasma-assisted molecular beam epitaxy using the submonolayer digital alloying technique for QW formation. High-angle annular dark-field scanning transmission electron microscopy confirmed strong compositional disordering of the thus-fabricated QW, which presumably facilitates lateral localization of charge carriers in the QW plane. Stress evolution in the heterostructure was monitored in real time during growth using a multibeam optical stress sensor intended for measurements of substrate curvature. Time-resolved photoluminescence spectroscopy confirmed that radiative recombination in the fabricated sample dominated over the whole temperature range up to 300 K. This leads to record-weak temperature-induced quenching of the QW emission intensity, which at 300 K does not exceed 20% of the low-temperature value.

  11. Simultaneous detection of antibacterial sulfonamides in a microfluidic device with amperometry.

    PubMed

    Won, So-Young; Chandra, Pranjal; Hee, Tak Seong; Shim, Yoon-Bo

    2013-01-15

    A highly sensitive and robust method for simultaneous detection of five sulfonamide drugs is developed by integrating the preconcentration and separation steps in a microfluidic device. Amperometry is performed for the selective detection of sulfonamides using an aluminum oxide-gold nanoparticle (Al(2)O(3)-AuNPs) modified carbon paste (CP) electrode at the end of the separation channel. The preconcentration capacity of the channel is enhanced by using the field-amplified sample stacking and field-amplified sample injection techniques. The experimental parameters affecting the analytical performance, such as pH, % of Al(2)O(3), volume of AuNPs, buffer concentration, and water plug length, are optimized. A reproducible response is observed during multiple injections of samples, with RSDs < 4%. The calibration plots are linear, with correlation coefficients between 0.991 and 0.997, over the range between 0.01 and 2025 pM. The detection limits of the five drugs are determined to be between 0.91 (±0.03) and 2.21 (±0.09) fM. The interference effects of common biological compounds are also investigated, and the applicability of the method to the direct analysis of sulfonamides in real meat samples is successfully demonstrated. Long-term stability of the modified electrode was also investigated. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. A temperature-jump NMR probe setup using rf heating optimized for the analysis of temperature-induced biomacromolecular kinetic processes

    NASA Astrophysics Data System (ADS)

    Rinnenthal, Jörg; Wagner, Dominic; Marquardsen, Thorsten; Krahn, Alexander; Engelke, Frank; Schwalbe, Harald

    2015-02-01

    A novel temperature jump (T-jump) probe operational at B0 fields of 600 MHz (14.1 Tesla) with an integrated cage radio-frequency (rf) coil for rapid (<1 s) heating in high-resolution (HR) liquid-state NMR-spectroscopy is presented and its performance investigated. The probe consists of an inner 2.5 mm "heating coil" designed for generating rf-electric fields of 190-220 MHz across a lossy dielectric sample and an outer two coil assembly for 1H-, 2H- and 15N-nuclei. High B0 field homogeneities (0.7 Hz at 600 MHz) are combined with high heating rates (20-25 K/s) and only small temperature gradients (<±1.5 K, 3 s after 20 K T-jump). The heating coil is under control of a high power rf-amplifier within the NMR console and can therefore easily be accessed by the pulse programmer. Furthermore, implementation of a real-time setup including synchronization of the NMR spectrometer's air flow heater with the rf-heater used to maintain the temperature of the sample is described. Finally, the applicability of the real-time T-jump setup for the investigation of biomolecular kinetic processes in the second-to-minute timescale is demonstrated for samples of a model 14mer DNA hairpin and a 15N-selectively labeled 40nt hsp17-RNA thermometer.

  13. Analytical models integrated with satellite images for optimized pest management

    USDA-ARS?s Scientific Manuscript database

    The global field protection (GFP) was developed to protect and optimize pest management resources integrating satellite images for precise field demarcation with physical models of controlled release devices of pesticides to protect large fields. The GFP was implemented using a graphical user interf...

  14. Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control

    NASA Astrophysics Data System (ADS)

    Hu, Juju; Ke, Qiang; Ji, Yinghua

    2018-02-01

    The optimization of control time for quantum systems has been an important topic in control science for decades: shorter control times improve efficiency and suppress decoherence caused by the environment. Based on an analysis of the advantages and disadvantages of existing Lyapunov control, and using a bang-bang optimal control technique, we investigate fast state control in a closed two-qubit quantum system and give three optimized control field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or standard bang-bang control method, the optimized control field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.

  15. Multimodel methods for optimal control of aeroacoustics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guoquan; Collis, Samuel Scott

    2005-01-01

    A new multidomain/multiphysics computational framework for optimal control of aeroacoustic noise has been developed based on a near-field compressible Navier-Stokes solver coupled with a far-field linearized Euler solver, both based on a discontinuous Galerkin formulation. In this approach, the coupling of near- and far-field domains is achieved by weakly enforcing continuity of normal fluxes across a coupling surface that encloses all nonlinearities and noise sources. For optimal control, gradient information is obtained by the solution of an appropriate adjoint problem that involves the propagation of adjoint information from the far-field to the near-field. This computational framework has been successfully applied to study optimal boundary control of blade-vortex interaction, which is a significant noise source for helicopters on approach to landing. In the model problem presented here, the noise propagated toward the ground is reduced by 12 dB.

  16. Simultaneous tuning of electric field intensity and structural properties of ZnO: Graphene nanostructures for FOSPR based nicotine sensor.

    PubMed

    Tabassum, Rana; Gupta, Banshi D

    2017-05-15

    We report the theoretical and experimental realization of an SPR-based fiber optic nicotine sensor having coatings of silver and graphene-doped ZnO nanostructure on the unclad core of the optical fiber. The volume fraction (f) of graphene in ZnO was optimized using simulation of the electric field intensity. Four types of graphene-doped ZnO nanostructures, viz. nanocomposites, nanoflowers, nanotubes and nanofibers, were prepared using the optimized value of f. The morphology, photoluminescence (PL) spectra and UV-vis spectra of these nanostructures were studied. The peak PL intensity was found to be highest for ZnO: graphene nanofibers. The optimized value of f in the ZnO: graphene nanofiber was reconfirmed using UV-vis spectroscopy. Experiments were performed on the fiber optic probe fabricated with the Ag/ZnO: graphene layer and optimized parameters for in-situ detection of nicotine. The interaction of nicotine with the ZnO: graphene nanostructures alters their dielectric function, which is manifested as a shift in resonance wavelength. From the sensing signal, the performance parameters were measured, including sensitivity, limit of detection (LOD), limit of quantification (LOQ), stability, repeatability and selectivity. A real sample prepared from cigarette tobacco leaves was analyzed with the fabricated sensor, demonstrating its suitability for practical applications. The achieved values of LOD and LOQ are found to be unrivalled in comparison to those previously reported. The sensor possesses additional advantages such as immunity to electromagnetic interference, low cost, capability of online monitoring, and remote sensing. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Progress toward the determination of correct classification rates in fire debris analysis.

    PubMed

    Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E

    2013-07-01

    Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. A multistep classification procedure was tested by cross-validation based on model data sets comprised of the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
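    True-positive and false-positive rates of the kind reported above are simple functions of confusion-matrix counts. In this sketch the counts are hypothetical, chosen only so that the rates land near the reported fire-debris values; they are not the study's actual data:

    ```python
    def classification_rates(tp: int, fn: int, fp: int, tn: int):
        """True-positive rate (sensitivity) and false-positive rate
        from confusion-matrix counts."""
        return tp / (tp + fn), fp / (fp + tn)

    # Hypothetical counts approximating 70.9% TPR and 8.9% FPR
    tpr, fpr = classification_rates(tp=117, fn=48, fp=16, tn=164)
    print(round(tpr, 3), round(fpr, 3))
    ```

    The gap between cross-validation and fire-debris rates above is the usual optimism of resampling estimates relative to truly held-out casework samples.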

  18. Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection

    PubMed Central

    Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi

    2011-01-01

    The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
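    The integer-allocation problem described above can be illustrated with a toy exhaustive search: distribute a fixed sample budget between two sensors so as to minimize a misclassification proxy. The error model here is purely illustrative, not the paper's Gaussian classification model:

    ```python
    def toy_error(n1: int, n2: int) -> float:
        # Hypothetical misclassification proxy: error falls with samples per sensor,
        # and sensor 1 is assumed twice as discriminative as sensor 2.
        return 1.0 / (1 + 2 * n1) + 1.0 / (1 + n2)

    N = 10  # total sample budget per decision window (illustrative)
    best = min(((n, N - n) for n in range(N + 1)), key=lambda a: toy_error(*a))
    print(best)
    ```

    Even this toy shows the paper's qualitative finding: the more discriminative sensor receives a larger, but not total, share of the budget. The exhaustive search grows combinatorially with sensors and budget, which is what motivates the continuous-valued relaxation run on the mobile fusion center.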

  19. Three-dimensional dynamics of temperature fields in phantoms and biotissue under IR laser photothermal therapy using gold nanoparticles and ICG dye

    NASA Astrophysics Data System (ADS)

    Akchurin, Georgy G.; Garif, Akchurin G.; Maksimova, Irina L.; Skaptsov, Alexander A.; Terentyuk, Georgy S.; Khlebtsov, Boris N.; Khlebtsov, Nikolai G.; Tuchin, Valery V.

    2010-02-01

    We describe applications of silica (core)/gold (shell) nanoparticles and ICG dye to photothermal treatment of phantoms, biotissue and spontaneous tumors of cats and dogs. The laser irradiation parameters were optimized by preliminary experiments with laboratory rats. The three-dimensional dynamics of temperature fields in tissue and solution samples were measured with a thermal imaging system. It is shown that the temperature in the volume region of nanoparticle localization can substantially exceed the surface temperature recorded by the thermal imaging system. We have demonstrated effective optical destruction of cancer cells by local injection of plasmon-resonant gold nanoshells and ICG dye followed by continuous wave (CW) diode laser irradiation at a wavelength of 808 nm.

  20. New design of a cryostat-mounted scanning near-field optical microscope for single molecule spectroscopy

    NASA Astrophysics Data System (ADS)

    Durand, Yannig; Woehl, Jörg C.; Viellerobe, Bertrand; Göhde, Wolfgang; Orrit, Michel

    1999-02-01

    Due to the weakness of the fluorescence signal from a single fluorophore, a scanning near-field optical microscope for single molecule spectroscopy requires a very efficient setup for the collection and detection of emitted photons. We have developed a home-built microscope for operation in a liquid-helium cryostat which uses a solid parabolic mirror in order to optimize the fluorescence collection efficiency. This microscope works with Al-coated, tapered optical fibers in illumination mode. The tip-sample separation is probed by optical shear-force detection. First results demonstrate the capability of the microscope to image single molecules and achieve a topographical resolution of a few nanometers vertically and better than 50 nm laterally.

  1. Comparing microarrays and next-generation sequencing technologies for microbial ecology research.

    PubMed

    Roh, Seong Woon; Abell, Guy C J; Kim, Kyoung-Ho; Nam, Young-Do; Bae, Jin-Woo

    2010-06-01

    Recent advances in molecular biology have resulted in the application of DNA microarrays and next-generation sequencing (NGS) technologies to the field of microbial ecology. This review aims to examine the strengths and weaknesses of each of the methodologies, including depth and ease of analysis, throughput and cost-effectiveness. It also intends to highlight the optimal application of each of the individual technologies toward the study of a particular environment and identify potential synergies between the two main technologies, whereby both sample number and coverage can be maximized. We suggest that the efficient use of microarray and NGS technologies will allow researchers to advance the field of microbial ecology, and importantly, improve our understanding of the role of microorganisms in their various environments.

  2. Modeling global vector fields of chaotic systems from noisy time series with the aid of structure-selection techniques.

    PubMed

    Xu, Daolin; Lu, Fangfang

    2006-12-01

    We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidate functional basis terms and determines the optimal model through orthogonal characteristics on data. Combined with the Adams integration algorithm, the technique makes reconstruction feasible for data sampled at large time intervals. Numerical experiments on the Lorenz and Rössler systems show that the proposed strategy is effective in global vector field reconstruction from noisy time series.
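The core reconstruction idea can be sketched in a few lines: simulate a chaotic system, build a pool of candidate terms, and fit coefficients by least squares. This sketch uses simple finite-difference derivatives rather than the paper's implicit Adams integration and error-reduction-ratio selection; terms with large fitted coefficients are the ones a structure-selection step would retain.

```python
# Sketch: recover the Lorenz vector field from its own trajectory using a
# candidate-term library and least squares (stand-in for structure selection).
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma*(y-x), x*(rho-z)-y, x*y-beta*z])

dt, n = 1e-3, 20000
traj = np.empty((n, 3)); traj[0] = [1.0, 1.0, 1.0]
for k in range(n-1):                     # 4th-order Runge-Kutta integration
    s = traj[k]
    k1 = lorenz(s); k2 = lorenz(s+dt/2*k1)
    k3 = lorenz(s+dt/2*k2); k4 = lorenz(s+dt*k3)
    traj[k+1] = s + dt/6*(k1+2*k2+2*k3+k4)

x, y, z = traj.T
dxdt = np.gradient(traj, dt, axis=0)     # crude derivative estimate
library = np.column_stack([x, y, z, x*y, x*z, y*z])   # candidate terms
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
print(np.round(coef[:, 0], 2))           # dx/dt column: close to [-10 10 0 0 0 0]
```

With noisy data and large sampling intervals this naive derivative estimate degrades quickly, which is exactly the regime where the paper's integration-based formulation is claimed to help.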

  3. Observing laser ablation dynamics with sub-picosecond temporal resolution

    NASA Astrophysics Data System (ADS)

    Tani, Shuntaro; Kobayashi, Yohei

    2017-04-01

    Laser ablation is one of the most fundamental processes in laser processing, and an understanding of its dynamics is of key importance for controlling and manipulating the outcome. In this study, we propose a novel way of observing the dynamics in the time domain using an electro-optic sampling technique. We found that an electromagnetic field was emitted during the laser ablation process and that the amplitude of the emission was closely correlated with the ablated volume. From the temporal profile of the electromagnetic field, we analyzed the motion of charged particles with sub-picosecond temporal resolution. The proposed method provides new access to laser ablation dynamics and thus opens a new way to optimize laser processing.

  4. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

    Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also outperforms a genetic algorithm optimization method in optimizing the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
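A toy version of PSO augmented with isotropic Gaussian mutation, in the spirit of the enhancement described above, can be written in a few lines. The inertia, acceleration, and mutation parameters below are generic textbook choices, not the authors' settings, and the target is the standard multimodal Rastrigin benchmark rather than a force-field objective.

```python
# Minimal PSO with an isotropic Gaussian mutation step applied to a random
# subset of particles each iteration, minimizing the 2-D Rastrigin function.
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    return 10*x.shape[-1] + np.sum(x**2 - 10*np.cos(2*np.pi*x), axis=-1)

dim, n_part, iters = 2, 30, 200
pos = rng.uniform(-5.12, 5.12, (n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), rastrigin(pos)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = pos + vel
    mutate = rng.random(n_part) < 0.1          # isotropic Gaussian mutation
    pos[mutate] += rng.normal(0.0, 0.3, (mutate.sum(), dim))
    f = rastrigin(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print(f"best value found: {rastrigin(gbest):.4f}")
```

The mutation step gives particles a chance to escape the local basins that trap plain gbest-attracted swarms on landscapes like Rastrigin's.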

  5. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs in regions of the search space with high expected improvement. This article addresses one of the limitations of EGO, namely that generating infill samples can become a difficult optimization problem in its own right, and also enables the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
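The EGO infill criterion can be sketched on a 1-D toy function, using scikit-learn's Gaussian process as the kriging model. This is a generic expected-improvement sketch, not the article's framework; picking the top few EI points at once is only one naive way to generate multiple infill samples for parallel evaluation.

```python
# Sketch: fit a GP to a few design samples, compute expected improvement (EI)
# on a grid, and select several high-EI points as candidate infill samples.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3*x) + 0.5*x            # stand-in for an expensive simulation
X = np.array([[0.2], [1.0], [2.5], [4.0]])   # initial design samples
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
gp.fit(X, y)

grid = np.linspace(0, 5, 500).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
best = y.min()
with np.errstate(divide="ignore", invalid="ignore"):
    z = (best - mu) / sd
    ei = np.where(sd > 0, (best - mu)*norm.cdf(z) + sd*norm.pdf(z), 0.0)

infill = grid[np.argsort(ei)[-3:]]           # top-3 EI points for parallel runs
print(infill.ravel())
```

In a serious implementation the infill points would be spaced apart (the top EI grid points often cluster), which is one facet of the "infill generation is itself hard" problem the article targets.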

  6. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction by over-sampling subjects with discordant results and under-sampling subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
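The over-sample-the-discordant intuition can be illustrated numerically with classical Neyman allocation for a stratified phase-two sample (a simplified stand-in for the paper's derived design; all stratum sizes and standard deviations below are invented).

```python
# Neyman allocation: phase-two sample sizes proportional to stratum size
# times within-stratum standard deviation, vs. proportional allocation.
import numpy as np

N = np.array([200.0, 1800.0])   # stratum sizes: discordant, concordant (made up)
S = np.array([0.50, 0.10])      # within-stratum standard deviations (made up)
n_total = 200                   # phase-two gold-standard budget

n_neyman = n_total * (N * S) / np.sum(N * S)
n_prop = n_total * N / N.sum()
print(np.round(n_neyman), np.round(n_prop))   # e.g. [71. 129.] vs [20. 180.]
```

Because the discordant stratum is small but highly variable, variance-optimal allocation gives it a far larger share of the budget than proportional allocation would, matching the paper's empirical finding.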

  7. Effects of preservation method on canine (Canis lupus familiaris) fecal microbiota.

    PubMed

    Horng, Katti R; Ganz, Holly H; Eisen, Jonathan A; Marks, Stanley L

    2018-01-01

    Studies involving gut microbiome analysis play an increasing role in the evaluation of health and disease in humans and animals alike. Fecal sampling methods for DNA preservation in laboratory, clinical, and field settings can greatly influence inferences of microbial composition and diversity, but are often inconsistent and under-investigated between studies. Many laboratories have utilized either temperature control or preservation buffers for optimization of DNA preservation, but few studies have evaluated the effects of combining both methods to preserve fecal microbiota. To determine the optimal method for fecal DNA preservation, we collected fecal samples from one canine donor and stored aliquots in RNAlater, 70% ethanol, 50:50 glycerol:PBS, or without buffer at 25 °C, 4 °C, and -80 °C. Fecal DNA was extracted, quantified, and 16S rRNA gene analysis performed on Days 0, 7, 14, and 56 to evaluate changes in DNA concentration, purity, and bacterial diversity and composition over time. We detected overall effects on bacterial community of storage buffer (F = 6.87, DF = 3, P < 0.001), storage temperature (F = 1.77, DF = 3, P = 0.037), and duration of sample storage (F = 3.68, DF = 3, P < 0.001). Changes in bacterial composition were observed in samples stored at -80 °C without buffer, a commonly used method for fecal DNA storage, suggesting that simply freezing samples may be suboptimal for bacterial analysis. Fecal preservation with 70% ethanol and RNAlater closely resembled that of fresh samples, though RNAlater yielded significantly lower DNA concentrations (DF = 8.57, P < 0.001). Although bacterial composition varied with temperature and buffer storage, 70% ethanol was the best method for preserving bacterial DNA in canine feces, yielding the highest DNA concentration and minimal changes in bacterial diversity and composition. 
The differences observed between samples highlight the need to consider optimized post-collection methods in microbiome research.

  8. Role of nano and micron-sized inclusions on the oxygen controlled preform optimized infiltration growth processed YBCO superconductors

    NASA Astrophysics Data System (ADS)

    Pavan Kumar Naik, S.; Bai, V. Seshu

    2017-02-01

    In the present work, with the aim of improving local flux pinning at the unit cell level in YBa2Cu3O7-δ (YBCO) bulk superconductors, 20 wt% of nanoscale Sm2O3 and micron-sized (Nd, Sm, Gd)2BaCuO5 secondary phase particles were added to YBCO and processed by the oxygen-controlled preform optimized infiltration growth process. The Nano Dispersive Sol Casting method was employed to homogeneously distribute the 30-50 nm Sm2O3 nanoparticles in the precursor powder without any agglomeration. Microstructural investigations of the doped samples show chemical fluctuations appearing as annular cores in the 211 phase particles. The introduction of mixed rare earth elements at the Y-site resulted in compositional fluctuations in the superconducting matrix. The associated lattice mismatch defects provide flux pinning up to large magnetic fields. The magnetic field dependence of the critical current density (Jc(H)) at different temperatures revealed that the dominant pinning mechanism is caused by spatial variations of the critical temperature, arising from spatial fluctuations in the matrix composition. As the number of rare earth elements in the YBCO increased, the peak field position in the scaling of the normalized pinning force density (Fp/Fp,max) shifted significantly towards higher fields. The curves of Jc(H) and Fp/Fp,max at different temperatures clearly indicate LRE substitution for LRE' or Ba sites, resulting in δTc pinning.

  9. Fast words boundaries localization in text fields for low quality document images

    NASA Astrophysics Data System (ADS)

    Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry

    2018-04-01

    The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation and recognition. While capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions or glares may occur. Further document processing is complicated by document specifics: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. Moreover, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are fast but limited to high-quality images. Methods for text in the wild have excessively high computational complexity and are thus hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text in natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal speed-precision ratio. The algorithm takes 12 ms per field running on an ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3

  10. The Role of Cyanobacteria in Stromatolite Morphogenesis, Highborn Cay Bahamas: An Integrated Field and Laboratory Simulation Study

    NASA Technical Reports Server (NTRS)

    Prufert-Bebout, Leslie; Shepard, Rebekah; Reid, Pamela R.; Fonda, Mark (Technical Monitor)

    2001-01-01

    Geomicrobiological phenomena are among the most fundamental of interactions between Earth and its biosphere. Actively growing and lithifying stromatolites at Highborne Cay, Bahamas, have recently been documented and allow for detailed examination of the roles microbes play in the mineralization process. These stromatolites contain a variety of complex microbial communities with distinct distribution patterns for different microbial groups. Cyanobacteria are the primary producers in this system, providing energy, directly or indirectly, for the entire stromatolite microbial community. They also play key roles in the trapping and binding of sediments. Most of these species are highly motile and can adjust their position and orientation within the sediment matrix in order to optimize their access to irradiance and nutrients. As individual species have different physical and metabolic properties, this motility generally results in segregated distributions of species, which in turn contributes to the laminated textures observed in these actively forming stromatolites. Increasingly, our studies suggest that the activities and locations of various cyanobacterial species also contribute greatly to the localization of new mineral precipitation through a variety of processes. We are investigating these contributions using an integrated approach combining detailed observations of field samples with manipulative experiments using both field samples and cultures of specific organisms isolated from these stromatolites. Experiments are conducted both under standard laboratory conditions and in outdoor running seawater flumes. A variety of standard techniques, including SEM (scanning electron microscopy), petrographic analyses, and TEM (transmission electron microscopy), are used to compare mineralization processes in field samples with those generated in laboratory-flume simulations. 
Using this approach we are able to more thoroughly investigate the effects of irradiance, CaCO3 saturation, and hydrodynamic regime on cyanobacterial distribution, trapping and binding and mineral precipitation. Simulation results will be presented and compared with community and mineralization distribution patterns seen in the field samples from which these communities were isolated.

  11. Paleomagnetism studies at micrometer scales using quantum diamond microscopy

    NASA Astrophysics Data System (ADS)

    Kehayias, P.; Fu, R. R.; Glenn, D. R.; Lima, E. A.; Men, M.; Merryman, H.; Walsworth, A.; Weiss, B. P.; Walsworth, R. L.

    2017-12-01

    Traditional paleomagnetic experiments generally measure the net magnetic moment of cm-size rock samples. Field tests such as the conglomerate and fold tests, based on the measurements of such cm-size samples, are frequently used to constrain the timing of magnetization. However, structures permitting such field tests may occur at the micron scale in geological samples, precluding paleomagnetic field tests using traditional bulk measurement techniques. The quantum diamond microscope (QDM) is a recently developed technology that uses magnetically-sensitive nitrogen-vacancy (NV) color centers in diamond for magnetic mapping with micron resolution [1]. QDM data were previously used to identify the ferromagnetic carriers in chondrules and terrestrial zircons and to image the magnetization distribution in multi-domain dendritic magnetite. Taking advantage of new hardware components, we have developed an optimized QDM setup with a 1E-15 J/T moment sensitivity over a measurement area of several square millimeters. The improved moment sensitivity of the new QDM setup permits us to image natural remanent magnetization (NRM) in weakly magnetized samples, thereby enabling paleomagnetic field tests at the millimeter scale. We will present recent and ongoing QDM measurements of (1) the Renazzo class carbonaceous (CR) chondrite GRA 95229 and (2) 1 cm scale folds in a post-Bitter Springs Stage (~790 Ma) carbonate from the Svanbergfjellet Formation (Svalbard). Results from the GRA 95229 micro-conglomerate test, performed on single chondrules containing dusty olivine metals crystallized during chondrule formation, hold implications for the preservation of nebular magnetic field records. The Svanbergfjellet Formation micro-fold test can help confirm the primary origin of a paleomagnetic pole at ~790 Ma, which has been cited as evidence for rapid true polar wander in the 820-790 Ma interval. 
In addition, we will detail technical aspects of the new QDM setup, emphasizing key elements that enable improved sensitivity. [1] D. R. Glenn et al., arXiv:1707.06714 (2017).

  12. Investigation on the optimal magnetic field of a cusp electron gun for a W-band gyro-TWA

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; He, Wenlong; Donaldson, Craig R.; Cross, Adrian W.

    2018-05-01

    High efficiency and broadband operation of a gyrotron traveling wave amplifier (gyro-TWA) require a high-quality electron beam with low velocity spreads. The beam velocity spreads arise mainly from differences in the electric and magnetic fields that the electrons experience within the electron gun. This paper investigates the possibility of decoupling the design of the electron gun geometry from that of the magnet system while still achieving optimal results, through a case study of designing a cusp electron gun for a W-band gyro-TWA. A global multiple-objective optimization routine was used to optimize the electron gun geometry for different predefined magnetic field profiles individually. The results were compared, and the properties of the required magnetic field profile are summarized.

  13. Helicopter TEM parameters analysis and system optimization based on time constant

    NASA Astrophysics Data System (ADS)

    Xiao, Pan; Wu, Xin; Shi, Zongyang; Li, Jutao; Liu, Lihua; Fang, Guangyou

    2018-03-01

    The helicopter transient electromagnetic (TEM) method is a common geophysical prospecting method, widely used in mineral detection, underground water exploration and environmental investigation. In order to develop an efficient helicopter TEM system, it is necessary to analyze and optimize the system parameters. In this paper, a simple and quantitative method is proposed to analyze the system parameters, such as waveform, power, base frequency, measured field and sampling time. A wire loop model is used to define a comprehensive 'time constant domain' that spans a range of time constants, analogous to a range of conductances, from which the characteristics of the system parameters in this domain are obtained. It is found that the distortion caused by the transmitting base frequency is less than 5% when the ratio of the transmitting period to the target time constant is greater than 6. When the sampling time window is less than the target time constant, the distortion caused by the sampling time window is less than 5%. Following this method, a helicopter TEM system, called CASHTEM, was designed, and a flight test was carried out in a known mining area. The test results show that the system has good detection performance, verifying the effectiveness of the method.
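A back-of-the-envelope check of the quoted 5% base-frequency rule (our own simplification, not the paper's wire-loop derivation): a single-time-constant target responds to a step excitation with a transient decaying as exp(-t/tau). If successive excitations are separated by half a transmitting period T, the fraction of the previous transient still present when the next one starts is exp(-T/(2*tau)); at T/tau = 6 this carried-over fraction is exp(-3), close to the quoted 5% distortion threshold.

```python
# Carried-over transient fraction after half a transmitting period,
# for a target with time constant tau and transmitting period T.
import math

def carryover(period_over_tau: float) -> float:
    """Residual fraction exp(-T / (2*tau)) under the simplified decay model."""
    return math.exp(-period_over_tau / 2.0)

print(f"{carryover(6.0):.4f}")   # exp(-3) ≈ 0.0498, i.e. about 5%
```

Larger T/tau ratios (lower base frequencies relative to the target) drive this residual, and hence the distortion, down rapidly.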

  14. Use of magnetic effervescent tablet-assisted ionic liquid dispersive liquid-liquid microextraction to extract fungicides from environmental waters with the aid of experimental design methodology.

    PubMed

    Yang, Miyi; Wu, Xiaoling; Jia, Yuhan; Xi, Xuefei; Yang, Xiaoling; Lu, Runhua; Zhang, Sanbing; Gao, Haixiang; Zhou, Wenfeng

    2016-02-04

    In this work, a novel effervescence-assisted microextraction technique was proposed for the detection of four fungicides. This method combines ionic liquid-based dispersive liquid-liquid microextraction with magnetic retrieval of the extractant. A magnetic effervescent tablet composed of Fe3O4 magnetic nanoparticles, sodium carbonate, sodium dihydrogen phosphate and 1-hexyl-3-methylimidazolium bis(trifluoromethanesulfonimide) was used for extractant dispersion and retrieval. The main factors affecting the extraction efficiency were screened by a Plackett-Burman design and optimized by a central composite design. Under the optimum conditions, good linearity was obtained for all analytes in both a pure-water model and real water samples. For pure water, the recoveries were between 84.6% and 112.8%, the limits of detection were between 0.02 and 0.10 μg L(-1), and both intra-day and inter-day precision were below 4.9%. The optimized method was successfully applied to the analysis of four fungicides (azoxystrobin, triazolone, cyprodinil, trifloxystrobin) in environmental water samples, with recoveries ranging between 70.7% and 105%. The procedure promises to be a time-saving, environmentally friendly, and efficient field sampling technique. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Quantitation of zolpidem in biological fluids by electro-driven microextraction combined with HPLC-UV analysis.

    PubMed

    Yaripour, Saeid; Mohammadi, Ali; Esfanjani, Isa; Walker, Roderick B; Nojavan, Saeed

    2018-01-01

    In this study, for the first time, an electro-driven microextraction method, electromembrane extraction, combined with simple high performance liquid chromatography and ultraviolet detection was developed and validated for the quantitation of zolpidem in biological samples. Parameters influencing electromembrane extraction were evaluated and optimized. The membrane consisted of 2-ethylhexanol immobilized in the pores of a hollow fiber. As a driving force, a 150 V electric field was applied to facilitate analyte migration from the sample matrix to an acceptor solution through a supported liquid membrane. The pHs of the donor and acceptor solutions were optimized to 6.0 and 2.0, respectively. An enrichment factor of >75 was obtained within 15 minutes. The effect of carbon nanotubes (as solid nano-sorbents) on the membrane performance and EME efficiency was evaluated. The method was linear over the range of 10-1000 ng/mL for zolpidem (R² > 0.9991) with repeatability (%RSD) between 0.3% and 7.3% (n = 3). The limits of detection and quantitation were 3 and 10 ng/mL, respectively. The sensitivity of HPLC-UV for the determination of zolpidem was enhanced by electromembrane extraction. Finally, the method was employed for the quantitation of zolpidem in biological samples with relative recoveries in the range of 60-79%.

  16. Quantitation of zolpidem in biological fluids by electro-driven microextraction combined with HPLC-UV analysis

    PubMed Central

    Yaripour, Saeid; Mohammadi, Ali; Esfanjani, Isa; Walker, Roderick B.; Nojavan, Saeed

    2018-01-01

    In this study, for the first time, an electro-driven microextraction method, electromembrane extraction, combined with simple high performance liquid chromatography and ultraviolet detection was developed and validated for the quantitation of zolpidem in biological samples. Parameters influencing electromembrane extraction were evaluated and optimized. The membrane consisted of 2-ethylhexanol immobilized in the pores of a hollow fiber. As a driving force, a 150 V electric field was applied to facilitate analyte migration from the sample matrix to an acceptor solution through a supported liquid membrane. The pHs of the donor and acceptor solutions were optimized to 6.0 and 2.0, respectively. An enrichment factor of >75 was obtained within 15 minutes. The effect of carbon nanotubes (as solid nano-sorbents) on the membrane performance and EME efficiency was evaluated. The method was linear over the range of 10-1000 ng/mL for zolpidem (R² > 0.9991) with repeatability (%RSD) between 0.3% and 7.3% (n = 3). The limits of detection and quantitation were 3 and 10 ng/mL, respectively. The sensitivity of HPLC-UV for the determination of zolpidem was enhanced by electromembrane extraction. Finally, the method was employed for the quantitation of zolpidem in biological samples with relative recoveries in the range of 60-79%. PMID:29805344

  17. Noninvasive individual and species identification of jaguars (Panthera onca), pumas (Puma concolor) and ocelots (Leopardus pardalis) in Belize, Central America using cross-species microsatellites and faecal DNA.

    PubMed

    Wultsch, Claudia; Waits, Lisette P; Kelly, Marcella J

    2014-11-01

    There is a great need to develop efficient, noninvasive genetic sampling methods to study wild populations of multiple, co-occurring, threatened felids. This is especially important for molecular scatology studies occurring in challenging tropical environments where DNA degrades quickly and the quality of faecal samples varies greatly. We optimized 14 polymorphic microsatellite loci for jaguars (Panthera onca), pumas (Puma concolor) and ocelots (Leopardus pardalis) and assessed their utility for cross-species amplification. Additionally, we tested their reliability for species and individual identification using DNA from faeces of wild felids detected by a scat detector dog across Belize in Central America. All microsatellite loci were successfully amplified in the three target species, were polymorphic with average expected heterozygosities of HE = 0.60 ± 0.18 (SD) for jaguars, HE = 0.65 ± 0.21 (SD) for pumas and HE = 0.70 ± 0.13 (SD) for ocelots and had an overall PCR amplification success of 61%. We used this nuclear DNA primer set to successfully identify species and individuals from 49% of 1053 field-collected scat samples. This set of optimized microsatellite multiplexes represents a powerful tool for future efforts to conduct noninvasive studies on multiple, wild Neotropical felids. © 2014 John Wiley & Sons Ltd.

  18. Optimization of multiplexed PCR on an integrated microfluidic forensic platform for rapid DNA analysis.

    PubMed

    Estes, Matthew D; Yang, Jianing; Duane, Brett; Smith, Stan; Brooks, Carla; Nordquist, Alan; Zenhausern, Frederic

    2012-12-07

    This study reports the design, prototyping, and assay development of multiplexed polymerase chain reaction (PCR) on a plastic microfluidic device. Amplification of 17 DNA loci is carried out directly on-chip as part of a system for continuous workflow processing from sample preparation (SP) to capillary electrophoresis (CE). For enhanced performance of on-chip PCR amplification, improved control systems have been developed making use of customized Peltier assemblies, valve actuators, software, and amplification chemistry protocols. Multiple enhancements to the microfluidic chip design have been enacted to improve the reliability of sample delivery through the various on-chip modules. This work has been enabled by the encapsulation of PCR reagents into a solid phase material through an optimized Solid Phase Encapsulating Assay Mix (SPEAM) bead-based hydrogel fabrication process. SPEAM bead technology is reliably coupled with precise microfluidic metering and dispensing for efficient amplification and subsequent DNA short tandem repeat (STR) fragment analysis. This provides a means of on-chip reagent storage suitable for microfluidic automation, with the long shelf-life necessary for point-of-care (POC) or field deployable applications. This paper reports the first high quality 17-plex forensic STR amplification from a reference sample in a microfluidic chip with preloaded solid phase reagents, that is designed for integration with up and downstream processing.

  19. Optimized Geometry for Superconducting Sensing Coils

    NASA Technical Reports Server (NTRS)

    Eom, Byeong Ho; Pananen, Konstantin; Hahn, Inseob

    2008-01-01

    An optimized geometry has been proposed for superconducting sensing coils that are used in conjunction with superconducting quantum interference devices (SQUIDs) in magnetic resonance imaging (MRI), magnetoencephalography (MEG), and related applications in which magnetic fields of small dipoles are detected. In designing a coil of this type, as in designing other sensing coils, one seeks to maximize the sensitivity of the detector of which the coil is a part, subject to geometric constraints arising from the proximity of other required equipment. In MRI or MEG, the main benefit of maximizing the sensitivity would be to enable minimization of measurement time. In general, to maximize the sensitivity of a detector based on a sensing coil coupled with a SQUID sensor, it is necessary to maximize the magnetic flux enclosed by the sensing coil while minimizing the self-inductance of this coil. Simply making the coil larger may increase its self-inductance and does not necessarily increase sensitivity because it also effectively increases the distance from the sample that contains the source of the signal that one seeks to detect. Additional constraints on the size and shape of the coil and on the distance from the sample arise from the fact that the sample is at room temperature but the coil and the SQUID sensor must be enclosed within a cryogenic shield to maintain superconductivity.

  20. Design of angle-resolved illumination optics using nonimaging bi-telecentricity for 193 nm scatterfield microscopy.

    PubMed

    Sohn, Martin Y; Barnes, Bryan M; Silver, Richard M

    2018-03-01

    Accurate optics-based dimensional measurements of features sized well below the diffraction limit require a thorough understanding of the illumination within the optical column and of the three-dimensional scattered fields that contain the information required for quantitative metrology. Scatterfield microscopy can pair simulations with angle-resolved tool characterization to improve agreement between experiment and calculated libraries, yielding sub-nanometer parametric uncertainties. Optimized angle-resolved illumination requires bi-telecentric optics, in which a telecentric sample plane, defined by a Köhler illumination configuration, is paired with a telecentric conjugate back focal plane (CBFP) of the objective lens; scanning an aperture or an aperture source at the CBFP allows control of the illumination beam angle at the sample plane with minimal distortion. Bi-telecentric illumination optics have been designed that enable angle-resolved illumination for both aperture and source scanning modes while yielding low distortion and chief ray parallelism. The optimized design features a maximum chief ray angle at the CBFP of 0.002° and maximum wavefront deviations of less than 0.06 λ for angle-resolved illumination beams at the sample plane, holding promise for high-quality angle-resolved illumination for improved measurements of deep-subwavelength structures using deep-ultraviolet light.

  1. Determination of ephedrine and pseudoephedrine by field-amplified sample injection capillary electrophoresis.

    PubMed

    Deng, Dongli; Deng, Hao; Zhang, Lichun; Su, Yingying

    2014-04-01

    A simple and rapid capillary electrophoresis method was developed for the separation and determination of ephedrine (E) and pseudoephedrine (PE) in a buffer solution containing 80 mM of NaH2PO4 (pH 3.0), 15 mM of β-cyclodextrin and 0.3% of hydroxypropyl methylcellulose. The field-amplified sample injection (FASI) technique was applied to the online concentration of the alkaloids. With FASI in the presence of a low conductivity solvent plug (water), an approximately 1,000-fold improvement in sensitivity was achieved without any loss of separation efficiency when compared to conventional sample injection. Under these optimized conditions, a baseline separation of the two analytes was achieved within 16 min and the detection limits for E and PE were 0.7 and 0.6 µg/L, respectively. Without expensive instruments or labeling of the compounds, the limits of detection for E and PE obtained by the proposed method are comparable with (or even lower than) those obtained by capillary electrophoresis laser-induced fluorescence, liquid chromatography-tandem mass spectrometry and gas chromatography-mass spectrometry. The method was validated in terms of precision, linearity and accuracy, and successfully applied for the determination of the two alkaloids in Ephedra herbs.

  2. Magnetic microscopic imaging with an optically pumped magnetometer and flux guides

    DOE PAGES

    Kim, Young Jin; Savukov, Igor Mykhaylovich; Huang, Jen -Huang; ...

    2017-01-23

    Here, by combining an optically pumped magnetometer (OPM) with flux guides (FGs) and by installing a sample platform on automated translation stages, we have implemented an ultra-sensitive FG-OPM scanning magnetic imaging system that is capable of detecting magnetic fields of ~20 pT with spatial resolution better than 300 μm (expected to reach ~10 pT sensitivity and ~100 μm spatial resolution with optimized FGs). As a demonstration of one possible application of the FG-OPM device, we conducted magnetic imaging of micron-size magnetic particles. Magnetic imaging of such particles, including nano-particles and clusters, is very important for many fields, especially for medical cancer diagnostics and biophysics applications. For rapid, precise magnetic imaging, we constructed an automatic scanning system, which holds and moves a target sample containing magnetic particles at a given stand-off distance from the FG tips. We show that the device was able to produce clear microscopic magnetic images of 10 μm-size magnetic particles. In addition, we also numerically investigated how the magnetic flux from a target sample at a given stand-off distance is transmitted to the OPM vapor cell.

  3. Active focus stabilization for upright selective plane illumination microscopy.

    PubMed

    Hedde, Per Niklas; Gratton, Enrico

    2015-06-01

    Due to its sectioning capability, large field of view, and minimal light exposure, selective plane illumination microscopy has become the preferred choice for 3D time lapse imaging. Single cells in a dish can be conveniently imaged using an upright/inverted configuration. However, for measurements on long time scales (hours to days), mechanical drift is a problem, especially for studies of mammalian cells that typically require heating to 37°C, which causes a thermal gradient across the instrument. Since the light sheet diverges towards the edges of the field of view, such a drift leads to a decrease in axial resolution over time; even worse, the specimen could move out of the imaging volume. Here, we present a simple, cost-effective way to stabilize the axial position using the microscope camera to track the sample position. Thereby, sample loss is prevented and an optimal axial resolution is maintained by keeping the sample at the position where the light sheet is at its thinnest. We demonstrate the virtue of our approach by measurements of the light sheet thickness and 3D time lapse imaging of a cell monolayer at physiological conditions.

  5. Broadband infrared vibrational nano-spectroscopy using thermal blackbody radiation

    DOE PAGES

    O’Callahan, Brian T.; Lewis, William E.; Möbius, Silke; ...

    2015-12-03

    Infrared vibrational nano-spectroscopy based on scattering scanning near-field optical microscopy (s-SNOM) provides intrinsic chemical specificity with nanometer spatial resolution. Here we use incoherent infrared radiation from a 1400 K thermal blackbody emitter for broadband infrared (IR) nano-spectroscopy. With optimized interferometric heterodyne signal amplification we achieve few-monolayer sensitivity in phonon polariton spectroscopy and attomolar molecular vibrational spectroscopy. Near-field localization and nanoscale spatial resolution are demonstrated by imaging flakes of hexagonal boron nitride (hBN) and determining its phonon polariton dispersion relation. The signal-to-noise ratio calculations and analysis for different samples and illumination sources provide a reference for irradiance requirements and the attainable near-field signal levels in s-SNOM in general. The use of a thermal emitter as an IR source thus opens s-SNOM for routine chemical FTIR nano-spectroscopy.

  7. High Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) for Mass Spectrometry-Based Proteomics

    PubMed Central

    Swearingen, Kristian E.; Moritz, Robert L.

    2013-01-01

    High field asymmetric waveform ion mobility spectrometry (FAIMS) is an atmospheric pressure ion mobility technique that separates gas-phase ions by their behavior in strong and weak electric fields. FAIMS is easily interfaced with electrospray ionization and has been implemented as an additional separation mode between liquid chromatography (LC) and mass spectrometry (MS) in proteomic studies. FAIMS separation is orthogonal to both LC and MS and is used as a means of on-line fractionation to improve detection of peptides in complex samples. FAIMS improves dynamic range, and concomitantly the detection limits of ions, by filtering out chemical noise. FAIMS can also be used to remove interfering ion species and to select peptide charge states optimal for identification by tandem MS. Here, we review recent developments in LC-FAIMS-MS and its application to MS-based proteomics. PMID:23194268

  8. Signal timing on a shoestring

    DOT National Transportation Integrated Search

    2005-03-01

    The conventional approach to signal timing optimization and field deployment requires current traffic flow data, experience with optimization models, familiarity with the signal controller hardware, and knowledge of field operations including signal ...

  10. Quantifying the role that laboratory experiment sample scale has on observed material properties and mechanistic behaviors that cause well systems to fail

    NASA Astrophysics Data System (ADS)

    Huerta, N. J.; Fahrman, B.; Rod, K. A.; Fernandez, C. A.; Crandall, D.; Moore, J.

    2017-12-01

    Laboratory experiments provide a robust method to analyze well integrity. Experiments are relatively cheap, controlled, and repeatable. However, simplifying assumptions, apparatus limitations, and scaling are ubiquitous obstacles for translating results from the bench to the field. We focus on advancing the correlation between laboratory results and field conditions by characterizing how failure varies with specimen geometry using two experimental approaches. The first approach is designed to measure the shear bond strength between steel and cement in a down-scaled (< 3" diameter) well geometry. We use several cylindrical casing-cement-casing geometries that either mimic the scaling ratios found in the field or maximize the amount of metal and cement in the sample. We subject the samples to thermal shock cycles to simulate damage to the interfaces from operations. The bond was then measured via a push-out test. We found that not only expected parameters, such as curing time, but also the scaling of the geometry played a role in shear-bond strength. The second approach is designed to observe failure of the well system due to pressure applied on the inside of a lab-scale (1.5" diameter) cylindrical casing-cement-rock geometry. The loading apparatus and sample are housed within an industrial X-ray CT scanner capable of imaging the system while under pressure. Radial tension cracks were observed in the cement after an applied internal pressure of 3000 psi and propagated through the cement and into the rock as pressure was increased. Based on our current suite of tests, we find that the relationship between sample diameters and thicknesses is an important consideration when observing the strength and failure of well systems. The test results contribute to our knowledge of well system failure, to the evaluation and optimization of new cements, and to the applicability of using scaled-down tests as a proxy for understanding field-scale conditions.

  11. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When the action has singular points or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ ℝ) and optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how the sign problem can be avoided in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
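
    The path deformation described above can be sketched numerically. The following is a minimal illustration on a hypothetical one-variable Gaussian model S(z) = z²/2 + iβz (an assumed example, not the toy model studied in the proceedings), using a constant deformation f(t) = c: scanning c shows the average phase factor is maximized at c = -β, where the action along the deformed path becomes purely real.

```python
import numpy as np

# Hypothetical model with a sign problem: S(z) = z**2/2 + 1j*beta*z.
# On the real axis exp(-S) oscillates; deforming the path to z = t + i*c
# and tuning c to maximize the average phase factor removes the oscillation.
beta = 2.0
t = np.linspace(-8.0, 8.0, 4001)
dt = t[1] - t[0]

def avg_phase_factor(c):
    z = t + 1j * c                                # deformed path, dz = dt
    w = np.exp(-(z**2 / 2 + 1j * beta * z))       # complex Boltzmann weight
    return abs(np.sum(w) * dt) / np.sum(np.abs(w) * dt)

cs = np.linspace(-4.0, 4.0, 81)                   # scan the path parameter
best = max(cs, key=avg_phase_factor)              # optimum sits at c = -beta
```

    On the undeformed path (c = 0) the average phase factor is exp(-β²/2), while at the optimum c = -β the imaginary part of the action vanishes and the phase factor approaches 1.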

  12. A random optimization approach for inherent optic properties of nearshore waters

    NASA Astrophysics Data System (ADS)

    Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng

    2016-10-01

    Traditional water quality sampling methods are time-consuming and costly and cannot meet the needs of social development. Hyperspectral remote sensing offers good temporal resolution, wide spatial coverage, and rich spectral information, and therefore has good potential for water quality supervision. Through a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model of the water column is established to analyze the features of the water. Using the stochastic optimization algorithm Threshold Acceptance, a global optimum of the unknown model parameters can be determined to obtain the distribution of chlorophyll, dissolved organic matter and suspended particles in the water. By improving the search step of the optimization algorithm, the processing time is substantially reduced, creating room to increase the number of parameters. A refined definition of the optimization steps and acceptance criteria makes the whole inversion process more targeted, thus improving its accuracy. Based on application results for simulated data provided by IOCCG and field data provided by NASA, the model was continuously improved and enhanced. The result is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
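
    Threshold Acceptance works like simulated annealing, except that a worse candidate is accepted whenever the cost increase stays below a deterministic, shrinking threshold. A minimal sketch, fitting two parameters of a hypothetical exponential "reflectance" model by least squares (the model and all settings are illustrative assumptions, not the paper's semi-analytical model):

```python
import numpy as np

# Threshold Acceptance: accept any candidate whose cost increase is below a
# shrinking threshold; keep track of the best parameters seen.
rng = np.random.default_rng(0)
wl = np.linspace(0.4, 0.7, 30)                    # wavelengths (micrometres)

def model(p, wl):
    a, b = p
    return a * np.exp(-b * wl)                    # hypothetical two-parameter model

data = model((1.2, 3.0), wl)                      # synthetic "truth"

def misfit(p):
    return np.sum((model(p, wl) - data) ** 2)     # least-squares cost

p = np.array([0.5, 1.0])                          # initial guess
best, best_cost = p.copy(), misfit(p)
threshold = 1.0
for _ in range(5000):
    cand = p + rng.normal(scale=0.05, size=2)     # random neighbour
    if misfit(cand) - misfit(p) < threshold:      # accept if not much worse
        p = cand
        if misfit(p) < best_cost:
            best, best_cost = p.copy(), misfit(p)
    threshold *= 0.999                            # shrink the threshold
```

    The early, permissive phase explores the parameter space; as the threshold shrinks, only near-downhill moves survive and the walk settles into a low-misfit region.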

  13. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.
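
    The Helmholtz filter and threshold projection mentioned above are standard density-regularization steps in topology optimization. A minimal 1D sketch under assumed parameters (filter radius r, projection sharpness β and threshold η are illustrative choices, not values from the paper):

```python
import numpy as np

def helmholtz_filter_1d(rho, r=0.05, h=0.01):
    # Helmholtz-type PDE filter: solve (I - r^2 d^2/dx^2) rho_f = rho with
    # homogeneous Neumann boundary conditions, via finite differences.
    n = rho.size
    a = (r / h) ** 2
    A = np.eye(n) * (1 + 2 * a)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -a
    A[0, 0] = A[-1, -1] = 1 + a                   # Neumann boundary rows
    return np.linalg.solve(A, rho)

def threshold_projection(rho_f, beta=8.0, eta=0.5):
    # Smoothed Heaviside projection pushing filtered densities toward 0/1.
    return (np.tanh(beta * eta) + np.tanh(beta * (rho_f - eta))) / (
        np.tanh(beta * eta) + np.tanh(beta * (1 - eta)))

rho = (np.linspace(0, 1, 101) > 0.5).astype(float)   # raw 0/1 design field
rho_proj = threshold_projection(helmholtz_filter_1d(rho))
```

    The filter enforces a minimum length scale on the design; the projection then sharpens the smoothed field back toward a crisp 0/1 layout, keeping the result in [0, 1].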

  15. Neutron Scattering Studies on Large Length Scale Sample Structures

    NASA Astrophysics Data System (ADS)

    Feng, Hao

    Neutron scattering can be used to study the structure of matter. Depending on the sample properties of interest, different scattering techniques can be chosen. Neutron reflectivity is more often used to detect the in-depth profile of layered structures and interfacial roughness, while transmission is more sensitive to sample bulk properties. The Neutron Reflectometry (NR) technique, one neutron reflectivity technique, is discussed first in this thesis. Both the specular reflectivity and the first-order Bragg intensity were measured in an NR experiment with a diffraction grating in order to study the in-depth and lateral structure of a sample (polymer) deposited on the grating. However, the first-order Bragg intensity alone is sometimes inadequate to determine the lateral structure, and higher-order Bragg intensities are difficult to measure using traditional neutron scattering techniques due to the low brightness of current neutron sources. The Spin Echo Small Angle Neutron Scattering (SESANS) technique overcomes this resolution problem by measuring the Fourier transform of all the Bragg intensities, thereby measuring the real-space density correlations of samples and making length scales from a few tens of nanometers to several microns accessible. SESANS can be implemented using two pairs of magnetic Wollaston prisms (WPs), and the accessible length scale is proportional to the magnetic field intensity in the WPs. To increase the magnetic field and thus the accessible length scale, an apparatus named the Superconducting Wollaston Prisms (SWP), which has a series of strong, well-defined shaped magnetic fields created by superconducting coils, was developed at Indiana University in 2016. Since then, various kinds of optimization have been implemented, which are addressed in this thesis. Finally, applications of SWPs in other neutron scattering techniques, such as Neutron Larmor Diffraction (NLD), are discussed.

  16. An approach for addressing hard-to-detect hot spots.

    PubMed

    Abelquist, Eric W; King, David A; Miller, Laurence F; Viars, James A

    2013-05-01

    The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) survey approach comprises systematic random sampling coupled with radiation scanning to assess the acceptability of potential hot spots. Hot spot identification for some radionuclides may not be possible due to the very weak gamma or x-ray radiation they emit; these hard-to-detect nuclides are unlikely to be identified by field scans. Similarly, scanning technology is not yet available for chemical contamination. For both hard-to-detect nuclides and chemical contamination, hot spots are only identified via volumetric sampling. The remedial investigation and cleanup of sites under the Comprehensive Environmental Response, Compensation, and Liability Act typically includes the collection of samples over relatively large exposure units, and concentration limits are applied assuming the contamination is more or less uniformly distributed. However, data collected from contaminated sites demonstrate that contamination is often highly localized. These highly localized areas, or hot spots, will only be identified if sample densities are high or if the environmental characterization program happens to sample directly from the hot spot footprint. This paper describes a Bayesian approach for addressing hard-to-detect nuclide and chemical hot spots. The approach begins by using available data (e.g., as collected using the standard approach) to predict the probability that an unacceptable hot spot is present somewhere in the exposure unit. This Bayesian approach may even be coupled with the graded sampling approach to optimize hot spot characterization. Once the investigator concludes that the presence of hot spots is likely, the surveyor should use the data quality objectives process to generate an appropriate sample campaign that optimizes the identification of risk-relevant hot spots.
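
    The core of such a Bayesian update can be sketched in a few lines. Assuming (illustratively, not from the paper) a prior probability of 0.2 that a hot spot exists and a hot spot footprint covering 5% of the exposure unit, n random samples all miss the footprint with probability (1 - 0.05)^n, and the posterior probability of a hot spot shrinks accordingly:

```python
# Posterior probability that an unacceptable hot spot is present after n
# volumetric samples all came back clean. The prior (0.2) and footprint
# fraction (0.05) are illustrative assumptions.
def posterior_hotspot(n, prior=0.2, a=0.05):
    p_miss = (1.0 - a) ** n                 # all n samples miss the footprint
    return prior * p_miss / (prior * p_miss + (1.0 - prior))

posts = [posterior_hotspot(n) for n in (10, 30, 100)]  # shrinks as n grows
```

    Once this posterior stays above a decision threshold after the planned sampling, a denser, targeted campaign is warranted.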

  17. Realistic sampling of amino acid geometries for a multipolar polarizable force field

    PubMed Central

    Hughes, Timothy J.; Cardamone, Salvatore

    2015-01-01

    The Quantum Chemical Topological Force Field (QCTFF) uses the machine learning method kriging to map atomic multipole moments to the coordinates of all atoms in the molecular system. It is important that kriging operates on relevant and realistic training sets of molecular geometries. Therefore, we sampled single amino acid geometries directly from protein crystal structures stored in the Protein Databank (PDB). This sampling enhances the conformational realism (in terms of dihedral angles) of the training geometries. However, these geometries can be fraught with inaccurate bond lengths and valence angles due to artefacts of the refinement process of the X-ray diffraction patterns, combined with experimentally invisible hydrogen atoms. This is why we developed a hybrid PDB/nonstationary normal modes (NM) sampling approach called PDB/NM. This method is superior to standard NM sampling, which captures only geometries optimized from the stationary points of single amino acids in the gas phase. Indeed, PDB/NM combines the sampling of relevant dihedral angles with chemically correct local geometries. Geometries sampled using PDB/NM were used to build kriging models for alanine and lysine, and their prediction accuracy was compared to models built from geometries sampled with three other sampling approaches. Bond length variation, as opposed to variation in dihedral angles, puts pressure on prediction accuracy, potentially lowering it. Hence, the larger coverage of dihedral angles of the PDB/NM method does not deteriorate the predictive accuracy of kriging models compared to the NM sampling around local energetic minima used so far in the development of QCTFF. © 2015 The Authors. Journal of Computational Chemistry published by Wiley Periodicals, Inc. PMID:26235784
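
    A kriging model of the kind described reduces, at prediction time, to a weighted sum of kernel evaluations against the training geometries. A minimal Gaussian-process-style sketch in one dimension, where the coordinate, target function and kernel length scale are all illustrative stand-ins (not the QCTFF descriptors or settings):

```python
import numpy as np

# Noiseless kriging / GP interpolation with an RBF kernel: fit weights on
# training data, then predict via kernel evaluations against the training set.
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x_train = np.linspace(-3, 3, 20)                  # stand-in "geometry" coordinate
y_train = np.sin(x_train)                         # stand-in target property
K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter for stability
alpha = np.linalg.solve(K, y_train)               # training weights

x_test = np.array([0.5, 1.5])
y_pred = rbf(x_test, x_train) @ alpha             # posterior mean prediction
```

    The quality of such a model hinges on the training set covering the geometries that will be queried, which is exactly the motivation for the PDB/NM sampling scheme above.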

  18. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is called sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  19. Modelling the RV jitter of early-M dwarfs using tomographic imaging

    NASA Astrophysics Data System (ADS)

    Hébrard, É. M.; Donati, J.-F.; Delfosse, X.; Morin, J.; Moutou, C.; Boisse, I.

    2016-09-01

    In this paper, we show how tomographic imaging (Zeeman-Doppler imaging, ZDI) can be used to characterize stellar activity and magnetic field topologies, ultimately allowing us to filter out the radial velocity (RV) activity jitter of moderately rotating M dwarfs. This work is based on spectropolarimetric observations of a sample of five weakly active early-M dwarfs (GJ 205, GJ 358, GJ 410, GJ 479, GJ 846) with HARPS-Pol and NARVAL. These stars have v sin i and RV jitters in the range 1-2 km s-1 and 2.7-10.0 m s-1 rms, respectively. Using a modified version of ZDI applied to sets of phase-resolved least-squares deconvolved profiles of unpolarized spectral lines, we are able to characterize the distribution of active regions at the stellar surfaces. We find that dark spots cover less than 2 per cent of the total surface of the stars of our sample. Our technique is efficient at modelling the rotationally modulated component of the activity jitter, and succeeds in decreasing the amplitude of this component by typical factors of 2-3, and up to 6 in optimal cases. From the rotationally modulated time series of circularly polarized spectra, we also reconstruct the large-scale magnetic field topology with ZDI. These fields suggest that the bistability of dynamo processes observed in active M dwarfs may also be at work in moderately active M dwarfs. Comparing spot distributions with field topologies suggests that the dark spots causing activity jitter concentrate at the magnetic pole and/or equator, which remains to be confirmed with future data on a larger sample.

  20. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
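
    The simulated annealing step for optimizing a spatial sampling configuration can be sketched with a simple coverage objective, minimizing the mean distance from every point of a region grid to its nearest sample site. The grid, candidate set, cooling schedule and objective are all illustrative assumptions standing in for the paper's road-network-constrained setup:

```python
import math
import random

# Simulated annealing over sampling configurations: propose moving one site,
# accept via the Metropolis rule, and cool the temperature geometrically.
random.seed(0)
grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]  # region grid
candidates = grid[:]                                           # allowed sites
k = 8                                                          # number of samples

def cost(sites):
    # mean distance from each grid point to its nearest sample site
    return sum(min(math.dist(g, s) for s in sites) for g in grid) / len(grid)

sites = random.sample(candidates, k)
T = 1.0
for _ in range(3000):
    new = sites[:]
    new[random.randrange(k)] = random.choice(candidates)       # move one site
    d = cost(new) - cost(sites)
    if d < 0 or random.random() < math.exp(-d / T):            # Metropolis rule
        sites = new
    T *= 0.998                                                 # cool down
```

    In the study, the objective would additionally encode proximity to roads and expected soil-landscape information gain rather than pure geometric coverage.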

  1. Method Development in Forensic Toxicology.

    PubMed

    Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona

    2017-01-01

    In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high-quality analytical methods is thorough method development. This article provides an overview of the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems, and establishing a versatile sample preparation. Method development is concluded by an optimization process, after which the new method is subject to method validation.

  2. Preparation of poly-L-lysine functionalized magnetic nanoparticles and their influence on viability of cancer cells

    NASA Astrophysics Data System (ADS)

    Khmara, I.; Koneracka, M.; Kubovcikova, M.; Zavisova, V.; Antal, I.; Csach, K.; Kopcansky, P.; Vidlickova, I.; Csaderova, L.; Pastorekova, S.; Zatovicova, M.

    2017-04-01

    This study was aimed at development of biocompatible amino-functionalized magnetic nanoparticles as carriers of specific antibodies able to detect and/or target cancer cells. Poly-L-lysine (PLL)-modified magnetic nanoparticle samples with different PLL/Fe3O4 content were prepared and tested to define the optimal PLL/Fe3O4 weight ratio. The samples were characterized for particle size and morphology (SEM, TEM and DLS), and surface properties (zeta potential measurements). The optimal PLL/Fe3O4 weight ratio of 1.0 based on both zeta potential and DLS measurements was in agreement with the UV/VIS measurements. Magnetic nanoparticles with the optimal PLL content were conjugated with antibody specific for the cancer biomarker carbonic anhydrase IX (CA IX), which is induced by hypoxia, a physiologic stress present in solid tumors and linked with aggressive tumor behavior. CA IX is localized on the cell surface with the antibody-binding epitope facing the extracellular space and is therefore suitable for antibody-based targeting of tumor cells. Here we showed that PLL/Fe3O4 magnetic nanoparticles exhibit cytotoxic activities in a cell type-dependent manner and bind to cells expressing CA IX when conjugated with the CA IX-specific antibody. These data support further investigations of the CA IX antibody-conjugated, magnetic field-guided/activated nanoparticles as tools in anticancer strategies.

  3. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
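A minimal sketch of the Goldstein-Osher Split Bregman scheme, applied here to 1D anisotropic total-variation denoising rather than to sonar reconstruction; the parameters `mu`, `lam` and the dense linear solve are illustrative simplifications (a practical CS reconstruction would use the measurement operator and a fast solver).

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: the closed-form l1 proximal step at the core of Split Bregman."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman_tv_1d(f, mu=5.0, lam=1.0, n_iter=100):
    """Anisotropic 1D TV denoising: min_u (mu/2)||u - f||^2 + |Du|_1,
    with D the forward-difference operator, split via d = Du."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    A = mu * np.eye(n) + lam * D.T @ D      # system matrix of the quadratic u-subproblem
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)                     # Bregman variable
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # l2 (smooth) subproblem
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)       # decoupled l1 subproblem, solved exactly
        b = b + Du - d                      # Bregman update enforcing d = Du
    return u
```

The "decoupling" the abstract mentions is visible here: the ℓ2 step is a linear solve and the ℓ1 step is an elementwise shrink, iterated until they agree.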

  4. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 1: comparison of uniresponse and multiresponse designs using PopDes.

    PubMed

    Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode

    2008-12-01

    The aim was to determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of the apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound and potential CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate the PK profiles of both drugs when co-administered. Then, using the simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes, using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM, computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (nine different sampling times in total) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Regardless of design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX) and expected RSEs from PopDes were in the same range as the empirical RSEs. Moreover, there was no bias in CL/F estimation. Since the MD required only five sampling times compared with nine for the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. 
This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
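The D-optimal idea, maximizing the determinant of the Fisher information matrix over candidate sampling times, can be illustrated on a deliberately simple model. The mono-exponential PK model, its parameter values and the candidate time grid below are hypothetical stand-ins for the PBPK/NONMEM/PopDes workflow described above.

```python
import itertools
import numpy as np

def sensitivities(t, dose=100.0, cl=5.0, v=50.0):
    """Jacobian rows dC/d(CL, V) for C(t) = (dose/V) * exp(-(CL/V) * t)."""
    k = cl / v
    c = dose / v * np.exp(-k * t)
    dc_dcl = -c * t / v                       # partial derivative w.r.t. clearance
    dc_dv = c * (cl * t / v**2 - 1.0 / v)     # partial derivative w.r.t. volume
    return np.column_stack([dc_dcl, dc_dv])

def d_optimal_times(candidates, n_times=3, **pk):
    """Exhaustive D-optimal design: pick the subset of sampling times that
    maximizes det(J^T J), i.e. the information about (CL, V) under additive error."""
    best, best_det = None, -np.inf
    for combo in itertools.combinations(candidates, n_times):
        J = sensitivities(np.array(combo), **pk)
        det = np.linalg.det(J.T @ J)
        if det > best_det:
            best, best_det = combo, det
    return best, best_det
```

Real tools like PopDes optimize continuous times over population (mixed-effects) models; the exhaustive subset search here only conveys the criterion.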

  5. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. 
It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
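As a toy counterpart to the sample-based optimal transport problems above: in one dimension the L2-optimal map between two equal-size sample sets simply pairs samples by rank, giving a compact, exactly solvable special case (this is standard OT theory, not the thesis's feature-function algorithm).

```python
import numpy as np

def ot_map_1d(source, target):
    """L2-optimal assignment between two equal-size 1D sample sets:
    sorting both sets and pairing ranks is provably optimal in one dimension."""
    src_order = np.argsort(source)
    tgt_sorted = np.sort(target)
    mapped = np.empty_like(source, dtype=float)
    mapped[src_order] = tgt_sorted   # i-th smallest source -> i-th smallest target
    return mapped

def transport_cost(source, mapped):
    """Average squared displacement of the sample map (empirical L2 cost)."""
    return float(np.mean((source - mapped) ** 2))
```

In higher dimensions no such closed form exists, which is what motivates iterative sample-based algorithms like those in the thesis.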

  6. Metaheuristic optimization approaches to predict shear-wave velocity from conventional well logs in sandstone and carbonate case studies

    NASA Astrophysics Data System (ADS)

    Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi

    2018-06-01

    Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP ) is acquired using conventional acoustic logging tools in many drilled wells. But the shear-wave velocity (VS ) is recorded using advanced logging tools only in a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming. So, alternative methods are often used to estimate VS . Several empirical correlations that predict VS from well logging measurements and petrophysical data such as VP , porosity and density have been proposed. However, these empirical relations can only be used in limited cases. Intelligent systems and optimization algorithms are inexpensive, fast and efficient alternatives for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS : teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms to predict VS using conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the estimated VS using each of the employed metaheuristic approaches with observed VS and also with those predicted by Greenberg–Castagna relations. The results indicate that, for both sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation to predict VS . 
The results also demonstrate that in both sandstone and carbonate case studies, the performance of an artificial bee colony algorithm in VS prediction is slightly higher than two other alternative employed approaches.

  7. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
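A minimal stochastic EnKF analysis step, the building block that the SEOD method wraps with information-metric design; the linear observation operator and variances below are illustrative, not from the paper.

```python
import numpy as np

def enkf_update(ensemble, obs, h, obs_var, rng):
    """Stochastic EnKF analysis step. `ensemble` is (n_ens, n_state), `h` maps a
    state vector to an observation vector, `obs_var` is the observation error variance."""
    n_ens = ensemble.shape[0]
    hx = np.array([h(x) for x in ensemble])            # predicted observations
    x_anom = ensemble - ensemble.mean(axis=0)          # state anomalies
    hx_anom = hx - hx.mean(axis=0)                     # observation anomalies
    p_xh = x_anom.T @ hx_anom / (n_ens - 1)            # state-obs cross-covariance
    p_hh = hx_anom.T @ hx_anom / (n_ens - 1) + obs_var * np.eye(hx.shape[1])
    gain = p_xh @ np.linalg.inv(p_hh)                  # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=hx.shape)
    return ensemble + (perturbed - hx) @ gain.T        # updated (analysis) ensemble
```

Optimal design then asks *which* observations to collect so that this update, repeated sequentially, shrinks parameter uncertainty fastest.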

  8. Environmental optimization and shielding for NMR experiments and imaging in the earth's magnetic field.

    PubMed

    Favre, B; Bonche, J P; Meheir, H; Peyrin, J O

    1990-02-01

    For many years, a number of laboratories have been working on applications of very low field NMR. In 1985, our laboratory presented the first NMR images using the earth's magnetic field. However, the use of this technique was limited by the weakness of the signal and the disturbing effects of the environment on the signal-to-noise ratio and on the homogeneity of the static magnetic field. Therefore, experiments had to be performed in places with low environmental disturbances, such as open country or large parks. In 1986, we installed a new station in Lyon, in the town's hostile environment, on the terrace roof of our faculty building. Good NMR signals can now be obtained (with a signal-to-noise ratio better than 200 and a time constant T2 better than 3 s for 200-ml water samples at a temperature of about 40 degrees C). Gradient coils were used to correct the local inhomogeneities of the earth's magnetic field. We show FIDs and MR images of water-filled tubes made with and without these improvements.

  9. High field superconducting properties of Ba(Fe1-xCox)2As2 thin films

    NASA Astrophysics Data System (ADS)

    Hänisch, Jens; Iida, Kazumasa; Kurth, Fritz; Reich, Elke; Tarantini, Chiara; Jaroszynski, Jan; Förster, Tobias; Fuchs, Günther; Hühne, Ruben; Grinenko, Vadim; Schultz, Ludwig; Holzapfel, Bernhard

    2015-11-01

    In general, the critical current density, Jc, of type II superconductors and its anisotropy with respect to magnetic field orientation are determined by intrinsic and extrinsic properties. The Fe-based superconductors of the ‘122’ family with their moderate electronic anisotropies and high yet accessible critical fields (Hc2 and Hirr) are a good model system to study this interplay. In this paper, we explore the vortex matter of optimally Co-doped BaFe2As2 thin films with extended planar and c-axis correlated defects. The temperature and angular dependence of the upper critical field is well explained by a two-band model in the clean limit. The dirty band scenario, however, cannot be ruled out completely. Above the irreversibility field, the flux motion is thermally activated, with the activation energy U0 going to zero at the extrapolated zero-kelvin Hirr value. The anisotropy of the critical current density Jc is influenced both by the Hc2 anisotropy (and therefore by multi-band effects) and by the extended planar and columnar defects present in the sample.

  10. Exploring the extremely low surface brightness sky: distances to 23 newly discovered objects in Dragonfly fields

    NASA Astrophysics Data System (ADS)

    van Dokkum, Pieter

    2016-10-01

    We are obtaining deep, wide field images of nearby galaxies with the Dragonfly Telephoto Array. This telescope is optimized for low surface brightness imaging, and we are finding many low surface brightness objects in the Dragonfly fields. In Cycle 22 we obtained ACS imaging for 7 galaxies that we had discovered in a Dragonfly image of the galaxy M101. Unexpectedly, the ACS data show that only 3 of the galaxies are members of the M101 group, and the other 4 are very large Ultra Diffuse Galaxies (UDGs) at much greater distance. Building on our Cycle 22 program, here we request ACS imaging for 23 newly discovered low surface brightness objects in four Dragonfly fields centered on the galaxies NGC 1052, NGC 1084, NGC 3384, and NGC 4258. The immediate goals are to construct the satellite luminosity functions in these four fields and to constrain the number density of UDGs that are not in rich clusters. More generally, this complete sample of extremely low surface brightness objects provides the first systematic insight into galaxies whose brightness peaks at >25 mag/arcsec^2.

  11. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to an experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays based on the established light field camera model are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging-screen utilization, and depth-of-field shooting range.

  12. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.

  13. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing a most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design, which determines well placement, the number of fracturing stages, and fracture lengths, is defined by specifying a set of integer ordered blocks to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables such as bottom hole pressures or production rates for well operations are real valued. Shale gas development problems, therefore, can be mathematically formulated with mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied for mixed integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.

  14. Optimization of Pockels electric field in transverse modulated optical voltage sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yifan; Xu, Qifeng; Chen, Kun-Long; Zhou, Jie

    2018-05-01

    This paper investigates the possibilities of optimizing the Pockels electric field in a transverse modulated optical voltage sensor with a spherical electrode structure. The simulations show that due to the edge effect and the electric field concentrations and distortions, the electric field distributions in the crystal are non-uniform. In this case, a tiny variation in the light path leads to an integral error of more than 0.5%. Moreover, a 2D model cannot effectively represent the edge effect, so a 3D model is employed to optimize the electric field distributions. Furthermore, a new method to attach a quartz crystal to the electro-optic crystal along the electric field direction is proposed to improve the non-uniformity of the electric field. The integral error is therefore reduced from 0.5% to 0.015% or less. The proposed method is simple, practical and effective, and it has been validated by numerical simulations and experimental tests.

  15. Acquisition of High Field Nuclear Magnetic Resonance Spectrometers for Research in Molecular Structure, Function and Dynamics

    DTIC Science & Technology

    2011-09-01

    Fbg αC 242-424. DNA for expressing Fbg αC 242-424 and FXIII A2 in E. coli have been obtained from collaborators. Strategies for expressing and...the coming months. It will be important to verify that the expressed FXIII A2 is active and that the Fbg αC 242-424 can serve as an effective...optimized. For the larger substrate Fbg αC 242-424, we will need to proteolytically digest the quenched kinetic samples with chymotrypsin prior to

  16. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…

  17. Hydraulic Fracturing of 403 Shallow Diatomite Wells in South Belridge Oil Field, Kern County, California, in 2014

    NASA Astrophysics Data System (ADS)

    Wynne, D. B.; Agusiegbe, V.

    2015-12-01

    We examine all 403 Hydraulic Fracture (HF) jobs performed by Aera Energy, LLC, in the South Belridge oil field, Kern County, CA in 2014. HFs in the South Belridge oil field are atypical among North American plays because the reservoir is shallow and produced via vertical wells. Our data set constitutes 88% of all HF jobs performed in CA oil fields in calendar year 2014. The South Belridge field produces 11% of California's oil, and the shallow HFs performed here differ from most HFs performed elsewhere. We discuss fracture modeling methods, summary statistics, and modeled fracture dimensions and their relationships to depth and reservoir properties. The 403 HFs were made in the diatomite-dominated Reef Ridge member of the Monterey Formation. The HFs began at an average depth of 1047 feet below ground (ft TVD) and extended an average of 626 ft vertically downward. The deepest initiation of HF was at 2380 ft and the shallowest cessation was at 639 ft TVD. The average HF was performed using 1488 BBL (62,496 gallons) of water. The HFs were performed in no more than 6 stages and nearly all were completed within one day. We (1) compare metrics of the South Belridge sample group with recent, larger "all-CA" and nationwide samples; and (2) conclude that if the relationships among reservoir properties, well completion and HF are well understood, shallow diatomite HF may be optimized to enhance production while minimizing environmental impact.

  18. Optimization of pre-sowing magnetic field doses through RSM in pea

    NASA Astrophysics Data System (ADS)

    Iqbal, M.; Ahmad, I.; Hussain, S. M.; Khera, R. A.; Bokhari, T. H.; Shehzad, M. A.

    2013-09-01

    Seed pre-sowing magnetic field treatment has been reported to induce biochemical and physiological changes. In the present study, response surface methodology was used to deduce optimal magnetic field doses. Improved growth and yield responses in the pea cultivar were achieved using a rotatable central composite design and multivariate data analysis. Growth parameters such as root and shoot fresh masses and lengths, as well as yield, were enhanced at certain magnetic field levels. The chlorophyll contents were also enhanced significantly versus the control. Low magnetic field strength with longer exposure, or high field strength with shorter exposure, was found to be optimal for maximizing root fresh mass, chlorophyll `a' content, and green pod yield per plant, and a similar trend was observed for the other measured parameters. The results indicate that pre-sowing magnetic field seed treatment can be used in practice to enhance growth and yield in the pea cultivar, and response surface methodology was found to be an efficient experimental tool for optimizing the treatment level to obtain the maximum response of interest.
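The response-surface step can be illustrated by fitting a second-order model and solving for its stationary point; the two factors (e.g. field strength and exposure time) and the synthetic response below are hypothetical, not the paper's data.

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of the second-order RSM model
    y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def stationary_point(beta):
    """Candidate optimum of the fitted surface: solve grad = 0, i.e.
    [[2*b3, b5], [b5, 2*b4]] @ x = -[b1, b2]."""
    b0, b1, b2, b3, b4, b5 = beta
    A = np.array([[2 * b3, b5], [b5, 2 * b4]])
    return np.linalg.solve(A, -np.array([b1, b2]))
```

In a real central composite design the design points are the factorial, axial and center runs, and the stationary point is checked against the eigenvalues of `A` to confirm it is a maximum.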

  19. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
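A sketch of the classical two-level optimal allocation that this line of work extends to cost-effectiveness outcomes: for a fixed budget, the cluster size minimizing the variance of the treatment-effect estimate has a closed form in the ICC and the cluster-to-person cost ratio. The numbers in the usage are invented.

```python
import math

def optimal_allocation(budget, cost_cluster, cost_person, icc):
    """Optimal persons-per-cluster and cluster count for a two-level trial:
    n* = sqrt((cost_cluster / cost_person) * (1 - icc) / icc) minimizes the
    variance of the treatment-effect estimate for a fixed total budget."""
    n = math.sqrt((cost_cluster / cost_person) * (1.0 - icc) / icc)
    k = budget / (cost_cluster + n * cost_person)   # clusters affordable at this n
    return n, k

def effect_variance(k, n, icc, sigma2=1.0):
    """Variance of the treatment-effect estimate, up to a constant:
    the design effect 1 + (n - 1) * icc divided by the total sample size k * n."""
    return sigma2 * (1.0 + (n - 1.0) * icc) / (k * n)
```

The cost-effectiveness setting of the paper adds the cost ICC, the cost-effect correlations and the variance ratio to this trade-off, but the budget-constrained structure is the same.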

  20. Determination of trace labile copper in environmental waters by magnetic nanoparticle solid phase extraction and high-performance chelation ion chromatography.

    PubMed

    Wei, Z; Sandron, S; Townsend, A T; Nesterenko, P N; Paull, B

    2015-04-01

    Cobalt magnetic nanoparticles surface functionalised with iminodiacetic acid were evaluated as a nano-particulate solid phase extraction absorbent for copper ions (Cu(2+)) from environmental water samples. Using an external magnetic field, the collector nanoparticles could be separated from the aqueous phase, and adsorbed ions simply decomplexed using dilute HNO3. Effects of pH, buffer concentration, sample and sorbent volume, extraction equilibrium time, and interfering ion concentration on extraction efficiency were investigated. Optimal conditions were then applied to the extraction of Cu(2+) ions from natural water samples, prior to their quantitation using high-performance chelation ion chromatography. The limits of detection (LOD) of the combined extraction and chromatographic method were ~0.1 ng ml(-1), based upon a 100-fold preconcentration factor (chromatographic performance: LOD=9.2 ng ml(-1) Cu(2+)), with an analytical linear range from 20 to 5000 ng ml(-1) and relative standard deviations of 4.9% (c=1000 ng ml(-1), n=7). Accuracy and precision of the combined approach were verified using a certified reference standard estuarine water sample (SLEW-2) and comparison of sample determinations with sector field inductively coupled plasma mass spectrometry. Recoveries from the addition of Cu(2+) to impacted estuarine and rain water samples were 103.5% and 108.5%, respectively. Coastal seawater samples, both with and without prior UV irradiation and dissolved organic matter removal, were also investigated using the new methodology. The effect of DOM concentration on copper availability was demonstrated. Copyright © 2015. Published by Elsevier B.V.

  1. Solid-Phase Extraction (SPE): Principles and Applications in Food Samples.

    PubMed

    Ötles, Semih; Kartal, Canan

    2016-01-01

    Solid-Phase Extraction (SPE) is a sample preparation method practised in numerous application fields owing to its many advantages over traditional methods. SPE was invented as an alternative to liquid/liquid extraction and eliminated several of its disadvantages, such as the use of large amounts of solvent, extended operation times and procedure steps, potential sources of error, and high cost. Moreover, SPE can optionally be combined with other analytical methods and sample preparation techniques. Through its versatility, SPE is a useful tool for many purposes: isolation, concentration, purification and clean-up are the main approaches in the practice of this method. Foods represent a complicated matrix and can occur in different physical states, such as solid, viscous or liquid. Therefore, the sample preparation step plays a particularly important role in the determination of specific compounds in foods. SPE offers many opportunities not only for the analysis of a large diversity of food samples but also for optimization and advances. This review aims to provide a comprehensive overview of the basic principles of SPE and its applications for many analytes in the food matrix.

  2. Planning for the Paleomagnetic Investigations of Returned Samples from Mars

    NASA Astrophysics Data System (ADS)

    Weiss, B. P.; Beaty, D. W.; McSween, H. Y., Jr.; Czaja, A. D.; Goreva, Y.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Pratt, L. M.; Sephton, M. A.; Steele, A.; Hays, L. E.; Meyer, M. A.

    2016-12-01

    The red planet is a magnetic planet. Mars' iron-rich surface is strongly magnetized, likely dating back to the Noachian period when the surface may have been habitable. Paleomagnetic measurements of returned samples could transform our understanding of the Martian dynamo and its connection to climatic and planetary thermal evolution. Because the original orientations of Martian meteorites are unknown, all Mars paleomagnetic studies to date have only been able to measure the paleointensity of the Martian field. Paleomagnetic studies from returned Martian bedrock samples would provide unprecedented geologic context and the first paleodirectional information on Martian fields. The Mars 2020 rover mission seeks to accomplish the first leg by preparing for the potential return of 31 1 cm-diameter cores of Martian rocks. The Returned Sample Science Board (RSSB) has been tasked to advise the Mars 2020 mission in how to best select and preserve samples optimized for paleomagnetic measurements. A recent community-based study (Weiss et al., 2014) produced a ranked list of key paleomagnetism science objectives, which included: 1) Determine the intensity of the Martian dynamo 2) Characterize the dynamo reversal frequency with magnetostratigraphy 3) Constrain the effects of heating and aqueous alteration on the samples 4) Constrain the history of Martian tectonics Guided by these objectives, the RSSB has proposed four key sample quality criteria to the Mars 2020 mission: (a) no exposure to fields >200 mT, (b) no exposure to temperatures >100 °C, (c) no exposure to pressures >0.1 GPa, and (d) acquisition of samples that are absolutely oriented with respect to bedrock with a half-cone uncertainty of <5°. Our measurements of a Mars 2020 prototype drill have found that criteria (a-c) should be met by the drilling process. 
Furthermore, the core plate strike and dip will be measured to better than 5° for intact drill cores; we are working with the mission to establish ways to determine the core's angular orientation with respect to rotation around the drill hole axis. The next stage of our work is to establish whether and how these sample criteria would be maintained throughout the potential downstream missions that would return the samples to Earth.
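
As a minimal illustration, the four RSSB sample-quality criteria quoted above can be encoded as a screening check. The numeric thresholds come from the abstract; the record structure and field names are hypothetical.

```python
# Illustrative screen against the four RSSB sample-quality criteria.
# Thresholds are from the abstract; the sample-record keys are invented.

def paleomag_violations(sample):
    """Return the list of criteria a sample record (dict) violates."""
    violations = []
    if sample["peak_field_mT"] > 200.0:        # (a) no fields > 200 mT
        violations.append("field")
    if sample["peak_temp_C"] > 100.0:          # (b) no temperatures > 100 °C
        violations.append("temperature")
    if sample["peak_pressure_GPa"] > 0.1:      # (c) no pressures > 0.1 GPa
        violations.append("pressure")
    if sample["orientation_err_deg"] >= 5.0:   # (d) half-cone uncertainty < 5°
        violations.append("orientation")
    return violations

core = {"peak_field_mT": 150.0, "peak_temp_C": 40.0,
        "peak_pressure_GPa": 0.05, "orientation_err_deg": 3.0}
print(paleomag_violations(core))   # an acceptable core yields no violations
```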

  3. Field Performance of an Optimized Stack of YBCO Square “Annuli” for a Compact NMR Magnet

    PubMed Central

    Hahn, Seungyong; Voccio, John; Bermond, Stéphane; Park, Dong-Keun; Bascuñán, Juan; Kim, Seok-Beom; Masaru, Tomita; Iwasa, Yukikazu

    2011-01-01

    The spatial field homogeneity and time stability of a trapped field generated by a stack of YBCO square plates with a center hole (square “annuli”) were investigated. By optimally stacking magnetized square annuli, we aim to construct a compact NMR magnet. The stacked magnet consists of 750 thin YBCO plates, each 40-mm square and 80-μm thick with a 25-mm bore, and has a Ø10 mm room-temperature access for NMR measurement. To improve the spatial field homogeneity of the 750-plate stack (YP750), a three-step optimization was performed: 1) statistical selection of the best plates from the supply plates; 2) field homogeneity measurement of multi-plate modules; and 3) optimal assembly of the modules to maximize field homogeneity. In this paper, we present analytical and experimental results on field homogeneity and temporal stability at 77 K for YP750 and for a hybrid stack, YPB750, in which two YBCO bulk annuli, each Ø46 mm and 16-mm thick with a 25-mm bore, are added to YP750, one at the top and the other at the bottom. PMID:22081753

  4. Sound-field reproduction in-room using optimal control techniques: simulations in the frequency domain.

    PubMed

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-02-01

    This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of the configurations of sensing microphones and reproduction loudspeakers. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room sound-field reproduction using active control can achieve a residual normalized squared error significantly lower than that of open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.
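
The frequency-domain optimal control formulation can be sketched in free field as a Tikhonov-regularized least-squares problem: given a transfer matrix G from monopole loudspeakers to sensing microphones, the source strengths minimizing the reproduction error are q = (GᴴG + λI)⁻¹Gᴴp. The geometry, frequency, and regularization below are arbitrary assumptions for illustration, not the paper's configuration.

```python
import numpy as np

c, f = 343.0, 200.0                 # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c               # wavenumber [rad/m]

rng = np.random.default_rng(0)
sources = rng.uniform(-2, 2, size=(16, 2))    # monopole loudspeakers [m]
mics = rng.uniform(-0.5, 0.5, size=(32, 2))   # sensing microphones [m]

# Free-field monopole transfer matrix G[mic, source] = e^{-jkr} / (4 pi r)
r = np.linalg.norm(mics[:, None, :] - sources[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

# Target sound field at the microphones: unit plane wave along +x
p_target = np.exp(-1j * k * mics[:, 0])

# Tikhonov-regularized optimal control solution for the source strengths
lam = 1e-6
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(len(sources)),
                    G.conj().T @ p_target)

# Residual normalized squared reproduction error at the sensors
err = np.linalg.norm(G @ q - p_target) ** 2 / np.linalg.norm(p_target) ** 2
```

Because the regularized cost at the optimum cannot exceed its value at q = 0, the normalized error is always below 1; how far below depends on the source/sensor geometry.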

  5. Optimized Pan-species and Speciation Duplex Real-time PCR Assays for Plasmodium Parasites Detection in Malaria Vectors

    PubMed Central

    Sandeu, Maurice Marcel; Moussiliou, Azizath; Moiroux, Nicolas; Padonou, Gilles G.; Massougbodji, Achille; Corbel, Vincent; Tuikue Ndam, Nicaise

    2012-01-01

    Background An accurate method for detecting malaria parasites in the mosquito vector remains an essential component of vector control. The enzyme-linked immunosorbent assay specific for circumsporozoite protein (ELISA-CSP) is the gold-standard method for detecting malaria parasites in the vector, despite some limitations. Here, we optimized multiplex real-time PCR assays to accurately detect minor populations in mixed infections with multiple Plasmodium species in the African malaria vectors Anopheles gambiae and Anopheles funestus. Methods Complementary TaqMan-based real-time PCR assays that detect Plasmodium species using specific primers and probes were first evaluated on artificial mixtures of different targets inserted in plasmid constructs. The assays were further validated in comparison with ELISA-CSP on 200 field-caught Anopheles gambiae and Anopheles funestus mosquitoes collected in two localities in southern Benin. Results The validation of the duplex real-time PCR assays on the plasmid mixtures demonstrated robust specificity and sensitivity for detecting distinct targets. Using a panel of mosquito specimens, the real-time PCR showed a relatively high sensitivity (88.6%) and specificity (98%), compared to ELISA-CSP as the reference standard. The agreement between both methods was “excellent” (κ = 0.8, P<0.05). The relative quantification of Plasmodium DNA between the two Anopheles species analyzed showed no significant difference (P = 0.2). All infected mosquito samples contained Plasmodium falciparum DNA, and mixed infections with P. malariae and/or P. ovale were observed in 18.6% and 13.6% of An. gambiae and An. funestus, respectively. Plasmodium vivax was found in none of the mosquito samples analyzed. Conclusion This study presents an optimized method for detecting the four Plasmodium species in the African malaria vectors.
The study highlights substantial discordance with traditional ELISA-CSP, pointing out the utility of employing an accurate molecular diagnostic tool for detecting malaria parasites in field mosquito populations. PMID:23285168
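
For reference, the agreement statistics quoted above can be computed from a 2×2 confusion matrix against the reference standard. The cell counts below are hypothetical, chosen only to approximately reproduce the reported figures (n = 200, sensitivity 88.6%, specificity 98%, κ ≈ 0.8); the true counts are not given in the abstract.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa of a test vs. a reference."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                     # true-positive rate
    spec = tn / (tn + fp)                     # true-negative rate
    po = (tp + tn) / n                        # observed agreement
    # chance agreement: both-positive plus both-negative by marginal rates
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)              # Cohen's kappa
    return sens, spec, kappa

# Hypothetical counts: PCR result (rows) vs. ELISA-CSP reference (columns)
sens, spec, kappa = diagnostic_metrics(tp=133, fp=1, fn=17, tn=49)
```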

  6. Constrained optimization for position calibration of an NMR field camera.

    PubMed

    Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke

    2018-07-01

    Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is by switching the gradients with known strengths and calculating the positions using the phases of the FIDs. We investigated improving the accuracy of estimating the probe positions and analyzed the effect of inaccurate estimations on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was the most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR field camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
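
The baseline estimation step described above can be sketched as follows: with an ideal linear gradient G applied for a time τ, a probe at position r accrues extra FID phase Δφ = γ(G·r)τ, so switching the x, y, and z gradients in turn gives three linear equations for r. The gradient strengths, timing, and probe position below are invented, and phase unwrapping is ignored.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6   # 1H gyromagnetic ratio [rad/s/T]

def probe_position(dphi, gradients, tau):
    """Solve GAMMA * (G @ r) * tau = dphi for the probe position r."""
    A = GAMMA * tau * np.asarray(gradients)   # one gradient vector per row [T/m]
    r, *_ = np.linalg.lstsq(A, np.asarray(dphi), rcond=None)
    return r

# Forward-simulate a probe at a known position, then recover it
r_true = np.array([0.02, -0.01, 0.035])                   # probe position [m]
G = np.array([[5e-3, 0, 0], [0, 5e-3, 0], [0, 0, 5e-3]])  # switched gradients [T/m]
tau = 1e-4                                                # encoding time [s]
dphi = GAMMA * tau * G @ r_true                           # measured FID phases [rad]
r_est = probe_position(dphi, G, tau)
```

A least-squares solve is used so the same sketch extends to more than three gradient switches; the abstract's improved methods would replace the ideal gradient matrix with measured, nonlinear fields and add relative position constraints.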

  7. Optimization of gas condensate Field A development on the basis of "reservoir - gathering facilities system" integrated model

    NASA Astrophysics Data System (ADS)

    Demidova, E. A.; Maksyutina, O. V.

    2015-02-01

    Many gas condensate fields are challenged by liquid loading and condensate banking, so gas production declines with time. In this paper, hydraulic fracturing treatment was considered as a method to improve well productivity and thereby counteract the factors that lead to production decline. The paper presents an analysis of development optimization for gas condensate Field A, with the goal of maintaining gas production at the 2013 level for 8 years while taking these factors into account. To optimize the development of the field, an integrated model was created, coupling models of the reservoir, the wells, and the surface facilities into a single model of the field. This approach allowed each element to be optimized separately while accounting for the mutual influence of the elements. Using the integrated model, five development scenarios were analyzed and an optimal scenario was chosen. The NPV of this scenario equals 7,277 mln RUR, with cumulative gas production of 12,160.6 mln m3 and cumulative condensate production of 1.8 mln tons.

  8. Epi-Fluorescence Microscopy

    PubMed Central

    Webb, Donna J.; Brown, Claire M.

    2012-01-01

    Epi-fluorescence microscopy is available in most life sciences research laboratories, and when optimized can be a central laboratory tool. In this chapter, the epi-fluorescence light path is introduced and the various components are discussed in detail. Recommendations are made for incident lamp light sources, excitation and emission filters, dichroic mirrors, objective lenses, and charge-coupled device (CCD) cameras in order to obtain the most sensitive epi-fluorescence microscope. The even illumination of metal-halide lamps combined with new “hard” coated filters and mirrors, a high resolution monochrome CCD camera, and a high NA objective lens are all recommended for high resolution and high sensitivity fluorescence imaging. Recommendations are also made for multicolor imaging with the use of monochrome cameras, motorized filter turrets, individual filter cubes, and corresponding dyes that are the best choice for sensitive, high resolution multicolor imaging. Images should be collected using Nyquist sampling and should be corrected for background intensity contributions and nonuniform illumination across the field of view. Photostable fluorescent probes and proteins that absorb a lot of light (i.e., high extinction coefficients) and generate a lot of fluorescence signal (i.e., high quantum yields) are optimal. A neuronal immunofluorescence labeling protocol is also presented. Finally, in order to maximize the utility of sensitive wide-field microscopes and generate the highest resolution images with high signal-to-noise, advice for combining wide-field epi-fluorescence imaging with restorative image deconvolution is presented. PMID:23026996
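
The recommended corrections for background intensity and nonuniform illumination can be sketched with the standard flat-field formula, corrected = (raw − dark) / (flat − dark) × mean(flat − dark). The formula is common practice rather than a quotation from the chapter, and the images below are synthetic.

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Remove camera offset and uneven illumination from a raw image."""
    num = raw.astype(float) - dark     # background-subtracted signal
    den = flat.astype(float) - dark    # illumination profile
    return num / den * den.mean()      # rescale to preserve mean intensity

# Synthetic example: a uniform specimen under a lamp falling off across x
dark = np.full((64, 64), 100.0)                      # camera offset image
illum = 1.0 + 0.5 * np.linspace(0, 1, 64)[None, :]   # illumination falloff
flat = dark + 1000.0 * illum                         # image of an empty field
raw = dark + 800.0 * illum                           # uniform fluorescent sample
corrected = flat_field_correct(raw, flat, dark)      # flat result, as expected
```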

  9. Study optimizes gas lift in Gulf of Suez field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Waly, A.A.; Darwish, T.A.; Osman Salama, A.

    1996-06-24

    A study using PVT data combined with fluid and multiphase flow correlations optimized gas lift in the Ramadan field, Nubia C, oil wells, in the Gulf of Suez. Selection of appropriate correlations followed by multiphase flow calculations at various points of injection (POI) were the first steps in the study. After determining the POI for each well from actual pressure and temperature surveys, the study constructed lift gas performance curves for each well. Actual and optimum operating conditions were compared to determine the optimal gas lift. The study indicated a net 2,115 bo/d could be gained from implementing its recommendations. The actual net oil gained as a result of this optimization and injected gas reallocation was 2,024 bo/d. The paper discusses the Ramadan field, fluid properties, multiphase flow, production optimization, and results.

  10. Topology optimized gold nanostrips for enhanced near-infrared photon upconversion

    NASA Astrophysics Data System (ADS)

    Vester-Petersen, Joakim; Christiansen, Rasmus E.; Julsgaard, Brian; Balling, Peter; Sigmund, Ole; Madsen, Søren P.

    2017-09-01

    This letter presents a topology optimization study of metal nanostructures optimized for electric-field enhancement in the infrared spectrum. Coupling of such nanostructures with suitable ions allows for an increased photon-upconversion yield, with one application being an increased solar-cell efficiency by exploiting the long-wavelength part of the solar spectrum. In this work, topology optimization is used to design a periodic array of two-dimensional gold nanostrips for electric-field enhancements in a thin film doped with upconverting erbium ions. The infrared absorption band of erbium is utilized by simultaneously optimizing for two polarizations, up to three wavelengths, and three incident angles. Geometric robustness towards manufacturing variations is implemented by considering three different design realizations simultaneously in the optimization. The polarization-averaged field enhancement for each design is evaluated over an 80 nm wavelength range and a ±15-degree incident angle span. The highest polarization-averaged field enhancement is 42.2, varying by at most 2% under ±5 nm near-uniform design perturbations at three different wavelengths (1480 nm, 1520 nm, and 1560 nm). The proposed method is generally applicable to many optical systems and is therefore not limited to enhancing photon upconversion.

  11. Observation of the Field, Current and Force Distributions in an Optimized Superconducting Levitation with Translational Symmetry

    NASA Astrophysics Data System (ADS)

    Ye, Chang-Qing; Ma, Guang-Tong; Liu, Kun; Wang, Jia-Su

    2017-01-01

    The superconducting levitation realized by immersing high-temperature superconductors (HTSs) in a nonuniform magnetic field is deemed promising in a wide range of industrial applications such as maglev transportation and kinetic energy storage. Using a well-established electromagnetic model to mathematically describe the HTS, we have developed an efficient scheme that is capable of intelligently and globally optimizing the permanent magnet guideway (PMG) with single or multiple HTSs levitated above it for maglev transportation applications. With maximizing the levitation force as the principal objective, we optimized the dimensions of a Halbach-derived PMG to observe how the field, current, and force distribute inside the HTSs when the optimized situation is achieved. Using a pristine PMG as a reference, we have analyzed the critical issues for enhancing the levitation force through comparing the field, current and force distributions between the optimized and pristine PMGs. It was also found that the optimized dimensions of the PMG are highly dependent upon the levitated HTS. Moreover, the guidance force is not always contradictory to the levitation force and may also be enhanced when the levitation force is prescribed as the principal objective, depending on the configuration of the levitation system and the lateral displacement.

  12. Fiber laser-microscope system for femtosecond photodisruption of biological samples

    PubMed Central

    Yavaş, Seydi; Erdogan, Mutlu; Gürel, Kutan; Ilday, F. Ömer; Eldeniz, Y. Burak; Tazebay, Uygar H.

    2012-01-01

    We report on the development of an ultrafast fiber laser-microscope system for femtosecond photodisruption of biological targets. A mode-locked Yb-fiber laser oscillator generates few-nJ pulses at 32.7 MHz repetition rate, amplified up to ∼125 nJ at 1030 nm. Following dechirping in a grating compressor, ∼240 fs-long pulses are delivered to the sample through a diffraction-limited microscope, which allows real-time imaging and control. The laser can generate arbitrary pulse patterns, formed by two acousto-optic modulators (AOM) controlled by a custom-developed field-programmable gate array (FPGA) controller. This capability opens the route to fine optimization of the ablation processes and management of thermal effects. Sample position, exposure time and imaging are all computerized. The capability of the system to perform femtosecond photodisruption is demonstrated through experiments on tissue and individual cells. PMID:22435105

  13. Optimized exploration resource evaluation using the MDT tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zainun, K.; Trice, M.L.

    1995-10-01

    This paper discusses exploration cost reduction and improved resource delineation benefits that were realized by use of the MDT (Modular Formation Dynamic Tester) tool to evaluate exploration prospects in the Malay Basin of the South China Sea. Frequently, open hole logs do not clearly define fluid content due to low salinity of the connate water and the effect of shale laminae or bioturbation in the silty, shaley sandstones. Therefore, extensive pressure measurements and fluid sampling are required to define fluid type and contacts. This paper briefly describes the features of the MDT tool which were utilized to reduce rig time usage while providing more representative fluid samples and illustrates usage of these features with field examples. The tool has been used on several exploration wells and a comparison of MDT pressures and samples to results obtained with earlier vintage tools and production tests is also discussed.
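
The pressure-gradient analysis that such formation-tester surveys enable can be sketched as follows: fit pressure versus depth in two intervals, classify each fluid by its gradient, and place the fluid contact at the intersection of the fitted lines. The data are synthetic and the gradient cutoffs are typical textbook values, not figures from this paper.

```python
import numpy as np

def fit_gradient(depth_ft, pressure_psi):
    """Least-squares pressure gradient [psi/ft] and intercept over an interval."""
    slope, intercept = np.polyfit(depth_ft, pressure_psi, 1)
    return slope, intercept

def classify(grad):
    """Rough fluid typing from pressure gradient (typical textbook cutoffs)."""
    if grad < 0.15:
        return "gas"
    if grad < 0.39:
        return "oil"
    return "water"

# Synthetic survey: oil leg (0.33 psi/ft) over water leg (0.45 psi/ft),
# with the oil-water contact placed at 7000 ft
d_oil = np.array([6800, 6850, 6900, 6950])
p_oil = 3000.0 + 0.33 * (d_oil - 6800)
d_wat = np.array([7050, 7100, 7150, 7200])
p_wat = 3000.0 + 0.33 * 200 + 0.45 * (d_wat - 7000)

g1, b1 = fit_gradient(d_oil, p_oil)
g2, b2 = fit_gradient(d_wat, p_wat)
contact = (b2 - b1) / (g1 - g2)   # depth where the two fitted lines intersect
```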

  14. Optimized Shielding and Fabrication Techniques for TiN and Al Microwave Resonators

    NASA Astrophysics Data System (ADS)

    Kreikebaum, John Mark; Kim, Eunseong; Livingston, William; Dove, Allison; Calusine, Gregory; Hover, David; Rosenberg, Danna; Oliver, William; Siddiqi, Irfan

    We present a systematic study of the effects of shielding and packaging on the internal quality factor (Qi) of Al and TiN microwave resonators designed for use in qubit readout. Surprisingly, TiN samples with Qi = 1.3×10^6 investigated at 100 mK exhibited no significant changes in linewidth when operated without magnetic shielding and in an open cryo-package. In contrast, Al resonators showed systematic improvement in Qi with each successive shield. Measurements were performed in an adiabatic demagnetization refrigerator, where typical ambient fields of 0.2 mT are present at the sample stage. We discuss the effect of 100 mK and 500 mK Cu radiation shields and cryoperm magnetic shielding on resonator Q as a function of temperature and input power in samples prepared with a variety of surface treatments, fabrication recipes, and embedding circuits. This research was supported by the ARO and IARPA.

  15. Origin and Correction of Magnetic Field Inhomogeneity at the Interface in Biphasic NMR Samples

    PubMed Central

    Martin, Bryan T.; Chingas, G. C.

    2012-01-01

    The use of susceptibility matching to minimize spectral distortion of biphasic samples layered in a standard 5 mm NMR tube is described. The approach uses magic angle spinning (MAS) to first extract chemical shift differences by suppressing bulk magnetization. Then, using biphasic coaxial samples, magnetic susceptibilities are matched by titration with a paramagnetic salt. The matched phases are then layered in a standard NMR tube where they can be shimmed and examined. Line widths of two distinct spectral lines, selected to characterize homogeneity in each phase, are simultaneously optimized. Two-dimensional distortion-free, slice-resolved spectra of an octanol/water system illustrate the method. These data are obtained using a 2D stepped-gradient pulse sequence devised for this application. Advantages of this sequence over slice-selective methods are that acquisition efficiency is increased and processing requires only conventional software. PMID:22459062

  16. Development and evaluation of a recombinant-glycoprotein-based latex agglutination test for rabies virus antibody assessment.

    PubMed

    Jemima, Ebenezer Angel; Manoharan, Seeralan; Kumanan, Kathaperumal

    2014-08-01

    The measurement of neutralizing antibodies induced by the glycoprotein of rabies virus is indispensable for assessing the level of protective immunity in animals or humans. The rapid fluorescent focus inhibition test (RFFIT) has been approved by WHO and is the most widely used method to measure the virus-neutralizing antibody content in serum, but a rapid test system would be of great value for screening large numbers of serum samples. To develop and evaluate a latex agglutination test (LAT) for measuring rabies virus antibodies, a recombinant glycoprotein was expressed in an insect cell system and purified, and the protein was coated onto latex beads at concentrations of 0.1, 0.25, 0.5, 0.75, and 1 mg/ml to determine the optimal coating concentration. It was found that 0.5 mg/ml of recombinant protein was optimal for coating latex beads, and this concentration was used to sensitize the latex beads for screening of dog serum samples. Grading of LAT results was done with standard reference serum with known antibody titers. A total of 228 serum samples were tested, of which 145 were positive by both RFFIT and LAT, and the specificity was found to be 100%. In RFFIT, 151 samples were positive; the sensitivity was found to be 96.03%, and the accuracy/concordance was found to be 97.39%. A rapid field test, a latex agglutination test (LAT), was developed and evaluated for rabies virus antibody assessment using recombinant glycoprotein of rabies virus expressed in an insect cell system.

  17. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm as used in a commercial treatment planning system. Seven IMRT, H and N, and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reduction in the numbers of MU and segments were found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.

  18. Heliostat field cost reduction by `slope drive' optimization

    NASA Astrophysics Data System (ADS)

    Arbes, Florian; Weinrebe, Gerhard; Wöhrbach, Markus

    2016-05-01

    An algorithm to optimize power tower heliostat fields employing heliostats with so-called slope drives is presented. It is shown that a field using heliostats with the slope drive axes configuration has the same performance as a field with conventional azimuth-elevation tracking heliostats. Even though heliostats with the slope drive configuration have a limited tracking range, field groups of heliostats with different axes or different drives are not needed for different positions in the heliostat field. The impacts of selected parameters on a benchmark power plant (PS10 near Seville, Spain) are analyzed.

  19. A microscopy method for scanning transmission electron microscopy imaging of the antibacterial activity of polymeric nanoparticles on a biofilm with an ionic liquid.

    PubMed

    Takahashi, Chisato; Muto, Shunsuke; Yamamoto, Hiromitsu

    2017-08-01

    In this study, we developed a scanning transmission electron microscopy (STEM) method for imaging the antibacterial activity of organic polymeric nanoparticles (NPs) toward biofilms formed by Staphylococcus epidermidis bacterial cells, for optimizing NPs to treat biofilm infections. The combination of a sample preparation method using a hydrophilic ionic liquid (IL) and STEM observation using a cooling holder eliminates the need for specialized equipment and techniques for biological sample preparation. The annular dark-field STEM results indicated that the two types of biodegradable poly-(DL-lactide-co-glycolide) (PLGA) NPs, PLGA modified with chitosan (CS) and clarithromycin (CAM)-loaded + CS-modified PLGA, prepared by emulsion solvent diffusion, exhibited different antibacterial activities at the nanoscale. To assess damage to the sample during STEM observation, we observed the PLGA NPs and the biofilm treated with PLGA NPs by both the conventional method and the newly developed method. The optimized method allows the microstructure of the biofilm treated with PLGA NPs to be maintained for 25 min at a current flow of 40 pA. The developed simple sample preparation method should be helpful for understanding the interaction of drugs with target materials. In addition, this technique could contribute to the visualization of other deformable composite materials at the nanoscale level. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 1432-1437, 2017.

  20. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

    Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  1. Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles

    PubMed Central

    Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. 
Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631
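
The classification view of receptive-field estimation described above can be illustrated with a toy simulation. This is a hedged sketch, not the paper's implementation: CbRF uses a large-margin classifier, whereas plain logistic regression trained by gradient descent stands in here, and all data are synthetic.

```python
import numpy as np

# Synthetic experiment: a known linear filter drives spiking; we recover it
# by discriminating spike-eliciting from non-spike-eliciting stimuli.
rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.laplace(size=(n, d))                 # non-Gaussian stimulus ensemble
w_true = np.sin(np.linspace(0, np.pi, d))    # ground-truth receptive field
p_spike = 1.0 / (1.0 + np.exp(-(X @ w_true - 2.0)))
y = (rng.random(n) < p_spike).astype(float)  # binary spike / non-spike labels

# Train a linear classifier (logistic regression via batch gradient descent)
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n             # gradient of the log loss
    b -= 0.1 * np.mean(p - y)

# The learned weights recover the true filter (up to scale)
corr = np.corrcoef(w, w_true)[0, 1]
```

Because the simulated neuron is itself linear-nonlinear, the learned weight vector correlates strongly with the true filter; the abstract's point is that a discriminative estimator retains this property even for non-Gaussian, correlated stimuli where STA-based methods can fail.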

  2. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. 
Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.
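
The core idea, reduced to a toy setting, can be sketched as follows: simulate a linear-nonlinear neuron, train a linear large-margin classifier on spike/non-spike labels, and read the receptive field estimate off the learned weights. The filter dimension, sigmoidal nonlinearity, and hinge-loss training loop below are illustrative assumptions, not the paper's actual CbRF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-nonlinear (LN) neuron: a hypothetical 20-dimensional filter
# stands in for a real spectro-temporal receptive field.
dim, n = 20, 5000
w_true = rng.normal(size=dim)
w_true /= np.linalg.norm(w_true)

X = rng.normal(size=(n, dim))                        # stimulus examples
p_spike = 1.0 / (1.0 + np.exp(-3.0 * (X @ w_true)))  # sigmoidal nonlinearity
y = np.where(rng.random(n) < p_spike, 1.0, -1.0)     # spike / non-spike labels

# Linear large-margin classifier trained by subgradient descent on the
# regularized hinge loss (an SVM-style stand-in for the CbRF classifier).
w = np.zeros(dim)
lam = 1e-3
for t in range(1, 201):
    viol = y * (X @ w) < 1.0               # margin violators
    grad = lam * w
    if viol.any():
        grad = grad - (X[viol] * y[viol, None]).mean(axis=0)
    w -= (1.0 / t) * grad

# The normalized weight vector is the receptive field estimate; its
# alignment with the true filter should be close to 1.
w_hat = w / np.linalg.norm(w)
alignment = float(w_hat @ w_true)
print(round(alignment, 2))
```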

  3. Optimization study of direct morphology observation by cold field emission SEM without gold coating.

    PubMed

    He, Dan; Fu, Cheng; Xue, Zhigang

    2018-06-01

Gold coating is a pretreatment generally applied to non-conductive or poorly conductive materials so that their morphology can be examined by scanning electron microscopy (SEM). However, the coating can cause irreversible distortion of and damage to the materials. The present study directly characterized different low-conductivity materials, such as hydroxyapatite, modified poly(vinylidene fluoride) (PVDF) fiber, and zinc oxide nanopillars, by cold field emission scanning electron microscopy (FE-SEM) without a gold coating. According to the characteristics of the low-conductivity materials, various test conditions, such as different working signal modes, accelerating voltages, electron beam spots, and working distances, were examined to determine the best morphological observation settings for each sample. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Novel failure mechanism and improvement for split-gate trench MOSFET with large current under unclamped inductive switch stress

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Yang, Zhuo; Xu, Zhiyuan; Liu, Siyang; Sun, Weifeng; Shi, Longxing; Zhu, Yuanzheng; Ye, Peng; Zhou, Jincheng

    2018-04-01

In this paper, a novel failure mechanism under unclamped inductive switching (UIS) stress for a split-gate trench Metal Oxide Semiconductor Field Effect Transistor (MOSFET) with large current is investigated. The device sample is tested and analyzed in detail. The simulation results demonstrate that the nonuniform potential distribution of the source poly is responsible for the failure. Three structures that improve the device's UIS ruggedness are proposed and verified by TCAD simulation. The best of these structures, a device with source metal inserted into the source poly through contacts in the field oxide, was fabricated and measured. The results demonstrate that the optimized structure can balance the trade-off between UIS ruggedness and static characteristics.

  5. Exchange bias and perpendicular anisotropy study of ultrathin Pt-Co-Pt-IrMn multilayers sputtered on float glass

    NASA Astrophysics Data System (ADS)

    Laval, M.; Lüders, U.; Bobo, J. F.

    2007-09-01

    We have prepared ultrathin Pt-Co-Pt-IrMn polycrystalline multilayers on float-glass substrates by DC magnetron sputtering. We have determined the optimal set of thickness for both Pt layers, the Co layer and the IrMn biasing layer so that these samples exhibit at the same time out-of-plane magnetic anisotropy and exchange bias. Kerr microscopy domain structure imaging evidences an increase of nucleation rate accompanied with inhomogeneous magnetic behavior in the case of exchange-biased films compared to Pt-Co-Pt trilayers. Polar hysteresis loops are measured in obliquely applied magnetic field conditions, allowing us to determine both perpendicular anisotropy effective constant Keff and exchange-bias coupling JE, which are significantly different from the ones determined by standard switching field measurements.

  6. Development of a SIDA-LC-MS/MS Method for the Determination of Phomopsin A in Legumes.

    PubMed

    Schloß, Svenja; Koch, Matthias; Rohn, Sascha; Maul, Ronald

    2015-12-09

A novel method for the determination of phomopsin A (1) in lupin flour, pea flour, and bean flour as well as whole lupin plants was established based on a stable isotope dilution assay (SIDA) with LC-MS/MS, using (15)N6-1 as an isotopically labeled internal standard. Artificially infected samples were used to develop an optimized extraction procedure and sample pretreatment. The limits of detection were 0.5-1 μg/kg for all matrices. The limits of quantitation were 2-4 μg/kg. The method was used to analyze flour samples generated from selected legume seeds and lupin plant samples that had been inoculated with Diaporthe toxica and two further fungal strains. Finally, growing lupin plants infected with D. toxica were investigated to simulate naturally occurring in-field mycotoxicosis. Toxin levels of up to 10.1 μg/kg of 1 were found in the pods and 7.2 μg/kg in the stems and leaves.

  7. Effect of template in MCM-41 on the adsorption of aniline from aqueous solution.

    PubMed

    Yang, Xinxin; Guan, Qingxin; Li, Wei

    2011-11-01

    The effect of the surfactant template cetyltrimethylammonium bromide (CTAB) in MCM-41 on the adsorption of aniline was investigated. Various MCM-41 samples were prepared by controlling template removal using an extraction method. The samples were then used as adsorbents for the removal of aniline from aqueous solution. The results showed that the MCM-41 samples with the template partially removed (denoted as C-MCM-41) exhibited better adsorption performance than MCM-41 with the template completely removed (denoted as MCM-41). The reason for this difference may be that the C-MCM-41 samples had stronger hydrophobic properties and selectivity for aniline because of the presence of the template. The porosity and cationic sites generated by the template play an important role in the adsorption process. The optimal adsorbent with moderate template was achieved by changing the ratio of extractant; it has the potential for promising applications in the field of water pollution control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Online analysis of five organic ultraviolet filters in environmental water samples using magnetism-enhanced monolith-based in-tube solid phase microextraction coupled with high-performance liquid chromatography.

    PubMed

    Mei, Meng; Huang, Xiaojia

    2017-11-24

Due to their endocrine-disrupting properties, organic UV filters pose a great risk to humans and other organisms. Therefore, accurate and effective analytical methods are needed for the determination of UV filters in environmental waters. In this work, a fast, sensitive, and environmentally friendly method combining magnetism-enhanced monolith-based in-tube solid-phase microextraction with high-performance liquid chromatography with diode array detection (ME-MB-IT/SPME-HPLC-DAD) was developed for the online analysis of five organic UV filters in environmental water samples. To extract the UV filters effectively, an ionic liquid-based monolithic capillary column doped with magnetic nanoparticles was prepared by in-situ polymerization and used as the extraction medium of the online ME-MB-IT/SPME-HPLC-DAD system. Several extraction conditions, including the intensity of the magnetic field, the sampling and desorption flow rates, the volumes of sample and desorption solvent, and the pH value and ionic strength of the sample matrix, were optimized thoroughly. Under the optimized conditions, the extraction efficiencies for the five organic UV filters were in the range of 44.0-100%. The limits of detection (S/N=3) and limits of quantification (S/N=10) were 0.04-0.26 μg/L and 0.12-0.87 μg/L, respectively. The precisions, indicated by relative standard deviations (RSDs), were less than 10% for both intra- and inter-day variability. Finally, the developed method was successfully applied to the determination of UV filters in three environmental water samples and satisfactory results were obtained. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The spectral basis of optimal error field correction on DIII-D

    DOE PAGES

    Paz-Soldan, Carlos A.; Buttery, Richard J.; Garofalo, Andrea M.; ...

    2014-04-28

Here, experimental optimum error field correction (EFC) currents found in a wide breadth of dedicated experiments on DIII-D are shown to be consistent with the currents required to null the poloidal harmonics of the vacuum field which drive the kink mode near the plasma edge. This allows the identification of empirical metrics which predict optimal EFC currents with accuracy comparable to that of first-principles modeling which includes the ideal plasma response. While further metric refinements are desirable, this work suggests optimal EFC currents can be effectively fed-forward based purely on knowledge of the vacuum error field and basic equilibrium properties which are routinely calculated in real-time.

  10. A Hypothesis-Driven Approach to Site Investigation

    NASA Astrophysics Data System (ADS)

    Nowak, W.

    2008-12-01

    Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test. Otherwise, false confidence in the resulting answer would be pretended. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. 
The basic principle is discussed and illustrated for the case of a hypothetical contaminant spill and the exceedance of critical contaminant levels at a downstream location. A tempting and important side question is whether site investigation could be tweaked towards a yes or no answer in maliciously biased campaigns by unfair formulation of the optimization objective.
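
Recasting the yes/no question as a one-sided hypothesis test can be sketched as follows; the well concentrations, legal limit, and significance level are hypothetical, and the paper's actual contribution, optimizing the sampling design itself to minimize the chance of a false answer, is not reproduced here.

```python
import math

def exceeds_limit(samples, limit, alpha=0.05):
    """One-sided z-test of H0: mean concentration <= limit.

    Returns (answer, p_value); 'yes' is only reported when the chance
    of a false yes (the p-value) falls below the significance level.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    se = math.sqrt(var / n)
    z = (mean - limit) / se
    p = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal p-value
    return ("yes" if p < alpha else "no"), p

# Hypothetical concentrations (ug/L) at a drinking water well; legal limit 10.
conc = [11.2, 10.8, 12.1, 11.5, 10.9, 11.7]
answer, p = exceeds_limit(conc, limit=10.0)
print(answer, p)
```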

  11. Optimization of magnetic field-assisted ultrasonication for the disintegration of waste activated sludge using Box-Behnken design with response surface methodology.

    PubMed

    Guan, Su; Deng, Feng; Huang, Si-Qi; Liu, Shu-Yang; Ai, Le-Xian; She, Pu-Ying

    2017-09-01

This study investigated for the first time the feasibility of using a magnetic field for sludge disintegration. Approximately 41.01% disintegration degree (DD) was reached after 30 min at 180 mT magnetic field intensity upon separate magnetic field treatment. Protein and polysaccharide contents significantly increased. This test was optimized using a Box-Behnken design (BBD) with response surface methodology (RSM) to fit the multiple equation of the DD. The maximum DD was 43.75% and the protein and polysaccharide contents increased to 56.71 and 119.44 mg/L, respectively, when the magnetic field strength was 119.69 mT, reaction time was 30.49 min, and pH was 9.82 in the optimization experiment. We then analyzed the effects of ultrasound alone. We are the first to combine magnetic field with ultrasound to disintegrate waste-activated sludge (WAS). The optimum effect was obtained with the application of ultrasound alone at 45 kHz frequency, with a DD of about 58.09%. By contrast, 62.62% DD was reached in combined magnetic field and ultrasound treatment. This combined test was also optimized using BBD with RSM to fit the multiple equation of DD. The maximum DD of 64.59% was achieved when the magnetic field intensity was 197.87 mT, ultrasonic frequency was 42.28 kHz, reaction time was 33.96 min, and pH was 8.90. These results were consistent with those of particle size and electron microscopy analyses. This research proved that a magnetic field can effectively disintegrate WAS and can be combined with other physical techniques such as ultrasound for optimal results. Copyright © 2017 Elsevier B.V. All rights reserved.
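
The response-surface step can be illustrated with a minimal least-squares fit of a second-order model and location of its stationary point; the design points, coefficients, and noise level below are invented for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical disintegration-degree (DD) response with a true optimum
# at 190 mT and 34 min; coefficients are invented for illustration.
def true_dd(b, t):
    return 64.0 - 0.002 * (b - 190.0) ** 2 - 0.05 * (t - 34.0) ** 2

# A small 3x3 factorial over field strength (mT) and time (min),
# standing in for the paper's Box-Behnken runs.
B = np.array([120, 120, 120, 190, 190, 190, 260, 260, 260], float)
T = np.array([20, 34, 48] * 3, float)
DD = true_dd(B, T) + rng.normal(0, 0.2, B.size)

# Second-order response surface fitted by ordinary least squares.
A = np.column_stack([np.ones_like(B), B, T, B * B, T * T, B * T])
c = np.linalg.lstsq(A, DD, rcond=None)[0]

# Stationary point of the fitted quadratic: solve grad = 0.
H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
g = -np.array([c[1], c[2]])
b_opt, t_opt = np.linalg.solve(H, g)
print(round(b_opt, 1), round(t_opt, 1))
```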

  12. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. At the same time, however, the difference between the inter-frame images increases, which increases the difficulty of image matching, decreases matching accuracy, and may ultimately cause the depth field solution to fail. A common practice is to use tracking-and-matching methods to improve the matching accuracy between images, but this approach tends to cause matching drift between images with large intervals, resulting in cumulative matching error, so the accuracy of the solved depth field remains low. In this paper, we propose a depth field fusion algorithm based on the optimal baseline length. First, we analyze the quantitative relationship between the accuracy of the depth field calculation and the length of the inter-frame baseline, and find the optimal baseline length through extensive experiments; second, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from these experiments plays a guiding role in the calculation of the depth field in front-view monocular vision.
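
The baseline/accuracy trade-off the paper builds on follows from the triangulation relation z = f·b/d: for a fixed matching error in pixels, the resulting depth error shrinks as the baseline grows. A minimal numeric sketch, with hypothetical focal length, depth, and error values:

```python
# Minimal sketch of the baseline/accuracy trade-off in two-view depth
# recovery; focal length, depth, and matching error are hypothetical.
f_px = 800.0      # focal length in pixels
z_true = 20.0     # true depth in metres
match_err = 0.5   # image matching error in pixels

errors = []
for baseline_m in (0.1, 0.5, 2.0):
    d = f_px * baseline_m / z_true             # ideal disparity: d = f*b/z
    z_est = f_px * baseline_m / (d + match_err)
    errors.append(abs(z_true - z_est))
    print(baseline_m, round(errors[-1], 2))
# Longer baselines make the same matching error far less damaging to the
# recovered depth, but (as the abstract notes) matching becomes harder.
```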

  13. REAL-TIME IDENTIFICATION AND CHARACTERIZATION OF ASBESTOS AND CONCRETE MATERIALS WITH RADIOACTIVE CONTAMINATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    XU, X. George; Zhang, X.C.

Concrete and asbestos-containing materials were widely used in DOE building construction in the 1940s and 1950s. Over the years, many of these porous materials have been contaminated with radioactive sources, on and below the surface. To improve current practice in identifying hazardous materials and in characterizing radioactive contamination, an interdisciplinary team from Rensselaer has conducted research in two aspects: (1) to develop a terahertz time-domain spectroscopy and imaging system that can be used to analyze environmental samples such as asbestos in the field, and (2) to develop algorithms for characterizing radioactive contamination depth profiles in real-time in the field using gamma spectroscopy. The basic research focused on the following: (1) the mechanism of generating broadband pulsed radiation in the terahertz region, (2) optimal free-space electro-optic sampling for asbestos, (3) absorption and transmission mechanisms of asbestos in the THz region, (4) the role of asbestos sample conditions on the temporal and spectral distributions, (5) real-time identification and mapping of asbestos using THz imaging, (6) Monte Carlo modeling of distributed contamination from diffusion of radioactive materials into porous concrete and asbestos materials, (7) development of unfolding algorithms for gamma spectroscopy, and (8) portable and integrated spectroscopy systems for field testing in DOE. Final results of the project show that the combination of these innovative approaches has the potential to bring significant improvement in future risk reduction and cost/time savings in DOE's D&D activities.

  14. Droplet-based microfluidic washing module for magnetic particle-based assays

    PubMed Central

    Lee, Hun; Xu, Linfeng; Oh, Kwang W.

    2014-01-01

In this paper, we propose a continuous flow droplet-based microfluidic platform for magnetic particle-based assays employing in-droplet washing. The droplet-based washing was implemented by traversing functionalized magnetic particles across a laterally merged droplet from one side (containing sample and reagent) to the other (containing buffer) using an external magnetic field. Consequently, the magnetic particles were extracted into a parallel-synchronized train of washing buffer droplets, and unbound reagents were left in the original train of sample droplets. To realize the droplet-based washing function, the following four procedures were sequentially carried out in a droplet-based microfluidic device: parallel synchronization of two trains of droplets using a ladder-like channel network; lateral electrocoalescence by an electric field; magnetic particle manipulation by a magnetic field; and asymmetrical splitting of the merged droplets. For stable droplet synchronization and electrocoalescence, we optimized droplet generation conditions by varying the flow rate ratio (or droplet size). Image analysis was carried out to determine the fluorescent intensity of reagents before and after the washing step. As a result, the unbound reagents in sample droplets were reduced by more than a factor of 25 in a single washing step, while the magnetic particles were successfully extracted into the washing buffer droplets. As a proof-of-principle, we demonstrate a magnetic particle-based immunoassay with streptavidin-coated magnetic particles and fluorescently labelled biotin in the proposed continuous flow droplet-based microfluidic platform. PMID:25379098

  15. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-03-28

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.
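
A minimal sketch of the energy-aware idea, adapting the sampling period to stored and harvested energy, might look as follows; the thresholds and scaling factors are invented for illustration, not those of the proposed EASA.

```python
# Sketch of an energy-aware adaptive sampling rule: the node trades
# sampling frequency against its energy budget. The thresholds and
# scale factors below are hypothetical, not the paper's EASA values.
def next_period(base_period_s, battery_frac, harvest_mw, draw_mw):
    """Lengthen the sampling period when stored energy is low and
    harvesting does not cover the sensor's draw; shorten it when
    energy is plentiful."""
    if battery_frac < 0.2:
        scale = 4.0                       # conserve aggressively
    elif harvest_mw >= draw_mw:
        scale = 0.5                       # surplus energy: sample faster
    elif battery_frac < 0.5:
        scale = 2.0
    else:
        scale = 1.0
    return base_period_s * scale

print(next_period(60, 0.9, 12.0, 8.0))   # surplus -> 30.0
print(next_period(60, 0.1, 0.0, 8.0))    # near-empty battery -> 240.0
```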

  16. Impact of 2′-hydroxyl sampling on the conformational properties of RNA: Update of the CHARMM all-atom additive force field for RNA

    PubMed Central

    Denning, Elizabeth J.; Priyakumar, U. Deva; Nilsson, Lennart; MacKerell, Alexander D.

    2011-01-01

    Here, we present an update of the CHARMM27 all-atom additive force field for nucleic acids that improves the treatment of RNA molecules. The original CHARMM27 force field parameters exhibit enhanced Watson-Crick (WC) base pair opening which is not consistent with experiment while analysis of MD simulations show the 2′-hydroxyl moiety to almost exclusively sample the O3′ orientation. Quantum mechanical studies of RNA related model compounds indicate the energy minimum associated with the O3′ orientation to be too favorable, consistent with the MD results. Optimization of the dihedral parameters dictating the energy of the 2′-hydroxyl proton targeting the QM data yielded several parameter sets, which sample both the base and O3′ orientations of the 2′-hydroxyl to varying degrees. Selection of the final dihedral parameters was based on reproduction of hydration behavior as related to a survey of crystallographic data and better agreement with experimental NMR J-coupling values. Application of the model, designated CHARMM36, to a collection of canonical and non-canonical RNA molecules reveals overall improved agreement with a range of experimental observables as compared to CHARMM27. The results also indicate the sensitivity of the conformational heterogeneity of RNA to the orientation of the 2′-hydroxyl moiety and support a model whereby the 2′-hydroxyl can enhance the probability of conformational transitions in RNA. PMID:21469161
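
For context, the dihedral contribution being reparameterized here has the standard CHARMM functional form E(χ) = Σᵢ Kᵢ(1 + cos(nᵢχ − δᵢ)). A toy scan with invented parameters (not the actual CHARMM36 2′-hydroxyl values) illustrates how such terms shape the preferred orientation:

```python
import math

# CHARMM-style dihedral energy: E(chi) = sum_i K_i*(1 + cos(n_i*chi - delta_i)).
# The (K, n, delta) triples below are invented for illustration only.
terms = [(0.8, 1, 0.0), (0.4, 3, math.pi)]   # (K in kcal/mol, n, delta in rad)

def dihedral_energy(chi):
    return sum(k * (1.0 + math.cos(n * chi - d)) for k, n, d in terms)

# Scan 0..360 degrees and report the minimum-energy orientation.
scan = [(dihedral_energy(math.radians(a)), a) for a in range(0, 360, 5)]
e_min, a_min = min(scan)
print(a_min, round(e_min, 3))
```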

  17. Carbon Nanotube Field Emitters Synthesized on Metal Alloy Substrate by PECVD for Customized Compact Field Emission Devices to Be Used in X-Ray Source Applications.

    PubMed

    Park, Sangjun; Gupta, Amar Prasad; Yeo, Seung Jun; Jung, Jaeik; Paik, Sang Hyun; Mativenga, Mallory; Kim, Seung Hoon; Shin, Ji Hoon; Ahn, Jeung Sun; Ryu, Jehwang

    2018-05-29

    In this study, a simple, efficient, and economical process is reported for the direct synthesis of carbon nanotube (CNT) field emitters on metal alloy. Given that CNT field emitters can be customized with ease for compact and cold field emission devices, they are promising replacements for thermionic emitters in widely accessible X-ray source electron guns. High performance CNT emitter samples were prepared in optimized plasma conditions through the plasma-enhanced chemical vapor deposition (PECVD) process and subsequently characterized by using a scanning electron microscope, tunneling electron microscope, and Raman spectroscopy. For the cathode current, field emission (FE) characteristics with respective turn on (1 μA/cm²) and threshold (1 mA/cm²) field of 2.84 and 4.05 V/μm were obtained. For a field of 5.24 V/μm, maximum current density of 7 mA/cm² was achieved and a field enhancement factor β of 2838 was calculated. In addition, the CNT emitters sustained a current density of 6.7 mA/cm² for 420 min under a field of 5.2 V/μm, confirming good operational stability. Finally, an X-ray generated image of an integrated circuit was taken using the compact field emission device developed herein.
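
The reported field enhancement factor is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. A sketch of that extraction, assuming a 5 eV work function and synthetic data generated with β = 2838 (the assumed work function and data points are illustrative, not the paper's measurements):

```python
import math

# Fowler-Nordheim analysis sketch: beta is recovered from the slope of
# ln(J/E^2) vs 1/E, slope = -B*phi^(3/2)/beta.
B = 6.83e3          # FN exponent constant, V * um^-1 * eV^-3/2
phi = 5.0           # assumed CNT work function, eV
beta_true = 2838.0

def j_fn(E):        # E in V/um; overall prefactor chosen arbitrarily
    return (beta_true * E) ** 2 * math.exp(-B * phi ** 1.5 / (beta_true * E))

# Two synthetic operating points on the FN line.
E1, E2 = 4.0, 5.0
m = (math.log(j_fn(E2) / E2**2) - math.log(j_fn(E1) / E1**2)) / (1/E2 - 1/E1)
beta_est = -B * phi ** 1.5 / m
print(round(beta_est))   # recovers 2838
```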

  18. Spatial-temporal variability of soil moisture and its estimation across scales

    NASA Astrophysics Data System (ADS)

    Brocca, L.; Melone, F.; Moramarco, T.; Morbidelli, R.

    2010-02-01

Soil moisture is a quantity of paramount importance in the study of hydrologic phenomena and soil-atmosphere interaction. Because of its high spatial and temporal variability, the soil moisture monitoring scheme was investigated here both for soil moisture retrieval by remote sensing and in view of the use of soil moisture data in rainfall-runoff modeling. To this end, using a portable Time Domain Reflectometer, a sequence of 35 measurement days was carried out within a single year in seven fields located inside the Vallaccia catchment, central Italy, with an area of 60 km2. On every sampling day, soil moisture measurements were collected in each field over a regular grid with an extension of 2000 m2. The optimization of the monitoring scheme, aimed at an accurate mean soil moisture estimate at the field and catchment scales, was addressed through statistical and temporal stability analyses. At the field scale, the number of required samples (NRS) to estimate the field-mean soil moisture within an accuracy of 2%, necessary for the validation of remotely sensed soil moisture, ranged between 4 and 15 for almost dry conditions (the worst case); at the catchment scale, this number increased to nearly 40, referring to almost wet conditions. On the other hand, to estimate the mean soil moisture temporal pattern, useful for rainfall-runoff modeling, the NRS was found to be lower. In fact, at the catchment scale only 10 measurements collected in the most "representative" field, previously determined through temporal stability analysis, can reproduce the catchment-mean soil moisture with a determination coefficient, R2, higher than 0.96 and a root-mean-square error, RMSE, equal to 2.38%. For the "nonrepresentative" fields the accuracy in terms of RMSE decreased, but similar R2 coefficients were found. 
This insight can be exploited for the sampling in a generic field when it is sufficient to know an index of soil moisture temporal pattern to be incorporated in conceptual rainfall-runoff models. The obtained results can address the soil moisture monitoring network design from which a reliable soil moisture temporal pattern at the catchment scale can be derived.
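
An NRS calculation of this kind follows the standard sample-size relation n = (z·σ/d)² for estimating a mean within absolute accuracy d at a given confidence level; the standard deviations below are hypothetical, not the paper's field measurements.

```python
import math

# Number of required samples (NRS) to estimate a field-mean within an
# absolute accuracy at ~95% confidence: n = (z * sigma / d)^2.
# The sigma values are hypothetical, not the paper's field data.
def required_samples(sigma, accuracy, z=1.96):
    return math.ceil((z * sigma / accuracy) ** 2)

# Volumetric soil moisture (m3/m3); drier fields tend to be more variable.
n_wet = required_samples(sigma=0.04, accuracy=0.02)
n_dry = required_samples(sigma=0.06, accuracy=0.02)
print(n_wet, n_dry)
```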

  19. Field method for the determination of hexavalent chromium by ultrasonication and strong anion-exchange solid-phase extraction.

    PubMed

    Wang, J; Ashley, K; Marlow, D; England, E C; Carlton, G

    1999-03-01

    A simple, fast, sensitive, and economical field method was developed and evaluated for the determination of hexavalent chromium (CrVI) in environmental and workplace air samples. By means of ultrasonic extraction in combination with a strong anion-exchange solid-phase extraction (SAE-SPE) technique, the filtration, isolation, and determination of CrVI in the presence of trivalent chromium (CrIII) and potential interferents was achieved. The method entails (1) ultrasonication in basic ammonium buffer solution to extract CrVI from environmental matrixes; (2) SAE-SPE to separate CrVI from CrIII and interferences; (3) elution/acidification of the eluate; (4) complexation of chromium with 1,5-diphenylcarbazide; and (5) spectrophotometric determination of the colored chromium-diphenylcarbazone complex. Several critical parameters were optimized in order to effect the extraction of both soluble (K2CrO4) and insoluble (PbCrO4) forms of CrVI without inducing CrIII oxidation or CrVI reduction. The method allowed for the dissolution and purification of CrVI from environmental and workplace air sample matrixes for up to 24 samples simultaneously in less than 90 min (including ultrasonication). The results demonstrated that the method was simple, fast, quantitative, and sufficiently sensitive for the determination of occupational exposures of CrVI. The method is applicable for on-site monitoring of CrVI in environmental and industrial hygiene samples.

  20. Detection of Noble Gas Radionuclides from an Underground Nuclear Explosion During a CTBT On-Site Inspection

    NASA Astrophysics Data System (ADS)

    Carrigan, Charles R.; Sun, Yunwei

    2014-03-01

    The development of a technically sound approach to detecting the subsurface release of noble gas radionuclides is a critical component of the on-site inspection (OSI) protocol under the Comprehensive Nuclear Test Ban Treaty. In this context, we are investigating a variety of technical challenges that have a significant bearing on policy development and technical guidance regarding the detection of noble gases and the creation of a technically justifiable OSI concept of operation. The work focuses on optimizing the ability to capture radioactive noble gases subject to the constraints of possible OSI scenarios. This focus results from recognizing the difficulty of detecting gas releases in geologic environments—a lesson we learned previously from the non-proliferation experiment (NPE). Most of our evaluations of a sampling or transport issue necessarily involve computer simulations. This is partly due to the lack of OSI-relevant field data, such as that provided by the NPE, and partly a result of the ability of computer-based models to test a range of geologic and atmospheric scenarios far beyond what could ever be studied by field experiments, making this approach very highly cost effective. We review some highlights of the transport and sampling issues we have investigated and complete the discussion of these issues with a description of a preliminary design for subsurface sampling that addresses some of the sampling challenges discussed here.

  1. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations

    NASA Astrophysics Data System (ADS)

    Monroe, Jacob I.; Shirts, Michael R.

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.
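
The principle behind such free energy estimates can be illustrated on a toy system where the answer is analytic; the exponential-averaging (Zwanzig) estimator below is a simple stand-in for the expanded ensemble machinery actually used, and the harmonic "states" are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Free energy difference between two 1-D harmonic states estimated by
# exponential averaging (Zwanzig): dF = -ln <exp(-dU)> over state 0
# (units of kT = 1). For harmonic wells the exact answer is analytic.
k0, k1 = 1.0, 2.0
x = rng.normal(0.0, np.sqrt(1.0 / k0), 100_000)   # samples from state 0
dU = 0.5 * (k1 - k0) * x**2                        # perturbation energy
dF_est = -np.log(np.mean(np.exp(-dU)))

dF_exact = 0.5 * np.log(k1 / k0)                   # (1/2) ln(k1/k0)
print(round(dF_est, 3), round(dF_exact, 3))
```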

  3. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
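
    The reference step sizes "analytically calculated from the estimated spatial resolution ... and the Nyquist sampling theorem" can be sketched as below. The convention of sampling at half the resolution and the radius-of-rotation parameter are assumptions for illustration, as is the helper applying the study's reported half/double scaling.

```python
import math

def nyquist_steps(resolution_mm, rotation_radius_mm):
    """Reference helical steps: sample at half the expected spatial resolution.
    Axial step in mm; angular step as the angle (deg) whose arc length at the
    radius of rotation equals half the resolution."""
    axial_mm = resolution_mm / 2.0
    angular_deg = math.degrees((resolution_mm / 2.0) / rotation_radius_mm)
    return axial_mm, angular_deg

def reported_optimal_steps(resolution_mm, rotation_radius_mm):
    """Scaling reported in this study: half the Nyquist axial step and about
    twice the Nyquist angular step."""
    ax, ang = nyquist_steps(resolution_mm, rotation_radius_mm)
    return ax / 2.0, ang * 2.0
```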

  4. Third International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-3)

    NASA Technical Reports Server (NTRS)

    Dulikravich, George S. (Editor)

    1991-01-01

    Papers from the Third International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES) are presented. The papers discuss current research in the general field of inverse, semi-inverse, and direct design and optimization in engineering sciences. The rapid growth of this relatively new field is due to the availability of faster and larger computing machines.

  5. Vitamin D in corticosteroid-naïve and corticosteroid-treated Duchenne muscular dystrophy: what dose achieves optimal 25(OH) vitamin D levels?

    PubMed

    Alshaikh, Nahla; Brunklaus, Andreas; Davis, Tracey; Robb, Stephanie A; Quinlivan, Ros; Munot, Pinki; Sarkozy, Anna; Muntoni, Francesco; Manzur, Adnan Y

    2016-10-01

    Assessment of the efficacy of vitamin D replenishment and maintenance doses required to attain optimal levels in boys with Duchenne muscular dystrophy (DMD). 25(OH)-vitamin D levels and concurrent vitamin D dosage were collected from retrospective case-note review of boys with DMD at the Dubowitz Neuromuscular Centre. Vitamin D levels were stratified as deficient at <25 nmol/L, insufficient at 25-49 nmol/L, adequate at 50-75 nmol/L and optimal at >75 nmol/L. 617 vitamin D samples were available from 197 boys (range 2-18 years); 69% were from individuals on corticosteroids. Vitamin D-naïve boys (154 samples) showed deficiency in 28%, insufficiency in 42%, adequate levels in 24% and optimal levels in 6%. The vitamin D-supplemented group (463 samples) was tested while on different maintenance/replenishment doses. Three-month replenishment with daily 3000 IU (23 samples) or 6000 IU (37 samples) achieved optimal levels in 52% and 84%, respectively. 182 samples taken on 400 IU revealed deficiency in 19 (10%), insufficiency in 84 (47%), adequate levels in 67 (37%) and optimal levels in 11 (6%). 97 samples taken on 800 IU showed deficiency in 2 (2%), insufficiency in 17 (17%), adequate levels in 56 (58%) and optimal levels in 22 (23%). 81 samples were on 1000 IU and 14 samples on 1500 IU, with optimal levels in 35 (43%) and 9 (64%), respectively. No toxic level was seen (highest level 230 nmol/L). The prevalence of vitamin D deficiency and insufficiency in DMD is high. A 2-month replenishment regimen of 6000 IU and a maintenance regimen of 1000-1500 IU/day were associated with optimal vitamin D levels. These data have important implications for optimising vitamin D dosing in DMD.
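
    The stratification used in the study maps directly onto a small classifier; the thresholds (in nmol/L) are taken verbatim from the abstract.

```python
def classify_25ohd(level_nmol_l):
    """Stratify a 25(OH)-vitamin D level per the study's bands (nmol/L):
    <25 deficient, 25-49 insufficient, 50-75 adequate, >75 optimal."""
    if level_nmol_l < 25:
        return "deficient"
    if level_nmol_l < 50:
        return "insufficient"
    if level_nmol_l <= 75:
        return "adequate"
    return "optimal"
```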

  6. Magnetic ionic liquid aqueous two-phase system coupled with high performance liquid chromatography: A rapid approach for determination of chloramphenicol in water environment.

    PubMed

    Yao, Tian; Yao, Shun

    2017-01-20

    A novel organic magnetic ionic liquid based on a guanidinium cation was synthesized and characterized. A new method coupling a magnetic ionic liquid aqueous two-phase system (MILATPs) with high-performance liquid chromatography (HPLC) was established to preconcentrate and determine trace amounts of chloramphenicol (CAP) in the water environment for the first time. In the absence of volatile organic solvents, MILATPs not only has the excellent property of rapid extraction, but also responds to an external magnetic field, which can be applied to assist phase separation. The phase behavior of MILATPs was investigated and phase equilibrium data were correlated by the Merchuk equation. Various factors influencing CAP recovery were systematically investigated and optimized. Under the optimal conditions, the preconcentration factor was 147.2, with precision values (RSD%) of 2.42% and 4.45% for intra-day (n=6) and inter-day (n=6) measurements, respectively. The limit of detection (LOD) and limit of quantitation (LOQ) were 0.14 ng/mL and 0.42 ng/mL, respectively. A wide linear range of 12.25-2200 ng/mL was obtained. Finally, the validated method was successfully applied to the analysis of CAP in several environmental waters, with recoveries for the spiked samples in the acceptable range of 94.6%-99.72%. MILATPs thus shows great potential to promote new developments in the extraction, separation and pretreatment of various biochemical samples.
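
    The reported LOD and LOQ are consistent with the common calibration-based estimates LOD = 3.3σ/S and LOQ = 10σ/S (σ: standard deviation of the blank response, S: calibration slope); that the authors used exactly this estimator is an assumption.

```python
def lod_loq(sigma_blank, slope):
    """ICH-style detection and quantitation limits from a calibration curve.
    Units follow the inputs, e.g. ng/mL if sigma is in response units and
    slope is in response units per ng/mL."""
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq
```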

  7. Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.

    2018-05-01

    Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimizing the performance of such systems. Previous studies have already proved that the Gemini Multi-Conjugated AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store the measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provides excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can therefore be considered an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to any present or new-generation facility supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance.

  8. Magnetic field design for selecting and aligning immunomagnetic labeled cells.

    PubMed

    Tibbe, Arjan G J; de Grooth, Bart G; Greve, Jan; Dolan, Gerald J; Rao, Chandra; Terstappen, Leon W M M

    2002-03-01

    Recently we introduced the CellTracks cell analysis system, in which samples are prepared by a combination of immunomagnetic selection, separation, and alignment of cells along ferromagnetic lines. Here we describe the underlying magnetic principles and the considerations made in the magnetic field design to achieve the best possible selection and alignment of magnetically labeled cells. Computer simulations, in combination with experimental data, were used to optimize the design of the magnets and Ni lines to obtain the optimal magnetic configuration. A homogeneous cell distribution on the upper surface of the sample chamber was obtained with a magnet whose pole faces were tilted towards each other. The spatial distribution of magnetically aligned objects in between the Ni lines depended on the ratio of the diameter of the aligned object to the line spacing, which was tested with magnetically and fluorescently labeled 6-microm polystyrene beads. The best result was obtained when the line spacing was equal to or smaller than the diameter of the aligned object. The magnetic gradient of the designed permanent magnet extracts magnetically labeled cells from any cell suspension to a desired plane, providing a homogeneous cell distribution. In addition, it magnetizes the ferromagnetic Ni lines in this plane, whose additional local gradient adds to the gradient of the permanent magnet. The resultant gradient aligns the magnetically labeled cells first brought to this plane. This combination makes it possible, in a single step, to extract and align cells on a surface from any cell suspension.

  9. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodels and minimum points of a density function. Repeating this procedure yields increasingly accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
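
    A minimal sketch of the idea, assuming a Gaussian radial basis function and a 1-D design space: fit an RBF metamodel to the current samples, then take an extremum of the metamodel over a candidate grid as the next expensive-simulation point. The kernel width, the grid, and the use of the metamodel minimum alone (the paper also adds minimum points of a density function) are illustrative choices, not the authors' exact algorithm.

```python
import math

def rbf_fit(xs, ys, eps=1.0):
    """Solve the dense Gaussian-RBF interpolation system for the weights."""
    n = len(xs)
    A = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)]
         for i in range(n)]
    b = list(ys)
    for col in range(n):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for c in range(col, n):
                A[row][c] -= f * A[col][c]
            b[row] -= f * b[col]
    w = [0.0] * n
    for row in range(n - 1, -1, -1):          # back substitution
        w[row] = (b[row] - sum(A[row][c] * w[c]
                               for c in range(row + 1, n))) / A[row][row]
    return w

def rbf_eval(xs, w, x, eps=1.0):
    """Evaluate the fitted metamodel at x."""
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

def next_sample(xs, w, candidates, eps=1.0):
    """One sequential step: the candidate where the metamodel is smallest."""
    return min(candidates, key=lambda x: rbf_eval(xs, w, x, eps))
```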

  10. Designing electronic properties of two-dimensional crystals through optimization of deformations

    NASA Astrophysics Data System (ADS)

    Jones, Gareth W.; Pereira, Vitor M.

    2014-09-01

    One of the enticing features common to most of the two-dimensional (2D) electronic systems that, in the wake of (and in parallel with) graphene, are currently at the forefront of materials science research is the ability to easily introduce a combination of planar deformations and bending in the system. Since the electronic properties are ultimately determined by the details of atomic orbital overlap, such mechanical manipulations translate into modified (or, at least, perturbed) electronic properties. Here, we present a general-purpose optimization framework for tailoring physical properties of 2D electronic systems by manipulating the state of local strain, allowing a one-step route from their design to experimental implementation. A definite example, chosen for its relevance in light of current experiments in graphene nanostructures, is the optimization of the experimental parameters that generate a prescribed spatial profile of pseudomagnetic fields (PMFs) in graphene. But the method is general enough to accommodate a multitude of possible experimental parameters and conditions whereby deformations can be imparted to the graphene lattice, and complies, by design, with graphene's elastic equilibrium and elastic compatibility constraints. As a result, it efficiently answers the inverse problem of determining the optimal values of a set of external or control parameters (such as substrate topography, sample shape, load distribution, etc) that result in a graphene deformation whose associated PMF profile best matches a prescribed target. The ability to address this inverse problem in an expedited way is one key step for practical implementations of the concept of 2D systems with electronic properties strain-engineered to order. 
The general-purpose nature of this calculation strategy means that it can be easily applied to the optimization of other relevant physical quantities which directly depend on the local strain field, not just in graphene but in other 2D electronic membranes.

  11. Fractionating power and outlet stream polydispersity in asymmetrical flow field-flow fractionation. Part II: programmed operation.

    PubMed

    Williams, P Stephen

    2017-01-01

    Asymmetrical flow field-flow fractionation (As-FlFFF) is a widely used technique for analyzing polydisperse nanoparticle and macromolecular samples. The programmed decay of cross flow rate is often employed. The interdependence of the cross flow rate through the membrane and the fluid flow along the channel length complicates the prediction of elution time and fractionating power. The theory for their calculation is presented. It is also confirmed, for examples of exponential decay of cross flow rate with constant channel outlet flow rate, that the residual sample polydispersity at the channel outlet is quite well approximated by the reciprocal of four times the fractionating power. Residual polydispersity is of importance when online MALS or DLS detection is used to extract quantitative information on particle size or molecular weight. The theory presented here provides a firm basis for the optimization of programmed flow conditions in As-FlFFF. Graphical abstract: Channel outlet polydispersity remains significant following fractionation by As-FlFFF under conditions of programmed decay of cross flow rate.
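
    The quoted approximation, residual polydispersity roughly equal to the reciprocal of four times the fractionating power, is a one-line relation (the symbol convention below is my reading of the abstract):

```python
def residual_polydispersity(fractionating_power):
    """Approximate channel-outlet polydispersity from the local
    fractionating power Fd: sigma_d/d ~ 1/(4*Fd)."""
    return 1.0 / (4.0 * fractionating_power)
```

    For example, a fractionating power of 5 leaves roughly 5% residual polydispersity at the outlet.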

  12. SiC-VJFETs power switching devices: an improved model and parameter optimization technique

    NASA Astrophysics Data System (ADS)

    Ben Salah, T.; Lahbib, Y.; Morel, H.

    2009-12-01

    Silicon carbide junction field-effect transistors (SiC-JFETs) are mature power switches newly applied in several industrial applications. SiC-JFETs are often simulated with a Spice model in order to predict their electrical behaviour. Although such a model provides sufficient accuracy for some applications, this paper shows that it presents serious shortcomings, among them the neglect of the body diode and other omissions in the circuit-model topology. Correcting the simulation is then mandatory and a new model should be proposed. Accordingly, this paper presents an enhanced model based on experimental dc and ac data. New devices are added to the conventional circuit model, giving accurate static and dynamic behaviour not accounted for in the Spice model. The improved model is implemented in the VHDL-AMS language, and steady-state, dynamic and transient responses are simulated for many SiC-VJFET samples. A simple and reliable algorithm based on minimizing a cost function is proposed to extract the JFET model parameters. The obtained parameters are verified by comparing errors between simulation results and experimental data.

  13. Remote Sensing for Detection of Prehistoric Landscape Use in NW Arizona, USA

    NASA Astrophysics Data System (ADS)

    Buck, P.; Sabol, D. E.

    2012-12-01

    Optimal maize field locations possibly used by prehistoric agriculturalists in the Mt. Trumbull portion of the Colorado Plateau in NW AZ were modeled using remotely sensed data and ground-based observations. Over 400 prehistoric archaeological sites have been recorded in the study area; in some areas site density is ~120 sites/mi2, including many 1-2 room structures traditionally referred to as "field houses" that archaeologists have long assumed were located on or immediately adjacent to maize fields. Other site types are larger C-shaped pueblos with up to 20 rooms and somewhat smaller multi-room structures. We collected and used ground-based field measurements and satellite image data from ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) to produce GIS layers to predict ancient maize fields and compare these with known "field house" sites. Input data layers for the model included early spring maximum solar illumination, surface gradient, surface radiant temperature, water surface flow collection, water infiltration, and soil type. We constructed two types of optimality models: "restrictive" (or classification) models and "fuzzy logic" (or grouping) models. Highest values were assigned to pixels with more surface water, warmer temperature, better soils, etc., and each value was then assigned a color for display. Analyses of patterns for the "green" restrictive model show a disproportionate number of large sites within 200 m of the green optimal zone; for the yellow optimal zone there is a statistically significant relationship between larger sites and the yellow zones at 100 m or less. For the blue fuzzy logic model, again there is a strong relationship between the number of large sites and a blue zone at both 100 m and 200 m distances. So-called "field houses" are not located preferentially close to our optimal areas. Rather, there is a clear preference for larger sites to be found closer to optimal areas.
Using the proportion of site types from the training area, we performed a chi-square test of those proportions against the actual values found in a previously unknown area (area B). The proportion of large sites close to the fuzzy logic blue optimal zone was indistinguishable from that in the test area, meaning essentially the same pattern is found in area B; viz., there are disproportionately more large sites found closer to blue optimal areas in the fuzzy logic model than would be expected by chance alone. The smaller structural sites are not located closer to the most optimal places, as might be expected if they are in fact "field houses". Smaller sites may have been established only after ~AD 800, when the larger C- and L-shaped pueblos were settled near the most optimal field locations. These smaller structural sites did in fact act as field houses, but in more marginal locations and later in time. As this portion of the Mt Trumbull area became increasingly "packed" during the later periods, it may be that kin groups from the larger residential sites established field houses to monitor their more marginal fields. This process might have intensified in the 12th and 13th centuries as environmental conditions deteriorated, or at any time when the summer monsoonal rains needed for successful agriculture became reduced for long periods.
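
    The chi-square comparison of site-type proportions (training area vs. area B) can be sketched as follows; the counts below are invented for illustration, and a real analysis would compare the statistic against a critical value for the appropriate degrees of freedom.

```python
def chi_square_stat(observed, expected_proportions):
    """Pearson chi-square statistic of observed counts against expected
    proportions (e.g. the site-type mix taken from the training area)."""
    total = sum(observed)
    expected = [p * total for p in expected_proportions]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

    A statistic near 0 means the new area mirrors the training-area mix, which is the pattern reported for the large sites near blue optimal zones.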

  14. Numerical simulation of dielectrophoretic separation of live/dead cells using a three-dimensional nonuniform AC electric field in micro-fabricated devices.

    PubMed

    Tada, Shigeru

    2015-01-01

    The analysis of cell separation has many important biological and medical applications. Dielectrophoresis (DEP) is one of the most effective and widely used techniques for separating and identifying biological species. In the present study, a DEP flow channel, a device that exploits the differences in the dielectric properties of cells for cell separation, was numerically simulated and its cell-separation performance examined. The cells used in the simulation were modeled as live and dead human leukocytes (B cells). The cell-separation analysis was carried out for a flow channel equipped with a planar electrode on the channel's top face and a pair of interdigitated counter electrodes on the bottom. This yielded a three-dimensional (3D) nonuniform AC electric field in the entire space of the flow channel. To investigate the optimal separation conditions for mixtures of live and dead cells, the strength of the applied electric field was varied. With appropriately selected conditions, the device was predicted to be very effective at separating dead cells from live cells. The major advantage of the proposed method is that a large volume of sample can be processed rapidly because of the large channel height.
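
    The physics exploited here is the standard time-averaged DEP force on a sphere, F = 2π·εm·r³·Re[K(ω)]·∇|E|², whose sign follows the Clausius-Mossotti factor K; live and dead cells differ in effective permittivity and conductivity and so can experience forces of opposite sign. A textbook sketch with illustrative parameters, not the paper's 3-D field solver:

```python
import math

def clausius_mossotti_re(eps_p, sig_p, eps_m, sig_m, omega):
    """Re[K] for a homogeneous sphere: K = (ep* - em*) / (ep* + 2*em*),
    with complex permittivity e* = e - j*sigma/omega (SI units)."""
    ep = complex(eps_p, -sig_p / omega)
    em = complex(eps_m, -sig_m / omega)
    return ((ep - em) / (ep + 2 * em)).real

def dep_force(radius, eps_m, re_k, grad_e2):
    """Time-averaged DEP force magnitude: 2*pi*eps_m*r^3*Re[K]*grad|E|^2."""
    return 2.0 * math.pi * eps_m * radius ** 3 * re_k * grad_e2
```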

  15. A Continuous-Flow Polymerase Chain Reaction Microchip With Regional Velocity Control

    PubMed Central

    Li, Shifeng; Fozdar, David Y.; Ali, Mehnaaz F.; Li, Hao; Shao, Dongbing; Vykoukal, Daynene M.; Vykoukal, Jody; Floriano, Pierre N.; Olsen, Michael; McDevitt, John T.; Gascoyne, Peter R.C.; Chen, Shaochen

    2009-01-01

    This paper presents a continuous-flow polymerase chain reaction (PCR) microchip with a serpentine microchannel of varying width for “regional velocity control.” Varying the channel width by incorporating expanding and contracting conduits made it possible to control DNA sample velocities for the optimization of the exposure times of the sample to each temperature phase while minimizing the transitional periods during temperature transitions. A finite element analysis (FEA) and a semi-analytical heat transfer model were used to determine the distances between the three heating assemblies that are responsible for creating the denaturation (96 °C), hybridization (60 °C), and extension (72 °C) temperature zones within the microchip. Predictions from the thermal FEA and semi-analytical model were compared with temperature measurements obtained from an infrared (IR) camera. Flow-field FEAs were also performed to predict the velocity distributions in the regions of the expanding and contracting conduits to study the effects of the microchannel geometry on flow recirculation and bubble nucleation. The flow fields were empirically studied using micro particle image velocimetry (μ-PIV) to validate the flow-field FEAs and to determine experimental velocities in each of the regions of different width. Successful amplification of a 90 base pair (bp) Bacillus anthracis DNA fragment was achieved. PMID:19829760
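
    The "regional velocity control" follows from continuity: at a fixed volumetric flow rate Q, widening the channel lowers the mean velocity and so lengthens the residence time in that temperature zone. A plug-flow sketch (the dimensions in the test are illustrative, not the device's):

```python
def residence_time_s(length_m, width_m, height_m, q_m3_per_s):
    """Mean residence time of a plug in a rectangular channel segment:
    t = L / v, with mean velocity v = Q / (w * h)."""
    return length_m * width_m * height_m / q_m3_per_s
```

    Doubling the width of a zone doubles the dwell time there at constant Q, which is how expanding conduits extend exposure to a temperature phase.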

  16. Dose optimization of total or partial skin electron irradiation by thermoluminescent dosimetry.

    PubMed

    Schüttrumpf, Lars; Neumaier, Klement; Maihoefer, Cornelius; Niyazi, Maximilian; Ganswindt, Ute; Li, Minglun; Lang, Peter; Reiner, Michael; Belka, Claus; Corradini, Stefanie

    2018-05-01

    Due to the complex surface of the human body, total or partial skin irradiation using large electron fields is challenging. The aim of the present study was to quantify the magnitude of dose optimization required after the application of standard fields. Total skin electron irradiation (TSEI) was applied using the Stanford technique with six dual-fields. Patients presenting with localized lesions were treated with partial skin electron irradiation (PSEI) using large electron fields, which were individually adapted. In order to verify and validate the dose distribution, in vivo dosimetry with thermoluminescent dosimeters (TLD) was performed during the first treatment fraction to detect potential dose heterogeneity and to allow for an individual dose optimization with adjustment of the monitor units (MU). Between 1984 and 2017, a total of 58 patients were treated: 31 patients received TSEI using 12 treatment fields, while 27 patients underwent PSEI and were treated with 4-8 treatment fields. After evaluation of the dosimetric results, an individual dose optimization was necessary in 21 patients. Of these, 7 patients received TSEI (7/31); their MU needed to be corrected by a mean value of 117 MU (±105, range 18-290) uniformly for all 12 treatment fields, corresponding to a mean relative change of 12% of the prescribed MU. In comparison, the other 14 patients received PSEI (14/27), and the mean adjustment was 282 MU (±144, range 59-500) to single or multiple fields, corresponding to a mean relative change of 22% of the prescribed MU. A second dose optimization to obtain a satisfactory dose at the prescription point was needed in 5 patients. Thermoluminescent dosimetry allows an individual dose optimization in TSEI and PSEI to enable a reliable adjustment of the MUs to obtain the prescription dose. Especially in PSEI, in vivo dosimetry is of fundamental importance.
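
    The MU adjustment implied by the in vivo TLD readings follows from the linearity of delivered dose with monitor units; a minimal sketch (the linear-scaling assumption is standard practice but is mine, not a statement of the authors' exact procedure):

```python
def adjusted_mu(prescribed_dose, measured_dose, current_mu):
    """Rescale monitor units so the TLD-measured dose matches the
    prescription, assuming dose scales linearly with MU."""
    return current_mu * prescribed_dose / measured_dose
```

    For example, a field measured at 1.8 Gy against a 2.0 Gy prescription delivered with 100 MU would be rescaled to roughly 111 MU.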

  17. Digital Curation of Earth Science Samples Starts in the Field

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Hsu, L.; Song, L.; Carter, M. R.

    2014-12-01

    Collection of physical samples in the field is an essential part of research in the Earth Sciences. Samples provide a basis for progress across many disciplines, from the study of global climate change now and over the Earth's history, to present and past biogeochemical cycles, to magmatic processes and mantle dynamics. The types of samples, methods of collection, and scope and scale of sampling campaigns are highly diverse, ranging from large-scale programs to drill rock and sediment cores on land, in lakes, and in the ocean, to environmental observation networks with continuous sampling, to single investigator or small team expeditions to remote areas around the globe or trips to local outcrops. Cyberinfrastructure for sample-related fieldwork needs to cater to the different needs of these diverse sampling activities, aligning with specific workflows, regional constraints such as connectivity or climate, and processing of samples. In general, digital tools should assist with capture and management of metadata about the sampling process (location, time, method) and the sample itself (type, dimension, context, images, etc.), management of the physical objects (e.g., sample labels with QR codes), and the seamless transfer of sample metadata to data systems and software relevant to the post-sampling data acquisition, data processing, and sample curation. In order to optimize CI capabilities for samples, tools and workflows need to adopt community-based standards and best practices for sample metadata, classification, identification and registration. 
This presentation will provide an overview and updates of several ongoing efforts that are relevant to the development of standards for digital sample management: the ODM2 project that has generated an information model for spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples, aligned with OGC's Observation & Measurements model (Horsburgh et al, AGU FM 2014); implementation of the IGSN (International Geo Sample Number) as a globally unique sample identifier via a distributed system of allocating agents and a central registry; and the EarthCube Research Coordination Network iSamplES (Internet of Samples in the Earth Sciences) that aims to improve sharing and curation of samples through the use of CI.
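
A minimal sketch of the kind of sample metadata record such field tools capture, with an identifier suitable for a QR-code label; the field names and the IGSN value are hypothetical illustrations, not the ODM2 information model or the IGSN registry schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class FieldSample:
    igsn: str                # globally unique sample identifier (IGSN)
    sample_type: str         # e.g. "rock core", "water", "sediment"
    latitude: float
    longitude: float
    collection_method: str
    collected_at_utc: str    # ISO 8601 timestamp recorded in the field

# A record created at collection time in the field...
sample = FieldSample("IEABC0001", "rock core", 40.71, -73.97,
                     "hand sample", "2014-08-15T14:02:00Z")
# ...can be serialized (e.g. into a QR-code label or a registry payload)
label_payload = asdict(sample)
```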

  18. Optimizing use of the structural chemical analyser (variable pressure FESEM-EDX Raman spectroscopy) on micro-size complex historical paintings characterization.

    PubMed

    Guerra, I; Cardell, C

    2015-10-01

    The novel Structural Chemical Analyser (hyphenated Raman spectroscopy and scanning electron microscopy equipped with an X-ray detector) is gaining popularity since it allows 3-D morphological studies and elemental, molecular, structural and electronic analyses of a single complex micro-sized sample without transfer between instruments. However, its full potential remains unexploited in painting heritage, where the simultaneous identification of inorganic and organic materials in paintings is critical yet unresolved. Despite the benefits and drawbacks shown in the literature, new challenges have to be faced when analysing multifaceted paint specimens. SEM-Structural Chemical Analyser systems differ from one another since they are fabricated ad hoc by request. As the configuration influences the procedure to optimize analyses, analytical protocols likewise have to be designed ad hoc. This paper deals with the optimization of the analytical procedure of a Variable Pressure Field Emission scanning electron microscope equipped with an X-ray detector and Raman spectroscopy, used to analyse historical paint samples. We address essential parameters, technical challenges and limitations raised by analysing paint stratigraphies, archaeological samples and loose pigments. We show that accurate data interpretation requires comprehensive knowledge of the factors affecting Raman spectra. We tackled: (i) the in-FESEM-Raman spectroscopy analytical sequence, (ii) correlations between the FESEM and Structural Chemical Analyser/laser analytical positions, (iii) Raman signal intensity under different VP-FESEM vacuum modes, (iv) carbon deposition on samples under FESEM low-vacuum mode, (v) crystal nature and morphology, (vi) depth of focus and (vii) the surface-enhanced Raman scattering effect. We recommend careful planning of analysis strategies prior to research which, although time consuming, guarantees reliable results.
The ultimate goal of this paper is to help guide future users of a FESEM-Structural Chemical Analyser system in order to increase its applications.

  19. Testing the sensitivity of pumpage to increases in surficial aquifer system heads in the Cypress Creek well-field area, West-Central Florida : an optimization technique

    USGS Publications Warehouse

    Yobbi, Dann K.

    2002-01-01

Tampa Bay depends on ground water for most of the water supply. Numerous wetlands and lakes in Pasco County have been impacted by the high demand for ground water. Central Pasco County, particularly the area within the Cypress Creek well field, has been greatly affected. Probable causes for the decline in surface-water levels are well-field pumpage and a decade-long drought. Efforts are underway to increase surface-water levels by developing alternative sources of water supply, thus reducing the quantity of well-field pumpage. Numerical ground-water flow simulations coupled with an optimization routine were used in a series of simulations to test the sensitivity of optimal pumpage to desired increases in surficial aquifer system heads in the Cypress Creek well field. The ground-water system was simulated using the central northern Tampa Bay ground-water flow model. Pumping solutions for 1987 equilibrium conditions and for a transient 6-month timeframe were determined for five test cases, each reflecting a range of desired target recovery heads at different head control sites in the surficial aquifer system. Results are presented in the form of curves relating average head recovery to total optimal pumpage. Pumping solutions are sensitive to the location of head control sites formulated in the optimization problem and, as expected, total optimal pumpage decreased when the desired target head increased. The distribution of optimal pumpage for individual production wells also was significantly affected by the location of head control sites. A pumping advantage was gained for test-case formulations where hydraulic heads were maximized in cells near the production wells, in cells within the steady-state pumping center cone of depression, and in cells within the area of the well field where confining-unit leakance is the highest. More water was pumped, and the ratio of head recovery per unit decrease in optimal pumpage was more than double, for test cases where hydraulic heads were maximized in cells located at or near the production wells. Additionally, the ratio of head recovery per unit decrease in pumpage was about three times greater for the area where confining-unit leakance is the highest than for other leakance zone areas of the well field. For many head control sites, optimal heads corresponding to optimal pumpage deviated from the desired target recovery heads. Overall, pumping solutions were constrained by the limiting recovery values, initial head conditions, and upper boundary conditions of the ground-water flow model.
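    The coupled simulation-optimization approach described above can be illustrated with a minimal sketch. This is not the study's MODFLOW-based formulation; it assumes a hypothetical linear response matrix (head recovery at each control site per unit of pumpage reduction at each production well, all values invented for illustration) and solves the resulting linear program with `scipy.optimize.linprog`: maximize total pumpage (equivalently, minimize total pumpage reduction) subject to meeting target head recoveries at the control sites.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical response matrix: head recovery (m) at 3 control sites
    # per unit pumpage reduction (Mgal/d) at 2 production wells.
    R = np.array([[0.10, 0.02],
                  [0.03, 0.08],
                  [0.05, 0.05]])
    q_max = np.array([10.0, 10.0])      # current pumpage per well (Mgal/d)
    target = np.array([0.5, 0.4, 0.6])  # desired head recovery at each site (m)

    # Decision variables r = pumpage reductions. Maximizing total pumpage
    # q_max - r is equivalent to minimizing sum(r), subject to
    # R @ r >= target (meet recovery targets) and 0 <= r <= q_max.
    res = linprog(c=np.ones(2),
                  A_ub=-R, b_ub=-target,   # R @ r >= target in <= form
                  bounds=[(0.0, q) for q in q_max],
                  method="highs")

    reduction = res.x
    optimal_pumpage = q_max - reduction
    ```

    With these numbers, the third control site (the one most sensitive to both wells) is binding, so the solver trades pumpage at the two wells against each other along that constraint, mirroring how the study's solutions shifted pumpage among production wells depending on where head control sites were placed.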

  20. Optimization and evaluation of single-cell whole-genome multiple displacement amplification.

    PubMed

    Spits, C; Le Caignec, C; De Rycke, M; Van Haute, L; Van Steirteghem, A; Liebaers, I; Sermon, K

    2006-05-01

    The scarcity of genomic DNA can be a limiting factor in some fields of genetic research. One of the methods developed to overcome this difficulty is whole genome amplification (WGA). Recently, multiple displacement amplification (MDA) has proved very efficient in the WGA of small DNA samples and pools of cells, the reaction being catalyzed by the phi29 or the Bst DNA polymerases. The aim of the present study was to develop a reliable, efficient, and fast protocol for MDA at the single-cell level. We first compared the efficiency of phi29 and Bst polymerases on DNA samples and single cells. The phi29 polymerase accurately generated, in a short time and from a single cell, sufficient DNA for a large set of tests, whereas the Bst enzyme showed a low efficiency and a high error rate. A single-cell protocol was optimized using the phi29 polymerase and was evaluated on 60 single cells; the obtained DNA was assessed by 22 locus-specific PCRs. This new protocol can be useful for many applications involving minute quantities of starting material, such as forensic DNA analysis, prenatal and preimplantation genetic diagnosis, or cancer research. (c) 2006 Wiley-Liss, Inc.
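    Evaluations like the one above (60 cells, 22 locus-specific PCRs) are typically summarized as overall amplification efficiency and per-locus failure rates. The sketch below shows that bookkeeping on a small invented scoring grid (the data are hypothetical, not the study's results):

    ```python
    # Hypothetical PCR scoring grid: rows = single cells, columns = loci.
    # 1 = locus amplified and genotyped, 0 = amplification failure/dropout.
    results = [
        [1, 1, 1, 0, 1],
        [1, 0, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [0, 1, 1, 1, 0],
    ]

    n_cells = len(results)
    n_loci = len(results[0])

    # Overall amplification efficiency across all cell-locus tests.
    efficiency = sum(sum(row) for row in results) / (n_cells * n_loci)

    # Per-locus failure rate, to flag loci that amplify poorly.
    locus_failure = [
        sum(1 - results[c][l] for c in range(n_cells)) / n_cells
        for l in range(n_loci)
    ]

    print(efficiency)     # 0.8 for this grid
    print(locus_failure)  # [0.25, 0.25, 0.0, 0.25, 0.25]
    ```

    In a real protocol evaluation the same tabulation, run per polymerase, is what supports a comparison like phi29 versus Bst.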
