Research on a lubricating grease print process for cylindrical cylinder
NASA Astrophysics Data System (ADS)
Yang, Liu; Zhang, Xuan; Wang, XianYan; Tan, XiaoYan
2017-09-01
In vehicle braking systems and transmission clutch systems, there is a class of cylindrical components that undergo reciprocating motion. The working principle is reciprocating motion between rubber sealing parts and cylindrical parts, and the main factor affecting the service life of the product is the lubricating performance of the moving parts. The lubrication between cylinders and rubber sealing rings is therefore particularly important, as is the quality of the grease applied to the cylinder surface. The traditional method of applying grease manually has several defects: the application is uneven; applicator tools such as brushes and cloths easily shed fibers that contaminate the product; skin contact can cause allergies; grease is wasted because the applied quantity cannot be controlled; and manual operation is inefficient. An automatic, quantitative, high-pressure applying device is introduced in this paper to replace the traditional manual method, which guarantees the quality of the grease applied to the cylinder surface and brings economic benefits to the company.
Phonetic Symbols through Audiolingual Method to Improve the Students' Listening Skill
ERIC Educational Resources Information Center
Samawiyah, Zuhrotun; Saifuddin, Muhammad
2016-01-01
Phonetic symbols represent how words are pronounced or spelled, and they offer a way to easily identify and recognize words. Phonetic symbols were applied in this research to give the students clear input and a comprehension of English words. Moreover, these phonetic symbols were applied within the audio-lingual method…
Measuring the Levels of Ribonucleotides Embedded in Genomic DNA.
Meroni, Alice; Nava, Giulia M; Sertic, Sarah; Plevani, Paolo; Muzi-Falconi, Marco; Lazzaro, Federico
2018-01-01
Ribonucleotides (rNTPs) are incorporated into genomic DNA at a relatively high frequency during replication. They have beneficial effects but, if not removed from the chromosomes, increase genomic instability. Here, we describe a fast method to easily estimate the amount of ribonucleotides embedded in the genome. The protocol described is performed in Saccharomyces cerevisiae and allows us to quantify the altered levels of rNMPs caused by different mutations in the replicative polymerase ε. However, this protocol can be easily applied to cells derived from any organism.
A Chebyshev matrix method for spatial modes of the Orr-Sommerfeld equation
NASA Technical Reports Server (NTRS)
Danabasoglu, G.; Biringen, S.
1989-01-01
The Chebyshev matrix collocation method is applied to obtain the spatial modes of the Orr-Sommerfeld equation for Poiseuille flow and the Blasius boundary layer. The problem is linearized by the companion matrix technique, and the semi-infinite domain is treated using a mapping transformation. The method can be easily adapted to problems with different boundary conditions requiring different transformations.
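The differentiation-matrix construction underlying such collocation schemes is standard; a minimal numpy sketch (a Trefethen-style construction, not the paper's code) is:

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the collocation
    differentiation matrix D, so that (D @ f) approximates f'(x)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)      # collocation points
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)               # fold sign pattern into weights
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                   # "negative sum" trick for diagonal
    return D, x
```

The matrix is exact for polynomials up to degree N, e.g. `D @ x**2` reproduces `2*x` to rounding error.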
Clive, Derrick L J; Fletcher, Stephen P; Liu, Dazhan
2004-05-14
An indirect method is described for effecting radical cyclization onto a benzene ring. Cross-conjugated dienones 6, which are readily prepared from phenols, undergo radical cyclization (6 --> 7 --> 8), and the products (8) are easily aromatized. The method has been applied to the synthesis of ent-nocardione A (21).
Morita, K; Uchiyama, Y; Tominaga, S
1987-06-01
In order to evaluate the results of radiotherapy, it is important to estimate the degree of complications in the surrounding normal tissues as well as the frequency of tumor control. In this report, the cumulative incidence rate of late radiation injuries of the normal tissues was calculated using the modified actuarial method (Cutler-Ederer method) or the Kaplan-Meier method, which are usually applied to the calculation of survival rates. With this method of calculation, an accurate cumulative incidence rate over time can easily be obtained and applied to the statistical evaluation of late radiation injuries.
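The Kaplan-Meier calculation referred to above is easily sketched; the following is a generic illustration (function and variable names are mine, not from the report):

```python
def cumulative_incidence(times, events):
    """Kaplan-Meier estimate of the cumulative incidence 1 - S(t).

    times  : follow-up time for each patient
    events : 1 if the injury occurred at that time, 0 if censored
    Returns (time, cumulative incidence) pairs at the event times.
    """
    survival = 1.0
    curve = []
    for t in sorted(set(times)):
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:
            survival *= 1.0 - d / at_risk       # KM product-limit step
            curve.append((t, 1.0 - survival))
    return curve
```

Censored patients contribute to the at-risk count until their censoring time, which is what distinguishes this from a naive event fraction.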
Future animal improvement programs applied to global populations
USDA-ARS?s Scientific Manuscript database
Breeding programs evolved gradually from within-herd phenotypic selection to local and regional cooperatives to national evaluations and now international evaluations. In the future, breeders may adapt reproductive, computational, and genomic methods to global populations as easily as with national ...
Robust and Imperceptible Watermarking of Video Streams for Low Power Devices
NASA Astrophysics Data System (ADS)
Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.
With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. Along with the greater business benefits, increased availability, and other advantages of online business, there is a major challenge of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. Existing watermarking methods are less robust and imperceptible, and their computational complexity does not suit low-power devices. In this paper, we propose a new method to address the problem of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility and is more computationally efficient than previous approaches in practice. Hence our method can easily be applied on low-power devices.
Acoustic bubble removal method
NASA Technical Reports Server (NTRS)
Trinh, E. H.; Elleman, D. D.; Wang, T. G. (Inventor)
1983-01-01
A method is described for removing bubbles from a liquid bath such as a bath of molten glass to be used for optical elements. Larger bubbles are first removed by applying acoustic energy resonant to a bath dimension to drive the larger bubbles toward a pressure well where the bubbles can coalesce and then be more easily removed. Thereafter, submillimeter bubbles are removed by applying acoustic energy of frequencies resonant to the small bubbles to oscillate them and thereby stir liquid immediately about the bubbles to facilitate their breakup and absorption into the liquid.
An operational approach to high resolution agro-ecological zoning in West-Africa.
Le Page, Y; Vasconcelos, Maria; Palminha, A; Melo, I Q; Pereira, J M C
2017-01-01
The objective of this work is to develop a simple methodology for high-resolution crop suitability analysis under current and future climate, easily applicable and useful in Least Developed Countries. The approach addresses both regional planning in the context of climate change projections and pre-emptive short-term rural extension interventions based on same-year agricultural season forecasts, while being implemented with off-the-shelf resources. The developed tools are applied operationally in a case study in three regions of Guinea-Bissau, and the results obtained, as well as the advantages and limitations of the methods applied, are discussed. In this paper we show how a simple approach can easily generate information on climate vulnerability and how it can be used operationally in rural extension services.
Zetlaoui, Mélanie; Feinberg, Max; Verger, Philippe; Clémençon, Stephan
2011-12-01
In Western countries, where the food supply is satisfactory, consumers organize their diets around a large combination of foods. It is the purpose of this article to examine how recent nonnegative matrix factorization (NMF) techniques can be applied to food consumption data to understand these combinations. Such data are nonnegative by nature and of high dimension. The NMF model represents the consumption data through a small number of latent vectors with nonnegative coefficients, which we call consumption systems (CS). As the NMF approach may encourage sparsity of the data representation produced, the resulting CS are easily interpretable. Beyond an illustration of its properties through a simple simulation result, the NMF method is applied to data from a French consumption survey. The numerical results thus obtained are displayed and thoroughly discussed. A k-means clustering is also performed in the resulting latent consumption space, to recover food consumption patterns easily usable by nutritionists. © 2011, The International Biometric Society.
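The factorization itself can be illustrated with the classical Lee-Seung multiplicative updates; this is a generic NMF sketch, not the authors' code:

```python
import numpy as np

def nmf(X, r, n_iter=2000, eps=1e-9, seed=0):
    """Factor a nonnegative matrix X (foods x consumers, say) as X ~ W @ H
    with W, H >= 0, via Lee-Seung multiplicative updates minimising
    the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + 0.1       # nonnegative initialisation
    H = rng.random((r, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # updates preserve nonnegativity
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The columns of H (one per observation, in the r-dimensional latent space) are what a k-means step would then cluster to recover consumption patterns.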
NASA Astrophysics Data System (ADS)
Sharma, Dinkar; Singh, Prince; Chauhan, Shubha
2017-06-01
In this paper, a combined form of the Laplace transform method and the homotopy perturbation method is applied to solve nonlinear fifth-order Korteweg-de Vries (KdV) equations. The method is known as the homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further, the results are compared with the homotopy perturbation method (HPM).
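For reference, the He's polynomials used to expand a nonlinear term \(N(v)\) in HPM/HPTM follow the standard definition (not specific to this paper):

\[
H_n(v_0,\dots,v_n) \;=\; \frac{1}{n!}\,\frac{\partial^{\,n}}{\partial p^{\,n}}\!\left[\,N\!\Big(\sum_{i=0}^{\infty} p^{\,i} v_i\Big)\right]_{p=0},
\qquad n = 0,1,2,\dots
\]

with the solution recovered from the perturbation series as \(v = \lim_{p \to 1} \sum_{n=0}^{\infty} p^{\,n} v_n\).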
NASA Astrophysics Data System (ADS)
Pandey, Rishi Kumar; Mishra, Hradyesh Kumar
2017-11-01
In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. The technique is based on coupling the homotopy analysis method with the Sumudu transform. It shows a clear advantage over mesh-based methods such as the finite difference method, and also over polynomial methods such as the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain, where the results can be interpreted directly.
NASA Astrophysics Data System (ADS)
Styk, Adam
2014-07-01
Classical time-averaged and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require a vibration amplitude of at least 0.19λ to be able to detect the resonant frequency of the object, and the precision of the measurement is limited. This puts strong constraints on the type of element that can be tested. In this paper, two methods of micro-object vibration measurement that overcome the aforementioned problems are compared. Both methods maintain a short measurement time and extend the range of measurable amplitudes (below 0.19λ); moreover, they can easily be applied to MEMS/MOEMS dynamic parameter measurements.
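The 0.19λ figure quoted above matches the first zero of the Bessel function J₀, since time-averaged fringe contrast varies as J₀(4πA/λ) with vibration amplitude A; a small self-contained check (series evaluation plus bisection, my own sketch):

```python
import math

def bessel_j0(x, terms=30):
    """J0(x) from its power series: sum (-1)^k (x/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def first_j0_zero(lo=2.0, hi=3.0, tol=1e-12):
    """Bisect for the first zero of J0 (J0(2) > 0, J0(3) < 0)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

zero = first_j0_zero()                    # first zero of J0, about 2.405
amplitude_limit = zero / (4 * math.pi)    # about 0.19, in units of wavelength
```

An amplitude below this limit never drives the J₀ contrast through its first null, which is why smaller vibrations are invisible to the classical time-averaged method.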
Dey, Nilanjan; Bhattacharya, Santanu
2017-05-11
An easily synthesizable probe has been employed for dual mode sensing of glucosamine in pure water. The method was also applied for glucosamine estimation in blood serum samples and pharmaceutical tablets. Further, selective detection of glucosamine was also achieved using portable color strips.
Displacement control of an antagonistic-type twisted and coiled polymer actuator
NASA Astrophysics Data System (ADS)
Suzuki, Motoya; Kamamichi, Norihiro
2018-03-01
A novel artificial muscle actuator referred to as a twisted and coiled polymer actuator can be easily fabricated from commercially available nylon fibers. It can be thermally activated and has remarkable properties such as large deformation and flexibility. When made from conductive nylon fibers, the actuator can be driven by Joule heating and is easily controlled electrically. However, its asymmetric response, caused by the difference between heating and cooling speeds, is a problem: when actuated in air, the cooling speed depends on the ambient temperature and is slower than the heating speed. To solve these problems, we apply an antagonistic structure. The validity of the applied method is investigated through numerical simulations and experiments, and the response characteristics of PID feedback control and 2-DOF control of the displacement are examined.
Li, Xu; Xia, Rongmin; He, Bin
2008-01-01
A new tomographic algorithm is proposed for reconstructing a curl-free vector field whose divergence serves as the acoustic source. It is shown that, under certain conditions, the scalar acoustic measurements obtained from a surface enclosing the source area can be vectorized according to the known measurement geometry and then used to reconstruct the vector field. The proposed method is validated by numerical experiments. This method can be easily applied to magnetoacoustic tomography with magnetic induction (MAT-MI). A simulation study applying this method to MAT-MI shows that, compared to existing methods, the proposed method gives an accurate estimation of the induced current distribution and a better reconstruction of the electrical conductivity within an object.
A new Lagrangian random choice method for steady two-dimensional supersonic/hypersonic flow
NASA Technical Reports Server (NTRS)
Loh, C. Y.; Hui, W. H.
1991-01-01
Glimm's (1965) random choice method has been successfully applied to compute steady two-dimensional supersonic/hypersonic flow using a new Lagrangian formulation. The method is easy to program, fast to execute, yet it is very accurate and robust. It requires no grid generation, resolves slipline and shock discontinuities crisply, can handle boundary conditions most easily, and is applicable to hypersonic as well as supersonic flow. It represents an accurate and fast alternative to the existing Eulerian methods. Many computed examples are given.
Evaluation of Analysis Techniques for Fluted-Core Sandwich Cylinders
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Schultz, Marc R.
2012-01-01
Buckling-critical launch-vehicle structures require structural concepts that have high bending stiffness and low mass. Fluted-core, also known as truss-core, sandwich construction is one such concept. In an effort to identify an analysis method appropriate for the preliminary design of fluted-core cylinders, the current paper presents and compares results from several analysis techniques applied to a specific composite fluted-core test article. The analysis techniques are evaluated in terms of their ease of use and for their appropriateness at certain stages throughout a design analysis cycle (DAC). Current analysis techniques that provide accurate determination of the global buckling load are not readily applicable early in the DAC, such as during preliminary design, because they are too costly to run. An analytical approach that neglects transverse-shear deformation is easily applied during preliminary design, but the lack of transverse-shear deformation results in global buckling load predictions that are significantly higher than those from more detailed analysis methods. The current state of the art is either too complex to be applied for preliminary design, or is incapable of the accuracy required to determine global buckling loads for fluted-core cylinders. Therefore, it is necessary to develop an analytical method for calculating global buckling loads of fluted-core cylinders that includes transverse-shear deformations, and that can be easily incorporated in preliminary design.
Liquid Galvanic Coatings for Protection of Imbedded Metals
NASA Technical Reports Server (NTRS)
MacDowell, Louis G. (Inventor); Curran, Joseph J. (Inventor)
2003-01-01
Coating compositions and methods of their use are described herein for the reduction of corrosion in imbedded metal structures. The coatings are applied as liquids to an external surface of a substrate in which the metal structures are imbedded. The coatings are subsequently allowed to dry. The liquid applied coatings provide galvanic protection to the imbedded metal structures. Continued protection can be maintained with periodic reapplication of the coating compositions, as necessary, to maintain electrical continuity. Because the coatings may be applied using methods similar to standard paints, and because the coatings are applied to external surfaces of the substrates in which the metal structures are imbedded, the corresponding corrosion protection may be easily maintained. The coating compositions are particularly useful in the protection of metal-reinforced concrete.
Robust regression on noisy data for fusion scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
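Geodesic least squares itself is defined in the paper; as a simple illustration of the errors-in-both-variables problem it addresses (where ordinary least squares is biased), here is a total-least-squares (orthogonal regression) sketch, with names of my own:

```python
import numpy as np

def orthogonal_regression(x, y):
    """Total-least-squares line fit: minimises perpendicular distances,
    so noise on both the predictor and the response is treated
    symmetrically (unlike ordinary least squares)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    U = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    vx, vy = Vt[0]                 # principal direction of the point cloud
    slope = vy / vx
    return slope, ym - slope * xm
```

For a power-law scaling such as the L-H threshold, the same fit would be applied to log-transformed variables, where the slope becomes the scaling exponent.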
[Detecting fire smoke based on the multispectral image].
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection is very important for preventing forest fires, as smoke appears early in the fire process. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, a high false-detection rate, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral images was proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained in the 400 to 720 nm band, and the images were divided into bins. The Euclidean distance between bins was taken as a measure of the difference between spectrograms. After the spectral feature vectors of the dynamic regions were obtained, the regions of fire smoke and water fog were extracted according to the spectrogram feature difference between target and background. Indoor and outdoor experiments show that the multispectral smoke detection method can effectively distinguish fire smoke from water fog. Combined with video image processing, the multispectral detection method can also be applied to forest-fire surveillance, reducing the false alarm rate in forest-fire detection.
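The bin-and-compare step described above can be sketched generically (function names and the bin count are my own assumptions, not the paper's):

```python
def band_signature(wavelengths, intensities, n_bins=8, lo=400.0, hi=720.0):
    """Average intensity in n_bins equal wavelength bands over 400-720 nm,
    giving a compact spectral feature vector for a region."""
    width = (hi - lo) / n_bins
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for w, v in zip(wavelengths, intensities):
        if lo <= w < hi:
            b = int((w - lo) / width)
            sums[b] += v
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def spectral_distance(sig1, sig2):
    """Euclidean distance between two spectral signatures; large distance
    from the background signature flags a candidate smoke region."""
    return sum((a - b) ** 2 for a, b in zip(sig1, sig2)) ** 0.5
```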
The transient divided bar method for laboratory measurements of thermal properties
NASA Astrophysics Data System (ADS)
Bording, Thue S.; Nielsen, Søren B.; Balling, Niels
2016-12-01
Accurate information on the thermal conductivity and thermal diffusivity of materials is of central importance in relation to geoscience and engineering problems involving the transfer of heat. Several methods, including the classical divided bar technique, are available for laboratory measurements of thermal conductivity, but far fewer for thermal diffusivity. We have generalized the divided bar technique to the transient case, in which thermal conductivity, volumetric heat capacity, and thereby also thermal diffusivity are measured simultaneously. As the density of samples is easily determined independently, the specific heat capacity can also be obtained. The finite element formulation provides a flexible forward solution for heat transfer across the bar, and the thermal properties are estimated by inverse Monte Carlo modelling. This methodology enables a proper quantification of the experimental uncertainties on measured thermal properties and information on their origin. The developed methodology was applied to various materials, including a standard ceramic material and different rock samples, and the measurement results were compared with results from the traditional steady-state divided bar and an independent line-source method. All measurements show highly consistent results, with excellent reproducibility and high accuracy. For conductivity the obtained uncertainty is typically 1-3 per cent, and for diffusivity the uncertainty may be reduced to about 3-5 per cent. The main uncertainty originates from the thermal contact resistance associated with the internal interfaces in the bar; these resistances are not resolved during inversion, and it is imperative that they be minimized. The proposed procedure is simple and may quite easily be implemented on the many steady-state divided bar systems in operation. A thermally controlled bath, as applied here, may not be needed; simpler setups, such as applying temperature-controlled water directly from a tap, may also be used.
ROUTING DEMAND CHANGES TO USERS ON THE WM LATERAL CANAL WITH SACMAN
USDA-ARS?s Scientific Manuscript database
Most canals have either long travel times or insufficient in-canal storage to operate on-demand. Thus, most flow changes must be routed through the canal. Volume compensation has been proposed as a method for easily applying feedforward control to irrigation canals. SacMan (Software for Automated Ca...
ERIC Educational Resources Information Center
Andraos, John; Sayed, Murtuzaali
2007-01-01
A general analysis of reaction mass efficiency and raw material cost is developed using an Excel spreadsheet format that can be applied to any chemical transformation. These new methods can be easily incorporated into standard laboratory exercises.
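The spreadsheet formulas are not given in the abstract; reaction mass efficiency and raw material cost reduce to simple ratios, sketched here in Python with hypothetical numbers:

```python
def reaction_mass_efficiency(product_mass, reactant_masses):
    """RME = mass of isolated product / total mass of all reactants,
    expressed as a fraction (multiply by 100 for percent)."""
    return product_mass / sum(reactant_masses)

def raw_material_cost_per_kg(product_mass_kg, reactant_masses_kg, unit_costs):
    """Total reactant cost divided by the mass of product obtained."""
    total_cost = sum(m * c for m, c in zip(reactant_masses_kg, unit_costs))
    return total_cost / product_mass_kg
```

For example, 100 g of product from 120 g and 80 g of reactants gives an RME of 0.5, i.e. half of the input mass ends up in the product.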
A simple method for the enrichment of bisphenols using boron nitride.
Fischnaller, Martin; Bakry, Rania; Bonn, Günther K
2016-03-01
A simple solid-phase extraction method for the enrichment of 5 bisphenol derivatives using hexagonal boron nitride (BN) was developed. BN was applied to concentrate bisphenol derivatives in spiked water samples, and the compounds were analyzed using HPLC coupled to fluorescence detection. The effect of pH and organic solvents on the extraction efficiency was investigated. An enrichment factor of up to 100 was achieved without evaporation and reconstitution. The developed method was applied to the determination of bisphenol A migrated from some polycarbonate plastic products. Furthermore, bisphenol derivatives were analyzed in spiked and non-spiked canned food and beverages. None of the analyzed samples exceeded the European Union migration limit of 0.6 mg/kg food. The method showed good recovery rates ranging from 80% to 110%. Validation of the method was performed in terms of accuracy and precision. The applied method is robust, fast, efficient, and easily adaptable to different analytical problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lind, Cora; Gates, Stacy D.; Pedoussaut, Nathalie M.; Baiz, Tamam I.
2010-01-01
Low temperature methods have been applied to the synthesis of many advanced materials. Non-hydrolytic sol-gel (NHSG) processes offer an elegant route to stable and metastable phases at low temperatures. Excellent atomic level homogeneity gives access to polymorphs that are difficult or impossible to obtain by other methods. The NHSG approach is most commonly applied to the preparation of metal oxides, but can be easily extended to metal sulfides. Exploration of experimental variables allows control over product stoichiometry and crystal structure. This paper reviews the application of NHSG chemistry to the synthesis of negative thermal expansion oxides and selected metal sulfides.
An error reduction algorithm to improve lidar turbulence estimates for wind energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
2017-02-10
Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability.
The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.
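The turbulence intensity (TI) quantity that L-TERRA targets is conventionally defined as the ratio of the wind-speed standard deviation to the mean wind speed over an averaging window (typically 10 minutes); a minimal sketch, not part of L-TERRA:

```python
import statistics

def turbulence_intensity(wind_speeds):
    """TI = sigma_u / mean_u over one averaging window of horizontal
    wind-speed samples (e.g. a 10-minute record)."""
    return statistics.pstdev(wind_speeds) / statistics.fmean(wind_speeds)
```

The lidar-versus-tower discrepancy discussed above enters through the sigma term: volume averaging suppresses it while variance contamination inflates it, so corrected and uncorrected TI values can differ substantially.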
Probabilistic fracture finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lua, Y. J.
1991-01-01
The Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine first and second moment of random parameters, the second moment reliability method can be easily combined with PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed from experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
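The second-moment reliability idea mentioned above can be illustrated with the textbook first-order second-moment (FOSM) index for a resistance-minus-load margin (a generic sketch, not the PFEM implementation):

```python
import math

def fosm_reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order second-moment reliability index beta for the margin
    M = R - S, with independent resistance R and load S described only
    by their first two moments."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """P(M < 0) under a normal assumption: Phi(-beta) via math.erf."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
```

In the PFEM setting, the first and second moments of the response computed by the finite element analysis would play the role of `mu_r`/`sigma_r` here.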
Probabilistic fracture finite elements
NASA Astrophysics Data System (ADS)
Liu, W. K.; Belytschko, T.; Lua, Y. J.
1991-05-01
The Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine first and second moment of random parameters, the second moment reliability method can be easily combined with PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed from experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
NASA Astrophysics Data System (ADS)
Zubiaga, A.; García, J. A.; Plazaola, F.; Tuomisto, F.; Zúñiga-Pérez, J.; Muñoz-Sanjosé, V.
2007-05-01
We present a method, based on positron annihilation spectroscopy, to obtain information on the defect depth profile of layers grown on high-quality substrates. We have applied the method to the case of ZnO layers grown on sapphire, but it can be very easily generalized to other heterostructures (or homostructures) in which the positron mean diffusion length is small enough. By applying the method to the ratio of the W and S parameters obtained from Doppler broadening measurements (W/S plots), it is possible to determine the thickness of the layer and the defect profile in the layer, provided that mainly one positron-trapping defect contributes to positron trapping at the measurement temperature. Indeed, the quality of such a characterization is very important for potential technological applications of the layer.
Analysis of Brick Masonry Wall using Applied Element Method
NASA Astrophysics Data System (ADS)
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a versatile tool for structural analysis. As in the Finite Element Method (FEM), the analysis is done by discretising the structure; in AEM, however, elements are connected by sets of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials, and brick masonry walls can be effectively analyzed within its framework. The composite nature of a masonry wall can be easily modelled using springs, with the brick springs and mortar springs assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the aspect ratio of brick that best strengthens a brick masonry wall.
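The series combination of brick and mortar springs follows the usual rule for springs in series; a one-line sketch with hypothetical stiffness values:

```python
def series_stiffness(k_brick, k_mortar):
    """Equivalent stiffness of a brick spring and a mortar spring in
    series: 1/k_eq = 1/k_brick + 1/k_mortar, so the softer (mortar)
    spring dominates the joint response."""
    return 1.0 / (1.0 / k_brick + 1.0 / k_mortar)
```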
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Deyst, J. J.; Crawford, B. S.
1975-01-01
The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated but gyro bias jumps are difficult to isolate; the WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is the multiple-hypothesis method developed by Buxbaum and Haddad (BH) (1969). It has the advantage of directly providing jump-isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the BH system for jump isolation and estimate compensation.
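A minimal sketch of the WSSR idea: innovations are whitened by the innovation covariance, summed over a sliding window, and a jump is flagged when the statistic exceeds a chi-square-style threshold. The window length, covariance, and injected bias are illustrative assumptions, not values from the paper.

```python
import numpy as np

def wssr(residuals, cov, window):
    """Weighted sum-squared residual over a sliding window.
    residuals: (T, m) innovation sequence; cov: (m, m) innovation covariance."""
    cov_inv = np.linalg.inv(cov)
    w = np.array([r @ cov_inv @ r for r in residuals])   # whitened squares
    csum = np.concatenate([[0.0], np.cumsum(w)])
    return csum[window:] - csum[:-window]                # moving-window sums

rng = np.random.default_rng(0)
res = rng.normal(size=(200, 2))        # nominal whitened innovations
res[120:] += 1.5                       # injected bias jump (hypothetical)
stat = wssr(res, np.eye(2), window=20)
# a jump would be declared where stat exceeds a chi-square threshold
# with window * m degrees of freedom
```

Note the limitation the abstract mentions: the statistic says *that* something jumped, but by itself carries little information about *which* bias jumped.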
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis (PDS) method is a technique for obtaining the statistics of a desired response quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation; this modal information is more comprehensive and more easily measured than the "primitive" information. The probabilistic analysis is carried out using either response-surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method was verified on a simple seven-degree-of-freedom spring-mass system. In this paper, the extensive issues involved in applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results are promising, especially considering the lack of alternatives for obtaining quantitative output for probabilistic structures.
Ice electrode electrolytic cell
Glenn, D.F.; Suciu, D.F.; Harris, T.L.; Ingram, J.C.
1993-04-06
This invention relates to a method and apparatus for removing heavy metals from waste water, soils, or process streams by means of an electrolytic cell. The method includes cooling the cell cathode to form an ice layer over the cathode and then applying an electric current to deposit a layer of the heavy metal over the ice. The metal is easily removed after the ice is melted. In a second embodiment, the same ice-covered electrode can be employed to form powdered metals.
Friendly Extensible Transfer Tool Beta Version
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William P.; Gutierrez, Kenneth M.; McRee, Susan R.
2016-04-15
Often, data transfer software is designed to meet specific requirements or apply to specific environments, and adding functionality frequently requires source-code integration. An extensible data transfer framework is needed to incorporate new capabilities more easily, in modular fashion. Using the FrETT framework, functionality may be incorporated (in many cases without need of source code) to handle new platform capabilities: I/O methods (e.g., platform-specific data access), network transport methods, and data processing (e.g., data compression).
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest-order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850 to 2007 and to various surrogate global mean temperature series from 1850 to 2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically outperforms certain widely used default smoothing methods and is more likely to yield accurate assessments of long-term warming trends.
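A schematic sketch of the adaptive idea, under the assumption that the three lowest-order constraints are implemented as mean padding ("minimum norm"), reflection ("minimum slope"), and point reflection ("minimum roughness"); Mann's method uses a lowpass Butterworth filter, replaced here by a simple moving average for brevity, so this is an illustration of the selection logic, not the paper's filter.

```python
import numpy as np

def smooth_with_constraint(y, w, constraint):
    """Moving-average smooth with half-width w; boundaries padded according
    to one of the three lowest-order constraints (assumed implementations)."""
    y = np.asarray(y, float)
    if constraint == "norm":        # pad with the series mean
        left, right = np.full(w, y.mean()), np.full(w, y.mean())
    elif constraint == "slope":     # reflect the series about each boundary
        left, right = y[1:w + 1][::-1], y[-w - 1:-1][::-1]
    else:                           # "roughness": reflect and flip vertically
        left = 2.0 * y[0] - y[1:w + 1][::-1]
        right = 2.0 * y[-1] - y[-w - 1:-1][::-1]
    padded = np.concatenate([left, y, right])
    kernel = np.ones(2 * w + 1) / (2 * w + 1)
    return np.convolve(padded, kernel, mode="valid")

def adaptive_smooth(y, w=5):
    # choose the constraint minimizing mean-square misfit to the raw series
    fits = {c: smooth_with_constraint(y, w, c)
            for c in ("norm", "slope", "roughness")}
    best = min(fits, key=lambda c: np.mean((fits[c] - y) ** 2))
    return best, fits[best]
```

For a series dominated by a linear trend, the point-reflection constraint preserves the boundary slope and wins the misfit comparison, which is the behaviour one wants near the modern end of a warming temperature record.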
Kanimozhi, K; Basha, S Khaleel; Kumari, V Sugantha; Kaviyarasu, K
2018-07-01
Freeze-drying and salt-leaching methods were applied to fabricate chitosan/poly(vinyl alcohol)/carboxymethyl cellulose (CS/PVA/CMC) biomimetic porous scaffolds for soft tissue engineering, and the properties of the scaffolds produced by the two methods were investigated and compared. The salt-leached CS/PVA/CMC scaffolds were easily formed into desired shapes with a uniformly distributed and interconnected pore structure. The mechanical strength of the scaffolds increased with porosity and was easily modulated by the addition of carboxymethyl cellulose. The morphology of the porous scaffolds, observed using SEM, exhibited good porosity and interconnectivity of pores, and an MTT assay using L929 fibroblast cells demonstrated good cell viability on the porous scaffolds. Scaffolds prepared by the salt-leaching method showed larger swelling capacity, higher mechanical strength, more potent antibacterial activity, and better cell viability than freeze-dried scaffolds; salt leaching is also simple, efficient, feasible, and more economical than freeze-drying.
Space-frame connection for small-diameter round timber
Ronald W. Wolfe; Agron E. Gjinolli; John R. King
2000-01-01
To promote more efficient use of small-diameter timber, research efforts are being focused on the development and evaluation of connection methods that can be easily applied to non-standard round wood profiles. This report summarizes an evaluation of a "dowel-nut connection" as an option for the use of Douglas-fir peeler cores in three-dimensional truss or "space-...
Radiant heat exchange calculations in radiantly heated and cooled enclosures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, K.S.; Zhang, P.
1995-08-01
This paper presents the development of a three-dimensional mathematical model to compute the radiant heat exchange between surfaces separated by a transparent and/or opaque medium. The model formulation accommodates arbitrary arrangements of the interior surfaces, as well as arbitrary placement of obstacles within the enclosure. The discrete ordinates radiation model is applied and has the capability to analyze the effect of irregular geometries and diverse surface temperatures and radiative properties. The model is verified by comparing calculated heat transfer rates to heat transfer rates determined from the exact radiosity method for four different enclosures, selected to provide a wide range of verification. This three-dimensional model based on the discrete ordinates method can be applied to a building to assist the design engineer in sizing a radiant heating system. By coupling this model with a convective and conductive heat transfer model and a thermal comfort model, the comfort levels throughout the room can be easily and efficiently mapped for a given radiant heater location. In addition, objects such as airplanes, trucks, furniture, and partitions can be easily incorporated to determine their effect on the performance of the radiant heating system.
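The exact radiosity method used for verification can be sketched for a gray, diffuse enclosure: solve J = εE_b + (1−ε)FJ for the surface radiosities J, given the view-factor matrix F, then take the net flux as J − FJ. The two-plate geometry, temperatures, and emissivities below are illustrative, not the paper's enclosures.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiosity_exchange(T, eps, F):
    """Solve J = eps*Eb + (1-eps) * F @ J for a gray, diffuse enclosure and
    return the net radiative flux leaving each surface (W/m^2)."""
    T, eps = np.asarray(T, float), np.asarray(eps, float)
    Eb = SIGMA * T ** 4                     # blackbody emissive power
    n = len(T)
    J = np.linalg.solve(np.eye(n) - (1.0 - eps)[:, None] * F, eps * Eb)
    G = F @ J                               # irradiation on each surface
    return J - G                            # positive = net heat lost

# two infinite parallel plates: each sees only the other, so F = [[0,1],[1,0]]
q = radiosity_exchange([500.0, 300.0], [0.8, 0.8],
                       np.array([[0.0, 1.0], [1.0, 0.0]]))
```

For parallel plates the result reduces to the textbook closed form σ(T1⁴ − T2⁴)/(1/ε1 + 1/ε2 − 1), which makes the sketch easy to check.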
An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]
NASA Technical Reports Server (NTRS)
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
Applications of rule-induction in the derivation of quantitative structure-activity relationships.
A-Razzak, M; Glen, R C
1992-08-01
Recently, methods have been developed in the field of Artificial Intelligence (AI), specifically in the expert systems area using rule-induction, designed to extract rules from data. We have applied these methods to the analysis of molecular series with the objective of generating rules which are predictive and reliable. The input to rule-induction consists of a number of examples with known outcomes (a training set) and the output is a tree-structured series of rules. Unlike most other analysis methods, the results of the analysis are in the form of simple statements which can be easily interpreted. These are readily applied to new data giving both a classification and a probability of correctness. Rule-induction has been applied to in-house generated and published QSAR datasets and the methodology, application and results of these analyses are discussed. The results imply that in some cases it would be advantageous to use rule-induction as a complementary technique in addition to conventional statistical and pattern-recognition methods.
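The flavour of the approach, simple statements extracted from a training set with known outcomes, can be sketched with a one-level rule learner on hypothetical QSAR data. The descriptor (logP), the threshold, and the activity labels are invented for illustration; the paper's rule-induction builds full trees, not a single split.

```python
import numpy as np

def induce_rule(x, y):
    """One-level rule induction: find the threshold on one descriptor that
    best separates actives (1) from inactives (0) in the training set."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (0.0, None, None)
    for i in range(1, len(xs)):
        t = 0.5 * (xs[i - 1] + xs[i])        # candidate split point
        for side in (0, 1):                  # class assigned when x >= t
            pred = np.where(xs >= t, side, 1 - side)
            acc = np.mean(pred == ys)
            if acc > best[0]:
                best = (acc, t, side)
    return best  # (training accuracy, threshold, class when x >= threshold)

# toy data: "activity" switches on above a hypothetical logP of about 2
logp = np.array([0.5, 1.0, 1.5, 1.8, 2.2, 2.7, 3.1, 3.6])
active = np.array([0, 0, 0, 0, 1, 1, 1, 1])
acc, thr, side = induce_rule(logp, active)
```

The output reads directly as a rule of the kind the abstract describes, e.g. "if logP ≥ 2.0 then active", with the training accuracy serving as a crude probability of correctness.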
Brown, A M
2001-06-01
The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user-input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. The alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least-squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function of the form y = f(x) and is well suited to fast, reliable analysis of data in all fields of biology.
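The iterative least-squares idea behind SOLVER can be sketched in any language. Below is a damped Gauss-Newton loop with a numerical Jacobian fitting a user-supplied function y = f(x, p); the saturation model and its parameter values are illustrative assumptions, not from the paper, and SOLVER's actual internal algorithm is not claimed here.

```python
import numpy as np

def fit_least_squares(f, p0, x, y, steps=300, damp=0.5):
    """Minimize sum((y - f(x, p))**2) by damped Gauss-Newton with a
    central-difference Jacobian -- the same iterative least-squares idea
    SOLVER applies to spreadsheet cells."""
    p = np.asarray(p0, float)
    for _ in range(steps):
        r = y - f(x, p)                       # current residuals
        J = np.empty((len(x), len(p)))
        for j in range(len(p)):               # numerical Jacobian column j
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (f(x, p + dp) - f(x, p - dp)) / 2e-6
        p = p + damp * np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# example: saturating response y = A * (1 - exp(-k * x)), true A=2, k=0.7
model = lambda x, p: p[0] * (1.0 - np.exp(-p[1] * x))
x = np.linspace(0.0, 5.0, 30)
y = model(x, [2.0, 0.7])
p_fit = fit_least_squares(model, [1.0, 1.0], x, y)
```

As in the spreadsheet method, only the model function and starting guesses change from problem to problem; the fitting loop itself is generic.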
Meloni, Domenico; Spina, Antonio; Satta, Gianluca; Chessa, Vittorio
2016-06-25
In recent years, besides the consumption of fresh sea urchin specimens, the demand for minimally processed roe has grown considerably. This product has made frequent consumption in restaurants possible, and frauds are becoming widespread, with the partial replacement of sea urchin roe by surrogates that are similar in colour. One of the main factors determining the quality of the roe is its colour, and small differences in colour scale cannot be easily discerned by consumers. In this study we applied a rapid colorimetric method to reveal the fraudulent partial substitution of semi-solid sea urchin roe with liquid egg yolk. Objective assessment of lightness (L*), redness (a*), yellowness (b*), hue (h*), and chroma (C*) was carried out with a digital spectrophotometer using the CIE L*a*b* colour measurement system. The colorimetric method highlighted statistically significant differences between sea urchin roe and liquid egg yolk that could be easily discerned quantitatively.
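Quantitative discrimination in CIE L*a*b* space reduces to simple arithmetic: chroma and hue follow from a* and b*, and the overall CIE76 colour difference ΔE between two measurements is a Euclidean distance. The mean readings below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def delta_e(lab1, lab2):
    # CIE76 colour difference between two L*a*b* readings
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def chroma_hue(a, b):
    # chroma C* and hue angle h* (degrees) from a*, b*
    return float(np.hypot(a, b)), float(np.degrees(np.arctan2(b, a)) % 360.0)

# hypothetical mean readings (L*, a*, b*): sea urchin roe vs liquid egg yolk
roe = (55.0, 18.0, 35.0)
yolk = (70.0, 5.0, 60.0)
```

Colour differences above roughly ΔE ≈ 2-3 are generally taken as perceptible, so instrumental readings can flag substitutions that escape visual inspection.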
Nearshore Measurements From a Small UAV.
NASA Astrophysics Data System (ADS)
Holman, R. A.; Brodie, K. L.; Spore, N.
2016-02-01
Traditional measurements of nearshore hydrodynamics and evolving bathymetry are expensive and dangerous and must be frequently repeated to track the rapid changes of typical ocean beaches. However, extensive research into remote sensing methods using cameras or radars mounted on fixed towers has resulted in increasingly mature algorithms for estimating bathymetry, currents and wave characteristics. This naturally raises questions about how easily and effectively these algorithms can be applied to optical data from low-cost, easily-available UAV platforms. This paper will address the characteristics and quality of data taken from a small, low-cost UAV, the DJI Phantom. In particular, we will study the stability of imagery from a vehicle `parked' at 300 feet altitude, methods to stabilize remaining wander, and the quality of nearshore bathymetry estimates from the resulting image time series, computed using the cBathy algorithm. Estimates will be compared to ground truth surveys collected at the Field Research Facility at Duck, NC.
Positron lifetime beam for defect studies in thin epitaxial semiconductor structures
NASA Astrophysics Data System (ADS)
Laakso, A.; Saarinen, K.; Hautojärvi, P.
2001-12-01
Positron annihilation spectroscopies identify vacancy-type defects directly by measuring the positron lifetime and the Doppler broadening of the annihilation radiation, providing information about the open volume, the concentration, and the atoms surrounding the defect. Both techniques are easily applied to bulk samples, but only Doppler broadening spectroscopy can be employed on thin epitaxial samples by utilizing low-energy positron beams. Here we describe a positron lifetime beam which will provide us with a method to measure the positron lifetime in thin semiconductor layers.
Naz, Saba; Sherazi, Sayed Tufail Hussain; Talpur, Farah N; Mahesar, Sarfaraz A; Kara, Huseyin
2012-01-01
A simple, rapid, economical, and environmentally friendly analytical method was developed for the quantitative assessment of free fatty acids (FFAs) present in deodorizer distillates and crude oils by single-bounce attenuated total reflectance (ATR) FTIR spectroscopy. Partial least squares was applied for the calibration model, based on the carbonyl (C=O) peak region from 1726 to 1664 cm(-1) associated with the FFAs. The proposed method totally avoids the use of organic solvents and costly standards and could be applied easily in the oil processing industry. The accuracy of the method was checked by comparison to the conventional standard American Oil Chemists' Society (AOCS) titrimetric procedure, which provided good correlation (R = 0.99980) with an SD of +/- 0.05%. The proposed method could therefore be used as an alternative to the AOCS titrimetric method for the quantitative determination of FFAs, especially in deodorizer distillates.
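The calibration step can be sketched with a minimal PLS1 (NIPALS) fit on synthetic "spectra" built from a carbonyl-like analyte band plus an interfering band. Band shapes, concentrations, and the number of components are invented for illustration, not the paper's calibration.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal PLS1 (NIPALS): X (samples x wavenumbers), y concentrations.
    Returns the regression vector b plus the centering means."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)                    # weight vector
        t = Xc @ w                                   # scores
        p = Xc.T @ t / (t @ t)                       # X loadings
        qk = yc @ t / (t @ t)                        # y loading
        Xc, yc = Xc - np.outer(t, p), yc - qk * t    # deflate
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, x_mean, y_mean

# synthetic two-component spectra: analyte band + interfering band
wn = np.linspace(0.0, 1.0, 50)
band = np.exp(-((wn - 0.5) ** 2) / 0.01)       # analyte (carbonyl-like)
interf = np.exp(-((wn - 0.2) ** 2) / 0.02)     # interferent
rng = np.random.default_rng(1)
c, d = rng.uniform(0, 1, 12), rng.uniform(0, 1, 12)
X = np.outer(c, band) + np.outer(d, interf)
b, xm, ym = pls1_fit(X, c)
pred = (X - xm) @ b + ym
```

With two latent components, the model recovers the analyte concentration exactly on this noiseless two-factor data, which is the property that lets PLS quantify FFAs despite spectral overlap.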
The stress analysis method for three-dimensional composite materials
NASA Astrophysics Data System (ADS)
Nagai, Kanehiro; Yokoyama, Atsushi; Maekawa, Zen'ichiro; Hamada, Hiroyuki
1994-05-01
This study proposes a stress analysis method for three-dimensionally fiber-reinforced composite materials. In this method, the rule of mixtures for composites is successfully applied in 3-D space, where the material properties vary three-dimensionally. The fundamental formulas for Young's modulus, shear modulus, and Poisson's ratio are derived. We also discuss a strength estimation and an optimum material design technique for 3-D composite materials. The analysis is executed for a triaxial orthogonally woven fabric, and the results are compared to experimental data in order to verify the accuracy of the method. The methodology can be understood with basic material mechanics and elementary mathematics, so a computer program implementing the theory can be written without difficulty. Furthermore, the method can be applied to various types of 3-D composites because of its general-purpose character.
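The one-dimensional building block that the paper extends to 3-D is the classical rule of mixtures; a sketch with illustrative carbon-fibre/epoxy numbers (the moduli and fibre fraction are assumed, not the paper's data):

```python
def rule_of_mixtures(Vf, Ef, Em):
    """Voigt (longitudinal) and Reuss (transverse) estimates for a
    unidirectional ply with fibre volume fraction Vf."""
    E_long = Vf * Ef + (1.0 - Vf) * Em            # iso-strain, along fibres
    E_trans = 1.0 / (Vf / Ef + (1.0 - Vf) / Em)   # iso-stress, across fibres
    return E_long, E_trans

# illustrative carbon-fibre/epoxy values, moduli in GPa
E_long, E_trans = rule_of_mixtures(Vf=0.6, Ef=230.0, Em=3.5)
```

In the paper's 3-D extension, such mixture estimates are applied direction by direction, so the effective moduli vary with position through the woven architecture.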
Detecting novel genes with sparse arrays
Haiminen, Niina; Smit, Bart; Rautio, Jari; Vitikainen, Marika; Wiebe, Marilyn; Martinez, Diego; Chee, Christine; Kunkel, Joe; Sanchez, Charles; Nelson, Mary Anne; Pakula, Tiina; Saloheimo, Markku; Penttilä, Merja; Kivioja, Teemu
2014-01-01
Species-specific genes play an important role in defining the phenotype of an organism. However, current gene prediction methods can only efficiently find genes that share features such as sequence similarity or general sequence characteristics with previously known genes. Novel sequencing methods and tiling arrays can be used to find genes without prior information and they have demonstrated that novel genes can still be found from extensively studied model organisms. Unfortunately, these methods are expensive and thus are not easily applicable, e.g., to finding genes that are expressed only in very specific conditions. We demonstrate a method for finding novel genes with sparse arrays, applying it on the 33.9 Mb genome of the filamentous fungus Trichoderma reesei. Our computational method does not require normalisations between arrays and it takes into account the multiple-testing problem typical for analysis of microarray data. In contrast to tiling arrays, that use overlapping probes, only one 25mer microarray oligonucleotide probe was used for every 100 b. Thus, only relatively little space on a microarray slide was required to cover the intergenic regions of a genome. The analysis was done as a by-product of a conventional microarray experiment with no additional costs. We found at least 23 good candidates for novel transcripts that could code for proteins and all of which were expressed at high levels. Candidate genes were found to neighbour ire1 and cre1 and many other regulatory genes. Our simple, low-cost method can easily be applied to finding novel species-specific genes without prior knowledge of their sequence properties. PMID:20691772
Bias-Free Chemically Diverse Test Sets from Machine Learning.
Swann, Ellen T; Fernandez, Michael; Coote, Michelle L; Barnard, Amanda S
2017-08-14
Current benchmarking methods in quantum chemistry rely on databases that are built using a chemist's intuition, and it is not fully understood how diverse or representative these databases truly are. Multivariate statistical techniques like archetypal analysis and K-means clustering have previously been used to summarize large sets of nanoparticles; molecules, however, are more diverse and not as easily characterized by descriptors. In this work, we compare three sets of descriptors based on the one-, two-, and three-dimensional structure of a molecule. Using data from the NIST Computational Chemistry Comparison and Benchmark Database and machine learning techniques, we demonstrate the functional relationship between these structural descriptors and the electronic energy of molecules. Archetypes and prototypes found with topological or Coulomb matrix descriptors can be used to identify smaller, statistically significant test sets that better capture the diversity of chemical space. We apply this same method to find a diverse subset of organic molecules to demonstrate how the methods can easily be reapplied to individual research projects. Finally, we use our bias-free test sets to assess the performance of density functional theory and quantum Monte Carlo methods.
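The prototype-selection idea can be sketched with plain k-means on a toy descriptor matrix: cluster the descriptor space, then keep the sample nearest each centroid as a prototype. The deterministic initialization and the two-blob toy data are illustrative assumptions; real inputs would be Coulomb-matrix or topological descriptors.

```python
import numpy as np

def kmeans_prototypes(X, k, iters=50):
    """Plain k-means with deterministic init (first k rows); returns the
    index of the sample nearest each centroid -- a k-member prototype set."""
    centers = X[:k].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)                      # assign to nearest centroid
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.unique(d.argmin(0))                 # one representative per cluster

# toy "descriptor space": two well-separated families of molecules
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
prototypes = kmeans_prototypes(X, 2)
```

The returned indices form a small test set with one member per region of descriptor space, which is the sense in which prototypes "capture the diversity" of the full database.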
Choi, Dongchul; Hong, Sung-Jei; Son, Yongkeun
2014-11-27
In this study, indium tin oxide (ITO) nanoparticles were recovered from thin-film-transistor liquid-crystal display (TFT-LCD) panel scraps by means of a simple lift-off method: dissolving the color filter (CF) layer located between the ITO layer and the glass substrate. In this way the ITO layer was easily lifted off the glass substrate of the panel scrap without crushing the panel, and over 90% of the ITO on the TFT-LCD panel was recovered. After separation, the ITO was obtained in particle form and its characteristics were investigated. The recovered product appeared as aggregates of particles less than 100 nm in size, with an In/Sn weight ratio very close to 91/9. XRD analysis showed that the ITO nanoparticles retained a well-crystallized structure with a (222) preferred orientation even after recovery. The method described in this paper could easily be applied to industrial recovery of large LCD scraps from TVs without crushing the glass substrate.
PCR-mediated site-directed mutagenesis.
Carey, Michael F; Peterson, Craig L; Smale, Stephen T
2013-08-01
Unlike traditional site-directed mutagenesis, this protocol requires only a single PCR step using full plasmid amplification to generate point mutants. The method can introduce small mutations into promoter sites and is even better suited for introducing single or double mutations into proteins. It is elegant in its simplicity and can be applied quite easily in any laboratory using standard protein expression vectors and commercially available reagents.
Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm
NASA Astrophysics Data System (ADS)
Karaca, Yeliz; Cattani, Carlo
Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on Brownian motion Hölder regularity functions (polynomial, periodic (sine), and exponential) for 2D images were applied to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. We thus obtain a cluster analysis by identifying pixels of distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Laaha, Gregor; Koffler, Daniel; Blöschl, Günter; Pebesma, Edzer; Parajka, Juraj; Viglione, Alberto
2013-04-01
Geostatistical methods have been applied only to a limited extent for spatial interpolation in applications where the observations have an irregular support, such as runoff characteristics or population health data. Several studies have shown the potential of such methods (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006, Goovaerts 2008), but these developments have so far not led to easily accessible, versatile, easy to apply and open source software. Based on the top-kriging approach suggested by Skøien et al. (2006), we will here present the package rtop, which has been implemented in the statistical environment R (R Core Team 2012). Taking advantage of the existing methods in R for analysis of spatial objects (Bivand et al. 2008), and the extensive possibilities for visualizing the results, rtop makes it easy to apply geostatistical interpolation methods when observations have a non-point spatial support. Although the package is flexible regarding data input, the main application so far has been for interpolation along river networks. We will present some examples showing how the package can easily be used for such interpolation. The model will soon be uploaded to CRAN, but is in the meantime also available from R-forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied spatial data analysis with r: Springer. Goovaerts, P., 2008. Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences, 40 (1), 101-128. Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Core Team, 2012. R: A language and environment for statistical computing. 
Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Xiao, Mingqing; Liang, Yajun; Tang, Xilang; Li, Chao
2018-01-01
The solenoid valve is a basic automation component with wide application. Analyzing and predicting its degradation and failure mechanisms is important for improving solenoid valve reliability and prolonging service life. In this paper, a three-dimensional finite element model of a solenoid valve is established in ANSYS Workbench, and a sequential coupling method is put forward to calculate the temperature field and mechanical stress field of the valve. The simulation results show that the sequential coupling method calculates the temperature and stress distributions accurately, which has been verified through an accelerated life test. A Kalman filtering algorithm is introduced into the data processing, effectively reducing measurement deviation and recovering more accurate data. Based on different driving currents, a failure mechanism that readily degrades the coils is identified, and an optimized design of the electro-insulating rubbers is proposed. The high temperature generated by the driving current, and the thermal stress resulting from thermal expansion, readily degrade the coil wires, lowering the insulation resistance of the coils and eventually causing the solenoid valve to fail. The finite element analysis method can be applied to fault diagnosis and prognostics of various solenoid valves and can improve the reliability of solenoid valve health management.
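The role of Kalman filtering in the data processing can be sketched with a scalar constant-state filter applied to a hypothetical noisy measurement; the resistance value, noise levels, and variances are invented for illustration, not the paper's test data.

```python
import numpy as np

def kalman_filter(z, q=1e-4, r=1.0):
    """Scalar constant-state Kalman filter: z noisy measurements,
    q process-noise variance, r measurement-noise variance."""
    x, p = z[0], 1.0               # initial state estimate and variance
    est = []
    for zk in z:
        p = p + q                  # predict: variance grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (zk - x)       # update with the innovation
        p = (1.0 - k) * p          # posterior variance shrinks
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
true_r = 120.0                                 # hypothetical coil resistance, ohms
meas = true_r + rng.normal(0.0, 1.0, 300)      # noisy measurements
est = kalman_filter(meas)
```

The filtered estimate tracks the underlying value with far smaller variance than the raw readings, which is the deviation-reduction effect the data processing relies on.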
Tracking perturbations in Boolean networks with spectral methods
NASA Astrophysics Data System (ADS)
Kesseli, Juha; Rämö, Pauli; Yli-Harja, Olli
2005-08-01
In this paper we present a method for predicting the spread of perturbations in Boolean networks; the method is applicable to networks with no regular topology. The prediction can be performed easily using a result, presented here, that enables efficient computation of the required iterative formulas and is based on the abstract Fourier transform of the functions in the network. The method is applied to show the spread of perturbations in networks containing a distribution of functions found in biological data. These advances in the study of perturbation spread can be applied directly to quantifying chaos in Boolean networks: Derrida plots over an arbitrary number of time steps can be computed, and distributions of functions can thus be compared with each other with respect to the amount of order they create in random networks.
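For the special case of K-input random functions with bias p, the iterative formula has a well-known annealed-approximation closed form: a function's output flips with probability 2p(1−p) whenever at least one of its inputs is perturbed. This sketch shows that special case only; the paper's Fourier-transform result handles general function distributions.

```python
import numpy as np

def derrida_map(rho, K, p):
    # one annealed step: rho is the fraction of perturbed nodes; a K-input
    # random function with bias p flips with probability 2p(1-p) when any
    # of its inputs is perturbed
    return 2.0 * p * (1.0 - p) * (1.0 - (1.0 - rho) ** K)

def derrida_plot(K, p, steps=1):
    # Derrida plot over `steps` time steps (multi-step plots as in the paper)
    rhos = np.linspace(0.0, 1.0, 101)
    out = rhos.copy()
    for _ in range(steps):
        out = derrida_map(out, K, p)
    return rhos, out
```

The slope of the map at the origin, 2p(1−p)K, separates ordered (slope < 1) from chaotic (slope > 1) dynamics; K = 2, p = 0.5 is the critical case.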
Estimating the signal-to-noise ratio of AVIRIS data
NASA Technical Reports Server (NTRS)
Curran, Paul J.; Dungan, Jennifer L.
1988-01-01
To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate while dark current and image methods deflate the SNR. A new procedure is proposed called the geostatistical method. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
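The core of the geostatistical method can be sketched in one dimension: compute the empirical semivariogram of a transect of pixel values and extrapolate the first lags back to lag zero; the intercept (the "nugget") estimates the variance of spatially uncorrelated noise. The sine-wave "signal", noise level, and two-lag extrapolation are illustrative assumptions, not the procedure's exact details.

```python
import numpy as np

def semivariogram(z, max_lag):
    # empirical semivariogram: gamma(h) = 0.5 * mean((z[i+h] - z[i])**2)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def estimate_noise(z, max_lag=10):
    # extrapolate the first two lags linearly back to h = 0; the intercept
    # ("nugget") estimates the variance of spatially uncorrelated noise
    g = semivariogram(z, max_lag)
    return max(g[0] - (g[1] - g[0]), 0.0)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 20.0, 500)
signal = np.sin(x)                             # smooth land-cover "signal"
noisy = signal + rng.normal(0.0, 0.2, x.size)  # add white sensor noise
snr = signal.std() / np.sqrt(estimate_noise(noisy))
```

Because white noise is uncorrelated between neighbouring pixels while the land-cover signal is smooth, the nugget isolates the noise variance and the SNR follows directly.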
Elemental Analysis in Biological Matrices Using ICP-MS.
Hansen, Matthew N; Clogston, Jeffrey D
2018-01-01
The increasing exploration of metallic nanoparticles for use as cancer therapeutic agents necessitates a sensitive technique to track the clearance and distribution of the material once introduced into a living system. Inductively coupled plasma mass spectrometry (ICP-MS) provides a sensitive and selective tool for tracking the distribution of metal components from these nanotherapeutics. This chapter presents a standardized method for processing biological matrices, ensuring complete homogenization of tissues, and outlines the preparation of appropriate standards and controls. The method described herein utilized gold nanoparticle-treated samples; however, the method can easily be applied to the analysis of other metals.
Spectrophotometric method for the determination of paraquat in water, grain and plant materials.
Shivhare, P; Gupta, V K
1991-04-01
A sensitive spectrophotometric method for the determination of paraquat using ascorbic acid (an easily available reducing agent) is described. Paraquat is reduced with ascorbic acid in alkaline solution to give a blue radical ion with an absorbance maximum at 600 nm. Beer's law is obeyed in the range 12-96 micrograms of paraquat in 10 ml of the final solution (1.2-9.6 ppm). The important analytical parameters and the optimum reaction conditions were evaluated. The method was applied successfully to the determination of paraquat in water, grain and plant materials.
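Since Beer's law is obeyed over the working range, the determination reduces to an ordinary linear calibration; a sketch with hypothetical standards spanning the stated 1.2-9.6 ppm range (the absorbance values are invented for illustration):

```python
import numpy as np

def calibration_line(conc, absorbance):
    # least-squares Beer's-law line A = m*c + b from standard solutions
    m, b = np.polyfit(conc, absorbance, 1)
    return m, b

def predict_conc(A, m, b):
    # invert the calibration line to read concentration from absorbance
    return (A - b) / m

# hypothetical standards spanning the 1.2-9.6 ppm working range at 600 nm
c_std = np.array([1.2, 2.4, 4.8, 7.2, 9.6])
a_std = np.array([0.062, 0.121, 0.239, 0.362, 0.480])
m, b = calibration_line(c_std, a_std)
```

An unknown sample's absorbance at 600 nm is then converted to a paraquat concentration through `predict_conc`, exactly as the blue radical ion's absorbance is used in the paper.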
Calculation method of spin accumulations and spin signals in nanostructures using spin resistors
NASA Astrophysics Data System (ADS)
Torres, Williams Savero; Marty, Alain; Laczkowski, Piotr; Jamet, Matthieu; Vila, Laurent; Attané, Jean-Philippe
2018-02-01
Determination of spin accumulations and spin currents is essential for a deep understanding of spin transport in nanostructures and further optimization of spintronic devices. So far, they are easily obtained using different approaches in nanostructures composed of a few elements; however, their calculation becomes complicated as the number of elements increases. Here, we propose a 1-D spin resistor approach to calculate analytically spin accumulations, spin currents and magneto-resistances in heterostructures. Our method, particularly applied to multi-terminal metallic nanostructures, provides a fast and systematic means to determine such spin properties in structures where conventional methods remain complex.
Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam
2011-05-01
The predictive accuracies of different chemometric methods were compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formulation containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of derivative spectra to resolve the overlapping spectrum of chlorpheniramine maleate was evaluated when multivariate methods are adopted for the analysis of two-component mixtures without any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of the real samples when the calibration models from derivative spectra were used. It should also be mentioned that the proposed method is simple and rapid, requires no preliminary separation steps, and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution functions (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
High-speed autofocusing of a cell using diffraction pattern
NASA Astrophysics Data System (ADS)
Oku, Hiromasa; Ishikawa, Masatoshi; Theodorus; Hashimoto, Koichi
2006-05-01
This paper proposes a new autofocusing method for observing cells under a transmission illumination. The focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. To demonstrate the method, it was applied to continuous focus tracking of a swimming paramecium, in combination with two-dimensional position tracking. Three-dimensional tracking of the paramecium for 70 s was successfully demonstrated.
NASA Astrophysics Data System (ADS)
Schulz, Hartwig; Quilitzsch, Rolf; Krüger, Hans
2003-12-01
The essential oils obtained from various chemotypes of thyme, oregano and chamomile species were studied by ATR/FT-IR as well as NIR spectroscopy. Application of multivariate statistics (PCA, PLS) in conjunction with analytical reference data leads to very good IR and NIR calibration results. For the main essential oil components (e.g. carvacrol, thymol, γ-terpinene, α-bisabolol and β-farnesene), standard errors are in the range of the applied GC reference method. In most cases the multiple coefficients of determination (R2) are >0.97. Using the IR fingerprint region (900-1400 cm-1), a qualitative discrimination of the individual chemotypes is possible by visual judgement alone, without applying any chemometric algorithms. The described rapid and non-destructive methods can be applied in industry to easily control purification, blending and redistillation processes of the mentioned essential oils.
Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc
2014-01-01
The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...
Kalita, Dhruba J; Rao, Akshay; Rajvanshi, Ishir; Gupta, Ashish K
2011-06-14
We have applied parametric equations of motion (PEM) to study photodissociation dynamics of H(2)(+). The resonances are extracted using smooth exterior scaling method. This is the first application of PEM to non-Hermitian Hamiltonian that includes resonances and the continuum. Here, we have studied how the different resonance states behave with respect to the change in field amplitude. The advantage of this method is that one can easily trace the different states that are changing as the field parameter changes.
Theoretical and experimental aspects of chaos control by time-delayed feedback.
Just, Wolfram; Benner, Hartmut; Reibold, Ekkehard
2003-03-01
We review recent developments in the control of chaos by time-delayed feedback methods. While such methods are easily applied even in quite complex experimental contexts, the theoretical analysis yields infinite-dimensional differential-difference systems which are hard to tackle. The essential ideas of a general theoretical approach are sketched and the results are compared to electronic circuits and to high-power ferromagnetic resonance experiments. Our results show that the control performance can be understood on the basis of experimentally accessible quantities, without resort to any model of the internal dynamics.
van der Kooij, Dick; Martijn, Bram; Schaap, Peter G; Hoogenboezem, Wim; Veenendaal, Harm R; van der Wielen, Paul W J J
2015-12-15
Assessment of drinking-water biostability is generally based on measuring bacterial growth in short-term batch tests. However, microbial growth in the distribution system is affected by multiple interactions between water, biofilms and sediments. Therefore a diversity of test methods was applied to characterize the biostability of drinking water distributed without disinfectant residual at a surface-water supply. This drinking water complied with the standards for the heterotrophic plate count and coliforms, but aeromonads periodically exceeded the regulatory limit (1000 CFU 100 mL(-1)). Compounds promoting growth of the biopolymer-utilizing Flavobacterium johnsoniae strain A3 accounted for c. 21% of the easily assimilable organic carbon (AOC) concentration (17 ± 2 μg C L(-1)) determined by growth of pure cultures in the water after granular activated-carbon filtration (GACF). Growth of the indigenous bacteria measured as adenosine tri-phosphate in water samples incubated at 25 °C confirmed the low AOC in the GACF but revealed the presence of compounds promoting growth after more than one week of incubation. Furthermore, the concentration of particulate organic carbon in the GACF (83 ± 42 μg C L(-1), including 65% carbohydrates) exceeded the AOC concentration. The increased biomass accumulation rate in the continuous biofouling monitor (CBM) at the distribution system reservoir demonstrated the presence of easily biodegradable by-products related to ClO2 dosage to the GACF and in the CBM at 42 km from the treatment plant an iron-associated biomass accumulation was observed. The various methods applied thus distinguished between easily assimilable compounds, biopolymers, slowly biodegradable compounds and biomass-accumulation potential, providing an improved assessment of the biostability of the water. 
Regrowth of aeromonads may be related to biomass-turnover processes in the distribution system, but establishment of quantitative relationships is needed for confirmation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
Improving the local wavenumber method by automatic DEXP transformation
NASA Astrophysics Data System (ADS)
Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni
2014-12-01
In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to high-order local wavenumber functions, as it overcomes their known instability caused by the use of high-order derivatives. The DEXP transformation has a notable feature when applied to the local wavenumber function: the scaling law is independent of the structural index. Thus, unlike the DEXP transformation applied directly to potential fields, the Local Wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different degrees of homogeneity can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy and the results agree well with known information about the causative sources.
Determination of the solubility of inorganic salts by headspace gas chromatography.
Chai, X S; Zhu, J Y
2003-05-09
This work reports a novel method for the determination of salt solubility using headspace gas chromatography. A very small amount of a volatile compound (such as methanol) is added to the studied solution. Owing to molecular interactions in the solution, the vapor-liquid equilibrium (VLE) partitioning coefficient of the volatile species changes with the salt content of the solution. The concentration of the volatile species in the vapor phase is therefore proportional to the salt concentration in the liquid phase and can be easily determined by headspace gas chromatography. As salt is added, the vapor-phase concentration of the volatile compound continues to increase until the solution becomes saturated, at which point a breakpoint appears on the VLE curve. The solubility of the salt can be determined by identifying this breakpoint. The measured solubilities of sodium carbonate and sodium sulfate in aqueous solutions were found to be slightly higher (about 6-7%) than the values reported in the literature. The present method can be easily applied to industrial solution systems.
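The breakpoint identification can be sketched as a two-segment linear fit: the split that minimizes the combined residuals of two independent line fits marks the saturation point. A toy Python illustration (the curve shape and the saturation concentration of 3.0 are assumptions, not measured data):

```python
import numpy as np

# Synthetic VLE-style response (assumed shape): the vapor-phase signal rises
# linearly with salt concentration, then flattens once the solution saturates
# (here at c = 3.0, arbitrary units).
rng = np.random.default_rng(1)
c = np.linspace(0.0, 5.0, 51)
y = np.where(c < 3.0, 2.0 * c, 6.0) + rng.normal(0.0, 0.02, c.size)

def find_breakpoint(x, y):
    """Two-segment fit: return the split minimizing the total SSE of two
    independent straight-line fits (the breakpoint of the text)."""
    best_x, best_sse = None, np.inf
    for k in range(3, len(x) - 3):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coeffs = np.polyfit(xs, ys, 1)
            sse += float(np.sum((ys - np.polyval(coeffs, xs)) ** 2))
        if sse < best_sse:
            best_x, best_sse = x[k], sse
    return best_x

solubility = find_breakpoint(c, y)   # close to the true saturation at 3.0
```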
Newton-Euler Dynamic Equations of Motion for a Multi-body Spacecraft
NASA Technical Reports Server (NTRS)
Stoneking, Eric
2007-01-01
The Magnetospheric MultiScale (MMS) mission employs a formation of spinning spacecraft with several flexible appendages and thruster-based control. To understand the complex dynamic interaction of thruster actuation, appendage motion, and spin dynamics, each spacecraft is modeled as a tree of rigid bodies connected by spherical or gimballed joints. The method presented facilitates assembling by inspection the exact, nonlinear dynamic equations of motion for a multibody spacecraft suitable for solution by numerical integration. The building block equations are derived by applying Newton's and Euler's equations of motion to an "element" consisting of two bodies and one joint (spherical and gimballed joints are considered separately). Patterns in the "mass" and "force" matrices guide assembly by inspection of a general N-body tree-topology system. Straightforward linear algebra operations are employed to eliminate extraneous constraint equations, resulting in a minimum-dimension system of equations to solve. This method thus combines a straightforward, easily-extendable, easily-mechanized formulation with an efficient computer implementation.
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easy identification of a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
Method for applying photographic resists to otherwise incompatible substrates
NASA Technical Reports Server (NTRS)
Fuhr, W. (Inventor)
1981-01-01
A method for applying photographic resists to otherwise incompatible substrates, such as a baking enamel paint surface, is described wherein the uncured enamel paint surface is coated with a non-curing lacquer which is, in turn, coated with a partially cured lacquer. The non-curing lacquer adheres to the enamel and a photo resist material satisfactorily adheres to the partially cured lacquer. Once normal photo etching techniques are employed the lacquer coats can be easily removed from the enamel leaving the photo etched image. In the case of edge lighted instrument panels, a coat of uncured enamel is placed over the cured enamel followed by the lacquer coats and the photo resists which is exposed and developed. Once the etched uncured enamel is cured, the lacquer coats are removed leaving an etched panel.
Dispersion interference in the pulsed-wire measurement method
NASA Astrophysics Data System (ADS)
Shahal, O.; Elkonin, B. V.; Sokolowski, J. S.
1990-10-01
The magnetic profile of the wiggler to be used in the planned Weizmann Institute FEL has been measured using the pulsed-wire method. The main transverse deflection pattern caused by an electrical current pulse in a wire placed along the wiggler was sometimes accompanied by minor faster and slower parasitic components. These components interfered with the main profile, resulting in distorted mapping of the wiggler magnetic field. Because their periodic structure is very close to that of the main pattern, they could not be easily resolved by applying a numerical Fourier transform. A strong correlation between the wire tension and the amplitude of the parasitic patterns was found. Significant damping of these oscillations was achieved by applying sufficiently high tension to the wire (close to the yield point), allowing their contribution to the measurement error to be disregarded.
Kinetics analysis and quantitative calculations for the successive radioactive decay process
NASA Astrophysics Data System (ADS)
Zhou, Zhiping; Yan, Deyue; Zhao, Yuliang; Chai, Zhifang
2015-01-01
The general radioactive decay kinetics equations with branching were developed and the analytical solutions were derived by Laplace transform method. The time dependence of all the nuclide concentrations can be easily obtained by applying the equations to any known radioactive decay series. Taking the example of thorium radioactive decay series, the concentration evolution over time of various nuclide members in the family has been given by the quantitative numerical calculations with a computer. The method can be applied to the quantitative prediction and analysis for the daughter nuclides in the successive decay with branching of the complicated radioactive processes, such as the natural radioactive decay series, nuclear reactor, nuclear waste disposal, nuclear spallation, synthesis and identification of superheavy nuclides, radioactive ion beam physics and chemistry, etc.
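For the simplest chain A → B → C (with C stable), the Laplace-transform solution reduces to the familiar Bateman expressions, which can be evaluated directly for any time grid. A small sketch (the half-lives below are arbitrary examples, not values from the thorium series):

```python
import numpy as np

def bateman_two_step(n0, lam_a, lam_b, t):
    """Analytical solution of A -> B -> C (C stable) obtained from the
    Laplace-transform treatment of the decay kinetics equations."""
    na = n0 * np.exp(-lam_a * t)
    nb = n0 * lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
    nc = n0 - na - nb                      # conservation of nuclei
    return na, nb, nc

# Assumed half-lives of 10 and 2 time units for A and B, respectively.
lam_a, lam_b = np.log(2.0) / 10.0, np.log(2.0) / 2.0
t = np.linspace(0.0, 50.0, 6)
na, nb, nc = bateman_two_step(1000.0, lam_a, lam_b, t)
```

Longer chains with branching follow the same pattern, with one exponential term per ancestor nuclide.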
A forward model-based validation of cardiovascular system identification
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.
2001-01-01
We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.
Applying Standard Interfaces to a Process-Control Language
NASA Technical Reports Server (NTRS)
Berthold, Richard T.
2005-01-01
A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at low cost has led many industrial nations to treat tolerances as a key factor in reducing cost while remaining competitive. At present, tolerance allocation is applied mostly to mechanical systems. For tolerance studies in the electronic domain, the Monte Carlo method is commonly used, but it is time-consuming. This paper reviews several methods (worst-case, statistical, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system, and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte Carlo method supplying the basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can easily be extended to a complex system of n components.
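The Monte-Carlo basis-data step can be illustrated independently of the neural network: component values are drawn within their tolerances, the circuit response is evaluated, and the fraction of samples meeting specification is counted. A minimal sketch for a hypothetical inverting amplifier with gain magnitude Rf/Rin (the component values, tolerances and gain specification are assumptions):

```python
import numpy as np

# Monte Carlo tolerance analysis of a hypothetical inverting amplifier whose
# gain magnitude is Rf/Rin. Resistors are drawn uniformly within tolerance.
rng = np.random.default_rng(42)
n = 100_000
tol = 0.05                                 # assumed 5% resistors
rf = 100e3 * (1.0 + rng.uniform(-tol, tol, n))
rin = 10e3 * (1.0 + rng.uniform(-tol, tol, n))
gain = rf / rin                            # nominal gain of 10

# Fraction of manufactured units meeting an assumed spec of 10 +/- 0.8.
yield_fraction = np.mean(np.abs(gain - 10.0) <= 0.8)
```

In the paper's approach, such (tolerance, response) samples serve as training data for the network that predicts part tolerances at minimum cost.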
Method and system for automated on-chip material and structural certification of MEMS devices
Sinclair, Michael B.; DeBoer, Maarten P.; Smith, Norman F.; Jensen, Brian D.; Miller, Samuel L.
2003-05-20
A new approach toward MEMS quality control and materials characterization is provided by a combined test structure measurement and mechanical response modeling approach. Simple test structures are cofabricated with the MEMS devices being produced. These test structures are designed to isolate certain types of physical response, so that measurement of their behavior under applied stress can be easily interpreted as quality control and material properties information.
A note on an attempt at more efficient Poisson series evaluation. [for lunar libration
NASA Technical Reports Server (NTRS)
Shelus, P. J.; Jefferys, W. H., III
1975-01-01
A substantial reduction has been achieved in the time necessary to compute lunar libration series. The method involves eliminating many of the trigonometric function calls by a suitable transformation and applying a short SNOBOL processor to the FORTRAN coding of the transformed series, which obviates many of the multiplication operations during the course of series evaluation. It is possible to accomplish similar results quite easily with other Poisson series.
A simple method of obtaining concentration depth-profiles from X-ray diffraction
NASA Technical Reports Server (NTRS)
Wiedemann, K. E.; Unnam, J.
1984-01-01
The construction of composition profiles from X-ray intensity bands was investigated. The intensity band-to-composition profile transformation utilizes a solution which can be easily evaluated. The technique can be applied to thin films and thick specimens for which the variation of lattice parameters, linear absorption coefficient, and reflectivity with composition is known. A deconvolution scheme with corrections for instrumental broadening and the Kα doublet is discussed.
Dose measurement based on threshold shift in MOSFET arrays in commercial SRAMS
NASA Technical Reports Server (NTRS)
Scheick, L. Z.; Swift, G.
2002-01-01
A new method using an array of MOS transistors is described for measuring the dose absorbed from ionizing radiation. Using the array of MOSFETs in an SRAM, a direct measurement is made of the number of MOS cells that change state as a function of the bias applied to the SRAM. Since the input and output of an SRAM used as a dosimeter are completely digital, the measurement of dose is easily accessible to a remote processing system.
A four-dimensional model with the fermionic determinant exactly evaluated
NASA Astrophysics Data System (ADS)
Mignaco, J. A.; Rego Monteiro, M. A.
1986-07-01
A method is presented to compute the fermion determinant of some class of field theories. By this method the following results of the fermion determinant in two dimensions are easily recovered: (i) Schwinger model without reference to a particular gauge. (ii) QCD in the light-cone gauge. (iii) Gauge invariant result of QCD. The method is finally applied to give an analytical solution of the fermion determinant of a four-dimensional, non-abelian, Dirac-like theory with massless fermions interacting with an external vector field through a pseudo-vectorial coupling. Fellow of the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Brazil.
Using recurrence plot analysis for software execution interpretation and fault detection
NASA Astrophysics Data System (ADS)
Mosdorf, M.
2015-09-01
This paper presents a method targeted at software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are further processed with the PCA (principal component analysis) method, which reduces the number of coefficients used for software execution classification. This method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
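The recurrence idea can be illustrated with the simplest scalar summary of a recurrence plot, the recurrence rate: a looping trace revisits the same instructions and recurs heavily, while a non-repeating trace does not. A toy sketch (the numeric "opcode" traces are invented for illustration):

```python
import numpy as np

def recurrence_rate(trace, eps):
    """Fraction of index pairs (i, j) whose values lie within eps of each
    other -- the simplest scalar summary of a recurrence plot."""
    t = np.asarray(trace, dtype=float)
    d = np.abs(t[:, None] - t[None, :])    # pairwise distance matrix
    return float(np.mean(d <= eps))

# Two toy "instruction traces": a tight loop recurs heavily,
# a non-repeating sequence recurs only on the diagonal.
loop_trace = np.tile([1, 2, 3, 4], 25)
linear_trace = np.arange(100)

rr_loop = recurrence_rate(loop_trace, eps=0.5)       # 0.25 for this trace
rr_linear = recurrence_rate(linear_trace, eps=0.5)   # 0.01 (diagonal only)
```

The paper goes further, feeding features of the full recurrence matrix into PCA; this sketch only shows why repetitive executions stand out.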
Calibration Designs for Non-Monolithic Wind Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Johnson, Thomas H.; Parker, Peter A.; Landman, Drew
2010-01-01
This research paper investigates current experimental designs and regression models for calibrating internal wind tunnel force balances of non-monolithic design. Such calibration methods are necessary for this class of balance because it has an electrical response that is dependent upon the sign of the applied forces and moments. This dependency gives rise to discontinuities in the response surfaces that are not easily modeled using traditional response surface methodologies. An analysis of current recommended calibration models is shown to lead to correlated response model terms. Alternative modeling methods are explored which feature orthogonal or near-orthogonal terms.
Exact solutions of fractional mBBM equation and coupled system of fractional Boussinesq-Burgers
NASA Astrophysics Data System (ADS)
Javeed, Shumaila; Saif, Summaya; Waheed, Asif; Baleanu, Dumitru
2018-06-01
The new exact solutions of nonlinear fractional partial differential equations (FPDEs) are established by adopting the first integral method (FIM). The Riemann-Liouville (R-L) derivative and the local conformable derivative definitions are used to deal with the fractional-order derivatives. The proposed method is applied to obtain exact solutions for the space-time fractional modified Benjamin-Bona-Mahony (mBBM) equation and the coupled time-fractional Boussinesq-Burgers equation. The suggested technique is easily applicable and effective, and can be implemented successfully to obtain solutions for different types of nonlinear FPDEs.
METHOD FOR SOLDERING NORMALLY NON-SOLDERABLE ARTICLES
McGuire, J.C.
1959-11-24
Methods are presented for coating and joining materials which are considered difficult to solder by utilizing an abrasive wheel and applying a bar of a suitable coating material, such as Wood's metal, to the rotating wheel to fill the cavities of the abrasive wheel and load the wheel with the coating material. The surface of the base material is then rubbed against the loaded rotating wheel, thereby coating the surface with the soft coating metal. The coating is a cohesive bonded layer and holds the base metal as tenaciously as a solder holds to easily solderable metals.
Richard, Vaea; Aubry, Maite
2018-05-01
Experimental studies on Zika virus (ZIKV) may require improvement of infectious titers in viral stocks obtained by cell culture amplification. The use of centrifugal filter devices to increase infectious titers of ZIKV from cell-culture supernatants is highlighted here. A mean gain of 2.33 ± 0.12 log10 DICT50/mL was easily and rapidly obtained with this process. This efficient method of ultrafiltration may be applied to other viruses and be useful in various experimental studies requiring high viral titers. Copyright © 2018 Elsevier B.V. All rights reserved.
Cross-correlation of point series using a new method
NASA Technical Reports Server (NTRS)
Stothers, Richard B.
1994-01-01
Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant but unknown lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
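The core of the correlation measure can be sketched in a few lines. This is a toy illustration only: it uses invented point times, and it omits the paper's Monte Carlo error treatment and the Poisson-process significance test.

```python
import numpy as np

def point_series_xcorr(template, target, shifts):
    """For each candidate shift, slide the target series and take the rms
    difference between every template point and its nearest neighbor in
    the shifted target (the paper's rms variant of the measure)."""
    template = np.asarray(template, float)
    scores = []
    for s in shifts:
        shifted = np.asarray(target, float) + s
        nn = [np.min(np.abs(shifted - p)) for p in template]   # nearest-neighbor gaps
        scores.append(np.sqrt(np.mean(np.square(nn))))
    return np.array(scores)

# synthetic example: the target lags the template by 4 time units
template = np.array([2.0, 11.0, 23.0, 30.5])
target = template + 4.0
shifts = np.arange(-10, 11)
best_shift = shifts[point_series_xcorr(template, target, shifts).argmin()]   # -> -4
```

The minimizing shift recovers the (negated) lag, since shifting the target by −4 brings every point into exact coincidence with its template neighbor.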
Complexity-Entropy Causality Plane as a Complexity Measure for Two-Dimensional Patterns
Ribeiro, Haroldo V.; Zunino, Luciano; Lenzi, Ervin K.; Santoro, Perseu A.; Mendes, Renio S.
2012-01-01
Complexity measures are essential to understand complex systems and there are numerous definitions to analyze one-dimensional data. However, extensions of these approaches to two or higher-dimensional data, such as images, are much less common. Here, we reduce this gap by applying the ideas of the permutation entropy combined with a relative entropic index. We build up a numerical procedure that can be easily implemented to evaluate the complexity of two or higher-dimensional patterns. We work out this method in different scenarios where numerical experiments and empirical data were taken into account. Specifically, we have applied the method to fractal landscapes generated numerically where we compare our measures with the Hurst exponent; liquid crystal textures where nematic-isotropic-nematic phase transitions were properly identified; 12 characteristic textures of liquid crystals where the different values show that the method can distinguish different phases; and Ising surfaces where our method identified the critical temperature and also proved to be stable. PMID:22916097
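The entropy half of the complexity-entropy plane can be sketched with ordinal patterns taken from small image patches. This is a minimal sketch of the Bandt-Pompe idea extended to two dimensions; the paper additionally pairs this entropy with a relative entropic (statistical complexity) index, which is not shown here, and the test images are invented.

```python
import math
import numpy as np

def permutation_entropy_2d(img, dx=2, dy=2):
    """Normalized permutation entropy of a 2-D array from dx-by-dy
    ordinal patterns: slide a patch over the image, record the rank
    ordering of its pixels, and compute the Shannon entropy of the
    pattern distribution, normalized by log((dx*dy)!)."""
    counts = {}
    rows, cols = img.shape
    for i in range(rows - dy + 1):
        for j in range(cols - dx + 1):
            patch = img[i:i + dy, j:j + dx].ravel()
            pattern = tuple(np.argsort(patch, kind="stable"))
            counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(dx * dy))    # normalize to [0, 1]

ramp = np.add.outer(np.arange(8.0), np.arange(8.0))     # perfectly ordered pattern
noise = np.random.default_rng(0).random((16, 16))       # disordered pattern
```

An ordered ramp produces a single ordinal pattern (entropy 0), while white noise spreads probability over many patterns (entropy near 1), which is the contrast the complexity-entropy plane exploits.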
Kokaly, Raymond F.; Hoefen, Todd M.; Livo, K. Eric; Swayze, Gregg A.; Leifer, Ira; McCubbin, Ian B.; Eastwood, Michael L.; Green, Robert O.; Lundeen, Sarah R.; Sarture, Charles M.; Steele, Denis; Ryan, Thomas; Bradley, Eliza S.; Roberts, Dar A.; ,
2010-01-01
This report describes a method to create color-composite images indicative of thick oil:water emulsions on the surface of clear, deep ocean water by using normalized difference ratios derived from remotely sensed data collected by an imaging spectrometer. The spectral bands used in the normalized difference ratios are located in wavelength regions where the spectra of thick oil:water emulsions on the ocean's surface have a distinct shape compared to clear water and clouds. In contrast to quantitative analyses, which require rigorous conversion to reflectance, the method described is easily computed and can be applied rapidly to radiance data or data that have been atmospherically corrected or ground-calibrated to reflectance. Examples are shown of the method applied to Airborne Visible/Infrared Imaging Spectrometer data collected May 17 and May 19, 2010, over the oil spill from the Deepwater Horizon offshore oil drilling platform in the Gulf of Mexico.
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
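The projection idea can be sketched with an Arnoldi loop: exp(A)v ≈ ‖v‖ V_m exp(H_m) e₁, where V_m spans the Krylov subspace and H_m is the small Hessenberg matrix. This is a minimal numpy sketch, not the authors' parallel implementation, and a plain Taylor series (valid here because ‖H‖ is small) stands in for their rational approximation of the exponential.

```python
import numpy as np

def small_expm(M, terms=24):
    """Taylor-series matrix exponential; adequate only because ||M|| is small."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def krylov_expv(A, v, m):
    """Approximate exp(A) @ v via an m-step Arnoldi projection:
    only matrix-vector products with the large A are needed."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace reached
            return beta * V[:, :j + 1] @ small_expm(H[:j + 1, :j + 1])[:, 0]
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ small_expm(H[:m, :m])[:, 0]

# one time step of the 1-D heat equation u_t = u_xx on a 12-point grid
n, dt = 12, 0.1
L = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
u0 = np.sin(np.linspace(0.1, 3.0, n))
u_krylov = krylov_expv(dt * L, u0, m=n)     # m = n reproduces the exponential exactly
u_direct = small_expm(dt * L) @ u0
```

With m much smaller than n the same loop gives an accurate approximation at a fraction of the cost, which is the point of the method for large parabolic systems.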
Quantification of sewer system infiltration using delta(18)O hydrograph separation.
Prigiobbe, V; Giulianelli, M
2009-01-01
The infiltration of parasitical water into two sewer systems in Rome (Italy) was quantified during a dry weather period. Infiltration was estimated using the hydrograph separation method with two water components and delta(18)O as a conservative tracer. The two water components were groundwater, the possible source of parasitical water within the sewer, and drinking water discharged into the sewer system. This method was applied at an urban catchment scale in order to test the effective water-tightness of two different sewer networks. The sampling strategy was based on an uncertainty analysis and the errors have been propagated using Monte Carlo random sampling. Our field applications showed that the method can be applied easily and quickly, but the error in the estimated infiltration rate can be up to 20%. The estimated infiltration into the recent sewer in Torraccia is 14% and can be considered negligible given the precision of the method, while the old sewer in Infernetto has an estimated infiltration of 50%.
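The two-component separation reduces to a single mixing equation, δ_sewage = f·δ_groundwater + (1 − f)·δ_drinking, solved for the groundwater fraction f, with measurement errors propagated by Monte Carlo sampling as in the paper. The δ¹⁸O values and the 0.1 per-mil measurement sigma below are illustrative placeholders, not the Rome field data.

```python
import random

def infiltration_fraction(d_sewage, d_drinking, d_ground):
    """Groundwater (infiltration) fraction from two-component
    delta18O mixing: f = (d_sewage - d_drinking) / (d_ground - d_drinking)."""
    return (d_sewage - d_drinking) / (d_ground - d_drinking)

def monte_carlo_fraction(d_sewage, d_drinking, d_ground, sigma=0.1, n=20000):
    """Propagate one-sigma analytical uncertainty on each delta18O
    measurement through the mixing equation by Monte Carlo sampling."""
    rng = random.Random(42)
    samples = [
        infiltration_fraction(rng.gauss(d_sewage, sigma),
                              rng.gauss(d_drinking, sigma),
                              rng.gauss(d_ground, sigma))
        for _ in range(n)
    ]
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return mean, std

f = infiltration_fraction(-6.5, -7.0, -6.0)        # -> 0.5
f_mc, f_err = monte_carlo_fraction(-6.5, -7.0, -6.0)
```

The Monte Carlo spread makes the paper's caveat concrete: even modest per-measurement uncertainty leaves a sizeable relative error on the estimated infiltration fraction.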
A multigrid solver for the semiconductor equations
NASA Technical Reports Server (NTRS)
Bachmann, Bernhard
1993-01-01
We present a multigrid solver for the exponential fitting method. The solver is applied to the current continuity equations of semiconductor device simulation in two dimensions. The exponential fitting method is based on a mixed finite element discretization using the lowest-order Raviart-Thomas triangular element. This discretization method yields a good approximation of front layers and guarantees current conservation. The corresponding stiffness matrix is an M-matrix. 'Standard' multigrid solvers, however, cannot be applied to the resulting system, as this is dominated by an unsymmetric part, which is due to the presence of strong convection in part of the domain. To overcome this difficulty, we explore the connection between Raviart-Thomas mixed methods and the nonconforming Crouzeix-Raviart finite element discretization. In this way we can construct nonstandard prolongation and restriction operators using easily computable weighted L(exp 2)-projections based on suitable quadrature rules and the upwind effects of the discretization. The resulting multigrid algorithm shows very good results, even for real-world problems and for locally refined grids.
Differential equation models for sharp threshold dynamics.
Schramm, Harrison C; Dimitrov, Nedialko B
2014-01-01
We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
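A threshold event that changes the governing equations mid-trajectory can be sketched with a simple integration loop. This is a hypothetical SI-type malware model invented for illustration, not the authors' system: when the infected fraction crosses a detection threshold, a competing "patched" class begins removing infections. All parameter names and values are placeholders.

```python
def simulate(threshold=0.2, beta=0.3, gamma=0.4, dt=0.01, t_end=60.0):
    """Euler integration of a toy malware model with a detection
    threshold: before detection, infections i grow at rate beta*s*i;
    once i crosses the threshold, a patched class p drains i at rate
    gamma*i, drastically changing the dynamics."""
    s, i, p = 0.99, 0.01, 0.0
    detected = False
    for _ in range(int(t_end / dt)):
        if not detected and i >= threshold:
            detected = True                 # the threshold event
        removal = gamma * i if detected else 0.0
        ds = -beta * s * i
        di = beta * s * i - removal
        dp = removal
        s, i, p = s + ds * dt, i + di * dt, p + dp * dt
    return s, i, p

s, i, p = simulate()
```

Before detection the infection grows unchecked; after detection the effective reproduction ratio drops below one (here β·s < γ), so the outbreak collapses into the patched class, which is the kind of global behavior change the paper's differential-equation extension captures.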
Integration of optical measurement methods with flight parameter measurement systems
NASA Astrophysics Data System (ADS)
Kopecki, Grzegorz; Rzucidlo, Pawel
2016-05-01
During the AIM (advanced in-flight measurement techniques) and AIM2 projects, innovative modern measurement techniques were developed. The purpose of the AIM project was to develop optical measurement techniques dedicated to flight tests. Such methods give information about the deformation of aircraft elements, thermal loads, pressure distribution, etc. In AIM2 the development of optical methods for flight testing was continued. In particular, this project aimed at the development of methods that could be easily applied in flight tests in an industrial setting. Another equally important task was to guarantee the synchronization of the classical measuring system with the cameras. The PW-6U glider used in flight tests was provided by the Rzeszów University of Technology. The glider had all the equipment necessary for testing the IPCT (image pattern correlation technique) and IRT (infrared thermometry) methods. Additionally, equipment for the measurement, registration, and analysis of typical flight parameters was developed. This article describes the designed system and presents its application during flight tests. Additionally, the results obtained in flight tests show certain limitations of the IRT method as applied.
Dong, Xue; Zhao, Guanhui; Liu, Li; Li, Xuan; Wei, Qin; Cao, Wei
2018-07-01
In this work, Ru(bpy)₃²⁺ encapsulated in the metal-organic framework (MOF) UiO-67 (Ru(bpy)₃²⁺/UiO-67) was easily prepared as a luminophore and applied for the first time in constructing an electrochemiluminescence (ECL) immunosensor for the efficient determination of diethylstilbestrol (DES). The competitive-method-based ECL immunosensor platform was fabricated with amino-functionalized silicon dioxide, which possesses a large surface area. The porosity of UiO-67 is excellent, so Ru(bpy)₃²⁺ could be easily encapsulated. Ru(bpy)₃²⁺/UiO-67, with its excellent ECL luminescence signal, offered a large specific surface area that was easily labeled with antibodies. In the constructed immunosensor, DES competed with bovine serum albumin-diethylstilbestrol (BSA-DES) for binding to antibody-specific sites; because DES is a small molecule, it binds to antibodies more readily than BSA-DES. The ECL signal gradually decreased with increasing DES concentration. Under optimal conditions, the proposed immunosensor exhibited a wide linear range from 0.01 pg mL⁻¹ to 50 ng mL⁻¹ with a low detection limit of 3.27 fg mL⁻¹ (S/N = 3). The novel immunosensor, with its interference immunity and high stability, may offer an attractive approach for the determination of other targets. Copyright © 2018 Elsevier B.V. All rights reserved.
Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.
2015-01-01
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). 
Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
Scaling of mode shapes from operational modal analysis using harmonic forces
NASA Astrophysics Data System (ADS)
Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.
2017-10-01
This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, including structures with closely coupled modes, and it can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be accomplished relatively easily with general-purpose actuators, even at the force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple, and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to typical operational modal analysis test times. The proposed method is thus easy to apply and inexpensive relative to other mode shape scaling methods available in the literature. Although it is not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides a better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental campaign on a real structure was carried out, and the reported results demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for mode shape scaling, we propose to call it OMAH.
The design of multirate digital control systems
NASA Technical Reports Server (NTRS)
Berg, M. C.
1986-01-01
The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.
NASA Astrophysics Data System (ADS)
Barouchas, Pantelis; Koulos, Vasilios; Melfos, Vasilios
2017-04-01
For the determination of total carbonates in soil archaeometry, a new technique was applied using a multi-sensor philosophy that combines simultaneous measurement of pressure and temperature. This technology is innovative and complies with the EN ISO 10693:2013, ASTM D4373-02(2007), and Soil Science Society of America standard test methods for calcium carbonate content in soils and sediments. The total carbonates analysis is based on a pressure method that utilizes the FOGII Digital Soil CalcimeterTM, a portable apparatus. The total carbonate content was determined by treating a 1.000 g (±0.001 g) dried sample specimen with 6 N hydrochloric acid (HCl), reagent grade, in an enclosed reaction vessel. Carbon dioxide gas evolved during the reaction between the acid and the carbonate fraction of the specimen was measured via the resulting pressure, taking into account the temperature conditions during the reaction. Prior to analysis, the procedure was validated with sand/soil mixtures from the BIPEA proficiency testing program containing soils of different origins. To apply this new method in archaeometry, ten samples were taken from various rocks related to cultural constructions and implements in Greece. They represent a large range of periods since Neolithic times, and were selected because there was uncertainty about their exact mineralogical composition, especially regarding the presence of carbonate minerals. The results were compared with those from an ELTRA CS580 inorganic carbon analyzer using an infrared cell. The determination of total carbonates for the 10 samples from different ancient sites indicated a very good correlation (R2 > 0.97) between the pressure method with temperature compensation and the infrared method. The proposed method is quick and accurate in archaeometry and can easily replace other techniques for total carbonates testing.
The FOGII Digital Soil CalcimeterTM is portable and can easily be carried for field work in archaeology.
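The ideal-gas back-calculation behind pressure calcimetry is short enough to show. The vessel volume and pressure/temperature readings below are invented for illustration; they are not FOGII instrument constants.

```python
R = 8.314            # gas constant, J mol^-1 K^-1
M_CACO3 = 100.09     # molar mass of CaCO3, g mol^-1

def carbonate_percent(dp_pa, headspace_m3, temp_k, sample_g):
    """CaCO3-equivalent carbonate content (%) from the pressure rise
    dp_pa produced by HCl + carbonate in a closed vessel: n = PV/RT
    converts the rise to moles of CO2 (one mole per mole of CaCO3),
    with temperature compensation via the measured reaction temperature."""
    n_co2 = dp_pa * headspace_m3 / (R * temp_k)
    return 100.0 * n_co2 * M_CACO3 / sample_g

pct = carbonate_percent(dp_pa=20_000.0, headspace_m3=0.5e-3,
                        temp_k=298.15, sample_g=1.000)
```

The temperature term in the denominator is exactly the compensation the multi-sensor design provides: the same pressure rise at a warmer reaction temperature corresponds to fewer moles of CO2.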
NASA Astrophysics Data System (ADS)
Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław
2017-08-01
Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the frames of the ALICE experiment.
Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A
2008-06-01
DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
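The quantification reduces to a peak-height ratio at the interrogated cytosine. This is a sketch of the idea behind Mquant, not the published algorithm, which may normalize the four-dye traces differently; the peak heights are invented.

```python
def mquant(c_height, t_height):
    """Percent methylation at a CpG from bisulfite-sequencing
    electropherogram peak heights: bisulfite converts unmethylated C
    to T while methylated C stays C, so the C signal fraction at the
    cytosine position estimates the methylation level."""
    if c_height + t_height == 0:
        raise ValueError("no signal at this position")
    return 100.0 * c_height / (c_height + t_height)

level = mquant(c_height=80.0, t_height=20.0)   # -> 80.0
```

Because only two peak heights from a conventional sequencing trace are needed, the measure can be read off without restriction digestion or specialized equipment, which is the paper's point.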
Periodic bidirectional associative memory neural networks with distributed delays
NASA Astrophysics Data System (ADS)
Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde
2006-05-01
Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality. These results are helpful for designing a globally exponentially stable and periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.
Products of multiple Fourier series with application to the multiblade transformation
NASA Technical Reports Server (NTRS)
Kunz, D. L.
1981-01-01
A relatively simple and systematic method for forming the products of multiple Fourier series using tensor-like operations is demonstrated. This symbolic multiplication can be performed for any arbitrary number of series, and the application of this method to the transformation of a set of linear differential equations with periodic coefficients from a rotating coordinate system to a nonrotating system is also demonstrated. It is shown that using Fourier operations to perform this transformation makes it easily understood, simple to apply, and generally applicable.
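The scalar core of the symbolic multiplication is a convolution of Fourier coefficients: coefficients multiply and harmonics add. This is a minimal sketch of that operation for complex-exponential series (the paper's tensor-like machinery extends it to multiple series and matrix coefficients); the example series is chosen for illustration.

```python
import cmath
from collections import defaultdict

def multiply_series(a, b):
    """Product of two Fourier series given as {harmonic: coefficient}
    dicts over complex exponentials e^{ikx}: the product series'
    coefficients are the convolution of the input coefficients."""
    c = defaultdict(complex)
    for m, am in a.items():
        for n, bn in b.items():
            c[m + n] += am * bn
    return dict(c)

def evaluate(series, x):
    """Evaluate a complex-exponential Fourier series at x (real part)."""
    return sum(co * cmath.exp(1j * k * x) for k, co in series.items()).real

# cos(x) = (e^{ix} + e^{-ix}) / 2; squaring it should give 1/2 + cos(2x)/2
cos_x = {1: 0.5, -1: 0.5}
prod = multiply_series(cos_x, cos_x)
```

The result carries the expected harmonics: a constant 1/2 term and quarter-amplitude terms at harmonics ±2, i.e. cos²x = 1/2 + cos(2x)/2.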
Management of high-risk perioperative systems.
Dain, Steven
2006-06-01
The perioperative system is a complex system that requires people, materials, and processes to come together in a highly ordered and timely manner. However, when working in this high-risk system, even well-organized, knowledgeable, vigilant, and well-intentioned individuals will eventually make errors. All systems need to be evaluated on a continual basis to reduce the risk of errors, make errors more easily recognizable, and provide methods for error mitigation. A simple approach to risk management that may be applied in clinical medicine is discussed.
Trace elements as paradigms of developmental neurotoxicants: lead, methylmercury and arsenic
Grandjean, Philippe; Herz, Katherine T.
2014-01-01
Trace elements have contributed unique insights into developmental neurotoxicity and serve as paradigms for such adverse effects. Many trace elements are retained in the body for long periods and can be easily measured to assess exposure by inexpensive analytical methods that became available several decades ago so that past and cumulated exposures could be easily characterized through analysis of biological samples, e.g. blood and urine. The first compelling evidence resulted from unfortunate poisoning events that allowed scrutiny of long-term outcomes of acute exposures that occurred during early development. Pursuant to this documentation, prospective studies of children's cohorts that applied sensitive neurobehavioral methods supported the notion that the brain is uniquely vulnerable to toxic damage during early development. Lead, methylmercury, and arsenic thereby serve as paradigm neurotoxicants that provide a reference for other substances that may have similar adverse effects. Less evidence is available on manganese, fluoride, and cadmium, but experience from the former trace elements suggests that, with time, adverse effects are likely to be documented at exposures previously thought to be low and safe. PMID:25175507
Acrylic Resin Molding Based Head Fixation Technique in Rodents.
Roh, Mootaek; Lee, Kyungmin; Jang, Il-Sung; Suk, Kyoungho; Lee, Maan-Gee
2016-01-12
Head fixation is a technique for immobilizing an animal's head by attaching a head-post to the skull for rigid clamping. Traditional head fixation requires surgical attachment of metallic frames to the skull. The attached frames are then clamped to a stationary platform, immobilizing the head. However, metallic frames for head fixation have been technically difficult to design and implement in a general laboratory environment. In this study, we provide a novel head fixation method. Using a custom-made head fixation bar, a head mounter is constructed during implantation surgery. After acrylic resin is applied to affix implants such as electrodes and cannulae to the skull, additional resin is applied on top to build a mold matching the port of the fixation bar. The molded head mounter serves as a guide rail: investigators conveniently fixate the animal's head by inserting the head mounter into the port of the fixation bar. This method is easily applicable whenever implantation surgery using dental acrylics is necessary and might be useful for laboratories that cannot easily fabricate CNC-machined metal head-posts.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
Measuring the orthogonality error of coil systems
Heilig, B.; Csontos, A.; Pajunpää, K.; White, Tim; St. Louis, B.; Calp, D.
2012-01-01
Recently, a simple method was proposed for the determination of pitch angle between two coil axes by means of a total field magnetometer. The method is applicable when the homogeneous volume in the centre of the coil system is large enough to accommodate the total field sensor. Orthogonality of calibration coil systems used for calibrating vector magnetometers can be attained by this procedure. In addition, the method can be easily automated and applied to the calibration of delta inclination–delta declination (dIdD) magnetometers. The method was tested by several independent research groups, having a variety of test equipment, and located at differing geomagnetic observatories, including: Nurmijärvi, Finland; Hermanus, South Africa; Ottawa, Canada; Tihany, Hungary. This paper summarizes the test results, and discusses the advantages and limitations of the method.
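The geometry behind the total-field measurement can be sketched with the law of cosines: energizing each coil alone and then both together yields three scalar readings from which the inter-axis angle follows. This is a simplified illustration with synthetic numbers; the actual procedure also has to account for the ambient geomagnetic field, which is ignored here.

```python
import math

def coil_angle_deg(b1, b2, b_both):
    """Angle between two coil axes from three total-field magnitudes:
    coil 1 alone (b1), coil 2 alone (b2), and both energized (b_both).
    From |B1 + B2|^2 = b1^2 + b2^2 + 2*b1*b2*cos(theta)."""
    cos_theta = (b_both ** 2 - b1 ** 2 - b2 ** 2) / (2.0 * b1 * b2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# synthetic check: two 50000 nT coil fields meeting at 89 degrees
theta = math.radians(89.0)
b1, b2 = 50000.0, 50000.0
b_both = math.sqrt(b1 * b1 + b2 * b2 + 2 * b1 * b2 * math.cos(theta))
pitch_error = coil_angle_deg(b1, b2, b_both) - 90.0   # deviation from orthogonality
```

Since only field magnitudes enter, a scalar (total field) magnetometer placed in the homogeneous central volume suffices, which is what makes the procedure easy to automate.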
Nebe, Marco M; Kucukdisli, Murat; Opatz, Till
2016-05-20
Various heterocyclic structures containing the pyrrole moiety have been synthesized from easily accessible 3,4-dihydro-2H-pyrrole-2-carbonitriles in one-pot procedures. 5,6,7,8-Tetrahydroindolizines, 2,3-dihydro-1H-pyrrolizines as well as 6,7,8,9-tetrahydro-5H-pyrrolo[1,2-a]azepines were obtained from these precursors in high yields in an alkylation/annulation sequence. The same conditions were applied in the synthesis of a 5,8-dihydroindolizine, which could easily be transformed to the corresponding indolizine by dehydrogenation. Furthermore, oxidative couplings of 3,4-dihydro-2H-pyrrole-2-carbonitriles with copper(II) salts furnished 2,2'-bipyrroles as well as 5,5'-bis(5-cyano-1-pyrrolines), depending on the reaction conditions. Overall, these methods give high-yielding access to a variety of pyrrole-containing heterocycles in two steps from commercially available starting materials.
Mixed Methods Designs for Sports Medicine Research.
Kay, Melissa C; Kucera, Kristen L
2018-07-01
Mixed methods research is a relatively new approach in the field of sports medicine, combining the benefits of qualitative and quantitative research while each offsets the other's flaws. Despite its known and successful use in other populations, it has been used minimally in sports medicine, including studies of the clinician perspective, concussion, and patient outcomes. Therefore, there is a need for this approach to be applied in other topic areas not easily addressed by one type of research approach in isolation, such as retirement from sport, the effects of and return from injury, and catastrophic injury. Copyright © 2018 Elsevier Inc. All rights reserved.
Transform methods for precision continuum and control models of flexible space structures
NASA Technical Reports Server (NTRS)
Lupi, Victor D.; Turner, James D.; Chun, Hon M.
1991-01-01
An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.
Accelerated testing of composites
NASA Technical Reports Server (NTRS)
Papazian, H. A.
1983-01-01
It is shown that the Zhurkov method for testing the strength of solids can be applied to dynamic tension and to cyclic loading and provides a viable approach to accelerated testing of composites. Data from the literature are used to demonstrate a straightforward application of the method to dynamic tension of glass fiber and cyclic loading for glass/polymer, metal matrix, and graphite/epoxy composites. Zhurkov's equation can be used at relatively high loads to obtain failure times at any temperature of interest. By taking a few data points at one or two other temperatures the spectrum of failure times can be expanded to temperatures not easily accessible.
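The kinetic relation the analysis rests on is the standard form of Zhurkov's equation for thermally activated failure:

```latex
\tau = \tau_0 \exp\!\left(\frac{U_0 - \gamma\sigma}{kT}\right)
```

where \(\tau\) is the time to failure, \(\tau_0\) is of the order of the atomic vibration period, \(U_0\) is the binding (activation) energy, \(\gamma\) is a structure-sensitive coefficient, \(\sigma\) is the applied stress, \(k\) is Boltzmann's constant, and \(T\) is the absolute temperature. Because \(\ln\tau\) is linear in \(\sigma\) at fixed \(T\), a few high-load data points at one or two temperatures suffice to extrapolate the failure-time spectrum to other temperatures, which is the basis of the accelerated-testing argument in the abstract.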
Numerical simulation using vorticity-vector potential formulation
NASA Technical Reports Server (NTRS)
Tokunaga, Hiroshi
1993-01-01
An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. Whether the turbulent shear flows are solved directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been the standard computational method. However, the finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. Several problems nevertheless arise in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important of these. This point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, the multi-grid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive-variables formulation. One of the major difficulties of this method is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system.
In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and the 4th-order accurate difference method as the computational method. We present the computational method and apply the present method to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.
Amer, Sawsan M; Abbas, Samah S; Shehata, Mostafa A; Ali, Nahed M
2008-01-01
A simple and reliable high-performance liquid chromatographic method was developed for the simultaneous determination of a mixture of phenylephrine hydrochloride (PHENYL), guaifenesin (GUAIF), and chlorpheniramine maleate (CHLO), either in pure form or in the presence of methylparaben and propylparaben, in a commercial cough syrup dosage form. Separation was achieved on a C8 column using 0.005 M heptane sulfonic acid sodium salt (pH 3.4 +/- 0.1) and acetonitrile as a mobile phase by gradient elution at different flow rates, and detection was performed spectrophotometrically at 210 nm. A linear relationship in the range of 30-180, 120-1800, and 10-60 microg/mL was obtained for PHENYL, GUAIF, and CHLO, respectively. The results were statistically analyzed and compared with those obtained by applying the British Pharmacopoeia (2002) method, showing that the proposed method is precise, accurate, and can be easily applied for the determination of the drugs under investigation in pure form and in cough syrup formulations.
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
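As a minimal sketch of the kind of Bayesian estimation described above (not the operational GPROF algorithm), a retrieval can be written as a likelihood-weighted average over a database of simulated radiance/rain-rate pairs; all numbers below are invented for illustration.

```python
import math

# Sketch of a Bayesian database retrieval in the spirit of the estimation
# method described above (not the operational GPROF algorithm): each
# database entry pairs a simulated radiance vector with a known rain rate,
# and the retrieval is the likelihood-weighted mean over the database.

def bayesian_retrieve(observed, database, sigma=2.0):
    """observed: radiance list; database: list of (radiances, rain_rate)."""
    num = den = 0.0
    for radiances, rain in database:
        d2 = sum((o - r) ** 2 for o, r in zip(observed, radiances))
        weight = math.exp(-0.5 * d2 / sigma ** 2)  # Gaussian observation error
        num += weight * rain
        den += weight
    return num / den

db = [([250.0, 230.0], 0.0),   # no rain
      ([240.0, 210.0], 2.0),   # light rain
      ([225.0, 195.0], 8.0)]   # heavy rain
print(bayesian_retrieve([239.0, 211.0], db))  # dominated by the 2.0 entry
```

The weighting avoids any iterative inversion of the complex forward physics, which is what makes this style of estimator computationally efficient.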
A trust region approach with multivariate Padé model for optimal circuit design
NASA Astrophysics Data System (ADS)
Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.
2017-11-01
Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
Analysis of cold worked holes for structural life extension
NASA Technical Reports Server (NTRS)
Wieland, David H.; Cutshall, Jon T.; Burnside, O. Hal; Cardinal, Joseph W.
1994-01-01
The cold working of fastener holes to improve fatigue life is widely used on aircraft. This paper presents methods used by the authors to determine the percentage of cold working to be applied and to analyze fatigue crack growth of cold-worked fastener holes. An elastic, perfectly-plastic analysis of a thick-walled tube is used to determine the stress field during the cold working process and the residual stress field after the process is completed. The results of the elastic/plastic analysis are used to determine the amount of cold working to apply to a hole. The residual stress field is then used to perform a damage tolerance analysis of a crack growing out of a cold-worked fastener hole. This analysis method is easily implemented in existing crack growth computer codes, so that cold-worked holes can be used to extend the structural life of aircraft. Analytical results are compared to test data where appropriate.
Companies can apply to use the voluntary new graphic on product labels of skin-applied insect repellents. This graphic is intended to help consumers easily identify the protection time for mosquitoes and ticks and select appropriately.
NASA Astrophysics Data System (ADS)
Han, Keyu; Heng, Liping; Wen, Liping; Jiang, Lei
2016-06-01
We design a novel type of artificial multiple nanochannel system with remarkable ion rectification behavior via a facile breath figure (BF) method. Notably, even though the charge polarity of the channel wall reverses under different pH values, this nanofluidic device displays the same ionic rectification direction. Compared with traditional nanochannels, this composite multiple ion channel device can be more easily obtained and has directional ionic rectification advantages, which can be applied in many fields. Electronic supplementary information (ESI) available: Pore size distribution histograms of the AAO substrates; SEM images of the side view of pure AAO membranes and top view of the flat PI/AAO composite film; the current-time curves of the flat composite film; the current-voltage characteristics curves of pure AAO nanochannels with different mean pore diameters; CA of the two surfaces of the composite PI/AAO film, the structural formula of the polymer polyimide resin (PI), and solid surface zeta potential. See DOI: 10.1039/c6nr02506d
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
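A toy version of the nonlinear least-squares regression engine behind such inverse models can be sketched with a Gauss-Newton iteration. Here it is fitted to a simple exponential model rather than an actual ground-water flow model; the linearized covariance at the solution illustrates where the confidence limits in benefit (2c) come from.

```python
import numpy as np

# Toy Gauss-Newton nonlinear least-squares inversion. The "model" here is
# y = a * exp(b * x), a stand-in for a real ground-water flow model; the
# data are synthetic and noise-free, so the fit should recover (a, b)
# almost exactly and the residual variance should be near zero.

def gauss_newton(x, y, p0, n_iter=50):
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        a, b = p
        r = y - a * np.exp(b * x)                     # residuals
        J = np.column_stack([np.exp(b * x),           # d(model)/da
                             a * x * np.exp(b * x)])  # d(model)/db
        p = p + np.linalg.solve(J.T @ J, J.T @ r)     # Gauss-Newton step
    # linearized parameter covariance, the source of confidence limits
    s2 = float(r @ r) / (len(x) - len(p))
    cov = s2 * np.linalg.inv(J.T @ J)
    return p, cov

x = np.linspace(0.0, 2.0, 20)
y = 3.0 * np.exp(-1.2 * x)          # synthetic "observations"
p, cov = gauss_newton(x, y, p0=[2.0, -1.0])
print(p)  # close to [3.0, -1.2]
```

With field data the residuals would not vanish, and the diagonal of `cov` would quantify how well each parameter is constrained by the observations.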
A Markovian Entropy Measure for the Analysis of Calcium Activity Time Series.
Marken, John P; Halleran, Andrew D; Rahman, Atiqur; Odorizzi, Laura; LeFew, Michael C; Golino, Caroline A; Kemper, Peter; Saha, Margaret S
2016-01-01
Methods to analyze the dynamics of calcium activity often rely on visually distinguishable features in time series data such as spikes, waves, or oscillations. However, systems such as the developing nervous system display a complex, irregular type of calcium activity which makes the use of such methods less appropriate. Instead, for such systems there exists a class of methods (including information theoretic, power spectral, and fractal analysis approaches) which use more fundamental properties of the time series to analyze the observed calcium dynamics. We present a new analysis method in this class, the Markovian Entropy measure, an easily implementable calcium time series analysis method that represents the observed calcium activity as a realization of a Markov process and describes its dynamics in terms of the level of predictability underlying the transitions between the states of the process. We applied our method and other commonly used calcium analysis methods to a dataset from Xenopus laevis neural progenitors, which displays irregular calcium activity, and a dataset from murine synaptic neurons, which displays activity time series that are well described by visually distinguishable features. We find that the Markovian Entropy measure is able to distinguish between biologically distinct populations in both datasets, and that it can separate biologically distinct populations to a greater extent than other methods in the dataset exhibiting irregular calcium activity. These results support the benefit of using the Markovian Entropy measure to analyze calcium dynamics, particularly for studies using time series data which do not exhibit easily distinguishable features.
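Our simplified reading of such a measure (not the authors' published code) can be sketched as follows: bin the trace into discrete states, estimate the one-step transition matrix, and report the entropy rate of the resulting chain, which is zero for a fully predictable series.

```python
import math

# Sketch of a Markovian entropy measure for a calcium time series (our
# simplified reading of the approach, not the authors' exact code): bin
# the signal into discrete states, estimate the one-step transition
# matrix, and score predictability as the entropy rate of the chain.

def markov_entropy(series, n_states=4):
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_states if hi > lo else 1.0
    states = [min(int((v - lo) / width), n_states - 1) for v in series]
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    total = len(states) - 1
    h = 0.0
    for row in counts:
        n = sum(row)
        for c in row:
            if c:  # -p(state) * p(next|state) * log2 p(next|state)
                h -= (n / total) * (c / n) * math.log2(c / n)
    return h  # bits per transition; 0.0 means fully predictable

print(markov_entropy([1.0, 2.0, 1.0, 2.0, 1.0, 2.0]))  # deterministic: 0.0
```

A strictly alternating trace scores zero, while an irregular trace with mixed transitions scores positive, which is the sense in which the measure quantifies predictability without requiring visually distinguishable features.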
Computer program CDCID: an automated quality control program using CDC update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, G.L.; Aguilar, F.
1984-04-01
A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG&G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.
A New Evaluation Method of Stored Heat Effect of Reinforced Concrete Wall of Cold Storage
NASA Astrophysics Data System (ADS)
Nomura, Tomohiro; Murakami, Yuji; Uchikawa, Motoyuki
Today it has become imperative to save energy by intermittently operating the refrigerator of a cold storage built with externally insulated reinforced concrete walls. The aim of this paper is to develop an evaluation method capable of numerically calculating the interval time for which the refrigerator can remain stopped, using the reinforced concrete wall as a source of stored heat. Experiments with concrete models were performed in order to examine the time variation of the internal temperature after the refrigerator stopped. In addition, a simulation method using three-dimensional unsteady FEM on a personal computer was introduced for easily analyzing the internal temperature variation. Using this method, it is possible to obtain the time variation of the internal temperature and to calculate the interval time for stopping the refrigerator.
NASA Astrophysics Data System (ADS)
Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji
We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders to identify their preferences. Comparing different use case diagrams of the same domain helps us to find changes likely to occur. We utilize the Goal-Question-Metric (GQM) method for identifying variables that characterize NFR, and we can systematically represent changes in NFR using these variables. Use cases that represent system interactions help us to bridge the gap between goals and metrics (variables), so we can easily construct measurable NFR. To validate and evaluate our method, we applied it to the application domain of Mail User Agent (MUA) systems.
Efficient Generation and Use of Power Series for Broad Application.
NASA Astrophysics Data System (ADS)
Rudmin, Joseph; Sochacki, James
2017-01-01
A brief history and overview of the Parker-Sochacki method of power series generation is presented. This method generates a power series to order n in O(n²) time for any system of differential equations that has a power series solution. The method is simple enough that novices to differential equations can easily learn it and immediately apply it. Maximal absolute error estimates allow one to determine the number of terms needed to reach a desired accuracy. Ratios of coefficients in a solution with global convergence differ significantly from those for a solution with only local convergence. Divergence of the series prevents one from overlooking poles. The method can always be cast in polynomial form, which allows separation of variables in almost all physical systems, facilitating exploration of hidden symmetries, and is implicitly symplectic.
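A minimal sketch of the method for a single polynomial ODE, y' = y², y(0) = 1 (our own worked example, not taken from the talk): matching powers of t turns the ODE into a Cauchy-product recursion for the Maclaurin coefficients, each computed in O(n) time, so n terms cost O(n²) overall.

```python
# Parker-Sochacki sketch for the polynomial ODE y' = y**2, y(0) = 1,
# whose exact solution is y = 1/(1 - t) (every Maclaurin coefficient is 1).
# Matching powers of t in y' = y*y gives the Cauchy-product recursion
#   (k + 1) * a[k+1] = sum_{j=0..k} a[j] * a[k-j],
# so each new coefficient costs O(n) and n terms cost O(n**2) overall.

def ps_coefficients(n_terms):
    a = [1.0]  # a[0] = y(0)
    for k in range(n_terms - 1):
        a.append(sum(a[j] * a[k - j] for j in range(k + 1)) / (k + 1))
    return a

def ps_eval(a, t):
    return sum(c * t ** i for i, c in enumerate(a))

coeffs = ps_coefficients(30)
print(coeffs[:4])            # [1.0, 1.0, 1.0, 1.0]
print(ps_eval(coeffs, 0.5))  # approaches 1/(1 - 0.5) = 2 as terms grow
```

The divergence of this series for |t| >= 1 is exactly the pole-detection behavior the abstract mentions: the coefficients do not decay, signalling the singularity at t = 1.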
Ray-tracing in three dimensions for radiation-dose calculations. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, D.R.
1986-05-27
This thesis addresses several methods of calculating the radiation-dose distribution for use by technicians or clinicians in radiation-therapy treatment planning. It specifically covers the calculation of the effective pathlength of the radiation beam for use in beam models representing the dose distribution. A two-dimensional method by Bentley and Milan is compared to the method of strip trees developed by Duda and Hart, and then a three-dimensional algorithm is built to perform the calculations in three dimensions. The use of prisms conforms easily to the obtained CT scans and provides a means of doing only two-dimensional ray-tracing while performing three-dimensional dose calculations. This method is already being applied and used in actual calculations.
Comparison between two methods of scorpion venom milking in Morocco
2013-01-01
Background The present study compared two methods used successfully in a large-scale program for the collection of scorpion venoms, namely the milking of adult scorpions via manual and electrical stimulation. Results Our immunobiochemical characterizations clearly demonstrate that regularly applied electrical stimulation obtains scorpion venom more easily and, most importantly, in greater quantity. Qualitatively, the electrically collected venom lacked hemolymph contaminants such as hemocyanin. In contrast, manual obtainment of venom subjects scorpions to maximal trauma, leading to hemocyanin secretion. Our study highlighted the importance of reducing scorpion trauma during venom milking. Conclusions In conclusion, to produce high-quality antivenom with specific antibodies, it is necessary to collect venom by the gentler electrical stimulation method. PMID:23849043
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jian Hua; Gooding, R.J.
1994-06-01
We propose an algorithm to solve a system of partial differential equations of the type u_t(x,t) = F(x, t, u, u_x, u_xx, u_xxx, u_xxxx) in 1 + 1 dimensions using the method of lines with piecewise ninth-order Hermite polynomials, where u and F are N-dimensional vectors. Nonlinear boundary conditions are easily incorporated with this method. We demonstrate the accuracy of this method through comparisons of numerically determined solutions to analytical ones. Then, we apply this algorithm to a complicated physical system involving nonlinear and nonlocal strain forces coupled to a thermal field. 4 refs., 5 figs., 1 tab.
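The method of lines itself is easy to illustrate. The sketch below discretizes the 1-D heat equation u_t = u_xx with simple second-order centered differences and explicit Euler time stepping, standing in for the ninth-order Hermite spatial discretization of the paper.

```python
import math

# Method-of-lines sketch for u_t = u_xx on [0, 1] with u = 0 at both ends:
# second-order centered differences in x (in place of the paper's
# ninth-order Hermite polynomials) turn the PDE into an ODE system in t,
# here stepped with explicit Euler (dt below the dx**2/2 stability limit).

def heat_mol(n=21, t_end=0.1, dt=1e-4):
    dx = 1.0 / (n - 1)
    u = [math.sin(math.pi * i * dx) for i in range(n)]  # one Fourier mode
    for _ in range(int(round(t_end / dt))):
        du = [0.0] * n  # Dirichlet boundaries: endpoints never change
        for i in range(1, n - 1):
            du[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        u = [ui + dt * dui for ui, dui in zip(u, du)]
    return u

u = heat_mol()
# the exact mode decays as exp(-pi**2 * t) * sin(pi * x)
print(u[10])  # midpoint value, close to exp(-pi**2 * 0.1)
```

Swapping the spatial stencil for a higher-order one changes only the `du[i]` line, which is the modularity that makes the method of lines attractive for the higher-order schemes used in the paper.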
Chericoni, Silvio; Stefanelli, Fabio; Da Valle, Ylenia; Giusiani, Mario
2015-09-01
A sensitive and reliable method for the extraction and quantification of benzoylecgonine (BZE) and cocaine (COC) in urine is presented. Propyl chloroformate was used as the derivatizing agent and was added directly to the urine sample; the propyl derivative and COC were then recovered by a liquid-liquid extraction procedure. Gas chromatography-mass spectrometry was used to detect the analytes in selected ion monitoring mode. The method proved to be precise for BZE and COC in terms of both intraday and interday analysis, with a coefficient of variation (CV) < 6%. Limits of detection (LOD) were 2.7 ng/mL for BZE and 1.4 ng/mL for COC. The calibration curve showed a linear relationship for BZE and COC (r2 > 0.999 and > 0.997, respectively) within the range investigated. The method, applied to thirty authentic samples, proved to be very simple, fast, and reliable, so it can be easily applied in routine analysis for the quantification of BZE and COC in urine samples. © 2015 American Academy of Forensic Sciences.
High repetition rate laser induced fluorescence applied to Surfatron Induced Plasmas
NASA Astrophysics Data System (ADS)
van der Mullen, J. J. A. M.; Palomares, J. M.; Carbone, E. A. D.; Graef, W.; Hübner, S.
2012-05-01
The reaction kinetics in the excitation space of Ar and the conversion space of Ar-molecule mixtures are explored using a combination of high repetition rate YAG-dye laser systems with a well defined and easily controllable Surfatron Induced Plasma set-up. Applying the method of Saturation Time Resolved Laser Induced Fluorescence (SaTiRe-LIF), we could trace excitation and conversion channels and determine rates of electron and heavy particle excitation kinetics. The time resolved density disturbances observed in the Ar excitation space, which are initiated by the laser, reveal the excitation channels and corresponding rates; responses of the molecular radiation in Ar-molecule mixtures correspond to the presence of conversion processes induced by heavy particle excitation kinetics.
Envisioning migration: Mathematics in both experimental analysis and modeling of cell behavior
Zhang, Elizabeth R.; Wu, Lani F.; Altschuler, Steven J.
2013-01-01
The complex nature of cell migration highlights the power and challenges of applying mathematics to biological studies. Mathematics may be used to create model equations that recapitulate migration, which can predict phenomena not easily uncovered by experiments or intuition alone. Alternatively, mathematics may be applied to interpreting complex data sets with better resolution—potentially empowering scientists to discern subtle patterns amid the noise and heterogeneity typical of migrating cells. Iteration between these two methods is necessary in order to reveal connections within the cell migration signaling network, as well as to understand the behavior that arises from those connections. Here, we review recent quantitative analysis and mathematical modeling approaches to the cell migration problem. PMID:23660413
Nasrollahzadeh, Mahmoud; Sajadi, S Mohammad
2016-02-15
We describe a method for supporting palladium nanoparticles on magnetic nanoparticles using Euphorbia stracheyi Boiss root extract as a natural source of reducing and stabilizing agents. The progress of the reaction was monitored using UV-visible spectroscopy. The nanocatalyst was characterized by FE-SEM, TEM, XRD, EDS, FT-IR spectroscopy and ICP. The nanocatalyst was applied as an efficient, magnetically recoverable, highly reusable and heterogeneous catalyst for the one-pot reductive amination of aldehydes at room temperature. The nanocatalyst was easily recovered by applying an external magnet and reused several times without considerable loss of activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Correlation of live-cell imaging with volume scanning electron microscopy.
Lucas, Miriam S; Günthert, Maja; Bittermann, Anne Greet; de Marco, Alex; Wepf, Roger
2017-01-01
Live-cell imaging is one of the most widely applied methods in the life sciences. Here we describe two setups for live-cell imaging which can easily be combined with volume SEM for correlative studies. The first procedure uses cell culture dishes with a gridded glass support, which can be used for any light microscopy modality. The second approach is a flow-chamber setup based on Ibidi μ-slides. Both live-cell imaging strategies can be followed up with serial blockface- or focused ion beam-scanning electron microscopy. Two types of resin embedding after heavy metal staining and dehydration are presented, making best use of the particular advantages of each imaging modality: classical en-bloc embedding and thin-layer plastification. The latter can be used only for focused ion beam-scanning electron microscopy, but is advantageous for studying cell interactions with specific substrates, or when the substrate cannot be removed. En-bloc embedding has diverse applications and can be applied for both described volume scanning electron microscopy techniques. Finally, strategies for relocating the cell of interest are discussed for both embedding approaches and with respect to the applied light and scanning electron microscopy methods. Copyright © 2017 Elsevier Inc. All rights reserved.
Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe
2010-01-01
Environmental time series are often affected by missing data, and when dealing with the data statistically, the need to fill in the gaps by estimating the missing values must be considered. At present, a large number of statistical techniques are available to achieve this objective; they range from very simple methods, such as using the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing data estimation is proposed, which tries to merge the obvious advantages of the simplest techniques (e.g. their ease of implementation) with the strength of the newest techniques. The proposed method consists of two consecutive stages: once it has been ascertained that a specific monitoring station is affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied in order to estimate the missing data. In this paper, four different regressive methods are applied and compared in order to determine which is the most reliable for filling in the gaps, using rainfall data series measured in the Candelaro River Basin located in southern Italy.
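The two-stage procedure can be sketched as follows, with Pearson correlation standing in for the unspecified similarity coefficient and ordinary least squares as the regressive method; the station names and rainfall values are invented for illustration.

```python
# Two-stage gap-filling sketch: choose the most similar neighbouring
# station (Pearson correlation as the similarity coefficient), then
# estimate the missing values by ordinary least-squares regression on
# that station. Station names and rainfall values are invented.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def fill_gap(target, neighbours):
    """target: series with None gaps; neighbours: dict name -> full series."""
    obs = [i for i, v in enumerate(target) if v is not None]
    ys = [target[i] for i in obs]
    # stage 1: pick the most similar station over the shared record
    best = max(neighbours,
               key=lambda k: pearson([neighbours[k][i] for i in obs], ys))
    xs = [neighbours[best][i] for i in obs]
    # stage 2: regress target on the chosen station, y = a + b*x
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return [v if v is not None else a + b * neighbours[best][i]
            for i, v in enumerate(target)]

filled = fill_gap([2.0, 4.0, None, 8.0],
                  {"A": [1.0, 2.0, 3.0, 4.0], "B": [5.0, 1.0, 9.0, 2.0]})
print(filled)  # the gap is filled from station A, the better-correlated one
```

Any of the four regressive methods compared in the paper could replace the simple regression in stage 2 without changing the station-selection stage.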
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
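A much simplified illustration of the underlying idea (not the patented algorithm itself): since each quantized index is uncertain by one unit, a hidden bit can be encoded by nudging the index to the matching parity, and the authorized receiver reads the bits back from the parities.

```python
# Simplified parity-embedding illustration (not the patented algorithm):
# each quantized index carries one-unit uncertainty, so a bit is hidden
# by nudging the index to the matching parity; the authorized receiver
# reads the bits back from the parities of the first n indices.

def embed(indices, bits):
    out = list(indices)
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1  # move to the adjacent value: distortion of one unit
    return out

def extract(indices, n_bits):
    return [indices[i] % 2 for i in range(n_bits)]

idx = [10, 13, 7, 42, 5]
stego = embed(idx, [1, 0, 0, 1, 1])
print(extract(stego, 5))                            # [1, 0, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(idx, stego)))  # distortion stays <= 1
```

Every change stays within the one-unit uncertainty of the quantizer, which is why the auxiliary data can ride through the subsequent entropy-coding stage without perceptible damage to the host.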
Tebbutt, G. M.
1991-01-01
The performance of agar-contact plates and an alginate-swab method for sampling food surfaces before and after cleaning was compared. Contact plates were more convenient, and were at least as sensitive as the swabbing method. To assess cleaning efficiency, repeated sampling was carried out in selected premises, and several cleaning methods were introduced for trial periods. Some surfaces, notably wood and polypropylene, were particularly difficult to clean. For these, scrubbing with a nylon brush was the best method. Other surfaces were more easily cleaned, and generally the methods introduced as part of this study were better than the original method used in the premises. Paper proved to be unpopular, and cleaning solutions applied with it did no better than cleaning with a multiuse cloth kept soaking in a detergent and hypochlorite solution. PMID:1850362
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis (PDS) method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
Evaluation of cerebral ischemia using near-infrared spectroscopy with oxygen inhalation
NASA Astrophysics Data System (ADS)
Ebihara, Akira; Tanaka, Yuichi; Konno, Takehiko; Kawasaki, Shingo; Fujiwara, Michiyuki; Watanabe, Eiju
2012-09-01
Conventional methods presently used to evaluate cerebral hemodynamics are invasive, require physical restraint, and employ equipment that is not easily transportable. Therefore, it is difficult to take repeated measurements at the patient's bedside. An alternative method to evaluate cerebral hemodynamics was developed using near-infrared spectroscopy (NIRS) with oxygen inhalation. The bilateral fronto-temporal areas of 30 normal volunteers and 33 patients with cerebral ischemia were evaluated with the NIRS system. The subjects inhaled oxygen through a mask for 2 min at a flow rate of 8 L/min. Principal component analysis (PCA) was applied to the data, and a topogram was drawn using the calculated weights. NIRS findings were compared with those of single-photon-emission computed tomography (SPECT). In normal volunteers, no laterality of the PCA weights was observed in 25 of 30 cases (83%). In patients with cerebral ischemia, PCA weights in ischemic regions were lower than in normal regions. In 28 of 33 patients (85%) with cerebral ischemia, NIRS findings agreed with those of SPECT. The results suggest that transmission of the changes in systemic SpO2 was attenuated in ischemic regions. The method discussed here should be clinically useful because it can be used to measure cerebral ischemia easily, repeatedly, and noninvasively.
Experimental entanglement distillation of two-qubit mixed states under local operations.
Wang, Zhi-Wei; Zhou, Xiang-Fa; Huang, Yun-Feng; Zhang, Yong-Sheng; Ren, Xi-Feng; Guo, Guang-Can
2006-06-09
We experimentally demonstrate optimal entanglement distillation from two forms of two-qubit mixed states under local filtering operations, according to the constructive method introduced by F. Verstraete et al. [Phys. Rev. A 64, 010101(R) (2001)]. In principle, our setup can easily be applied to distilling entanglement from arbitrary two-qubit partially mixed states. We also test the violation of the Clauser-Horne-Shimony-Holt inequality for the state distilled from the first form of mixed state to show its "hidden nonlocality."
An interactive program on digitizing historical seismograms
NASA Astrophysics Data System (ADS)
Xu, Yihe; Xu, Tao
2014-02-01
Retrieving information from analog seismograms is of great importance, since they are the unique sources of quantitative information about historical earthquakes. We present an algorithm that poses automatic digitization of seismograms as an inversion problem, implemented as an interactive program with a Matlab® GUI. The program integrates automatic and manual digitization; users can easily switch between the two modalities and combine them for optimal results. Several examples of applying the interactive program are given to illustrate the merits of the method.
NASA Astrophysics Data System (ADS)
Decho, Alan W.; Beckman, Erin M.; Chandler, G. Thomas; Kawaguchi, Tomohiro
2008-06-01
An indirect immunofluorescence approach was developed using semiconductor quantum dot nanocrystals to label and detect a specific serotype of the bacterial human pathogen Vibrio parahaemolyticus attached to small marine animals (i.e., benthic harpacticoid copepods), which are suspected pathogen carriers. This photostable, nanotechnology-based labeling method will potentially allow specific serotypes of other bacterial pathogens to be detected with high sensitivity in a range of systems, and can easily be applied to the sensitive detection of other Vibrio species such as Vibrio cholerae.
Compendium of methods for applying measured data to vibration and acoustic problems
NASA Astrophysics Data System (ADS)
Dejong, R. G.
1985-10-01
The scope of this report includes the measurement, analysis and use of vibration and acoustic data. Its purpose is two-fold. First, it provides introductory material, presented in an easily understood manner, for engineers, technicians, and their managers working in areas outside their specialties. Second, it provides a quick reference source for those working within their specialties.
Rapid-estimation method for assessing scour at highway bridges
Holnbeck, Stephen R.
1998-01-01
A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.
Sáiz-Abajo, María-José; González-Ferrero, Carolina; Moreno-Ruiz, Ana; Romo-Hualde, Ana; González-Navarro, Carlos J
2013-06-01
β-Carotene is a carotenoid usually applied in the food industry as a precursor of vitamin A or as a colourant. It is a labile compound, easily degraded by light, heat and oxygen. Casein micelles were used as nanostructures to encapsulate, stabilise and protect β-carotene from degradation during processing in the food industry. A self-assembly method was applied to re-assemble nanomicelles containing β-carotene. The protective effect of the nanostructures against degradation during the most common industrial treatments (sterilisation, pasteurisation, high hydrostatic pressure and baking) was proven: casein micelles protected β-carotene from degradation during heat stabilisation, high-pressure processing and the processes most commonly used in the food industry, including baking. This opens new possibilities for introducing thermolabile ingredients into bakery products. Copyright © 2012 Elsevier Ltd. All rights reserved.
Applying physics to solve problems in new contexts and representations: Methods Students Use
NASA Astrophysics Data System (ADS)
Zollman, Dean
2010-02-01
``The questions on the test were, like, totally different from the homework.'' All of us have heard variations on this statement. Yet, when we look at the homework and the test questions, we see great similarities. Changing the context or the representation in a physics problem can cause students to have significant difficulties. These difficulties persist sometimes in homework problems, exams and even hands-on activities. With significant effort from Sanjay Rebello, our group has been investigating some of the issues that lead to the inability to apply physics learned in one context or with one representation to other situations. By looking at which aspects students are able to use easily and which they have difficulty applying, we are beginning to understand some of the aspects that help this transfer of learning and some that do not. (Supported by grants from the National Science Foundation and the US Department of Education)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albeverio, Sergio; Chen Kai; Fei Shaoming
A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999); N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen and L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.
Realization of Comfortable Massage by Using Iterative Learning Control Based on EEG
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
Recently, massage chairs have become widely used because they can be used easily at home. However, a typical massage chair only reproduces a fixed massage motion; it cannot take the user's condition or the applied massage force into account. A professional masseur, by contrast, infers the patient's condition from the patient's reactions. This paper therefore proposes a method of applying the masseur's procedure to a massage chair using iterative learning control based on EEG, with the massage force estimated by an acceleration sensor. The realizability of the proposed method is verified by experiments using the massage chair.
Exercising privacy rights in medical science.
Hillmer, Michael; Redelmeier, Donald A
2007-12-04
Privacy laws are intended to preserve human well-being and improve medical outcomes. We used the Sportstats website, a repository of competitive athletic data, to test how easily these laws can be circumvented. We designed a haphazard, unrepresentative case-series analysis and applied unscientific methods based on an Internet connection and idle time. We found it both feasible and titillating to breach anonymity, stockpile personal information and generate misquotations. We extended our methods to snoop on celebrities, link to outside databases and uncover refusal to participate. Throughout our study, we evaded capture and public humiliation despite violating these 6 privacy fundamentals. We suggest that the legitimate principle of safeguarding personal privacy is undermined by the natural human tendency toward showing off.
Using foresight methods to anticipate future threats: the case of disease management.
Ma, Sai; Seid, Michael
2006-01-01
We describe a unique foresight framework for health care managers to use in longer-term planning. This framework uses scenario-building to envision plausible alternate futures of the U.S. health care system and links those broad futures to business-model-specific "load-bearing" assumptions. Because the framework we describe simultaneously addresses very broad and very specific issues, it can be easily applied to a broad range of health care issues by using the broad framework and business-specific assumptions for the particular case at hand. We illustrate this method using the case of disease management, pointing out that although the industry continues to grow rapidly, its future also contains great uncertainties.
Second Law of Thermodynamics Applied to Metabolic Networks
NASA Technical Reports Server (NTRS)
Nigam, R.; Liang, S.
2003-01-01
We present a simple algorithm, based on linear programming, that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent the mass conservation and energy feasibility principles widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained by flux balance analysis for thermodynamic feasibility, and modify any infeasible fluxes so that they satisfy the law of entropy. We illustrate our method by applying it to the network for the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale, complex metabolic networks in the cells of different organisms.
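The loop (potential-law) condition the abstract describes can be illustrated with a minimal feasibility check: given loop vectors from the null space of the stoichiometric matrix, a flux vector that circulates strictly around a loop has no thermodynamic driving force and violates the law of entropy. This is our own simplified formulation, not the paper's algorithm:

```python
def loop_feasible(flux, loops, tol=1e-9):
    """Kirchhoff-style loop check: a flux vector is thermodynamically
    infeasible if, around any internal loop, every participating reaction
    runs strictly in the loop's direction (pure circulation with no net
    free-energy drop)."""
    for loop in loops:
        signs = [flux[i] * c for i, c in enumerate(loop) if abs(c) > tol]
        if signs and all(s > tol for s in signs):
            return False       # all reactions aligned with the loop: infeasible
    return True

# Three reactions forming a cycle A->B->C->A; its null-space loop vector
# (from the stoichiometric matrix) is (1, 1, 1).
loops = [(1.0, 1.0, 1.0)]
assert not loop_feasible((2.0, 2.0, 2.0), loops)   # pure circulation: infeasible
assert loop_feasible((2.0, -1.0, 1.0), loops)      # mixed directions: allowed
```

A flux balance analysis solution failing this check would, in the spirit of the abstract, be modified until every internal loop carries no net circulation.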
Robust location of optical fiber modes via the argument principle method
NASA Astrophysics Data System (ADS)
Chen, Parry Y.; Sivan, Yonatan
2017-05-01
We implement a robust, globally convergent root search method for transcendental equations guaranteed to locate all complex roots within a specified search domain, based on Cauchy's residue theorem. Although several implementations of the argument principle already exist, ours has several advantages: it allows singularities within the search domain and branch points are not fatal to the method. Furthermore, our implementation is simple and is written in MATLAB, fulfilling the need for an easily integrated implementation which can be readily modified to accommodate the many variations of the argument principle method, each of which is suited to a different application. We apply the method to the step index fiber dispersion relation, which has become topical due to the recent proliferation of high index contrast fibers. We also find modes with permittivity as the eigenvalue, catering to recent numerical methods that expand the radiation of sources using eigenmodes.
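The underlying principle, counting the zeros of an analytic function inside a contour from the winding of its phase, can be sketched in a few lines. This is a minimal illustration of the argument principle, not the authors' MATLAB implementation:

```python
import cmath

def count_roots(f, center=0j, radius=1.0, n=2000):
    """Count zeros of an analytic f inside a circle via the argument
    principle: the winding number of f along the contour equals the number
    of zeros (minus poles) enclosed. We track the continuous change of
    arg f(z) around the contour and divide by 2*pi."""
    total, prev = 0.0, None
    for k in range(n + 1):
        z = center + radius * cmath.exp(2j * cmath.pi * k / n)
        ph = cmath.phase(f(z))
        if prev is not None:
            d = ph - prev
            while d > cmath.pi:      # unwrap +-2*pi phase jumps
                d -= 2 * cmath.pi
            while d < -cmath.pi:
                d += 2 * cmath.pi
            total += d
        prev = ph
    return round(total / (2 * cmath.pi))

assert count_roots(lambda z: z**2 + 1, radius=2.0) == 2   # roots at +-i
assert count_roots(lambda z: z**2 + 1, radius=0.5) == 0   # none inside
```

In a full implementation such as the paper's, the count is refined by subdividing the search domain until each cell holds one root, which is then polished with a local solve; for a dispersion relation, f would be the transcendental function whose zeros are the fiber modes.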
Quasi-periodic Pulse Amplitude Modulation in the Accreting Millisecond Pulsar IGR J00291+5934
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bult, Peter; Doesburgh, Marieke van; Klis, Michiel van der
We introduce a new method for analyzing the aperiodic variability of coherent pulsations in accreting millisecond X-ray pulsars (AMXPs). Our method involves applying a complex frequency correction to the time-domain light curve, allowing for the aperiodic modulation of the pulse amplitude to be robustly extracted in the frequency domain. We discuss the statistical properties of the resulting modulation spectrum and show how it can be correlated with the non-pulsed emission to determine if the periodic and aperiodic variability are coupled processes. Using this method, we study the 598.88 Hz coherent pulsations of the AMXP IGR J00291+5934 as observed with the Rossi X-ray Timing Explorer and XMM-Newton. We demonstrate that our method easily confirms the known coupling between the pulsations and a strong 8 mHz quasi-periodic oscillation (QPO) in XMM-Newton observations. Applying our method to the RXTE observations, we further show, for the first time, that the much weaker 20 mHz QPO and its harmonic are also coupled with the pulsations. We discuss the implications of this coupling and indicate how it may be used to extract new information on the underlying accretion process.
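The complex frequency correction described above can be illustrated on synthetic data: multiplying the light curve by exp(-2πiνt) shifts the pulse frequency ν to zero, so any amplitude modulation of the pulse appears directly in the low-frequency part of the spectrum. A hedged sketch with invented numbers, not the authors' pipeline:

```python
import cmath, math

def modulation_spectrum(x, dt, nu, nf=32):
    """Apply the complex frequency correction exp(-2*pi*i*nu*t) to shift the
    pulse frequency nu to zero, then take a DFT of the corrected series:
    amplitude modulation of the pulse shows up as power at the modulation
    frequency. Returns magnitudes of the first nf frequency bins."""
    n = len(x)
    y = [x[k] * cmath.exp(-2j * math.pi * nu * k * dt) for k in range(n)]
    # plain DFT over the first nf bins (use an FFT in practice)
    return [abs(sum(y[k] * cmath.exp(-2j * math.pi * f * k / n)
                    for k in range(n))) / n for f in range(nf)]

# Synthetic 1 s light curve: a 50 Hz pulse whose amplitude varies at 2 Hz
dt, n = 0.001, 1000
x = [(1 + 0.5 * math.cos(2 * math.pi * 2 * k * dt)) *
     math.cos(2 * math.pi * 50 * k * dt) for k in range(n)]
spec = modulation_spectrum(x, dt, nu=50)
# strongest non-DC power sits in the 2 Hz bin (1 Hz resolution here)
assert max(range(1, 20), key=lambda f: spec[f]) == 2
```

The real analysis works with photon-counting statistics and correlates this modulation spectrum against the non-pulsed emission, but the demodulation step is the same in spirit.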
Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation
De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan
2017-01-01
In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436
Quasi-Periodic Pulse Amplitude Modulation in the Accreting Millisecond Pulsar IGR J00291+5934
NASA Technical Reports Server (NTRS)
Bult, Peter; van Doesburgh, Marieke; van der Klis, Michiel
2017-01-01
We introduce a new method for analyzing the aperiodic variability of coherent pulsations in accreting millisecond X-ray pulsars (AMXPs). Our method involves applying a complex frequency correction to the time-domain light curve, allowing for the aperiodic modulation of the pulse amplitude to be robustly extracted in the frequency domain. We discuss the statistical properties of the resulting modulation spectrum and show how it can be correlated with the non-pulsed emission to determine if the periodic and aperiodic variability are coupled processes. Using this method, we study the 598.88 Hz coherent pulsations of the AMXP IGR J00291+5934 as observed with the Rossi X-ray Timing Explorer and XMM-Newton. We demonstrate that our method easily confirms the known coupling between the pulsations and a strong 8 mHz quasi-periodic oscillation (QPO) in XMM-Newton observations. Applying our method to the RXTE observations, we further show, for the first time, that the much weaker 20 mHz QPO and its harmonic are also coupled with the pulsations. We discuss the implications of this coupling and indicate how it may be used to extract new information on the underlying accretion process.
The complex phase gradient method applied to leaky Lamb waves.
Lenoir, O; Conoir, J M; Izbicki, J L
2002-10-01
The classical phase gradient method applied to the characterization of the angular resonances of an immersed elastic plate, i.e., the angular poles of its reflection coefficient R, was proved to be efficient when their real parts are close to the real zeros of R and their imaginary parts are not too large compared to their real parts. This method consists of plotting the partial reflection coefficient phase derivative with respect to the sine of the incidence angle, considered as real, versus incidence angle. In the vicinity of a resonance, this curve exhibits a Breit-Wigner shape, whose minimum is located at the pole real part and whose amplitude is the inverse of its imaginary part. However, when the imaginary part is large, this method is not sufficiently accurate compared to the exact calculation of the complex angular root. An improvement of this method consists of plotting, in 3D, in the complex angle plane and at a given frequency, the angular phase derivative with respect to the real part of the sine of the incidence angle, considered as complex. When the angular pole is reached, the 3D curve shows a clear-cut transition whose position is easily obtained.
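The phase gradient idea, that the derivative of the reflection-coefficient phase with respect to the real part of sin(θ) peaks at the pole's real part with amplitude equal to the inverse of its imaginary part, can be checked numerically on a toy one-pole model. The reflection coefficient R(s) below is our own simplified stand-in, not the Lamb-wave coefficient:

```python
import cmath

def phase_gradient(R, s_values):
    """Central-difference derivative of the phase of R with respect to the
    real variable s (here standing in for sin of the incidence angle)."""
    ph = [cmath.phase(R(s)) for s in s_values]
    ds = s_values[1] - s_values[0]
    return [(ph[k + 1] - ph[k - 1]) / (2 * ds) for k in range(1, len(ph) - 1)]

# Toy reflection coefficient with a single angular pole at s = 0.6 - 0.01j
pole = 0.6 - 0.01j
R = lambda s: 1.0 / (s - pole)
s = [0.5 + k * 0.0005 for k in range(401)]
g = phase_gradient(R, s)
k_ext = max(range(len(g)), key=lambda k: abs(g[k]))
assert abs(s[k_ext + 1] - pole.real) < 1e-3            # extremum at Re(pole)
peak = 1 / abs(pole.imag)
assert abs(abs(g[k_ext]) - peak) / peak < 0.05          # amplitude ~ 1/|Im(pole)|
```

The resulting curve has the Breit-Wigner shape the abstract describes, with its extremum locating the real part of the pole; when the imaginary part grows, this estimate degrades, which is the limitation the paper's 3D extension addresses.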
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily manufactured. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many point spread functions, calibrated in advance at different depths, are successfully used to restore images in a short time; this strategy can be applied generally to nonblind deconvolution methods to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and that the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides a convenient and accurate alternative approach to obtaining the point spread functions of single-lens cameras.
Wada, Atsushi; Kono, Mari; Kawauchi, Sawako; Takagi, Yuri; Morikawa, Takashi; Funakoshi, Kunihiro
2012-01-01
Background For precise diagnosis of urinary tract infections (UTI), and selection of the appropriate prescriptions for their treatment, we explored a simple and rapid method of discriminating gram-positive and gram-negative bacteria in liquid samples. Methodology/Principal Findings We employed the NaOH-sodium dodecyl sulfate (SDS) solution conventionally used for plasmid extraction from Escherichia coli and the automated urine particle analyzer UF-1000i (Sysmex Corporation) for our novel method. The NaOH-SDS solution was used to determine differences in the cell wall structures between gram-positive and gram-negative bacteria, since the tolerance to such chemicals reflects the thickness and structural differences of bacterial cell walls. The UF-1000i instrument was used as a quantitative bacterial counter. We found that gram-negative bacteria, including E. coli, in liquid culture could easily be lysed by direct addition of equal volumes of NaOH-SDS solution. In contrast, Enterococcus faecalis, which is a gram-positive bacterium, could not be completely lysed by the solution. We then optimized the reaction time of the NaOH-SDS treatment at room temperature by using 3 gram-positive and 4 gram-negative bacterial strains and determined that the optimum reaction time was 5 min. Finally, in order to evaluate the generalizability of this method, we treated 8 gram-positive strains and 8 gram-negative strains, or 4 gram-positive and 4 gram-negative strains incubated in voluntary urine from healthy volunteers in the same way and demonstrated that all the gram-positive bacteria were discriminated quantitatively from gram negative bacteria using this method. Conclusions/Significance Using our new method, we could easily discriminate gram-positive and gram-negative bacteria in liquid culture media within 10 min. This simple and rapid method may be useful for determining the treatment course of patients with UTIs, especially for those without a prior history of UTIs. 
The method may easily be applied to obtain additional information for clinical prescriptions in cases of bacteriuria. PMID:23077549
García-Hernández, J; Moreno, Y; Amorocho, C M; Hernández, M
2012-03-01
We have developed a direct viable count (DVC)-FISH procedure for quickly and easily discriminating between viable and nonviable cells of Lactobacillus delbrueckii subsp. bulgaricus and Streptococcus thermophilus strains, the traditional yogurt bacteria. The direct viable count method was modified and adapted for Lact. delbrueckii subsp. bulgaricus and Strep. thermophilus analysis by testing different incubation times and concentrations of DNA-gyrase inhibitors. The DVC procedure was combined with fluorescent in situ hybridization (FISH) for the specific detection of viable cells of both bacteria with specific rRNA oligonucleotide probes (DVC-FISH). Of the four antibiotics tested (novobiocin, nalidixic acid, pipemidic acid and ciprofloxacin), novobiocin was the most effective for the DVC method, and the optimum incubation time was 7 h for both bacteria. The number of viable cells was obtained by enumerating specific hybridized cells that were elongated to at least twice their original length for Lactobacillus and twice their original size for Streptococcus. This technique was successfully applied to detect viable cells in inoculated faeces. Results showed that this DVC-FISH procedure is a quick, culture-independent and useful method to specifically detect viable Lact. delbrueckii subsp. bulgaricus and Strep. thermophilus in different samples, being applied for the first time to lactic acid bacteria. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
A Markovian Entropy Measure for the Analysis of Calcium Activity Time Series
Rahman, Atiqur; Odorizzi, Laura; LeFew, Michael C.; Golino, Caroline A.; Kemper, Peter; Saha, Margaret S.
2016-01-01
Methods to analyze the dynamics of calcium activity often rely on visually distinguishable features in time series data such as spikes, waves, or oscillations. However, systems such as the developing nervous system display a complex, irregular type of calcium activity which makes the use of such methods less appropriate. Instead, for such systems there exists a class of methods (including information theoretic, power spectral, and fractal analysis approaches) which use more fundamental properties of the time series to analyze the observed calcium dynamics. We present a new analysis method in this class, the Markovian Entropy measure, which is an easily implementable calcium time series analysis method which represents the observed calcium activity as a realization of a Markov Process and describes its dynamics in terms of the level of predictability underlying the transitions between the states of the process. We applied our and other commonly used calcium analysis methods on a dataset from Xenopus laevis neural progenitors which displays irregular calcium activity and a dataset from murine synaptic neurons which displays activity time series that are well-described by visually-distinguishable features. We find that the Markovian Entropy measure is able to distinguish between biologically distinct populations in both datasets, and that it can separate biologically distinct populations to a greater extent than other methods in the dataset exhibiting irregular calcium activity. These results support the benefit of using the Markovian Entropy measure to analyze calcium dynamics, particularly for studies using time series data which do not exhibit easily distinguishable features. PMID:27977764
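A minimal version of such a measure, discretize the series into states, fit first-order transition probabilities, and report the entropy rate of the fitted chain, can be sketched as follows. This is our own simplified illustration; the paper's exact estimator may differ:

```python
import math
from collections import Counter

def markov_entropy(series, nbins=4):
    """Discretize a time series into nbins states, estimate first-order
    Markov transition probabilities, and return the entropy rate in
    bits/step: 0 for perfectly predictable transitions, higher for
    irregular activity."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / nbins or 1.0           # guard against a flat series
    states = [min(int((v - lo) / width), nbins - 1) for v in series]
    pairs = Counter(zip(states, states[1:]))   # observed transitions a -> b
    origins = Counter(states[:-1])
    h = 0.0
    for (a, b), n in pairs.items():
        p_ab = n / (len(states) - 1)           # joint P(a, b)
        p_b_given_a = n / origins[a]           # conditional P(b | a)
        h -= p_ab * math.log2(p_b_given_a)
    return h

periodic = [0, 1, 2, 3] * 50                   # fully predictable transitions
assert markov_entropy(periodic) == 0.0
```

An irregular series (e.g. independent random samples) yields an entropy rate near log2(nbins), which is how the measure separates predictable from irregular calcium dynamics.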
NASA Astrophysics Data System (ADS)
Daiguji, Hisaaki; Yamamoto, Satoru
1988-12-01
The implicit time-marching finite-difference method for solving the three-dimensional compressible Euler equations developed by the authors is extended to the Navier-Stokes equations. The distinctive features of this method are its use of momentum equations for the contravariant velocities instead of the physical velocities, and its ability to treat the periodic boundary condition for three-dimensional impeller flow easily. These equations can be solved using the same techniques as the Euler equations, such as delta-form approximate factorization, diagonalization and upstreaming. In addition, a simplified total variation diminishing scheme by the authors is applied to the present method in order to capture strong shock waves clearly. Finally, computed results are shown for the three-dimensional flow through a transonic compressor rotor with tip clearance.
A computing method for spatial accessibility based on grid partition
NASA Astrophysics Data System (ADS)
Ma, Linbing; Zhang, Xinchang
2007-06-01
An accessibility computing method and process based on grid partition is put forward in this paper. Two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses, are integrated into the traffic cost computed for each grid cell. The A* algorithm is introduced to search for the minimum-traffic-cost path through the grid; the detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its source data are easily obtained. Moreover, by changing the heuristic search information, more reasonable results can be computed. To validate the approach, a software package was developed in C# under the ArcEngine9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou was carried out.
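The grid search the abstract outlines can be sketched with a standard A* over per-cell traffic costs. This is our own minimal formulation, with cell costs standing in for the combined road density and land-use resistance:

```python
import heapq

def astar_cost(grid, start, goal):
    """A* over a grid of per-cell traversal costs. Heuristic: Manhattan
    distance times the cheapest cell cost, which never overestimates the
    remaining cost and so keeps the search admissible."""
    rows, cols = len(grid), len(grid[0])
    cmin = min(min(row) for row in grid)
    h = lambda r, c: cmin * (abs(r - goal[0]) + abs(c - goal[1]))
    best = {start: 0.0}
    pq = [(h(*start), 0.0, start)]
    while pq:
        _, g, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return g
        if g > best.get((r, c), float("inf")):
            continue                           # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid[nr][nc]          # cost of entering the cell
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(pq, (ng + h(nr, nc), ng, (nr, nc)))
    return float("inf")

grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
assert astar_cost(grid, (0, 0), (2, 2)) == 4   # follows the cheap corridor
```

Changing the heuristic (the abstract's "heuristic searching information") trades search speed against how aggressively the frontier is pruned.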
RJMCMC based Text Placement to Optimize Label Placement and Quantity
NASA Astrophysics Data System (ADS)
Touya, Guillaume; Chassin, Thibaud
2018-05-01
Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography and in computational geometry. Methods that search for an optimal or nearly optimal solution satisfying a set of constraints, such as avoiding label overlaps, have been proposed in the literature. Most of these methods focus on finding the optimal positions for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo, which makes it easy to model the removal or addition of labels during the optimization iterations. The method, still preliminary, is tested on a real dataset, and the first results are encouraging.
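The trans-dimensional flavor of the approach, moves that add or remove labels as well as reposition them, can be sketched with a seeded Metropolis loop. This is a toy stand-in for full RJMCMC (which would also carry Jacobian terms for dimension-changing jumps); the overlap model and energy weights are invented for illustration:

```python
import math, random

def optimize_labels(candidates, overlaps, steps=5000, temp=1.0, seed=1):
    """Trans-dimensional Metropolis sketch in the spirit of RJMCMC: the
    state is the *set* of placed labels, and birth/death moves add or
    remove a label, so label quantity is optimized jointly with placement.
    Energy = heavy penalty per overlapping pair - reward per placed label."""
    rng = random.Random(seed)

    def energy(placed):
        pairs = sum(1 for a in placed for b in placed
                    if a < b and (a, b) in overlaps)
        return 10.0 * pairs - len(placed)

    state, e = set(), 0.0
    best_state, best_e = set(), 0.0
    for _ in range(steps):
        cand = rng.choice(candidates)
        new = state ^ {cand}                   # birth if absent, death if present
        ne = energy(new)
        if ne <= e or rng.random() < math.exp((e - ne) / temp):
            state, e = new, ne                 # Metropolis acceptance
            if e < best_e:
                best_state, best_e = set(state), e
    return best_state, best_e

# Four candidate labels; label 0 overlaps 1, and label 2 overlaps 3
placed, e = optimize_labels([0, 1, 2, 3], {(0, 1), (2, 3)})
assert e == -2.0                # best found: two non-overlapping labels placed
assert len(placed) == 2
```

Because removal is an ordinary move, the sampler can trade label quantity against overlap quality, which is exactly what fixed-dimension placement optimizers cannot do.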
Quasi-Static Probabilistic Structural Analyses Process and Criteria
NASA Technical Reports Server (NTRS)
Goldberg, B.; Verderaime, V.
1999-01-01
Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety-factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
Cvetkovic, Dean
2013-01-01
The Cooperative Learning in Engineering Design curriculum can be enhanced with structured and timely self and peer assessment teaching methodologies, which can easily be applied to any Biomedical Engineering curriculum. A study was designed and implemented to evaluate the effectiveness of this structured and timely self and peer assessment on student team-based projects. In comparing the 'peer-blind' and 'face-to-face' Fair Contribution Scoring (FCS) methods, both had advantages and disadvantages. The 'peer-blind' self and peer assessment method caused high discrepancy between self and team ratings. The 'face-to-face' method, on the other hand, did not have this discrepancy issue and proved to be more accurate and effective, indicating team cohesiveness and good cooperative learning.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
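The generalized p-shrinkage mapping mentioned above has a widely used closed form (this follows Chartrand's generalized shrinkage; the paper's exact operator may differ in detail). For p = 1 it reduces to the ordinary soft-thresholding operator of l1 minimization:

```python
import math

def p_shrink(x, lam, p):
    """Generalized p-shrinkage applied to a scalar x with threshold lam:
    shrink_p(x) = sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
    For p = 1 this is exactly soft thresholding."""
    if x == 0:
        return 0.0
    mag = max(abs(x) - lam ** (2.0 - p) * abs(x) ** (p - 1.0), 0.0)
    return math.copysign(mag, x)
```

A useful property visible from the formula: for p < 1 the penalty subtracted from a large |x| is smaller than for p = 1, so strong coefficients are shrunk less, which is the sense in which p < 1 exploits sparsity more aggressively.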
Low-emittance tuning of storage rings using normal mode beam position monitor calibration
NASA Astrophysics Data System (ADS)
Wolski, A.; Rubin, D.; Sagan, D.; Shanks, J.
2011-07-01
We describe a new technique for low-emittance tuning of electron and positron storage rings. This technique is based on calibration of the beam position monitors (BPMs) using excitation of the normal modes of the beam motion, and has benefits over conventional methods. It is relatively fast and straightforward to apply, it can be as easily applied to a large ring as to a small ring, and the tuning for low emittance becomes completely insensitive to BPM gain and alignment errors that can be difficult to determine accurately. We discuss the theory behind the technique, present some simulation results illustrating that it is highly effective and robust for low-emittance tuning, and describe the results of some initial experimental tests on the CesrTA storage ring.
Towards a balanced performance measurement system in a public health care organization.
Yuen, Peter P; Ng, Artie W
2012-01-01
This article attempts to devise an integrated performance measurement framework to assess the Hong Kong Hospital Authority (HA) management system by harnessing previous performance measurement systems. An integrated evaluative framework based on the balanced scorecard (BSC) was developed and applied using the case study method and longitudinal data to evaluate the HA's performance management system. The authors unveil evolving HA performance indicators (PIs). Despite the HA staff's explicit quality emphasis, cost control remains the primary focus in their performance measurements. Research limitations/implications: data used in this study are from secondary sources, disclosed mostly by HA staff. This study shows public sector staff often attach too much importance to cost control and easily measurable activities at the expense of quality and other less easily measurable attributes. A balanced performance measurement system, linked to health targets, with a complementary budgeting process that supports pertinent resource allocation, is yet to be implemented in Hong Kong's public hospitals.
NASA Astrophysics Data System (ADS)
Kolyakov, Sergei; Afanasyeva, Natalia; Bruch, Reinhard
1998-05-01
The new method of fiber-optical evanescent wave Fourier transform infrared (FEW-FTIR) spectroscopy has been applied to the diagnostics of normal skin tissue, as well as precancerous and cancerous conditions. The FEW-FTIR technique is nondestructive and sensitive to changes of vibrational spectra in the IR region, without heating or damaging human and animal skin tissue. This method is therefore an ideal diagnostic tool for tumor and cancer characterization at an early stage of development on a molecular level. The application of fiber-optic technology in the middle-infrared (MIR) region is relatively inexpensive and can be adapted easily to any commercially available tabletop FTIR spectrometer. This method of diagnostics is fast (several seconds) and can be applied in many fields. Noninvasive medical diagnostics of skin cancer and other skin diseases in vivo, ex vivo, and in vitro allow for the development of convenient, remote clinical applications in dermatology and related fields. The spectral variations from normal to pathological skin tissue, and environmental influences on skin, have been measured.
An Analysis of the Optimal Control Modification Method Applied to Flutter Suppression
NASA Technical Reports Server (NTRS)
Drew, Michael; Nguyen, Nhan T.; Hashemi, Kelley E.; Ting, Eric; Chaparro, Daniel
2017-01-01
Unlike basic Model Reference Adaptive Control (MRAC), Optimal Control Modification (OCM) has been shown to be a promising MRAC modification with robustness and analytical properties not present in other adaptive control methods. This paper presents an analysis of the OCM method and of how the asymptotic property of OCM is useful for analyzing and tuning the controller. We begin with a Lyapunov stability proof of an OCM controller having two adaptive gain terms; then the less conservative and easily analyzed OCM asymptotic property is presented. Two numerical examples show how this property can accurately predict steady-state stability and quantify robustness in the presence of time delay, linear plant perturbations, and nominal Loop Transfer Recovery (LTR) tuning. The asymptotic property of the OCM controller is then used as an aid in tuning the controller applied to a large-scale aeroservoelastic longitudinal aircraft model for flutter suppression. Control with OCM adaptive augmentation is shown to improve performance over that of the nominal non-adaptive controller when significant disparities exist between the controller/observer model and the true plant model.
Square Wave Voltammetric Determination of Diclofenac in Pharmaceutical Preparations and Human Serum
Ciltas, Ulvihan; Yilmaz, Bilal; Kaban, Selcuk; Akcay, Bilge Kaan; Nazik, Gulsah
2015-01-01
In this study, a simple and reliable square wave voltammetric (SWV) method was developed and validated for determination of diclofenac in pharmaceutical preparations and human serum. The proposed method was based on electrooxidation of diclofenac at a platinum electrode in 0.1 M TBAClO4/acetonitrile solution. Two well-defined oxidation peaks were observed at 0.87 and 1.27 V, respectively. Calibration curves obtained using the current values measured for the second peak were linear over the concentration ranges of 1.5-17.5 μg mL-1 and 2-20 μg mL-1 in supporting electrolyte and serum, respectively. Precision and accuracy were also checked in all media. Intra- and inter-day precision values for diclofenac were less than 3.64, and accuracy (relative error) was better than 2.49%. The method developed in this study is accurate and precise and can be easily applied to Diclomec, Dicloflam and Voltaren tablets as pharmaceutical preparations. The proposed technique was also successfully applied to spiked human serum samples. No electroactive interferences from endogenous substances were found in human serum.
Gutiérrez-Juárez, G; Vargas-Luna, M; Córdova, T; Varela, J B; Bernal-Alvarado, J J; Sosa, M
2002-08-01
A photoacoustic technique is used for studying the absorption of topically applied substances in human skin. The proposed method utilizes a double-chamber PA cell. The absorption determination was obtained through measurement of the thermal effusivity of the binary substance-skin system. The theoretical model assumes that the effective thermal effusivity of the binary system corresponds to that of a two-phase system. Experimental applications of the method employed different substances of topical application on different parts of the body of a volunteer. The method is demonstrated to be an easily used, non-invasive technique for dermatology research. The relative concentrations as a function of time of substances such as ketoconazole and sunscreen were determined by fitting a sigmoidal function to the data, while an exponential function gave the best fit for the data sets for nitrofurazone, vaseline and vaporub. The time constants associated with the rates of absorption were found to vary between 10 and 58 min, depending on the substance and the part of the body.
Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2002-01-01
A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.
Novel optical interconnect devices and coupling methods applying self-written waveguide technology
NASA Astrophysics Data System (ADS)
Nakama, Kenichi; Mikami, Osamu
2011-05-01
For use in cost-effective optical interconnection of opto-electronic printed wiring boards (OE-PWBs), we have developed novel optical interconnect devices and coupling methods that simplify board-to-board optical interconnects. All of these are based on self-written waveguide (SWW) technology using the mask-transfer method with light-curable resin. This method enables fabrication of arrayed M × N optical channels in one shot of UV light. Very precise patterns, for example optical rods with diameters of 50 μm to 500 μm, can be easily fabricated. The length of the fabricated patterns, typically up to about 1000 μm, can be controlled by a spacer placed between the photomask and the substrate. Using these technologies, several new optical interfaces have been demonstrated: a VCSEL chip with an optical output rod, and new coupling methods of "plug-in" alignment and "optical socket" based on SWW.
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
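The fractional-factorial idea and the Taguchi loss function described above can be sketched in a few lines; the L4 orthogonal array is standard, but the responses and loss constant below are invented classroom numbers:

```python
# Invented classroom example: an L4(2^3) orthogonal array covers three
# two-level factors in four trials instead of the full 2^3 = 8 factorial.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
responses = [20.0, 24.0, 30.0, 26.0]  # made-up trial results

def main_effects(array, y):
    """Average response at level 2 minus average response at level 1,
    computed independently for each factor (interactions are ignored,
    which is the method's main limitation)."""
    effects = []
    for f in range(len(array[0])):
        lvl1 = [yi for row, yi in zip(array, y) if row[f] == 1]
        lvl2 = [yi for row, yi in zip(array, y) if row[f] == 2]
        effects.append(sum(lvl2) / len(lvl2) - sum(lvl1) / len(lvl1))
    return effects

def taguchi_loss(y, target, k=1.0):
    """Quadratic Taguchi loss: any deviation from target costs k*(y-target)^2,
    so reducing scatter matters even when the mean is on target."""
    return k * (y - target) ** 2

effects = main_effects(L4, responses)
```

With these made-up responses the analysis ranks factor 1 as dominant, factor 3 as secondary, and factor 2 as inert, exactly the kind of quick screening the abstract advocates for the classroom.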
Versatile robotic probe calibration for position tracking in ultrasound imaging.
Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N
2015-05-07
Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.
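The final step, computing the calibration matrix from corresponding sphere positions, amounts to a rigid least-squares fit. A standard way to do it, not necessarily the authors' exact formulation, is the Kabsch algorithm:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that R @ P_i + t ≈ Q_i
    (the Kabsch algorithm). P and Q are (N, 3) arrays of corresponding points,
    e.g. sphere centres segmented from images and their tracked positions."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.random.default_rng(0).normal(size=(12, 3))
Q = P @ R_true.T + t_true
R_est, t_est = rigid_fit(P, Q)
```

Because the fit only needs point correspondences, it is indifferent to which probe produced the images, which is one way to understand the method's probe-independence.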
Method for Waterproofing Ceramic Materials
NASA Technical Reports Server (NTRS)
Cagliostro, Domenick E. (Inventor); Hsu, Ming-Ta S. (Inventor)
1998-01-01
Hygroscopic ceramic materials that are difficult to waterproof with a silane, substituted silane, or silazane waterproofing agent, such as the alumina-containing, flexible and porous fibrous ceramic insulation used on a reentry space vehicle, are rendered easy to waterproof if the interior porous surface of the ceramic is first coated with a thin coating of silica. The silica coating is achieved by coating the interior surface of the ceramic with a silica precursor, converting the precursor to silica either in situ or by oxidative pyrolysis, and then applying the waterproofing agent to the silica-coated ceramic. The silica precursor comprises almost any suitable silicon-containing material, such as a silane, silicone, siloxane, or silazane, applied by solution, vapor deposition, and the like. If the waterproofing is removed by, e.g., burning, the silica remains and the ceramic is easily rewaterproofed. An alumina-containing TABI insulation, which otherwise absorbs more than five times its weight of water, absorbs less than 10 wt.% water after being waterproofed according to the method of the invention.
Three dimensional measurement with an electrically tunable focused plenoptic camera
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2017-03-01
A liquid crystal microlens array (LCMLA) with an arrayed microhole pattern electrode based on nematic liquid crystal materials, fabricated using traditional UV photolithography and wet etching, is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental outcome shows that the focal length of the LCMLA can be tuned easily by only changing the root mean square value of the applied voltage signal. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters such as three-dimensional (3D) depth, positioning, and motion expression are given, and the depth resolution is discussed in detail. Experiments are carried out to obtain static and dynamic 3D information about the chosen objects.
Quantitation of Fine Displacement in Echography
NASA Astrophysics Data System (ADS)
Masuda, Kohji; Ishihara, Ken; Yoshii, Ken; Furukawa, Toshiyuki; Kumagai, Sadatoshi; Maeda, Hajime; Kodama, Shinzo
1993-05-01
A high-speed digital subtraction echography system was developed to visualize the fine displacement of human internal organs. This method reveals changes in position through time-series images of high-frame-rate echography, so that fine displacement smaller than the ultrasonic wavelength can be observed. The method, however, lacked the ability to quantitatively measure displacement length: the subtraction between two successive images was affected by the displacement direction even when the displacement length was the same. To solve this problem, convolution of the echogram with a Gaussian distribution was used, and to express displacement length quantitatively as brightness, normalization by the brightness gradient was applied. The quantitation algorithm was applied to successive B-mode images. Compared to the simply subtracted images, the quantitated images express the motion of organs more precisely. Expansion of the carotid artery and fine motion of the ventricular walls can be visualized more easily, and under more static conditions the system quantitates displacement lengths much smaller than the wavelength.
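The Gaussian-convolution and gradient-normalization steps can be sketched in one dimension. This is a simplified toy with invented parameters; real echograms are two-dimensional and far noisier:

```python
import numpy as np

def displacement(frame1, frame2, sigma=3.0):
    """Estimate a small sub-sample shift between two 1-D brightness profiles:
    smooth both with a Gaussian (the convolution step), then normalize the
    frame difference by the local brightness gradient, since to first order
    frame1 - frame2 ≈ shift * d(frame1)/dx."""
    k = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    g = np.exp(-k**2 / (2.0 * sigma**2))
    g /= g.sum()
    a = np.convolve(frame1, g, mode='same')
    b = np.convolve(frame2, g, mode='same')
    grad = np.gradient(a)
    mask = np.abs(grad) > 0.1 * np.abs(grad).max()  # skip flat regions
    return np.mean((a - b)[mask] / grad[mask])

# Synthetic echo profile shifted by 0.4 samples, far below one "wavelength".
xs = np.arange(200.0)
def profile(shift):
    return np.exp(-((xs - 100.0 - shift) ** 2) / (2.0 * 8.0 ** 2))

d_est = displacement(profile(0.0), profile(0.4))
```

The gradient normalization is what removes the direction dependence of the raw subtraction: dividing the difference image by the brightness gradient converts it into a displacement estimate with a consistent sign and magnitude.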
Sea otter research methods and tools
Bodkin, James L.; Maldini, Daniela; Calkins, Donald; Atkinson, Shannon; Meehan, Rosa
2004-01-01
Sea otters possess physical characteristics and life history attributes that provide both opportunity and constraint to their study. Because of their relatively limited diving ability they occur in nearshore marine habitats that are usually viewable from shore, allowing direct observation of most behaviors. Because sea otters live nearshore and forage on benthic invertebrates, foraging success and diet are easily measured. Because they rely almost exclusively on their pelage for insulation, which requires frequent grooming, successful application of external tags or instruments has been limited to attachments in the interdigital webbing of the hind flippers. Techniques to surgically implant instruments into the intraperitoneal cavity are well developed and routinely applied. Because they have relatively small home ranges and rest in predictable areas, they can be recaptured with some predictability using closed-circuit scuba diving technology. The purpose of this summary is to identify some of the approaches, methods, and tools that are currently engaged for the study of sea otters, and to suggest potential avenues for applying advancing technologies.
Li, Lin-Qiu; Baibado, Joewel T; Shen, Qing; Cheung, Hon-Yeung
2017-12-01
Plastron is a nutritive and superior functional food. Due to its limited supply yet enormous demand, some functional foods supposed to contain plastron may be adulterated with substitutes. This paper reports a novel and simple method for determining the authenticity of plastron-derived functional foods based on comparison of the amino acid (AA) profiles of plastron and its possible substitutes. By applying micellar electrokinetic chromatography (MEKC), 18 common AAs along with 2 special AAs, hydroxyproline (Hyp) and hydroxylysine (Hyl), were detected in all plastron samples. Since chicken, egg, fish, milk, pork, nail and hair lack Hyp and Hyl, plastron could be easily distinguished from them. For substitutes containing collagen, a statistical analysis technique, principal component analysis (PCA), was adopted, and plastron was again successfully distinguished. When the proposed method was applied to authenticate turtle shell glue on the market, fake products were commonly found.
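The PCA step can be sketched on mock amino-acid profiles. All numbers below are invented; the real study used measured MEKC profiles of plastron and collagen-containing substitutes:

```python
import numpy as np

# Invented 18-component AA profiles (rows = samples, columns = AA fractions):
# two groups with different composition patterns plus measurement noise.
rng = np.random.default_rng(1)
plastron   = np.linspace(0.0, 1.0, 18) + rng.normal(scale=0.1, size=(6, 18))
substitute = np.linspace(1.0, 0.0, 18) + rng.normal(scale=0.1, size=(6, 18))
X = np.vstack([plastron, substitute])

def pca_scores(X, k=2):
    """Scores on the first k principal components, via SVD of centred data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

scores = pca_scores(X)
```

Projecting the 18-dimensional profiles onto the first two principal components collapses the composition difference into a single dominant axis, so the two groups separate cleanly in the score plot, which is how the statistical discrimination in the study works.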
NASA Astrophysics Data System (ADS)
Makó, Éva; Kovács, András; Ható, Zoltán; Kristóf, Tamás
2015-12-01
Recent experimental and simulation findings on kaolinite-methanol intercalation complexes raised the question of the existence of more stable structures in the wet and dry states, which has not yet been fully cleared up. Experimental and molecular simulation analyses were used to investigate different types of kaolinite-methanol complexes, revealing their real structures. Cost-efficient homogenization methods were applied to synthesize the kaolinite-dimethyl sulfoxide and kaolinite-urea pre-intercalation complexes from which the kaolinite-methanol ones were prepared. The tested homogenization method required an order of magnitude smaller amount of reagents than the generally applied solution method. The influence of the type of pre-intercalated molecules and of the wetting or drying (at room temperature and at 150 °C) procedure on the intercalation was characterized experimentally by X-ray diffraction and thermal analysis. Consistent with the suggestion from the present simulations, 1.12-nm and 0.83-nm stable kaolinite-methanol complexes were identified. For these complexes, our molecular simulations predict either single-layered structures of mobile methanol/water molecules or non-intercalated structures of methoxy-functionalized kaolinite. We found that the methoxy-modified kaolinite can easily be intercalated by liquid methanol.
Developing collaborative classifiers using an expert-based model
Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan
2009-01-01
This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
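The staged strategy, classify, locate low-accuracy regions against reference data, and swap in an alternate classifier only there, can be sketched with invented one-dimensional data and threshold "classifiers" standing in for decision trees or neural networks:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented 1-D feature: a pixel is "impervious" when x > 0.5 in region A but
# when x > 0.2 in region B, so one global threshold cannot serve both regions.
region = rng.integers(0, 2, size=600)       # 0 = region A, 1 = region B
x = rng.uniform(0.0, 1.0, size=600)
truth = np.where(region == 0, x > 0.5, x > 0.2).astype(int)

def accuracy(pred, mask):
    """Spatially explicit accuracy: agreement with reference data in a region."""
    return float((pred[mask] == truth[mask]).mean())

# Stage 1: a single global rule, assessed per region with reference data.
pred = (x > 0.5).astype(int)
for r in (0, 1):
    mask = region == r
    if accuracy(pred, mask) < 0.9:
        # Stage 2: swap in an alternate rule only where stage 1 is weak,
        # leaving the high-accuracy region untouched.
        pred[mask] = (x[mask] > 0.2).astype(int)

final_acc = float((pred == truth).mean())
```

The key property claimed in the abstract shows up directly: replacing the classifier inside the weak region cannot degrade the already-accurate region, because the two regions are handled independently.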
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, Billy C.
The measurement of the radiation characteristics of an antenna on a near-field range requires that the antenna under test be located very close to the near-field probe. Although the direct coupling is utilized for characterizing the near field, this close proximity also presents the opportunity for significant undesired interactions (for example, reflections) to occur between the antenna and the near-field probe. When uncompensated, these additional interactions will introduce error into the measurement, increasing the uncertainty in the final gain pattern obtained through the near-field-to-far-field transformation. Quantifying this gain-uncertainty contribution requires quantifying the various additional interactions. A method incorporating spatial-frequency analysis is described which allows the dominant interaction contributions to be easily identified and quantified. In addition to identifying the additional antenna-to-probe interactions, the method also allows identification and quantification of interactions with other nearby objects within the measurement room. Because the method is a spatial-frequency method, wide-bandwidth data is not required, and it can be applied even when data is available at only a single temporal frequency. This feature ensures that the method can be applied to narrow-band antennas, where a similar time-domain analysis would not be possible.
Occupancy in community-level studies
MacKenzie, Darryl I.; Nichols, James; Royle, Andy; Pollock, Kenneth H.; Bailey, Larissa L.; Hines, James
2018-01-01
Another type of multi-species study focuses on community-level metrics such as species richness. In this chapter we detail how some of the single-species occupancy models described in earlier chapters have been applied, or extended, for use in such studies, while accounting for imperfect detection. We highlight how Bayesian methods using MCMC are particularly useful in such settings to easily calculate relevant community-level summaries based on presence/absence data. These modeling approaches can be used to assess richness at a single point in time, or to investigate changes in the species pool over time.
METHOD OF FORMING ELONGATED COMPACTS
Larson, H.F.
1959-05-01
A powder-compacting procedure and apparatus that produce elongated compacts of Be are described. The powdered metal is placed in a thin metal tube which is chemically compatible with the lubricant, powder, atmosphere, and die material, will undergo a high degree of plastic deformation, and has intermediate hardness. The tube is capped and placed in the die, and punches are applied to the ends. During the compacting stroke the powder seizes the tube, and a thickening and shortening of the tube occurs. The tube is easily removed from the die, split, and peeled from the compact. (T.R.H.)
Li, Na; Yang, Gongzheng; Sun, Yong; Song, Huawei; Cui, Hao; Yang, Guowei; Wang, Chengxin
2015-05-13
Transparency has never been integrated into freestanding flexible graphene paper (FF-GP), although FF-GP has been discussed extensively, because a thin transparent graphene sheet will fracture easily when the template or substrate is removed using traditional methods. Here, transparent FF-GP (FFT-GP) was developed using NaCl as the template and was applied in transparent and stretchable supercapacitors. The capacitance was improved by nearly 1000-fold compared with that of the laminated or wrinkled chemical vapor deposition graphene-film-based supercapacitors.
An efficient numerical scheme for the study of equal width equation
NASA Astrophysics Data System (ADS)
Ghafoor, Abdul; Haq, Sirajul
2018-06-01
In this work a new numerical scheme is proposed in which Haar wavelet method is coupled with finite difference scheme for the solution of a nonlinear partial differential equation. The scheme transforms the partial differential equation to a system of algebraic equations which can be solved easily. The technique is applied to equal width equation in order to study the behaviour of one, two, three solitary waves, undular bore and soliton collision. For efficiency and accuracy of the scheme, L2 and L∞ norms and invariants are computed. The results obtained are compared with already existing results in literature.
Norrgard, E B; Sitaraman, N; Barry, J F; McCarron, D J; Steinecker, M H; DeMille, D
2016-05-01
We demonstrate a simple method for producing low-reflectivity surfaces that are ultra-high-vacuum compatible, may be baked to high temperatures, and are easily applied even on complex surface geometries. Black cupric oxide (CuO) surfaces are chemically grown in minutes on any copper surface, allowing for low-cost, rapid prototyping and production. The reflective properties are measured to be comparable to commercially available products for creating optically black surfaces. We describe a vacuum apparatus which uses multiple blackened copper surfaces for sensitive, low-background detection of molecules using laser-induced fluorescence.
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation, or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and relatively easy to implement.
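The fit-with-early-stopping loop at the heart of the approach can be sketched for a toy one-dimensional "database". A Morse potential stands in for ab initio energies, and the network size, learning rate, and patience are arbitrary choices made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for an ab initio database: a 1-D Morse potential sampled
# over the region a trajectory would visit.
def morse(r, de=4.7, a=1.9, r0=0.74):
    return de * (1.0 - np.exp(-a * (r - r0))) ** 2

r = rng.uniform(0.5, 2.5, size=400)
y = morse(r)
r_tr, y_tr, r_va, y_va = r[:300], y[:300], r[300:], y[300:]

# One-hidden-layer tanh network trained by full-batch gradient descent,
# with early stopping on the held-out set to guard against overfitting.
H = 16
W1 = rng.normal(scale=1.0, size=H); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=H); b2 = 0.0

def forward(x):
    h = np.tanh(np.outer(x, W1) + b1)   # (N, H) hidden activations
    return h, h @ W2 + b2

best_va, best, bad = np.inf, None, 0
lr, patience = 0.01, 50
for epoch in range(5000):
    h, pred = forward(r_tr)
    err = pred - y_tr                       # dMSE/dpred up to a factor of 2
    gW2 = h.T @ err / len(err); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h**2)   # backprop through tanh
    gW1 = (dh * r_tr[:, None]).mean(axis=0); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    va = np.mean((forward(r_va)[1] - y_va) ** 2)
    if va < best_va:                        # keep the best validation weights
        best_va, best, bad = va, (W1.copy(), b1.copy(), W2.copy(), b2), 0
    else:
        bad += 1
        if bad >= patience:                 # early stopping
            break
```

The held-out error is exactly the convergence monitor the abstract describes: when new database points stop improving it, the fit has converged over the sampled region.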
Neural network regulation driven by autonomous neural firings
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2016-07-01
Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if the new variables were Ising spins interacting with one another, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
Exercising privacy rights in medical science
Hillmer, Michael; Redelmeier, Donald A.
2007-01-01
Privacy laws are intended to preserve human well-being and improve medical outcomes. We used the Sportstats website, a repository of competitive athletic data, to test how easily these laws can be circumvented. We designed a haphazard, unrepresentative case-series analysis and applied unscientific methods based on an Internet connection and idle time. We found it both feasible and titillating to breach anonymity, stockpile personal information and generate misquotations. We extended our methods to snoop on celebrities, link to outside databases and uncover refusal to participate. Throughout our study, we evaded capture and public humiliation despite violating these 6 privacy fundamentals. We suggest that the legitimate principle of safeguarding personal privacy is undermined by the natural human tendency toward showing off. PMID:18056619
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. An essentially flat image, with the defective area having a lower gray level than this plane, was obtained by using the proposed algorithms. The defective areas can then be easily extracted by a global threshold value. The experimental results, with a 94.0% classification rate based on 100 apple images, showed that the proposed algorithm was simple and effective. This proposed method can be applied to other spherical fruits.
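The correct-then-threshold pipeline the abstract describes can be sketched in one dimension. This is an illustrative stand-in only: a moving average plays the role of the B-spline lighting surface, and the scene, pixel values, and threshold are all hypothetical.

```python
import math

# hypothetical 1-D scan line across a spherical fruit: smooth lighting falloff,
# with a darker defective patch between pixels 60 and 70
n = 120
lighting = [200.0 * math.cos((i - n / 2) / n * 1.5) for i in range(n)]
image = [v * (0.55 if 60 <= i < 70 else 1.0) for i, v in enumerate(lighting)]

def moving_average(xs, w=15):
    """Crude background estimate (a stand-in for the B-spline lighting surface)."""
    half = w // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

background = moving_average(image)
flat = [p / b for p, b in zip(image, background)]     # near 1.0 except at defects
defects = [i for i, v in enumerate(flat) if v < 0.8]  # global threshold
```

Dividing by the estimated background flattens the lighting gradient, so one global threshold suffices regardless of where on the curved surface the defect sits.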
Estimating vapor pressures of pure liquids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraburda, S.S.
1996-03-01
Calculating the vapor pressures for pure liquid chemicals is a key step in designing equipment for separation of liquid mixtures. Here is a useful way to develop an equation for predicting vapor pressures over a range of temperatures. The technique uses known vapor pressure points for different temperatures. Although a vapor-pressure equation is being showcased in this article, the basic method has much broader applicability -- in fact, users can apply it to develop equations for any temperature-dependent model. The method can be easily adapted for use in software programs for mathematics evaluation, minimizing the need for any programming. The model used is the Antoine equation, which typically provides a good correlation with experimental or measured data.
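The point-based fitting idea can be sketched directly. The following is an illustrative implementation, not the article's own procedure: it recovers Antoine constants for log10(P) = A - B/(C + T) from three known vapor-pressure points of water (standard handbook values) by scanning C and solving for A and B with linear least squares.

```python
import math

# hypothetical input: vapor pressure of water (T in deg C, P in mmHg),
# standard handbook values at three temperatures
points = [(25.0, 23.76), (60.0, 149.38), (100.0, 760.0)]

def fit_antoine(points):
    """Fit log10(P) = A - B/(C + T): scan C; get A, B by linear least squares."""
    best = None
    for i in range(1, 4000):
        C = 0.1 * i  # scan C over (0, 400)
        xs = [1.0 / (C + T) for T, _ in points]
        ys = [math.log10(P) for _, P in points]
        n = len(points)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = -B
        B = -slope
        A = (sy - slope * sx) / n
        sse = sum((A - B * x - y) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, A, B, C)
    return best[1], best[2], best[3]

def vapor_pressure(A, B, C, T):
    return 10.0 ** (A - B / (C + T))

A, B, C = fit_antoine(points)  # close to the published water constants
```

Interpolating at 80 deg C should land near the measured 355 mmHg, which is a quick sanity check of the fit.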
NASA Astrophysics Data System (ADS)
Hansson, Tony
1999-08-01
An inexpensive semiclassical method to simulate time-resolved pump-probe spectroscopy on molecular wave packets is applied to NaK molecules at high temperature. The method builds on the introduction of classical phase factors related to the r-centroids for vibronic transitions and assumes instantaneous laser-molecule interaction. All observed quantum mechanical features are reproduced; for short times, where experimental data are available, the agreement is even quantitative. Furthermore, it is shown that fully quantum dynamical molecular wave packet calculations on molecules at elevated temperatures, which do not include all rovibrational states, must be regarded with caution, as they can easily yield even qualitatively incorrect results.
Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference
NASA Astrophysics Data System (ADS)
Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2018-06-01
Multi-image iterative phase retrieval methods have been successfully applied in many research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It eliminates the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
Some observations on glass-knife making.
Ward, R T
1977-11-01
The yield of usable knife edge per knife (for thin sectioning) was markedly increased when glass knives were made at an included angle of 55 degrees rather than the customary 45 degrees. A large number of measurements of edge check marks made with a routine light scattering method as well as observations made on a smaller number of test sections with the electron microscope indicated the superiority of 55 degrees knives. Knives were made with both taped pliers and an LKB Knifemaker. Knives were graded by methods easily applied in any biological electron microscope laboratory. Depending on the mode of fracture, the yield of knives having more than 33% of their edges free of check marks was 30 to 100 times greater at 55 degrees than 45 degrees.
Determination of soluble and insoluble dietary fiber in psyllium-containing cereal products.
Lee, S C; Rodriguez, F; Storey, M; Farmakalidis, E; Prosky, L
1995-01-01
A method for soluble and insoluble dietary fiber determinations was developed for psyllium-containing food products, which are highly viscous in aqueous solutions. The assay is based on a modification of the AOAC soluble and insoluble dietary fiber method (991.43), which was recommended for nutrition labeling in the final U.S. food labeling regulations. We found that method 991.43 and other existing dietary fiber methods could not be applied to psyllium food products, which exhibit high viscosity in aqueous solutions, because highly viscous solutions could not be filtered easily. In this study, we modified AOAC method 991.43 to accommodate the filtration process of viscous sample solutions. Sonication followed by high-speed centrifugation was used before filtration. The principles of the method are similar to those for AOAC method 991.43, including the use of the same 3 enzymes (heat-stable alpha-amylase, protease, and amyloglucosidase) as well as similar enzyme incubation conditions. The modification using sonication and high-speed centrifugation did not alter the method performance for analytically normal products such as wheat bran, oat bran, and soy fiber. Yet, the modification allowed the separation of soluble dietary fiber fractions from insoluble fractions for psyllium products with satisfactory precision. This method for psyllium dietary fiber determinations may be applied to other food products that exhibit high viscosity in aqueous solutions.
Menezes, Helvécio Costa; de Barcelos, Stella Maris Resende; Macedo, Damiana Freire Dias; Purceno, Aluir Dias; Machado, Bruno Fernades; Teixeira, Ana Paula Carvalho; Lago, Rochel Monteiro; Serp, Philippe; Cardeal, Zenilda Lourdes
2015-05-11
This paper describes a new, efficient and versatile method for the sampling and preconcentration of PAH in environmental water matrices using special hybrid magnetic carbon nanotubes. These N-doped amphiphilic CNT can be easily dispersed in any aqueous matrix due to the N-containing hydrophilic part and, at the same time, show high efficiency for the adsorption of different PAH contaminants due to the very hydrophobic surface. After adsorption, the CNT can be easily removed from the medium by a simple magnetic separation. GC/MS analyses showed that the CNT method is more efficient than the use of polydimethylsiloxane (PDMS), with much lower solvent consumption, greater technical simplicity and shorter analysis time, showing good linearity (range 0.18-80.00 μg L(-1)) and determination coefficient (R(2) > 0.9810). The limit of detection ranged from 0.05 to 0.42 μg L(-1), with limit of quantification from 0.18 to 1.40 μg L(-1). Recovery (n=9) ranged from 80.50 ± 10 to 105.40 ± 12%. Intraday precision (RSD, n=9) ranged from 1.91 to 9.01%, whereas interday precision (RSD, n=9) ranged from 7.02 to 17.94%. The method was applied to the analyses of PAH in four lake water samples collected in Belo Horizonte City, Brazil. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shou-Yi; Wang, Jian, E-mail: wangjian@nwnu.edu.cn; Wang, Gang
2015-08-15
Highlights: • An alumina multilayer structure with alternating high and low refractive index is fabricated. • This multilayer shows a strong photonic band gap (PBG) and vivid film colors. • The first PBG can be modulated easily by varying the duration time of the constant high or low voltages. • The photonic crystal is fabricated directly by electrochemical anodization. • The formation mechanism of the multilayer is also discussed. - Abstract: An alumina nanolayer structure with alternating high and low porosities is conveniently fabricated by applying a modified pulse voltage waveform with constant high and low voltages. This structure shows well-defined layers whose long-range structural periodicity leads to a strong photonic band gap (PBG) from the visible to the near infrared, and brilliant film colors. Compared with previously reported tuning methods, this method is simpler and more flexible in tuning the PBG of photonic crystals (PCs). The effect of the duration time of the high, low and 0 V voltages on the PBG is discussed. The first PBG can be modulated easily from the visible to the near infrared region by varying the duration time of the constant high or low voltages. It is also found that holding 0 V for an appropriate time helps to improve the quality of the PCs. The formation mechanism of the multilayer is also discussed.
Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.
Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang
2016-11-01
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
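The discrete multilocation search can be illustrated with a toy genetic algorithm. Everything here is hypothetical: the candidate locations, the individual gains, and the redundancy penalty merely stand in for the linear data-worth estimates the paper couples with PEST; the small problem size also allows an exhaustive check of the result.

```python
import itertools
import math
import random

random.seed(3)

# hypothetical monitoring problem: 10 candidate wells on a line, pick 3
positions = [random.uniform(0, 100) for _ in range(10)]
gains = [random.uniform(1, 5) for _ in range(10)]  # individual data worth

def fitness(design):
    """Information gain with a redundancy penalty for nearby sensors."""
    total = sum(gains[i] for i in design)
    for i, j in itertools.combinations(design, 2):
        total -= min(gains[i], gains[j]) * math.exp(-abs(positions[i] - positions[j]) / 10.0)
    return total

def genetic_search(k=3, pop_size=30, generations=40):
    """Elitist GA over fixed-size location subsets with swap mutation."""
    pop = [random.sample(range(10), k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            pool = list(set(p1) | set(p2))      # crossover: draw from parent union
            child = random.sample(pool, k)
            if random.random() < 0.3:           # mutation: swap one location
                out = random.choice(child)
                child[child.index(out)] = random.choice(
                    [i for i in range(10) if i not in child])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search()
optimum = max(itertools.combinations(range(10), 3), key=fitness)
```

On a 120-design space the GA should essentially match the exhaustive optimum; its value is that the same loop scales to design spaces far too large to enumerate.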
Inferring relationships between pairs of individuals from locus heterozygosities
Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E
2002-01-01
Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (z_i) depend on locus heterozygosity (H), and are scarcely affected by variation of the distribution of allele frequencies. This allows us to obtain empirical curves relating the z_i's to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than that of the exact method. Analysis of a large database of STR data proves that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratio of the more common relationships between pairs of individuals may be obtained by looking at tabulated z_i values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003
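The sharing probabilities z_i can be illustrated by Monte Carlo simulation. This sketch is not the authors' empirical-curve procedure: it simply estimates, for an assumed allele-frequency vector (H = 0.70), how often unrelated pairs and full-sib pairs share 0, 1 or 2 alleles identical by state, and forms a likelihood ratio from those estimates.

```python
import random
from collections import Counter

def ibs(g1, g2):
    """Number of alleles shared identical by state between two genotypes."""
    c1, c2 = Counter(g1), Counter(g2)
    return sum(min(c1[a], c2[a]) for a in c1)

def draw(freqs):
    """Sample one allele index from the frequency vector."""
    r, acc = random.random(), 0.0
    for allele, f in enumerate(freqs):
        acc += f
        if r < acc:
            return allele
    return len(freqs) - 1

def simulate_zi(freqs, relationship, n=20000):
    """Estimate z_0, z_1, z_2: probabilities of sharing 0, 1 or 2 alleles IBS."""
    counts = [0, 0, 0]
    for _ in range(n):
        if relationship == "unrelated":
            g1 = (draw(freqs), draw(freqs))
            g2 = (draw(freqs), draw(freqs))
        else:  # full sibs: two children of the same two parents
            mom = (draw(freqs), draw(freqs))
            dad = (draw(freqs), draw(freqs))
            g1 = (random.choice(mom), random.choice(dad))
            g2 = (random.choice(mom), random.choice(dad))
        counts[ibs(g1, g2)] += 1
    return [c / n for c in counts]

random.seed(42)
freqs = [0.4, 0.3, 0.2, 0.1]     # hypothetical 4-allele locus, H = 0.70
z_unrel = simulate_zi(freqs, "unrelated")
z_sib = simulate_zi(freqs, "sibs")
lr_ibs2 = z_sib[2] / z_unrel[2]  # LR of full sibs vs unrelated given IBS = 2
```

Full sibs share both alleles far more often than unrelated pairs, so the likelihood ratio for an observed IBS of 2 comes out well above 1 at this heterozygosity.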
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.
Hachaj, Tomasz; Ogiela, Marek R
2016-06-01
The main novelty of this paper is the adaptation of the Gesture Description Language (GDL) methodology to sport and rehabilitation data analysis and classification. In this paper we show that the Lua language can be successfully used to adapt the GDL classifier to those tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The obtained execution speed allows using the methodology in real-time motion capture data processing, where the capture frequency ranges from 100 Hz to even 500 Hz depending on the number of features or classes to be calculated and recognized. Due to this fact the proposed methodology can be used with high-end motion capture systems. We anticipate that using this novel, efficient and effective method will greatly help both sports trainers and physiotherapists in their practice. The proposed approach can be directly applied to motion capture data kinematics analysis (evaluation of motion without regard to the forces that cause that motion). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment.
Single Cell Total RNA Sequencing through Isothermal Amplification in Picoliter-Droplet Emulsion.
Fu, Yusi; Chen, He; Liu, Lu; Huang, Yanyi
2016-11-15
Prevalent single cell RNA amplification and sequencing chemistries mainly focus on polyadenylated RNAs in eukaryotic cells by using oligo(dT) primers for reverse transcription. We develop a new RNA amplification method, "easier-seq", to reverse transcribe and amplify the total RNAs, both with and without polyadenylate tails, from a single cell for transcriptome sequencing with high efficiency, reproducibility, and accuracy. By distributing the reverse transcribed cDNA molecules into 1.5 × 10^5 aqueous droplets in oil, the cDNAs are isothermally amplified using random primers in each of these 65-pL reactors separately. This new method greatly improves the ease of single-cell RNA sequencing by reducing the experimental steps. Meanwhile, with less chance to induce errors, this method can easily maintain the quality of single-cell sequencing. In addition, this polyadenylate-tail-independent method can be seamlessly applied to prokaryotic cell RNA sequencing.
Liu, Qian-qian; Wang, Chun-yan; Shi, Xiao-feng; Li, Wen-dong; Luan, Xiao-ning; Hou, Shi-lin; Zhang, Jin-liang; Zheng, Rong-er
2012-04-01
In this paper, a new method was developed to differentiate spill oil samples. Synchronous fluorescence spectra in the lower, nonlinear concentration range of 10(-2)-10(-1) g x L(-1) were collected to build the training database. A radial basis function artificial neural network (RBF-ANN) was used to identify the sample sets, along with principal component analysis (PCA) as the feature extraction method. The recognition rate of the closely-related oil source samples is 92%. All the results demonstrated that the proposed method could identify the crude oil samples effectively from just one synchronous spectrum of the spill oil sample. The method is expected to be well suited to real-time spill oil identification, and can also be easily applied to oil logging and the analysis of other multi-PAH or multi-fluorescent mixtures.
Hosono, Eiji; Wang, Yonggang; Kida, Noriyuki; Enomoto, Masaya; Kojima, Norimichi; Okubo, Masashi; Matsuda, Hirofumi; Saito, Yoshiyasu; Kudo, Tetsuichi; Honma, Itaru; Zhou, Haoshen
2010-01-01
A triaxial LiFePO4 nanowire with a multiwall carbon nanotube (VGCF: vapor-grown carbon fiber) core column and an outer shell of amorphous carbon was successfully synthesized through the electrospinning method. The carbon nanotube core oriented in the direction of the wire played an important role in the conduction of electrons during the charge-discharge process, whereas the outer amorphous carbon shell suppressed the oxidation of Fe2+. An electrode with uniformly dispersed carbon and active materials was easily fabricated via a single heating process after electrospinning. Mössbauer spectroscopy of the nanowire showed a broadening of the line width, indicating a disordered coordination environment of the Fe ions near the surface. The electrospinning method was proven to be suitable for the fabrication of a triaxial nanostructure.
Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia
2016-06-01
To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
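The midpoint logic behind the symmetry assumption can be checked numerically. A hypothetical sketch, not the authors' survey code: when only completed months are recorded, taking the month floor biases attained age low by about half a month, while adding half a month removes the bias under a symmetric within-month age distribution.

```python
import random

random.seed(1)
true_ages = [random.uniform(0.0, 60.0) for _ in range(50000)]  # months, under five

recorded = [int(a) for a in true_ages]      # only completed months are kept
naive = [float(m) for m in recorded]        # floor of age: biased low
corrected = [m + 0.5 for m in recorded]     # midpoint under the symmetry assumption

def mean_error(est, truth):
    return sum(e - t for e, t in zip(est, truth)) / len(truth)

bias_naive = mean_error(naive, true_ages)          # roughly -0.5 months
bias_corrected = mean_error(corrected, true_ages)  # roughly 0
```

An age biased low shifts height-for-age z-scores upward, which is consistent with the abstract's finding that the uncorrected estimates understate stunting.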
Engisch, William; Muzzio, Fernando
Continuous processing in pharmaceutical manufacturing is a relatively new approach that has generated significant attention. While it has been used for decades in other industries, showing significant advantages, the pharmaceutical industry has been slow in its adoption of continuous processing, primarily due to regulatory uncertainty. This paper aims to help address these concerns by introducing methods for batch definition, raw material traceability, and sensor frequency determination. All of the methods are based on established engineering and mathematical principles, especially the residence time distribution (RTD). This paper introduces a risk-based approach to address content uniformity challenges of continuous manufacturing. All of the detailed methods are discussed using a direct compaction manufacturing line as the main example, but the techniques can easily be applied to other continuous manufacturing methods such as wet and dry granulation, hot melt extrusion, capsule filling, etc.
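The residence-time-distribution reasoning can be sketched as a discrete convolution. The blender RTD below is a hypothetical tanks-in-series model, not a measured one: the outlet fraction of a raw-material lot is the convolution of its inlet pulse with the RTD, which is the basis for tracing a lot into the finished product.

```python
import math

def tanks_in_series_rtd(n_tanks, tau, dt, n_steps):
    """Discrete tanks-in-series RTD, E(t) ~ t^(n-1) exp(-n t / tau), normalized."""
    e = []
    for i in range(n_steps):
        t = i * dt
        e.append((t ** (n_tanks - 1)) * math.exp(-n_tanks * t / tau))
    s = sum(e)
    return [x / s for x in e]  # normalize so the discrete RTD sums to 1

def outlet_fraction(inlet, rtd):
    """Convolve the inlet lot fraction with the RTD to trace the lot downstream."""
    out = []
    for k in range(len(inlet)):
        out.append(sum(inlet[k - j] * rtd[j] for j in range(min(k + 1, len(rtd)))))
    return out

rtd = tanks_in_series_rtd(n_tanks=5, tau=60.0, dt=1.0, n_steps=300)
inlet = [1.0 if 50 <= t < 100 else 0.0 for t in range(300)]  # lot fed for 50 steps
outlet = outlet_fraction(inlet, rtd)
```

The outlet curve shows the lot smeared in time: it enters later, peaks below 100%, and tails off, which is exactly why a "batch" boundary in continuous processing must be defined through the RTD rather than by inlet timestamps alone.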
Teutsch, T; Mesch, M; Giessen, H; Tarin, C
2015-01-01
In this contribution, a method to select discrete wavelengths that allow an accurate estimation of the glucose concentration in a metamaterial-based biosensing system is presented. The sensing concept is adapted to the particular application of ophthalmic glucose sensing by covering the metamaterial with a glucose-sensitive hydrogel; the sensor readout is performed optically. Because a spectrometer is not suitable in a mobile context, a few discrete wavelengths must be selected to estimate the glucose concentration. The developed selection methods are based on nonlinear support vector regression (SVR) models. Two selection methods are compared, and it is shown that wavelengths selected by a sequential forward feature selection algorithm achieve an improved estimation. The presented method can be easily applied to different metamaterial layouts and hydrogel configurations.
Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attanasi, E.D.; Root, D.H.
1988-10-01
Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not easily accommodate situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea, for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%.
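The Nigeria figures are easy to reproduce under a simple assumption. This sketch uses an exponential per-well decline (an illustration only; the paper's method rests on a mapping algorithm, not this closed form): a 68% rate decline over 825 wells and a cumulative 8.5 billion BOE together imply the initial per-well discovery rate.

```python
import math

wells = 825
decline_fraction = 0.68  # per-well discovery rate falls by 68% over the program
total_boe = 8.5e9        # cumulative future discoveries, BOE

# rate(w) = r0 * exp(-k * w), with exp(-k * wells) = 1 - decline_fraction
k = -math.log(1.0 - decline_fraction) / wells

# cumulative = r0 * (1 - exp(-k * wells)) / k = r0 * decline_fraction / k
r0 = total_boe * k / decline_fraction  # implied initial discoveries per wildcat

reconstructed = r0 * (1.0 - math.exp(-k * wells)) / k  # should recover total_boe
```

Under this assumption the implied starting rate is on the order of 17 million BOE per wildcat, decaying well by well as the basin crowds.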
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on the one hand, exhibits exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
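The Newton-Raphson step the method relies on can be sketched on a toy 2x2 system (purely illustrative; the paper's systems arise from symplectic collocation and have large, sparse, symmetric Jacobians):

```python
def newton_raphson(f, jac, x0, tol=1e-12, max_iter=50):
    """Solve f(x, y) = 0 for two unknowns by Newton-Raphson with a 2x2 Jacobian."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)          # Jacobian entries [[a, b], [c, d]]
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det    # solve J * [dx, dy] = [f1, f2] by Cramer
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    return x, y

# hypothetical toy system: x^2 + y^2 = 4 and x * y = 1
f = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
jac = lambda x, y: (2 * x, 2 * y, y, x)
sol = newton_raphson(f, jac, (2.0, 0.5))
```

From a reasonable starting guess the residual drops quadratically, which is why a few iterations suffice once the collocation equations are assembled.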
Learning to Select Supplier Portfolios for Service Supply Chain
Zhang, Rui; Li, Jingfei; Wu, Shaoyu; Meng, Dabin
2016-01-01
Research on service supply chains has attracted increasing attention from both academia and industry. In a service supply chain, the selection of a supplier portfolio is an important and difficult problem, because a supplier portfolio may include multiple suppliers from a variety of fields. To address this problem, we propose a novel supplier portfolio selection method based on a well-known machine learning approach, the Ranking Neural Network (RankNet). In the proposed method, we treat supplier portfolio selection as a ranking problem, integrating a large set of decision-making features into a ranking neural network. Extensive simulation experiments are conducted, which demonstrate the feasibility and effectiveness of the proposed method. The proposed supplier portfolio selection model can be readily applied in real corporations. PMID:27195756
Mavrodi, Alexandra; Ohanyan, Ani; Kechagias, Nikos; Tsekos, Antonis; Vahtsevanos, Konstantinos
2015-09-01
Post-operative complications of various degrees of severity are commonly observed in third molar impaction surgery. For this reason, a surgical procedure that decreases the trauma of bone and soft tissues should be a priority for surgeons. In the present study, we compare the efficacy and the post-operative complications of patients to whom two different surgical techniques were applied for impacted lower third molar extraction. Patients of the first group underwent the classical bur technique, while patients of the second group underwent another technique, in which an elevator was placed on the buccal surface of the impacted molar in order to luxate the alveolar socket more easily. Comparing the two techniques, we observed a statistically significant decrease in the duration of the procedure and in the need for tooth sectioning when applying the second surgical technique, while the post-operative complications were similar in the two groups. We also found a statistically significant lower incidence of lingual nerve lesions and only a slightly higher frequency of sharp mandibular bone irregularities in the second group, which however was not statistically significant. The results of our study indicate that the surgical technique using an elevator on the buccal surface of the tooth seems to be a reliable method to extract impacted third molars safely, easily, quickly and with the minimum trauma to the surrounding tissues.
Spray-on electrodes enable EKG monitoring of physically active subjects
NASA Technical Reports Server (NTRS)
1966-01-01
Easily applied EKG electrodes monitor the heart signals of human subjects engaged in various physical exercises. The electrodes are formed from an air drying, electrically conductive cement mixture that can be applied to the skin by means of a modified commercially available spray gun.
The use of generalised additive models (GAM) in dentistry.
Helfenstein, U; Steiner, M; Menghini, G
1997-12-01
Ordinary multiple regression and logistic multiple regression are widely applied statistical methods which allow a researcher to 'explain' or 'predict' a response variable from a set of explanatory variables or predictors. In these models it is usually assumed that quantitative predictors such as age enter linearly into the model. During recent years these methods have been further developed to allow more flexibility in the way explanatory variables 'act' on a response variable. The methods are called 'generalised additive models' (GAM). The rigid linear terms characterising the association between response and predictors are replaced in an optimal way by flexible curved functions of the predictors (the 'profiles'). Plotting the 'profiles' allows the researcher to visualise easily the shape by which predictors 'act' over the whole range of values. The method facilitates detection of particular shapes such as 'bumps', 'U-shapes', 'J-shapes', 'threshold values' etc. Information about the shape of the association is not revealed by traditional methods. The shapes of the profiles may be checked by performing a Monte Carlo simulation ('bootstrapping'). After the presentation of the GAM, a relevant case study is presented in order to demonstrate application and use of the method. The dependence of caries in primary teeth on a set of explanatory variables is investigated. Since GAMs may not be easily accessible to dentists, this article presents them in an introductory condensed form. It was thought that a nonmathematical summary and a worked example might encourage readers to consider the methods described. GAMs may be of great value to dentists in allowing visualisation of the shape by which predictors 'act' and obtaining a better understanding of the complex relationships between predictors and response.
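The 'profile' idea can be mimicked crudely without GAM software. This is a stand-in sketch, not the article's spline-based GAM: binning a predictor and plotting per-bin mean responses exposes a U-shape that a single linear term would hide; the age variable and risk outcome here are simulated.

```python
import random

random.seed(0)
ages = [random.uniform(0.0, 10.0) for _ in range(2000)]
# hypothetical U-shaped association: risk high at both ends of the age range
risk = [0.02 * (a - 5.0) ** 2 + random.gauss(0.0, 0.05) for a in ages]

def binned_profile(x, y, n_bins=10, lo=0.0, hi=10.0):
    """Per-bin mean response: a crude, piecewise-constant 'profile'."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / (hi - lo) * n_bins), n_bins - 1)
        sums[b] += yi
        counts[b] += 1
    return [s / c for s, c in zip(sums, counts)]

profile = binned_profile(ages, risk)  # high at the ends, low in the middle
```

A proper GAM replaces the hard bins with smooth penalized splines, but the qualitative reading — spotting bumps, U-shapes, and thresholds — is the same.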
NASA Astrophysics Data System (ADS)
Wasag, H.; Cel, W.; Chomczynska, M.; Kujawska, J.
2018-05-01
The paper deals with a new method of hydrogen sulphide removal from air by filtration and selective catalytic oxidation using fibrous carriers of the Fe(III)-EDTA complex. These filtering materials are based on fibrous ion exchangers with the complex immobilized on their functional groups. It has been established that the degree of catalytic hydrogen sulphide decomposition depends on the reaction time. Thus, the required degree of hydrogen sulphide removal from air can be easily controlled by applying an appropriate thickness of the filtering layer at a given filtering velocity. This allows very thin filtering layers of the Fe(III)-EDTA/Fiban AK-22 or Fiban A-6 catalysts to be applied. The obtained results confirm the applicability of these materials for deep air purification from hydrogen sulphide.
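The thickness-velocity control described above can be sketched with a first-order plug-flow removal model; the first-order form and the rate constant are my assumptions for illustration, not values from the paper:

```python
import math

def removal_efficiency(thickness_m, velocity_m_s, k_per_s):
    """Plug-flow, first-order model: C/C0 = exp(-k * t_res), with
    residence time t_res = layer thickness / filtering velocity."""
    return 1.0 - math.exp(-k_per_s * thickness_m / velocity_m_s)

def thickness_for_target(eta, velocity_m_s, k_per_s):
    """Invert the model: thickness needed to reach removal efficiency eta."""
    return -velocity_m_s * math.log(1.0 - eta) / k_per_s
```

Under this model the removal degree rises exponentially with bed depth, which is why a thin layer can suffice at a given filtering velocity.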
Evaluation of liquefaction potential for building code
NASA Astrophysics Data System (ADS)
Nunziata, C.; De Nisco, G.; Panza, G. F.
2008-07-01
The standard approach for the evaluation of liquefaction susceptibility is based on the estimation of a safety factor between the cyclic shear resistance to liquefaction and the earthquake-induced shear stress. Recently, an updated procedure based on shear-wave velocities (Vs) has been proposed which can be more easily applied. These methods have been applied at La Plaja beach of Catania, which experienced liquefaction during the 1693 earthquake. The detailed geotechnical and Vs information and the realistic ground motion computed for the 1693 event allowed us to compare the two approaches. The successful application of the Vs procedure, slightly modified to fit historical and safety factor information, although additional field performance data are needed, encourages the development of a guide for liquefaction potential analysis, based on well defined Vs profiles, to be included in the Italian seismic code.
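The two quantities being compared can be sketched as follows, assuming the widely used Seed-Idriss simplified cyclic stress ratio and an Andrus-Stokoe-type Vs-based resistance curve for clean sands (stresses in kPa; the coefficients, the fixed stress-reduction factor, and the omitted magnitude scaling are textbook simplifications, not the authors' modified procedure):

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d=0.95):
    """Seed-Idriss simplified earthquake-induced cyclic stress ratio (CSR)."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def crr_from_vs(vs, sigma_v_eff, pa=100.0, vs1_star=215.0):
    """Andrus-Stokoe-type cyclic resistance ratio (CRR) from the
    overburden-corrected shear-wave velocity Vs1 (clean-sand coefficients)."""
    vs1 = vs * (pa / sigma_v_eff) ** 0.25
    if vs1 >= vs1_star:
        return float("inf")  # stiffer than the limiting velocity: no liquefaction
    return 0.022 * (vs1 / 100.0) ** 2 + 2.8 * (1.0 / (vs1_star - vs1) - 1.0 / vs1_star)

def safety_factor(vs, sigma_v, sigma_v_eff, a_max_g):
    """FS = CRR / CSR; FS < 1 flags a liquefiable layer."""
    return crr_from_vs(vs, sigma_v_eff) / cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff)
```

A loose layer (low Vs) under strong shaking gives FS below one, while a stiff layer falls above the limiting velocity of the resistance curve.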
Detection of Biomarkers of Pathogenic Naegleria fowleri Through Mass Spectrometry and Proteomics
Moura, Hercules; Izquierdo, Fernando; Woolfitt, Adrian R.; Wagner, Glauber; Pinto, Tatiana; del Aguila, Carmen; Barr, John R.
2017-01-01
Emerging methods based on mass spectrometry (MS) can be used in the rapid identification of microorganisms. Thus far, these practical and rapidly evolving methods have mainly been applied to characterize prokaryotes. We applied matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) in the analysis of whole cells of 18 N. fowleri isolates belonging to three genotypes. Fourteen originated from the cerebrospinal fluid or brain tissue of primary amoebic meningoencephalitis patients and four originated from water samples of hot springs, rivers, lakes or municipal water supplies. Whole Naegleria trophozoites grown in axenic cultures were washed and mixed with MALDI matrix. Mass spectra were acquired with a 4700 TOF-TOF instrument. MALDI-TOF MS yielded consistent patterns for all isolates examined. Using a combination of novel data processing methods for visual peak comparison, statistical analysis and proteomics database searching, we were able to detect several biomarkers that differentiate all species and isolates studied, along with common biomarkers for all N. fowleri isolates. Naegleria fowleri could be easily separated from other species within the genus Naegleria. A number of the peaks detected were tentatively identified. MALDI-TOF MS fingerprinting is a rapid, reproducible, high-throughput alternative method for identifying Naegleria isolates. This method has potential for studying eukaryotic agents. PMID:25231600
NASA Astrophysics Data System (ADS)
Şahan, Mehmet Fatih
2017-11-01
In this paper, the viscoelastic damped response of cross-ply laminated shallow spherical shells is investigated numerically in a transformed Laplace space. In the proposed approach, the governing differential equations of cross-ply laminated shallow spherical shells are derived using the dynamic version of the principle of virtual displacements. Following this, the Laplace transform is employed in the transient analysis of the viscoelastic laminated shell problem. Damping can also be incorporated with ease in the transformed domain. The transformed time-independent equations in the spatial coordinate are solved numerically by Gauss elimination. Numerical inverse transformation of the results into the real domain is performed by the modified Durbin transform method. Verification of the presented method is carried out by comparing the results with those obtained by the Newmark method and the ANSYS finite element software. Furthermore, the developed solution approach is applied to problems with several impulsive loads. The novelty of the present study lies in the fact that a combination of the Navier method and the Laplace transform is employed in the analysis of cross-ply laminated shallow spherical viscoelastic shells. The numerical results show that the presented method constitutes a highly accurate and efficient solution, which can be easily applied to laminated viscoelastic shell problems.
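The numerical inversion step can be sketched with the unmodified Durbin trigonometric series (the paper uses a modified variant; the parameter choices below are illustrative, with the usual guidance a*T around 5-10):

```python
import math

def durbin_inverse_laplace(F, t, T=10.0, a=0.7, n_terms=4000):
    """Invert a Laplace transform F(s) numerically with Durbin's
    trigonometric series; valid for 0 < t < 2T."""
    acc = 0.5 * F(complex(a, 0.0)).real
    for k in range(1, n_terms + 1):
        Fk = F(complex(a, k * math.pi / T))
        theta = k * math.pi * t / T
        acc += Fk.real * math.cos(theta) - Fk.imag * math.sin(theta)
    return (math.exp(a * t) / T) * acc
```

For example, F(s) = 1/(s + 1) is recovered as exp(-t) to a few decimal places with these settings; modified Durbin schemes add weighting factors to damp the truncation oscillations.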
Graph theory applied to the analysis of motor activity in patients with schizophrenia and depression
Fasmer, Erlend Eindride; Berle, Jan Øystein; Oedegaard, Ketil J.; Hauge, Erik R.
2018-01-01
Depression and schizophrenia are defined only by their clinical features, and diagnostic separation between them can be difficult. Disturbances in motor activity pattern are central features of both types of disorders. We introduce a new method to analyze time series, called the similarity graph algorithm. Time series of motor activity, obtained from actigraph registrations over 12 days in depressed and schizophrenic patients, were mapped into a graph, and we then applied techniques from graph theory to characterize these time series, primarily looking for changes in complexity. The most marked finding was that depressed patients were significantly different from both controls and schizophrenic patients, with evidence of less regularity of the time series, when analyzing the recordings at one-hour intervals. These findings support the contention that there are important differences in the control systems regulating motor behavior in patients with depression and schizophrenia. The similarity graph algorithm we have described can easily be applied to the study of other types of time series. PMID:29668743
Fasmer, Erlend Eindride; Fasmer, Ole Bernt; Berle, Jan Øystein; Oedegaard, Ketil J; Hauge, Erik R
2018-01-01
Depression and schizophrenia are defined only by their clinical features, and diagnostic separation between them can be difficult. Disturbances in motor activity pattern are central features of both types of disorders. We introduce a new method to analyze time series, called the similarity graph algorithm. Time series of motor activity, obtained from actigraph registrations over 12 days in depressed and schizophrenic patients, were mapped into a graph, and we then applied techniques from graph theory to characterize these time series, primarily looking for changes in complexity. The most marked finding was that depressed patients were significantly different from both controls and schizophrenic patients, with evidence of less regularity of the time series, when analyzing the recordings at one-hour intervals. These findings support the contention that there are important differences in the control systems regulating motor behavior in patients with depression and schizophrenia. The similarity graph algorithm we have described can easily be applied to the study of other types of time series.
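The abstract does not spell out the similarity graph construction itself; as an illustration of the general idea of mapping a time series into a graph and then reading off graph-theoretic measures, here is the well-known natural visibility graph (a related but distinct mapping, shown only as an analogy):

```python
def natural_visibility_graph(series):
    """Map a time series to a graph: sample i 'sees' sample j (i < j) if the
    straight line between them passes above every intermediate sample."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            ):
                edges.add((i, j))
    return edges

def degrees(edges, n):
    """Node degrees, a simple complexity measure of the mapped series."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg
```

Irregular series produce long-range visibility links and broader degree distributions, which is the kind of complexity signal the authors extract from actigraphy.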
Mapping the subcellular distribution of biomolecules at the ultrastructural level by ion microscopy.
Galle, P; Escaig, F; Dantin, F; Zhang, L
1996-05-01
Analytical ion microscopy, a method proposed and developed in 1960 by Castaing and Slodzian at Orsay University (France), makes it possible to easily and rapidly obtain analytical images representing the distribution in a tissue section of elements or isotopes (from the three isotopes of hydrogen to the transuranic elements), even when these elements or isotopes are at a trace concentration of 1 ppm or less. This method has been applied to study the subcellular distribution of different varieties of biomolecules. The subcellular location of these molecules can be easily determined when the molecules contain in their structure a specific atom such as fluorine, iodine, bromine or platinum, as is the case for many pharmaceutical drugs. In this situation, the distribution of these specific atoms can be considered representative of the distribution of the corresponding molecule. In other cases, the molecules must be labelled with an isotope which may be either radioactive or stable. Recent developments in ion microscopy allow chemical images to be obtained at the ultrastructural level. In this paper we present the results obtained with the prototype of a new Scanning Ion Microscope used for the study of the intracellular distribution of different varieties of molecules: glucocorticoids, estrogens, pharmaceutical drugs and pyrimidine analogues.
Acquiring Knowledge and Using It.
ERIC Educational Resources Information Center
Smilkstein, Rita
1993-01-01
Understanding why students are not naturally and easily able to generalize or apply what they have learned in other situations involves understanding what teachers want their students to learn; what learning is; what teaching is; and what is involved in generalizing or applying what has been learned. Research in educational psychology identifies…
Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method of dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, taken from 20 clinical recordings of awake and sleeping adults exhibiting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals, even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filtering. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and the generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its efficiency and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
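The generalized p-shrinkage mapping mentioned above has a simple closed form (following Chartrand's formulation; shown here as an illustrative sketch, with the exact reduction to soft thresholding at p = 1):

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - lam^(2-p) |x|^(p-1), 0).
    At p = 1 this is classical soft thresholding; for p < 1 large entries
    are shrunk less, which promotes sparsity more aggressively."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x)
    thresh = lam ** (2.0 - p) * np.where(mag > 0, mag, 1.0) ** (p - 1.0)
    return np.sign(x) * np.maximum(mag - thresh, 0.0)
```

Inside a splitting scheme, this operator is applied elementwise to the (transform-domain) variables of each decoupled subproblem.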
Ultrafast, 2 min synthesis of monolayer-protected gold nanoclusters (d < 2 nm)
NASA Astrophysics Data System (ADS)
Martin, Matthew N.; Li, Dawei; Dass, Amala; Eah, Sang-Kee
2012-06-01
An ultrafast synthesis method is presented for hexanethiolate-coated gold nanoclusters (d < 2 nm, <250 atoms per nanocluster), which takes only 2 min and can be easily reproduced. With two immiscible solvents, gold nanoclusters are separated from the reaction byproducts quickly and easily without any need for post-synthesis cleaning. Electronic supplementary information (ESI) available: Experimental details of gold nanocluster synthesis and mass-spectrometry. See DOI: 10.1039/c2nr30890h
NASA Astrophysics Data System (ADS)
Sonoda, Jun; Yamaki, Kota
We developed an automatic Live Linux rebuilding system for science and engineering education, such as information processing education and numerical analysis. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing class at our college. According to a questionnaire survey of the 43 students who used the Live Linux CD, about 80 percent of the students found our Live Linux useful. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.
Zhang, Guangbin; Tang, Yuhai; Sun, Yang; Yu, Hua; Du, Wei; Fu, Qiang
2016-02-01
A water-soluble sulphonato-(salen)manganese(III) complex with excellent catalytic properties was synthesized and demonstrated to greatly enhance the chemiluminescence signal of the hydrogen peroxide-luminol reaction. Coupled with the flow-injection technique, a simple and sensitive chemiluminescence method was developed for the first time to detect hydroquinone based on the hydrogen peroxide-luminol-sulphonato-(salen)manganese(III) complex chemiluminescence system. Under optimal conditions, the assay exhibited a wide linear range from 0.1 to 10 ng mL(-1) with a detection limit of 0.05 ng mL(-1) for hydroquinone. The method was applied successfully to detect hydroquinone in tap water and mineral water, with a sampling frequency of 120 times per hour. The relative standard deviation for determination of hydroquinone was less than 5.6%, and the recoveries ranged from 96.8 to 103.0%. The ultraviolet spectra, chemiluminescence spectra, and reaction kinetics of the hydrogen peroxide-luminol-sulphonato-(salen)manganese(III) complex system were used to study the possible chemiluminescence mechanism. The proposed chemiluminescence analysis technique is rapid and sensitive, with low cost, and could be easily extended and applied to other compounds. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.
2016-12-01
Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
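The two simplest sampling strategies named above can be sketched in a few lines of Python (the parameter names and bounds below are hypothetical examples, not taken from the authors' framework):

```python
import random

def monte_carlo_sample(bounds, n_samples, seed=0):
    """Draw parameter sets uniformly at random within [low, high] bounds.
    bounds: dict mapping parameter name -> (low, high)."""
    rng = random.Random(seed)
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        for _ in range(n_samples)
    ]

def uniform_sample(bounds, n_points):
    """Evenly spaced sweep over each parameter's range."""
    return {
        name: [lo + i * (hi - lo) / (n_points - 1) for i in range(n_points)]
        for name, (lo, hi) in bounds.items()
    }
```

Each sampled parameter set is then written into a PRMS input file, the model is run, and the outputs are compared against observations to build the uncertainty picture.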
Implicit LES using adaptive filtering
NASA Astrophysics Data System (ADS)
Sun, Guangrui; Domaradzki, Julian A.
2018-04-01
In implicit large eddy simulations (ILES) numerical dissipation prevents buildup of small scale energy in a manner similar to the explicit subgrid scale (SGS) models. If spectral methods are used the numerical dissipation is negligible but it can be introduced by applying a low-pass filter in the physical space, resulting in an effective ILES. In the present work we provide a comprehensive analysis of the numerical dissipation produced by different filtering operations in a turbulent channel flow simulated using a non-dissipative, pseudo-spectral Navier-Stokes solver. The amount of numerical dissipation imparted by filtering can be easily adjusted by changing how often a filter is applied. We show that when the additional numerical dissipation is close to the subgrid-scale (SGS) dissipation of an explicit LES the overall accuracy of ILES is also comparable, indicating that periodic filtering can replace explicit SGS models. A new method is proposed, which does not require any prior knowledge of a flow, to determine the filtering period adaptively. Once an optimal filtering period is found, the accuracy of ILES is significantly improved at low implementation complexity and computational cost. The method is general, performing well for different Reynolds numbers, grid resolutions, and filter shapes.
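The filtering operation for a periodic 1-D field can be sketched as below (the sharp spectral cutoff and the keep-fraction are illustrative choices; the paper considers smoother filter shapes and determines the filtering period adaptively):

```python
import numpy as np

def low_pass_filter(u, keep_fraction=0.8):
    """Zero the highest-wavenumber Fourier modes of a periodic 1-D field."""
    u_hat = np.fft.rfft(u)
    cutoff = int(keep_fraction * len(u_hat))
    u_hat[cutoff:] = 0.0
    return np.fft.irfft(u_hat, n=len(u))

def step_with_periodic_filtering(u, advance, n_steps, filter_period):
    """Advance the solution n_steps times, applying the filter every
    filter_period steps; the period controls the implicit dissipation."""
    for step in range(1, n_steps + 1):
        u = advance(u)
        if step % filter_period == 0:
            u = low_pass_filter(u)
    return u
```

Filtering every step maximizes the numerical dissipation; filtering rarely leaves small-scale energy to accumulate, which is the trade-off the adaptive period is meant to balance.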
Cheng, Yezeng; Larin, Kirill V
2006-12-20
Fingerprint recognition is one of the most widely used methods of biometrics. This method relies on the surface topography of a finger and, thus, is potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.
NASA Astrophysics Data System (ADS)
Cheng, Yezeng; Larin, Kirill V.
2006-12-01
Fingerprint recognition is one of the most widely used methods of biometrics. This method relies on the surface topography of a finger and, thus, is potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.
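The autocorrelation idea can be sketched as follows: a layered medium such as skin yields a depth profile with repeating structure, which shows up as secondary peaks in the normalized autocorrelation, whereas a homogeneous dummy material does not (the threshold-based criterion below is an illustrative assumption, not the authors' classifier):

```python
import numpy as np

def normalized_autocorrelation(profile):
    """Normalized autocorrelation of a 1-D A-scan depth profile;
    the lag-0 value is 1 and |r[k]| <= 1 for all lags."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def has_layering(profile, lag, threshold=0.5):
    """Flag a strong secondary correlation peak at the expected layer spacing."""
    return normalized_autocorrelation(profile)[lag] > threshold
```

In an automatic system, the lag would correspond to the expected optical spacing of skin layers in the OCT A-scan.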
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-01-01
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments, many problems remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important unsolved problems is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868
Application of parametric equations of motion to study the resonance coalescence in H2(+).
Kalita, Dhruba J; Gupta, Ashish K
2012-12-07
Recently, the occurrence of a coalescence point was reported in H(2)(+) undergoing multiphoton dissociation in a strong laser field. We have applied the parametric equations of motion and the smooth exterior scaling method to study the coalescence phenomenon in H(2)(+). The advantage of this method is that one can easily trace how the different states change as the field parameters change. It was reported earlier that, in the parameter space, only two bound states coalesce [R. Lefebvre, O. Atabek, M. Sindelka, and N. Moiseyev, Phys. Rev. Lett. 103, 123003 (2009)]. However, we find that increasing the accuracy of the calculation leads to coalescence between resonance states originating from the bound and the continuum states. We also report many other coalescence points.
Self-interaction correction in multiple scattering theory: application to transition metal oxides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daene, Markus W; Lueders, Martin; Ernst, Arthur
2009-01-01
We apply to transition metal monoxides the self-interaction corrected (SIC) local spin density (LSD) approximation, implemented locally in the multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure and in particular magnetic moments and energy gaps are discussed in reference to the earlier SIC results obtained within the LMTO-ASA band structure method, involving transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR can be easily extended to treat disordered alloys, by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
Optical coherence tomography used for internal biometrics
NASA Astrophysics Data System (ADS)
Chang, Shoude; Sherif, Sherif; Mao, Youxin; Flueraru, Costel
2007-06-01
Traditional biometric technologies used for security and person identification essentially deal with fingerprints, hand geometry and face images. However, because all these technologies use external features of the human body, they can be easily fooled and tampered with by distorting, modifying or counterfeiting these features. Nowadays, internal biometrics, which detects the internal ID features of an object, is becoming increasingly important. Being capable of exploring under-skin structure, an optical coherence tomography (OCT) system can be used as a powerful tool for internal biometrics. We have applied fiber-optic and full-field OCT systems to detect multiple-layer 2D images and 3D profiles of fingerprints, which results in higher discrimination than traditional 2D recognition methods. More importantly, OCT-based fingerprint recognition can easily distinguish artificial fingerprint dummies by analyzing the extracted layered surfaces. Experiments show that our OCT systems successfully detected a dummy made of plasticine that was used to bypass a commercially available fingerprint scanning system with a false accept rate (FAR) of 100%.
Transfrontal orbitotomy in the dog: an adaptable three-step approach to the orbit.
Håkansson, Nils Wallin; Håkansson, Berit Wallin
2010-11-01
To describe an adaptable and extensive method for orbitotomy in the dog. An adaptable three-step technique for orbitotomy was developed and applied in nine consecutive cases. The steps are zygomatic arch resection laterally, temporalis muscle elevation medially and zygomatic process osteotomy anteriorly-dorsally. The entire orbit is accessed with excellent exposure and room for surgical manipulation. Facial nerve, lacrimal nerve and lacrimal gland function are preserved. The procedure can easily be converted into an orbital exenteration. Exposure of the orbit was excellent in all cases and anatomically correct closure was achieved. Signs of postoperative discomfort were limited, with moderate, reversible swelling in two cases and mild in seven. Wound infection or emphysema did not occur, nor did any other complication attributable to the operative procedure. Blinking ability and lacrimal function were preserved over follow-up times ranging from 1 to 4 years. Transfrontal orbitotomy in the dog offers excellent exposure and room for manipulation. Anatomically correct closure is easily accomplished, postoperative discomfort is limited and complications are mild and temporary. © 2010 American College of Veterinary Ophthalmologists.
Evaluation of mesoporous silicon thermal conductivity by electrothermal finite element simulation
2012-01-01
The aim of this work is to determine the thermal conductivity of mesoporous silicon (PoSi) by fitting the experimental results with simulated ones. The electrothermal response (resistance versus applied current) of differently designed test lines integrated onto PoSi/silicon substrates and the bulk were compared to the simulations. The PoSi thermal conductivity was the single parameter used to fit the experimental results. The obtained thermal conductivity values were compared with those determined from Raman scattering measurements, and a good agreement between both methods was found. This methodology can be used to easily determine the thermal conductivity value for various porous silicon morphologies. PMID:22849851
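The single-parameter fitting loop described (adjusting the PoSi thermal conductivity until the simulated electrothermal response matches the measurement) can be sketched generically; the toy forward model in the test stands in for the finite element simulation and is purely illustrative:

```python
def fit_conductivity(measured, model, k_lo, k_hi, tol=1e-6):
    """Bisection fit of a single parameter k so that model(k) matches the
    measured response, assuming model(k) decreases monotonically with k
    (more conduction means less self-heating, hence a lower resistance)."""
    lo, hi = k_lo, k_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model(mid) - measured > 0:   # simulated line too hot: k too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Each evaluation of `model` would, in practice, be one electrothermal finite element run at the trial conductivity.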
NASA Astrophysics Data System (ADS)
Wyrick, Jonathan; Einstein, T. L.; Bartels, Ludwig
2015-03-01
We present a method of analyzing the results of density functional modeling of molecular adsorption in terms of an analogue of molecular orbitals. This approach permits intuitive chemical insight into the adsorption process. Applied to a set of anthracene derivatives (anthracene, 9,10-anthraquinone, 9,10-dithioanthracene, and 9,10-diselenoanthracene), we follow the electronic states of the molecules that are involved in the bonding process and correlate them to both the molecular adsorption geometry and the species' diffusive behavior. We additionally provide computational code to easily repeat this analysis on any system.
A Graphical User-Interface for Propulsion System Analysis
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Ryall, Kathleen
1992-01-01
NASA LeRC uses a series of computer codes to calculate installed propulsion system performance and weight. The need to evaluate more advanced engine concepts with a greater degree of accuracy has resulted in an increase in complexity of this analysis system. Therefore, a graphical user interface was developed to allow the analyst to more quickly and easily apply these codes. The development of this interface and the rationale for the approach taken are described. The interface consists of a method of pictorially representing and editing the propulsion system configuration, forms for entering numerical data, on-line help and documentation, post processing of data, and a menu system to control execution.
A graphical user-interface for propulsion system analysis
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Ryall, Kathleen
1993-01-01
NASA LeRC uses a series of computer codes to calculate installed propulsion system performance and weight. The need to evaluate more advanced engine concepts with a greater degree of accuracy has resulted in an increase in complexity of this analysis system. Therefore, a graphical user interface was developed to allow the analyst to more quickly and easily apply these codes. The development of this interface and the rationale for the approach taken are described. The interface consists of a method of pictorially representing and editing the propulsion system configuration, forms for entering numerical data, on-line help and documentation, post processing of data, and a menu system to control execution.
Oligonucleotide (GTG)5 as an epidemiological tool in the study of nontuberculous mycobacteria.
Cilliers, F J; Warren, R M; Hauman, J H; Wiid, I J; van Helden, P D
1997-01-01
Analysis of restriction fragment length polymorphisms in the genome of Mycobacterium tuberculosis (DNA fingerprinting) has proved to be a useful epidemiological tool in the study of tuberculosis within populations or communities. However, to date, no similar method has been developed to study the epidemiology of nontuberculous mycobacteria (NTM). In this communication, we report that a simple oligonucleotide repeat, (GTG)5, can be used to accurately genotype all species and strains of NTM tested. We suggest that this technology is an easily applied and accurate tool which can be used for the study of the epidemiology of NTM. PMID:9163479
Watanabe, Eiki; Miyake, Shiro
2018-06-05
Easy-to-use commercial kit-based enzyme-linked immunosorbent assays (ELISAs) have been used to detect the neonicotinoids dinotefuran, clothianidin and imidacloprid in Chinese chives, which are considered a troublesome matrix for chromatographic techniques. Based on their high water solubility, water was used as the extractant. Matrix interference could be substantially avoided simply by diluting the sample extracts. Average recoveries of the insecticides from spiked samples were 85-113%, with relative standard deviations of <15%. The concentrations of insecticides detected in the spiked samples with the proposed ELISA methods correlated well with those obtained by the reference high-performance liquid chromatography (HPLC) method. The residues analyzed by the ELISA methods were consistently 1.24 times those found by the HPLC method, attributable to loss of analyte during sample clean-up for the HPLC analyses. These results show that the ELISA methods can be applied easily to pesticide residue analysis in troublesome matrices such as Chinese chives.
Shera, Christopher A.
2014-01-01
Parent and Allen [(2007). J. Acoust. Soc. Am. 122, 918–931] introduced the “method of lumens” to compute the plane-wave reflectance in a duct terminated with a nonuniform impedance. The method involves splitting the duct into multiple, fictitious subducts (lumens), solving for the reflectance in each subduct, and then combining the results. The method of lumens has considerable intuitive appeal and is easily implemented in the time domain. Previously applied only in a complex acoustical setting where proper evaluation is difficult (i.e., in a model of the ear canal and tympanic membrane), the method is tested here by using it to compute the reflectance from an area constriction in an infinite lossless duct considered in the long-wavelength limit. Neither the original formulation of the method—shown here to violate energy conservation except when the termination impedance is uniform—nor a reformulation consistent with basic physical constraints yields the correct solution to this textbook problem in acoustics. The results are generalized and the nature of the errors illuminated. PMID:25480060
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffusive nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
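The paper's fMLEM adds an SP_N forward model and a filter function; the statistical core it builds on is the classical MLEM multiplicative update for a nonnegative linear model. A sketch of that plain update (the toy system and names are mine, not from the paper):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM iteration for y ~ Poisson(A x), x >= 0:
    x <- x * A^T(y / Ax) / (A^T 1).  The fixed point satisfies
    A x = y for consistent, noiseless data."""
    m, n = A.shape
    x = np.ones(n)                       # strictly positive start
    sensitivity = A.sum(axis=0)          # A^T 1, the normalizer
    for _ in range(n_iter):
        forward = A @ x                  # forward projection
        ratio = np.divide(y, forward, out=np.zeros_like(y),
                          where=forward > 0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-15)
    return x
```

The update preserves nonnegativity automatically, which is why MLEM suits source-intensity reconstruction without explicit positivity constraints.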
Analysis and compensation of synchronous measurement error for multi-channel laser interferometer
NASA Astrophysics Data System (ADS)
Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong
2017-05-01
Dual-frequency laser interferometers have been widely used as displacement sensors in precision motion systems to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, causing asynchronous measurement and hence a measurement error termed the synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model relating SME to motion velocity. Further, a real-time compensation method for SME is proposed. This method has been verified in a self-developed laser interferometer signal processing board (SPB). The experimental results showed that, using this compensation method at a motion velocity of 0.89 m s-1, the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is more easily implemented and applied in engineering than methods that directly test for smaller signal delays.
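To first order, the velocity-dependent SME described above is the product of the motion velocity and the differential channel delay, and compensation subtracts that modeled error. A sketch (the delay value in the usage note is illustrative, chosen only because ~1.24 ns of mismatch at 0.89 m/s gives an error of the ~1.1 nm order reported):

```python
def synchronous_measurement_error(velocity, delay_a, delay_b):
    """First-order SME between two channels: the target moves
    velocity * (delay_a - delay_b) during the differential delay."""
    return velocity * (delay_a - delay_b)

def compensate(raw_position, velocity, delay_a, delay_b):
    """Real-time compensation: subtract the modeled SME from the
    raw synchronous position reading."""
    return raw_position - synchronous_measurement_error(
        velocity, delay_a, delay_b)
```

For example, a hypothetical 1.24 ns delay mismatch at 0.89 m/s corresponds to about 1.1 nm of position error between channels.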
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, before the weight maps at each scale are obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
NASA Astrophysics Data System (ADS)
Huang, Y.; Shi, W.; Zhang, C.; Wen, H.
2017-09-01
For the determination of nitrogen oxides in air, the structures of diazo and coupling compounds were studied and tested experimentally. The conditions and methods of the diazotization and coupling reactions were investigated. Furthermore, a spectrophotometric method using sulfanilamide as the diazo compound and 2-N-ethyl-5-naphthol-7-sulfonic acid (N-ethyl J acid) as the coupling compound was proposed. The maximum absorption wavelength of the sulfanilamide-N-ethyl J acid azo compound was 478 nm. The molar absorptivity was 4.31 × 10^4 L/(mol·cm), with a recovery of 98.7-100.9% and RSD of 1.85%. For nitrogen oxides, the detection limit of this measurement was 0.015 mg/m3 and the determination range 0.024-2.0 mg/m3. Moreover, a high degree of correlation was observed between the results obtained by the proposed method and the standard methods. The proposed method can easily be applied to determine nitrogen oxides in air.
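The concentration recovery implied by the reported molar absorptivity follows directly from the Beer-Lambert law, A = ε·c·l. A minimal sketch (the 1 cm path length is my assumption; the function name is mine):

```python
def azo_compound_concentration(absorbance,
                               molar_absorptivity=4.31e4,  # L/(mol*cm)
                               path_cm=1.0):
    """Beer-Lambert law A = eps * c * l  =>  c = A / (eps * l).
    Returns concentration in mol/L, using the molar absorptivity
    reported for the sulfanilamide-N-ethyl J acid azo compound."""
    return absorbance / (molar_absorptivity * path_cm)
```

An absorbance of 0.431 at 478 nm would thus correspond to a 1.0 × 10^-5 mol/L azo-compound concentration under these assumptions.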
Dye Degradation by Fungi: An Exercise in Applied Science for Biology Students
ERIC Educational Resources Information Center
Lefebvre, Daniel D.; Chenaux, Peter; Edwards, Maureen
2005-01-01
An easily implemented practical exercise in applied science for biology students is presented that uses fungi to degrade an azo-dye. This is an example of bioremediation, the employment of living organisms to detoxify or contain pollutants. Its interdisciplinary nature widens students' perspectives of biology by exposing them to a chemical…
Radar attenuation tomography using the centroid frequency downshift method
Liu, L.; Lane, J.W.; Quan, Y.
1998-01-01
A method for tomographically estimating electromagnetic (EM) wave attenuation based on analysis of centroid frequency downshift (CFDS) of impulse radar signals is described and applied to cross-hole radar data. The method is based on a constant-Q model, which assumes a linear frequency dependence of attenuation for EM wave propagation above the transition frequency. The method uses the CFDS to construct the projection function. In comparison with other methods for estimating attenuation, the CFDS method is relatively insensitive to the effects of geometric spreading, instrument response, and antenna coupling and radiation pattern, but requires the data to be broadband so that the frequency shift and variance can be easily measured. The method is well-suited for difference tomography experiments using electrically conductive tracers. The CFDS method was tested using cross-hole radar data collected at the U.S. Geological Survey Fractured Rock Research Site at Mirror Lake, New Hampshire (NH) during a saline-tracer injection experiment. The attenuation-difference tomogram created with the CFDS method outlines the spatial distribution of saline tracer within the tomography plane. © 1998 Elsevier Science B.V. All rights reserved.
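The projection function is built from centroid frequencies of the source and received spectra. A sketch of how a centroid frequency and its downshift can be computed from sampled pulses (the discretization choices are mine, not the paper's):

```python
import numpy as np

def centroid_frequency(signal, dt):
    """Power-weighted mean frequency of a real sampled signal."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), dt)
    return float((freqs * power).sum() / power.sum())

def cfds(source_pulse, received_pulse, dt):
    """Centroid frequency downshift: under the constant-Q model this
    downshift grows with the attenuation accumulated along the ray."""
    return (centroid_frequency(source_pulse, dt)
            - centroid_frequency(received_pulse, dt))
```

Because the centroid is a ratio of spectral moments, overall amplitude scaling (geometric spreading, coupling) cancels, which is the insensitivity the abstract highlights.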
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birmingham, D.; Kantowski, R.; Milton, K.A.
We use two methods of computing the unique logarithmically divergent part of the Casimir energy for massive scalar and spinor fields defined on even-dimensional Kaluza-Klein spaces of the form M^4 × S^{N_1} × S^{N_2} × ⋯. Both methods (heat kernel and direct) give identical results. The first evaluates the required internal zeta function by identifying it in the asymptotic expansion of the trace of the heat kernel, and the second evaluates the zeta function directly using the Euler-Maclaurin sum formula. In Appendix C we tabulate these energies for all spaces of total internal dimension less than or equal to 6. These methods are easily applied to vector and tensor fields needed in computing one-loop vacuum gravitational energies on these spaces. Stable solutions are given for internal structure S^2 × S^2.
Adachi, Yoko; Sumikuma, Toshiya; Kagami, Ryogo; Nishio, Akira; Akasaka, Koji; Tsunemine, Hiroko; Kodaka, Taiichi; Hiramatsu, Yasushi; Tada, Hiroshi
2010-05-01
There have been some reports on the efficacy and tolerability of oral itraconazole (ITCZ) solution as prophylaxis for fungal infection in patients with hematological malignancies. However, in some cases the bitter taste of the oral ITCZ solution leads to an interruption of administration because the patient refuses to take the medicine. We therefore prospectively investigated the pharmacokinetics and the promotion of treatment adherence in patients taking oral ITCZ solution mixed with a beverage. Compared with the responses of patients taking oral ITCZ solution with water, the taste of the agent was improved significantly when mixed with orange juice, although the plasma concentration of the agent did not differ between the two groups. With this approach we can expect an improvement in treatment adherence, and the method can easily be applied in clinical practice; it is highly useful and should become common knowledge.
Boiling point measurement of a small amount of brake fluid by thermocouple and its application.
Mogami, Kazunari
2002-09-01
This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles had dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.
NASA Astrophysics Data System (ADS)
Post, Alexander; Beath, Andrew; Sauret, Emilie; Persky, Rodney
2017-06-01
Concentrated solar thermal power generation poses a unique situation for power block selection, in which a capital intensive heat source is subject to daily and seasonal fluctuations in intensity. In this study, a method is developed to easily evaluate the favourability of different power blocks for converting the heat supplied by a concentrated solar thermal plant into power at the 100MWe scale based on several key parameters. The method is then applied to a range of commercially available power cycles that operate over different temperatures and efficiencies, and with differing capital costs, each with performance and economic parameters selected to be typical of their technology type, as reported in literature. Using this method, the power cycle is identified among those examined that is most likely to result in a minimum levelised cost of energy of a solar thermal plant.
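The abstract does not give its evaluation formula; a standard figure of merit for ranking power blocks on cost is the levelised cost of energy (LCOE), sketched here under simplifying assumptions of my own (single up-front capital cost, constant annual O&M and generation):

```python
def lcoe(capex, annual_opex, annual_gen_mwh, discount_rate,
         lifetime_years):
    """Levelised cost of energy: discounted lifetime cost divided by
    discounted lifetime generation (units of $/MWh if inputs are $
    and MWh).  A simplified textbook form, not the paper's method."""
    # present-value factor of a constant annual quantity
    pv = sum((1.0 + discount_rate) ** -t
             for t in range(1, lifetime_years + 1))
    return (capex + annual_opex * pv) / (annual_gen_mwh * pv)
```

Cycle efficiency enters through `annual_gen_mwh` for a fixed solar-field heat supply, which is how a cheaper but less efficient power block can still lose on LCOE.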
Uchimura, Hiromasa; Kim, Yusam; Mizuguchi, Takaaki; Kiso, Yoshiaki; Saito, Kazuki
2011-01-01
A concise method was developed for quantifying native disulfide-bond formation in proteins using isotopically labeled internal standards, which were easily prepared with proteolytic 18O-labeling. As the method has much higher throughput to estimate the amounts of fragments possessing native disulfide arrangements by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) than the conventional high performance liquid chromatography (HPLC) analyses, it allows many different experimental conditions to be assessed in a short time. The method was applied to refolding experiments of a recombinant neuregulin 1-β1 EGF-like motif (NRG1-β1), and the optimum conditions for preparing native NRG1-β1 were obtained by quantitative comparisons. Protein disulfide isomerase (PDI) was most effective at the reduced/oxidized glutathione ratio of 2:1 for refolding the denatured sample NRG1-β1 with the native disulfide bonds. PMID:21500299
Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains
NASA Astrophysics Data System (ADS)
Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao
2017-11-01
We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are thus integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The approach of applying QR codes and fingerprints in GT domains holds much potential for future information security applications.
Seki, Masaaki; Sato, Akimasa; Honda, Ikuro; Yamazaki, Toshio; Yano, Ikuya; Koyama, Akira; Toida, Ichiro
2005-05-02
When an adverse reaction occurs and a mycobacterial species is isolated from a person vaccinated with Bacillus Calmette-Guérin (BCG) or a patient receiving BCG immunotherapy, it is essential to identify whether the isolate is BCG or another mycobacterial species. However, differentiation of BCG from other members of Mycobacterium tuberculosis complex has been very difficult. Using several specific primer-pairs, Bedwell et al. [Bedwell J, Kairo SK, Behr MA, Bygraves JA. Identification of substrains of BCG vaccine using multiplex PCR. Vaccine 2001; 19: 2146-51] recently reported that they could distinguish BCG substrains. We modified their method to improve differentiation of Tokyo 172 from other members of the M. tuberculosis complex, and examined whether this modified method could be applied to clinical isolates. Our method clearly identified BCG substrain (BCG Tokyo 172) among clinical isolates and easily distinguished between M. tuberculosis and wild-type Mycobacterium bovis.
Fernandes, Henrique; Zhang, Hai; Figueiredo, Alisson; Malheiros, Fernando; Ignacio, Luis Henrique; Sfarra, Stefano; Ibarra-Castanedo, Clemente; Guimaraes, Gilmar; Maldague, Xavier
2018-01-19
The use of fiber reinforced materials such as randomly-oriented strands has grown in recent years, especially for manufacturing of aerospace composite structures. This growth is mainly due to their advantageous properties: they are lighter and more resistant to corrosion when compared to metals and are more easily shaped than continuous fiber composites. The resistance and stiffness of these materials are directly related to their fiber orientation. Thus, efficient approaches to assess their fiber orientation are in demand. In this paper, a non-destructive evaluation method is applied to assess the fiber orientation on laminates reinforced with randomly-oriented strands. More specifically, a method called pulsed thermal ellipsometry combined with an artificial neural network, a machine learning technique, is used in order to estimate the fiber orientation on the surface of inspected parts. Results showed that the method can be potentially used to inspect large areas with good accuracy and speed.
[Sequential preparation of microvillous and basal membranes from human placenta].
Long, Ning; Xing, Ai-yun; Yang, Xiao-hua; Zhang, Rong; Wu, Lin
2010-03-01
To improve the technology for isolating paired fractions of the maternal-facing microvillous membranes (MVM) and fetal-facing basal plasma membranes (BM) from term placenta, the buffer composition was improved based on the Illsley method and the Mg2+ aggregation time for the basal membranes was extended. MVM were obtained from the supernatant of low-speed centrifugation, while BM were further purified on a sucrose step gradient. Yields for MVM and BM prepared by this method were (0.55 +/- 10.10) mg/g and (0.54 +/- 0.02) mg/g wet weight of placenta. They were enriched 16.87-fold and 11.19-fold as determined by the membrane marker enzymes alkaline phosphatase (MVM) and adenylate cyclase (BM). The modified Illsley method can easily produce both MVM and BM of satisfactory quantity from human placenta. It could be applied as a cell molecular model of the maternal-fetal exchange interface.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
The code analysis of fringe images plays a vital role in data acquisition for structured light systems, affecting the precision, computational speed and reliability of the measurement process. Exploiting the self-normalizing characteristic, a fringe image processing method based on a structured light series is proposed. In this method, a series of projected patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black stripes can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.
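The "special binary code" is not specified in the abstract; one widely used choice for per-pixel fringe-order detection from a projected pattern series is Gray coding, because adjacent orders differ in only one bit. The decode step is sketched here purely as an illustration of that family of codes:

```python
def gray_to_binary(gray):
    """Decode a Gray-coded fringe order (an int assembled from the
    thresholded pattern series at one pixel) into a plain integer
    fringe order."""
    mask = gray >> 1
    while mask:
        gray ^= mask     # each bit is XORed with all higher bits
        mask >>= 1
    return gray
```

With n projected patterns this distinguishes 2**n fringe orders, and a one-bit thresholding error displaces the decoded order by at most one.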
Xu, Jingyang; Zhang, Ziyuan; Zheng, Xiaochun; Bond, John W
2017-05-01
Visualization of latent fingerprints on metallic surfaces by applying electrostatic charging and adsorption is considered a promising chemical-free method: it is nondestructive and is considered effective in difficult situations such as aged fingerprint deposits or those exposed to environmental extremes. Moreover, a portable electrostatic generator is easily accessible in a local forensic technology laboratory, as it is already widely used in the visualization of footwear impressions. In this study, a modified version of this electrostatic apparatus is proposed for latent fingerprint development and has shown great potential in visualizing fingerprints on metallic surfaces such as cartridge cases. Results indicate that this experimental arrangement can successfully develop aged latent fingerprints on metal surfaces, and we demonstrate its effectiveness compared with existing conventional fingerprint recovery methods. © 2016 American Academy of Forensic Sciences.
Modeling and Simulation of A Microchannel Cooling System for Vitrification of Cells and Tissues.
Wang, Y; Zhou, X M; Jiang, C J; Yu, Y T
The microchannel heat exchange system has several advantages and can be used to enhance heat transfer for vitrification. To evaluate the microchannel cooling method and to analyze the effects of key parameters such as channel structure, flow rate and sample size, a computational fluid dynamics model is applied to study the two-phase flow in microchannels and the related heat transfer process. The fluid-solid coupling problem is solved with a whole-field solution method (i.e., the flow profile in the channels and the temperature distribution in the system are simulated simultaneously). Simulation indicates that a cooling rate >10^4 °C/min is easily achievable using the microchannel method with a high flow rate for a broad range of sample sizes. Channel size and the material used have a significant impact on cooling performance. Computational fluid dynamics is useful for optimizing the design and operation of the microchannel system.
NASA Astrophysics Data System (ADS)
Zhang, Chaosheng
2010-05-01
Outliers in urban soil geochemical databases may imply potentially contaminated land. Different methodologies that can be easily implemented for the identification of global and spatial outliers were applied to Pb concentrations in urban soils of Galway City in Ireland. Due to the strongly skewed distribution of the data, a Box-Cox transformation was performed prior to further analyses. The graphical methods of the histogram and the box-and-whisker plot were effective in identifying global outliers at the original scale of the dataset. Spatial outliers could be identified by a local indicator of spatial association (local Moran's I), cross-validation of kriging, and geographically weighted regression. The spatial locations of outliers were visualised using a geographical information system. The different methods showed generally consistent results, but differences existed. It is suggested that outliers identified by statistical methods should be confirmed and justified using scientific knowledge before they are properly dealt with.
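The box-and-whisker screening for global outliers amounts to Tukey's fences on the (possibly Box-Cox transformed) concentrations. A minimal sketch (the k = 1.5 fence factor is the usual convention, not a value from the paper):

```python
import numpy as np

def global_outliers(values, k=1.5):
    """Tukey box-and-whisker rule: flag points lying more than
    k * IQR beyond the first or third quartile."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]
```

Spatial outliers are a separate notion (values unusual relative to their neighbours, e.g. via local Moran's I), which is why the study applies both kinds of screening.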
Feasibility and Utility of Lexical Analysis for Occupational Health Text.
Harber, Philip; Leroy, Gondy
2017-06-01
We assessed the feasibility and potential utility of natural language processing (NLP) for storing and analyzing occupational health data. Basic NLP lexical analysis methods were applied to 89,000 Mine Safety and Health Administration (MSHA) free-text records. Steps included tokenization, term and co-occurrence counts, term annotation, and identifying exposure-health effect relationships. The presence of terms in the Unified Medical Language System (UMLS) was assessed. The methods efficiently demonstrated common exposures, health effects, and exposure-injury relationships. Many workplace terms are not present in UMLS or map inaccurately. The use of free text rather than narrowly defined numerically coded fields is feasible, flexible, and efficient. It has the potential to encourage workers and clinicians to provide more data and to support automated knowledge creation. The lexical method used is easily generalizable to other areas. The UMLS vocabularies should be enhanced to be relevant to occupational health.
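The tokenization and co-occurrence counting steps can be sketched in a few lines (the tokenizer and the record format below are simplifications of mine, not MSHA's actual schema):

```python
import itertools
import re
from collections import Counter

def lexical_counts(records):
    """Per-corpus term counts and within-record term co-occurrence
    counts over a list of free-text records."""
    term_counts, cooccur = Counter(), Counter()
    for text in records:
        tokens = set(re.findall(r"[a-z']+", text.lower()))
        term_counts.update(tokens)
        # unordered pairs of distinct terms appearing in one record
        cooccur.update(itertools.combinations(sorted(tokens), 2))
    return term_counts, cooccur
```

High-count pairs such as (exposure term, injury term) are then candidates for the exposure-health effect relationships the study extracts.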
Barolin, Gerhard S
2003-01-01
Group therapy and autogenic training in combination show mutual potentiation. Our results have proved this hypothesis to be true, and we have also been able to explain it through an analysis of the neurophysiological and psychological findings concerning both methods. Our "model" has proved to be very economical in time and can be easily applied. It requires a basic psychotherapeutic education but no special additional schooling. It is particularly well suited to rehabilitation patients, elderly patients and geronto-rehabilitation patients. As the numbers of such patients are steadily increasing, it could soon become highly important, and in the technically dominated medicine of today, the particularly communicative component that we postulate in integrated psychotherapy could also grow in importance. By combining the two methods, it is not the method that is at the centre of our endeavours but the patient.
Calibration and Measurement in Turbulence Research by the Hot-Wire Method
NASA Technical Reports Server (NTRS)
Kovasznay, Laszlo
1947-01-01
The problem of turbulence in aerodynamics is at present being attacked both theoretically and experimentally. In view of the fact, however, that purely theoretical considerations have not thus far led to satisfactory results, the experimental treatment of the problem is of great importance. Among the different measuring procedures, the hot-wire methods are so far recognized as the most suitable for investigating turbulence structure. The several disadvantages of these methods, however, in particular those arising from the temperature lag of the wire, can greatly impair the measurements and may easily render questionable the entire value of the experiment. The name turbulence is applied to that flow condition in which, at any point of the stream, the magnitude and direction of the velocity fluctuate arbitrarily about a well-definable mean value. This fluctuation imparts a certain whirling characteristic to the flow.
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on applying different methods of improving heuristics for optimizing the JSSP. However, there still exist obstacles to efficient optimization, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To solve this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfying the constraints by considering consistency technology and a constraint spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstrating the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
Aguayo-Ortiz, A; Mendoza, S; Olvera, D
2018-01-01
In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled differential conservative equations. This method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. With this, a traditional finite volume method for the flux is applied in order to avoid violation of both the entropy and Rankine-Hugoniot jump conditions. The time evolution is then computed using a forward finite difference scheme. This numerical technique evades the recovery of the primitive vector by solving an algebraic system of equations, as is often done, and so it generalises standard techniques for solving these kinds of coupled systems. The article is presented with special relativistic hydrodynamic numerical schemes in mind, with an added pedagogical view in the appendix section to make the PVRS easy to comprehend. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector solution through an algebraic polynomial of the charges.
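The PVRS itself is not reproduced here; the conservative finite-volume flux step it builds on can be illustrated in a much simpler setting than relativistic hydrodynamics, namely scalar advection with a local Lax-Friedrichs (Rusanov) interface flux on a periodic grid (all choices below are mine):

```python
import numpy as np

def fv_advection_step(u, a, dx, dt):
    """One conservative finite-volume step for u_t + (a*u)_x = 0 on a
    periodic grid, using the local Lax-Friedrichs (Rusanov) flux.
    Conservative form guarantees correct jump (Rankine-Hugoniot)
    speeds for discontinuities."""
    f = a * u
    u_r, f_r = np.roll(u, -1), np.roll(f, -1)
    # flux through interface i+1/2: central average plus dissipation
    flux = 0.5 * (f + f_r) - 0.5 * abs(a) * (u_r - u)
    return u - (dt / dx) * (flux - np.roll(flux, 1))
```

Because interface fluxes telescope, the cell total is conserved to round-off, which is the property that makes such schemes shock-capturing.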
NASA Astrophysics Data System (ADS)
Gampe, David; Huber García, Verena; Marzahn, Philip; Ludwig, Ralf
2017-04-01
Actual evaporation (Eta) is an essential variable for assessing water availability, drought risk and food security, among others. Measurements of Eta are however limited to a small footprint, hampering spatially explicit analysis and application, and are very often not available at all. To overcome this data scarcity, Eta can be assessed by various remote sensing approaches such as the Triangle Method (Jiang & Islam, 1999). Here, Eta is estimated using the Normalized Difference Vegetation Index (NDVI) and land surface temperature (LST). In this study, the R package 'TriangleMethod' was compiled to efficiently perform the calculation of NDVI and the processing of LST, and finally derive Eta from the applied data set. The package contains all necessary calculation steps and allows easy processing of a large database of remote sensing images. By default, parameterizations for the Landsat TM and ETM+ sensors are implemented; however, the algorithms can easily be extended to additional sensors. The auxiliary variables required to estimate Eta with this method, such as elevation, solar radiation and air temperature at the overpass time, can be processed as gridded information to allow a better representation of the study area. The package was successfully applied in various studies in Spain, Palestine, Costa Rica and Canada.
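The core quantities of the Triangle Method are NDVI and a temperature-based interpolation across the NDVI-LST scatter. A sketch in the spirit of Jiang & Islam (the evaporative-fraction parameterization shown is one common variant; `phi` stands for the Priestley-Taylor parameter interpolated over the triangle, and all names are mine):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance; eps guards against division by zero."""
    return (nir - red) / (nir + red + eps)

def evaporative_fraction(lst, lst_min, lst_max, phi):
    """Triangle-method style interpolation: pixels at the cold edge
    (lst_min) evaporate at the potential rate, pixels at the warm
    edge (lst_max) not at all."""
    return phi * (lst_max - lst) / (lst_max - lst_min)
```

Eta then follows by multiplying the evaporative fraction by the available energy (net radiation minus ground heat flux), which is where the gridded auxiliary variables enter.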
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-03-01
As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects through diffuse-interface models [27]. Numerical simulations can be applied to evaluate this practical model. The present paper investigates the solution of the tumor-growth model with meshless techniques. Meshless methods are applied based on the collocation technique, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices comes from the natural behavior of meshless approaches: a method based on a meshless approach can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor-growth model. To advance in time, two procedures are used. One is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
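The interpolation step underlying MQ-RBF collocation can be illustrated in one dimension: solve for nodal weights against the multiquadric kernel, then evaluate the expansion anywhere. A sketch (the shape parameter c = 1 and the node layout are arbitrary choices of mine):

```python
import numpy as np

def mq_rbf_interpolant(x_nodes, f_nodes, x_eval, c=1.0):
    """Multiquadric RBF interpolation in 1-D with phi(r) =
    sqrt(r**2 + c**2): solve A w = f at the nodes, then evaluate
    sum_j w_j * phi(|x - x_j|) at the requested points."""
    phi = lambda r: np.sqrt(r ** 2 + c ** 2)
    A = phi(x_nodes[:, None] - x_nodes[None, :])   # interpolation matrix
    w = np.linalg.solve(A, f_nodes)
    return phi(x_eval[:, None] - x_nodes[None, :]) @ w
```

In a collocation PDE solver, the same expansion is differentiated analytically and enforced at the nodes; no mesh connectivity is needed, which is what lets the approach extend to scattered points in higher dimensions.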
Analysis of composite ablators using massively parallel computation
NASA Technical Reports Server (NTRS)
Shia, David
1995-01-01
In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions, and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems.
The gas storage term is included in the explicit pressure calculation of both problems. Results from ablative composite plate problems are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius type rate equations and (2) pressure-dependent Arrhenius type rate equations. The numerical results are compared to experimental results and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5. For 32 CPU's, the speedup of performance is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also seems that there is an optimum number of CPU's to use for a given problem.
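The stability trade-off between the explicit and implicit schemes described above can be demonstrated on the 1-D heat equation. This toy sketch (grid size and time step are chosen purely for illustration) runs both schemes at a time step that violates the explicit stability limit r = alpha*dt/dx^2 <= 1/2: the explicit update blows up while backward Euler stays bounded.

```python
import numpy as np

# 1-D heat equation u_t = alpha * u_xx on [0, 1], zero boundary values
nx, alpha = 51, 1.0
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
u0 = np.sin(np.pi * x)

def explicit_step(u, r):
    """Forward-Euler update; stable only for r <= 1/2."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def implicit_step(u, r):
    """Backward-Euler update: solve (I - r*L) u_new = u_old."""
    n = len(u)
    A = np.eye(n)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1 + 2 * r
    return np.linalg.solve(A, u)

dt_big = 2.0 * dx**2 / alpha   # four times the explicit limit
r = alpha * dt_big / dx**2     # r = 2
ue = ui = u0
for _ in range(50):
    ue = explicit_step(ue, r)  # high-frequency modes grow without bound
    ui = implicit_step(ui, r)  # unconditionally stable, stays bounded
```

The hybrid scheme in the abstract amounts to choosing between these two updates according to the critical time scale of each physical process.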
An optimized chemical synthesis of human relaxin-2.
Barlos, Kostas K; Gatos, Dimitrios; Vasileiou, Zoe; Barlos, Kleomenis
2010-04-01
Human gene 2 relaxin (RLX) is a member of the insulin superfamily and is a multi-functional factor playing a vital role in pregnancy, aging, fibrosis, cardioprotection, vasodilation, inflammation, and angiogenesis. RLX is currently being applied in clinical trials to treat, among other conditions, acute heart failure, fibrosis, and preeclampsia. The synthesis of RLX by chemical methods is difficult because of the insolubility of its B-chain and the laborious, low-yielding site-directed combination of its A (RLXA) and B (RLXB) chains. We report here that oxidation of the Met(25) residue of RLXB improves its solubility, allowing its effective solid-phase synthesis and application in random interchain combination reactions with RLXA. Linear Met(O)(25)-RLX B-chain (RLXBO) reacts with a mixture of isomers of bicyclic A-chain (bcRLXA) giving exclusively the native interchain combination. Applying this method, Met(O)(25)-RLX (RLXO) was obtained in 62% yield and was easily converted to RLX in 78% yield by reduction with ammonium iodide. Copyright (c) 2010 European Peptide Society and John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-05-01
Rolling element bearings are widely used in rotating machines, and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing, and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data to demonstrate the proposed method.
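The envelope extraction stage of such a pipeline is commonly computed from the analytic signal. A minimal sketch using an FFT-based Hilbert transform is shown below; the test signal is a synthetic amplitude-modulated tone standing in for bearing vibration data, and the frequencies are illustrative assumptions.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0      # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin for even-length signals
    return np.abs(np.fft.ifft(X * h))

# amplitude-modulated test signal: 50 Hz carrier, 3 Hz modulation
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
mod = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
sig = mod * np.sin(2 * np.pi * 50 * t)
env = envelope(sig)
```

In the bearing context, the spectrum of this envelope (taken in the angle domain after resampling) is what exposes the fault frequencies and their harmonics.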
A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array
NASA Astrophysics Data System (ADS)
Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi
2014-11-01
We report on the development of an easily deployable LF near-field interferometric-time of arrival (TOA) 3-D Lightning Mapping Array applied to imaging of entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond time resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources. This is due mainly to the improved mapping of continuous lightning processes by using this type of hybrid interferometry/TOA processing method. We show with five station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
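The windowed two-sensor time-difference step of the hybrid interferometry/TOA processing can be illustrated with a cross-correlation peak search. The pulse shape, sample rate, and delay below are illustrative assumptions, not parameters of the deployed array.

```python
import numpy as np

def time_difference(x, y, fs):
    """Estimate the arrival-time difference of y relative to x (seconds)
    from the peak of their cross-correlation."""
    c = np.correlate(y, x, mode="full")
    lag = np.argmax(c) - (len(x) - 1)   # lag in samples
    return lag / fs

fs = 100000.0
t = np.arange(0, 0.01, 1 / fs)
pulse = np.exp(-((t - 0.002) / 0.0002) ** 2)   # Gaussian stand-in for an LF pulse
delay_samples = 37
x = pulse
y = np.roll(pulse, delay_samples)              # same pulse arriving later
dt = time_difference(x, y, fs)
```

With time differences from several sensor pairs in hand, the source position follows from a standard TOA least-squares solve over the sensor geometry.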
Novel platinum black electroplating technique improving mechanical stability.
Kim, Raeyoung; Nam, Yoonkey
2013-01-01
Platinum black microelectrodes are widely used as an effective neural signal recording sensor. The simple fabrication process, high-quality signal recording, and good biocompatibility are the main advantages of platinum black microelectrodes. When microelectrodes are exposed to an actual biological system, various physical stimuli are applied. However, the porous structure of platinum black is vulnerable to external stimuli and is easily destroyed. The impedance of a damaged microelectrode increases, resulting in decreased recording performance. In this study, we developed mechanically stable platinum black microelectrodes by adding polydopamine. A polydopamine layer was deposited between the platinum black structures by electrodeposition. The initial impedance of platinum-black-only microelectrodes and polydopamine-added microelectrodes was similar, but after ultrasonication the impedance of the platinum-black-only microelectrodes increased dramatically, whereas the polydopamine-added microelectrodes showed little increase and nearly retained their initial values. Polydopamine-added platinum black microelectrodes are expected to extend their usability as neural sensors.
Protection performance evaluation regarding imaging sensors hardened against laser dazzling
NASA Astrophysics Data System (ADS)
Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd
2015-05-01
Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damaging is highly desirable. Different protection technologies exist, but none of them satisfies the operational requirements without constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches, one based on triangle orientation discrimination and the other on structural similarity. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns, superimposed with dazzling laser light of various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
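The structural-similarity approach can be sketched with a single-window (global) SSIM computation between an undazzled and a dazzled image. The dazzle below is simulated as a uniform intensity offset with clipping, which is an assumption for illustration; per-window SSIM maps as used in practice apply the same formula over local patches.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window structural similarity index between two images
    (means, variances, and covariance taken over the whole image)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilisation constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
dazzled = np.clip(img + 80.0, 0, 255)   # simulated uniform dazzle with saturation
s_same = ssim_global(img, img)          # identical images score 1
s_daz = ssim_global(img, dazzled)       # dazzle lowers the score
```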
The NIFTy way of Bayesian signal inference
NASA Astrophysics Data System (ADS)
Selig, Marco
2014-12-01
We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, and 3D tomography appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms prototyped in 1D can be applied to real-world problems in higher-dimensional settings. NIFTy is a versatile library that has already been applied in 1D, 2D, 3D, and spherical settings. A recent application is the D3PO algorithm, which targets the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Aftab; Khan, S. N.; Wilson, Brian G.
2011-07-06
A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape-functions and show our approach is greater than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
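The isoparametric integration rests on standard 1-D Gauss-Legendre quadrature, which is exact for polynomials of degree up to 2n-1 with an n-point rule. A minimal sketch of the 1-D building block (the tensor-product extension over mapped polyhedral coordinates is what the paper assembles from it):

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # affine map of nodes from the reference interval [-1, 1] to [a, b]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# a 4-point rule is exact for polynomials up to degree 7
exact = 2.0 / 7.0                                   # integral of x^6 over [-1, 1]
approx = gauss_legendre_integrate(lambda x: x**6, -1.0, 1.0, 4)
```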
Birefringence of magnesium fluoride in the vacuum ultraviolet and application to a half-waveplate.
Ishikawa, Ryohko; Kano, Ryouhei; Bando, Takamasa; Suematsu, Yoshinori; Ishikawa, Shin-nosuke; Kubo, Masahito; Narukage, Noriyuki; Hara, Hirohisa; Tsuneta, Saku; Watanabe, Hiroko; Ichimoto, Kiyoshi; Aoki, Kunichika; Miyagawa, Kenta
2013-12-01
Spectro-polarimetric observations in the vacuum-ultraviolet (VUV) region are expected to be developed as a new astrophysics diagnostic tool for investigating space plasmas with temperatures of >10^4 K. Precise measurements of the difference in the extraordinary and ordinary refractive indices are required for developing accurate polarimeters, but reliable information on birefringence in the VUV range is difficult to obtain. We have measured the birefringence of magnesium fluoride (MgF2) with an accuracy of better than ±4×10^-5 around the hydrogen Lyman-alpha line (121.57 nm). We show that MgF2 can be applied practically as a half-waveplate for the chromospheric Lyman-alpha spectro-polarimeter (CLASP) sounding rocket experiment and that the developed measurement method can be easily applied to other VUV birefringent materials at other wavelengths.
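The half-waveplate design reduces to choosing the plate thickness so that the retardance delta = 2*pi*dn*d/lambda equals pi. The birefringence value below is an assumed example for illustration only, not the measured MgF2 value reported in the paper; it shows why accurate knowledge of dn directly sets the required thickness.

```python
import numpy as np

lam = 121.57e-9    # hydrogen Lyman-alpha wavelength [m]
dn = 0.012         # ASSUMED example birefringence |ne - no|, not the measured value

# thickness of a true zero-order half-waveplate: retardance of pi
d = lam / (2.0 * dn)                 # required plate thickness [m]
delta = 2.0 * np.pi * dn * d / lam   # resulting retardance [rad]
```

With these numbers the required thickness is of order a few micrometres, which illustrates why the +/-4e-5 accuracy on dn quoted in the abstract matters for waveplate fabrication.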
Endoscopical determination of gastric mucosal blood flow by the crossed thermocouple method.
Hiramatsu, A; Watanabe, T; Okuhira, M; Uchiyama, S; Mizuno, T; Sameshima, Y
1984-06-01
A crossed thermocouple method in combination with endoscopy was applied to determine the blood flow rate of the human gastric mucosa. Determinations were carried out in 11 healthy control subjects at 8 sites in the stomach. The blood flow rates at all sites in the corpus were found to be higher than those in the antrum. In subjects less than 50 years old, the blood flow rate in the corpus was higher than in older subjects. These results agreed well with those obtained by the hydrogen gas clearance method, which is widely adopted clinically. The crossed thermocouple method is easily applicable to all sites in the gastric mucosa, and the time required for the assay is very short. This method does not require the inhalation of hydrogen gas, which is necessary for the hydrogen gas clearance method and is possibly harmful to humans. Although the values obtained by the crossed thermocouple method are relative to the value at a certain fixed site, the method holds great potential for the determination of gastric mucosal blood flow rate.
NASA Astrophysics Data System (ADS)
Habu, K.; Kaminohara, S.; Kimoto, T.; Kawagoe, A.; Sumiyoshi, F.; Okamoto, H.
2010-11-01
We have developed a new monitoring system to detect an unusual event in superconducting coils without direct contact on the coils, using the Poynting's vector method. In this system, potential leads and pickup coils are set around the superconducting coils to measure local electric and magnetic fields, respectively. By measuring the sets of magnetic and electric fields, the Poynting's vectors around the coil can be obtained, and an unusual event in the coil can be detected as a change of the Poynting's vector. This system has no risk of the voltage breakdown that may occur with the balance voltage method, because there is no need for direct contacts on the coil windings. In a previous paper, we demonstrated that our system can detect normal transitions in a Bi-2223 coil without direct contact on the coil windings by using a small test system. For the system to be applied to practical devices, early detection of an unusual event requires the ability to detect local normal transitions in the coils. The signal voltages of the small sensors used to measure local magnetic and electric fields are small. Although the signals of the pickup coils can be increased easily by increasing the number of turns, an increase in the signals of the potential leads is not easily attained. In this paper, a new method to amplify the signal of local electric fields around the coil is proposed. The validity of the method has been confirmed by measuring local electric fields around the Bi-2223 coil.
Emadi, Mostafa; Baghernejad, Majid; Pakparvar, Mojtaba; Kowsar, Sayyed Ahang
2010-05-01
This study was undertaken to incorporate geostatistics, remote sensing, and geographic information system (GIS) technologies to improve the qualitative land suitability assessment in arid and semiarid ecosystems of the Arsanjan plain, southern Iran. The primary data were obtained from 85 soil samples collected from three depths (0-30, 30-60, and 60-90 cm); the secondary information was acquired from the remotely sensed data from the linear imaging self-scanner (LISS-III) receiver of the IRS-P6 satellite. Ordinary kriging and simple kriging with varying local means (SKVLM) methods were used to identify the spatial dependency of important soil parameters. It was observed that using the spectral values of band 1 of the LISS-III receiver as the secondary variable with the SKVLM method resulted in the lowest mean square error for mapping pH and electrical conductivity (ECe) in the 0-30-cm depth. On the other hand, the ordinary kriging method gave reliable accuracy for the other soil properties with moderate to strong spatial dependency in the study area for interpolation at the unsampled points. The parametric land suitability evaluation method was applied at densely distributed points (150 x 150 m grid) instead of at a limited number of representative profiles, as is conventional, obtained by the kriging or SKVLM methods. The information layers were then overlaid in the GIS to prepare the final land suitability evaluation. Therefore, changes in land characteristics could be identified within the same uniform soil mapping units over a very short distance. In general, this new method can easily present the areas and limiting factors of the different land suitability classes with considerable accuracy in arbitrary land indices.
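The spatial-dependency analysis that underlies both kriging variants starts from the empirical semivariogram. A minimal sketch on synthetic soil-like data is shown below; the spatial trend and noise level are illustrative assumptions, not values from the study.

```python
import numpy as np

def semivariogram(coords, values, bins):
    """Empirical semivariogram: gamma(h) = 0.5 * mean squared difference
    of values over point pairs whose separation falls in each distance bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # each pair once
    d, g = d[iu], g[iu]
    idx = np.digitize(d, bins)
    return np.array([g[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])

# synthetic soil property: spatial trend along x plus uncorrelated noise
rng = np.random.default_rng(3)
coords = rng.uniform(0, 1000, (200, 2))                    # sample locations [m]
values = 0.005 * coords[:, 0] + rng.normal(0, 0.2, 200)    # e.g. a pH-like variable
bins = np.linspace(0, 600, 7)
gamma = semivariogram(coords, values, bins)
```

A rising semivariogram like this one indicates the moderate-to-strong spatial dependency that makes kriging interpolation worthwhile at unsampled points.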
Resistance heating releases structural adhesive
NASA Technical Reports Server (NTRS)
Glemser, N. N.
1967-01-01
Composite adhesive package bonds components together for testing and enables separation when testing is completed. The composite of adhesives, insulation, and a heating element separates easily when an electrical current is applied.
Assessment study of lichenometric methods for dating surfaces
NASA Astrophysics Data System (ADS)
Jomelli, Vincent; Grancher, Delphine; Naveau, Philippe; Cooley, Daniel; Brunstein, Daniel
2007-04-01
In this paper, we discuss the advantages and drawbacks of the most classical approaches used in lichenometry. In particular, we perform a detailed comparison among methods based on the statistical analysis of either the largest lichen diameters recorded on geomorphic features or the frequency of all lichens. To assess the performance of each method, a careful comparison design with well-defined criteria is proposed and applied to two distinct data sets. First, we study 350 tombstones. This represents an ideal test bed because tombstone dates are known and, therefore, the quality of the estimated lichen growth curve can be easily tested for the different techniques. Secondly, 37 moraines from two tropical glaciers are investigated. This analysis corresponds to our real case study. For both data sets, we apply our list of criteria that reflects precision, error measurements, and their theoretical foundations when proposing estimated ages and their associated confidence intervals. From this comparison, it clearly appears that two methods, the mean of the n largest lichen diameters and the recent Bayesian method based on extreme value theory, offer the most reliable estimates of moraine and tombstone dates. Concerning the spread of the error, the latter approach provides the smallest uncertainty, and it is the only one that takes advantage of the statistical nature of the observations by fitting an extreme value distribution to the largest diameters.
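The "mean of the n largest diameters" approach can be sketched with synthetic known-age surfaces standing in for the tombstone calibration data. The growth rate, noise level, and sample sizes below are illustrative assumptions; the point is the workflow: calibrate a growth curve on dated surfaces, then invert it to date an undated one.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic calibration data: surfaces of known age, with lichen diameters
# growing roughly linearly with age plus individual scatter
ages = rng.uniform(20, 300, 60)       # surface ages [years]
n_largest = 5
diam = np.array([
    np.sort(0.35 * a + rng.normal(0, 3, 40))[-n_largest:].mean()
    for a in ages
])                                    # mean of the n largest diameters [mm]

# fit the growth curve diameter = g * age + b by least squares
g, b = np.polyfit(ages, diam, 1)

# date an undated surface from its observed mean of the n largest diameters
d_obs = 70.0
age_est = (d_obs - b) / g
```

Note that taking the largest diameters biases the intercept upward relative to the mean growth curve; the extreme-value approach favoured in the paper models this explicitly instead of absorbing it into the fit.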
Detection of biomarkers of pathogenic Naegleria fowleri through mass spectrometry and proteomics.
Moura, Hercules; Izquierdo, Fernando; Woolfitt, Adrian R; Wagner, Glauber; Pinto, Tatiana; del Aguila, Carmen; Barr, John R
2015-01-01
Emerging methods based on mass spectrometry (MS) can be used in the rapid identification of microorganisms. Thus far, these practical and rapidly evolving methods have mainly been applied to characterize prokaryotes. We applied matrix-assisted laser-desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) in the analysis of whole cells of 18 N. fowleri isolates belonging to three genotypes. Fourteen originated from the cerebrospinal fluid or brain tissue of primary amoebic meningoencephalitis patients, and four originated from water samples of hot springs, rivers, lakes, or municipal water supplies. Whole Naegleria trophozoites grown in axenic cultures were washed and mixed with MALDI matrix. Mass spectra were acquired with a 4700 TOF-TOF instrument. MALDI-TOF MS yielded consistent patterns for all isolates examined. Using a combination of novel data processing methods for visual peak comparison, statistical analysis, and proteomics database searching, we were able to detect several biomarkers that can differentiate all species and isolates studied, along with common biomarkers for all N. fowleri isolates. Naegleria fowleri could be easily separated from other species within the genus Naegleria. A number of peaks detected were tentatively identified. MALDI-TOF MS fingerprinting is a rapid, reproducible, high-throughput alternative method for identifying Naegleria isolates. This method has potential for studying eukaryotic agents. © 2014 The Author(s) Journal of Eukaryotic Microbiology © 2014 International Society of Protistologists.
An automatic method for segmentation of fission tracks in epidote crystal photomicrographs
NASA Astrophysics Data System (ADS)
de Siqueira, Alexandre Fioravante; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Tello Saenz, Carlos Alberto; Job, Aldo Eloizo
2014-08-01
Manual identification of fission tracks has practical problems, such as variation due to observer efficiency. An automatic processing method that could identify fission tracks in a photomicrograph would solve this problem and improve the speed of track counting. However, separation of nontrivial images is one of the most difficult tasks in image processing. Several commercial and free software packages are available, but they are designed for specific types of images. In this paper, an automatic method based on starlet wavelets is presented for separating fission tracks in mineral photomicrographs. Automation is achieved using the Matthews correlation coefficient, and results are evaluated by precision, recall, and accuracy. This technique is an improvement of a method aimed at segmentation of scanning electron microscopy images. The method is applied to photomicrographs of epidote phenocrysts, in which accuracy higher than 89% was obtained in fission track segmentation, even for difficult images. Algorithms corresponding to the proposed method are available for download. Using the method presented here, a user can easily determine fission tracks in photomicrographs of mineral samples.
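The Matthews correlation coefficient used to automate the segmentation is computed directly from the confusion counts of a binary mask. A minimal sketch on toy masks (the arrays are illustrative, not real track data):

```python
import numpy as np

def mcc(pred, truth):
    """Matthews correlation coefficient for binary masks (1 = track pixel)."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1]])
perfect = truth.copy()
noisy = truth.copy()
noisy[0, 2] = 1   # one false-positive pixel
```

Because MCC balances all four confusion counts, it remains informative even when track pixels are a small minority of the image, which is why it is a sensible criterion for choosing the segmentation threshold automatically.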
Citation Matching in Sanskrit Corpora Using Local Alignment
NASA Astrophysics Data System (ADS)
Prasad, Abhinandan S.; Rao, Shrisha
Citation matching is the problem of finding which citation occurs in a given textual corpus. Most existing citation matching work is done on scientific literature. The goal of this paper is to present methods for performing citation matching on Sanskrit texts. Exact matching and approximate matching are the two methods for performing citation matching. The exact matching method checks for exact occurrence of the citation with respect to the textual corpus. Approximate matching is a fuzzy string-matching method which computes a similarity score between an individual line of the textual corpus and the citation. The Smith-Waterman-Gotoh algorithm for local alignment, which is generally used in bioinformatics, is used here for calculating the similarity score. This similarity score is a measure of the closeness between the text and the citation. The exact- and approximate-matching methods are evaluated and compared. The methods presented can be easily applied to corpora in other Indic languages like Kannada, Tamil, etc. The approximate-matching method can in particular be used in the compilation of critical editions and plagiarism detection in a literary work.
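The local-alignment similarity score can be sketched with a basic Smith-Waterman dynamic program; note this sketch uses a linear gap penalty rather than the affine-gap Gotoh variant named in the text, and the transliterated strings are illustrative, not drawn from an actual corpus.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local-alignment similarity score (Smith-Waterman, linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # local alignment may restart
                          H[i - 1][j - 1] + s,    # match / mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best

line = "dharmakshetre kurukshetre samaveta yuyutsavah"
citation = "kurukshetre samaveta"
score = smith_waterman(line, citation)
self_score = smith_waterman(citation, citation)
similarity = score / self_score   # normalised closeness in [0, 1]
```

Normalising by the citation's self-alignment score gives a similarity of 1.0 for an exact occurrence and a graded value for variant readings, which is what makes the approximate method useful for critical editions.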
Outila, Terhi A; Simulainen, Helena; Laukkanen, Tuula H A; Maarit Kyyrö, A
2006-01-01
In this study we have developed a new way of evaluating the healthiness of ready-to-eat foods. In the developed method, ready-to-eat foods were classified into specific product categories, and the nutritional quality of classified foods was analysed using the national dietary recommendations and the national dietary survey as a basis for the dietary calculations. The method was tested with the products of 'Saarioinen', which is the leading brand in the Finnish ready-to-eat food market. Results indicate that this low-cost method can easily be used in the food industry as a tool in product development and marketing in order to develop healthy foods. The method could also be applied to the restaurant and catering trade, as well as to other public institutions serving food. By using this model, nutritional researchers and the food industry could work together to prevent nutrition-related health problems.
Skin friction drag reduction in turbulent flow using spanwise traveling surface waves
NASA Astrophysics Data System (ADS)
Musgrave, Patrick F.; Tarazaga, Pablo A.
2017-04-01
A major technological driver in current aircraft and other vehicles is the improvement of fuel efficiency. One way to increase the efficiency is to reduce the skin friction drag on these vehicles. This experimental study presents an active drag reduction technique which decreases the skin friction using spanwise traveling waves. A novel method is introduced for generating traveling waves which is low-profile, non-intrusive, and operates under various flow conditions. This wave generation method is discussed and the resulting traveling waves are presented. These waves are then tested in a low-speed wind tunnel to determine their drag reduction potential. To calculate the drag reduction, the momentum integral method is applied to turbulent boundary layer data collected using a pitot tube and traversing system. The skin friction coefficients are then calculated and the drag reduction determined. Preliminary results yielded a drag reduction of approximately 5% for 244 Hz traveling waves. Thus, this novel wave generation method possesses the potential to yield an easily implementable, non-invasive drag reduction technology.
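The central quantity in the momentum integral method is the momentum thickness computed from a measured velocity profile. A minimal sketch using the 1/7-power-law turbulent profile, for which theta/delta = 7/72 exactly, is shown below; the boundary-layer thickness is an illustrative assumption, not a value from the experiment.

```python
import numpy as np

# momentum thickness from a boundary-layer velocity profile:
#   theta = integral over y of (u/U) * (1 - u/U)
delta = 0.02                                  # assumed boundary-layer thickness [m]
y = np.linspace(0.0, delta, 2001)
u_over_U = (y / delta) ** (1.0 / 7.0)         # 1/7-power-law turbulent profile

integrand = u_over_U * (1.0 - u_over_U)
theta = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))  # trapezoid rule
```

In the experiment, the skin friction coefficient then follows from the streamwise growth of theta between measurement stations, so the drag reduction appears as a reduced d(theta)/dx with actuation on.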
Yilmaz, B.; Kaban, S.; Akcay, B. K.
2015-01-01
In this study, simple, fast and reliable cyclic voltammetry, linear sweep voltammetry, square wave voltammetry and differential pulse voltammetry methods were developed and validated for determination of etodolac in pharmaceutical preparations. The proposed methods were based on electrochemical oxidation of etodolac at a platinum electrode in acetonitrile solution containing 0.1 M lithium perchlorate. A well-defined oxidation peak was observed at 1.03 V. The calibration curves were linear for etodolac over the concentration range of 2.5-50 μg/ml for the linear sweep, square wave and differential pulse voltammetry methods. Intra- and inter-day precision values for etodolac were less than 4.69, and accuracy (relative error) was better than 2.00%. The mean recovery of etodolac was 100.6% for pharmaceutical preparations. No interference was found from three tablet excipients at the selected assay conditions. The methods developed in this study are accurate and precise, and can be easily applied to Etol, Tadolak and Etodin tablets as pharmaceutical preparations. PMID:26664057
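The calibration-curve quantification and percent-recovery calculation described above can be sketched as a linear least-squares fit. The response values below are simulated with an assumed slope and noise level, not the published voltammetric data.

```python
import numpy as np

# calibration standards across the linear range reported in the abstract
conc = np.array([2.5, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0])       # ug/ml
rng = np.random.default_rng(4)
current = 0.8 * conc + 0.3 + rng.normal(0, 0.1, conc.size)      # simulated peak currents
slope, intercept = np.polyfit(conc, current, 1)                 # calibration line

# quantify an "unknown" sample and compute percent recovery vs. nominal
nominal = 25.0
measured_signal = 0.8 * nominal + 0.3       # simulated noiseless sample response
found = (measured_signal - intercept) / slope
recovery = 100.0 * found / nominal
```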
Dissecting and Culturing Animal Cap Explants.
Dingwell, Kevin S; Smith, James C
2018-05-16
The animal cap explant is a simple but adaptable tool available to developmental biologists. The use of animal cap explants in demonstrating the presence of mesoderm-inducing activity in the vegetal pole of the Xenopus embryo is one of many elegant examples of their worth. Animal caps respond to a range of growth factors (e.g., Wnts, FGF, TGF-β), making them especially useful for studying signal transduction pathways and gene regulatory networks. Explants are also suitable for examining cell behavior and have provided key insights into the molecular mechanisms controlling vertebrate morphogenesis. In this protocol, we outline two methods to isolate animal cap explants from Xenopus laevis, both of which can be applied easily to Xenopus tropicalis. The first method is a standard manual method that can be used in any laboratory equipped with a standard dissecting microscope. For labs planning on dissecting large numbers of explants on a regular basis, a second, high-throughput method is described that uses a specialized microcautery surgical instrument. © 2018 Cold Spring Harbor Laboratory Press.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial value of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
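The "three-step" idea (screen for sensitive parameters, choose a starting point, then run the downhill simplex) can be sketched on a toy skill metric. The objective function, default values, and threshold below are illustrative assumptions, and SciPy's Nelder-Mead implementation stands in for the downhill simplex method named in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# toy "model skill" metric: lower is better, optimum at a known parameter set
TRUE = np.array([1.2, 0.4, 3.0])   # hypothetical optimal parameter values

def skill(params):
    """Stand-in for the comprehensive evaluation metric of a GCM run."""
    return np.sum((params - TRUE) ** 2)

# step 1: one-at-a-time sensitivity screening around the default parameters
default = np.array([1.0, 1.0, 1.0])
sens = np.array([
    abs(skill(default + 0.1 * np.eye(3)[i]) - skill(default))
    for i in range(3)
])
sensitive = sens > 1e-3            # keep only parameters that move the metric

# steps 2+3: downhill simplex (Nelder-Mead) from the chosen starting values
res = minimize(skill, default, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```

In a real GCM each evaluation of skill() is a full model run, which is exactly why pruning insensitive parameters before the simplex search pays off.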
NASA Astrophysics Data System (ADS)
Mazurek, Sylwester; Szostak, Roman; Kita, Agnieszka
2016-12-01
Potato chips are important products in the snack industry. The most significant parameter monitored during their quality control process is fat content. The Soxhlet method, which is applied for this purpose, is time consuming and expensive. We demonstrate that both infrared and Raman spectroscopy can effectively replace the extraction method. Raman, mid-infrared (MIR) and near-infrared (NIR) spectra of the homogenised laboratory-prepared chips were recorded. On the basis of obtained spectra, partial least squares (PLS) calibration models were constructed. They were characterised by the values of relative standard errors of prediction (RSEP) in the 1.0-1.9% range for both calibration and validation data sets. Using the developed models, six commercial products were successfully quantified with recovery in the 98.5-102.3% range against the AOAC extraction method. The proposed method for fat quantification in potato chips based on Raman spectroscopy can be easily adopted for on-line product analysis.
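A PLS calibration of the kind described can be sketched with a compact NIPALS PLS1 implementation on synthetic "spectra". The latent structure, noise level, and component count below are illustrative assumptions, not the paper's spectral data or model settings.

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    """PLS1 regression coefficients via the NIPALS algorithm (data mean-centred)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)        # weight vector
        t = Xc @ w                    # scores
        tt = t @ t
        p = Xc.T @ t / tt             # X loadings
        q = (yc @ t) / tt             # y loading
        Xc = Xc - np.outer(t, p)      # deflate X
        yc = yc - q * t               # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # regression vector
    return B, X.mean(0), y.mean()

def pls1_predict(Xnew, B, xm, ym):
    return (Xnew - xm) @ B + ym

# synthetic "spectra" whose response is linear in a few latent directions
rng = np.random.default_rng(2)
T = rng.normal(size=(80, 3))                  # latent scores (80 samples)
L = rng.normal(size=(3, 50))                  # loadings over 50 "wavenumbers"
X = T @ L + 0.01 * rng.normal(size=(80, 50))  # spectra with small noise
y = T @ np.array([1.0, -2.0, 0.5])            # "fat content" from the scores
B, xm, ym = pls1_fit(X, y, ncomp=3)
pred = pls1_predict(X, B, xm, ym)
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

In practice the component count would be chosen by cross-validation, and prediction error would be summarised with a relative metric such as the RSEP values quoted in the abstract.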
Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane
2018-05-30
- Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in the context of geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. - To describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models and its uses in routine surgical pathology practice as well as medical education. - We modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using user control software, a digital single-lens reflex camera, and a digital turntable to generate a 3D model with the output in a PDF file. - The entity demonstrated in each specimen was well demarcated and easily identified. Adjacent normal tissue could also be easily distinguished. Colors were preserved. The concave shapes of any cystic structures or normal convex rounded structures were discernable. Surgically important regions were identifiable. - Macroscopic 3D modeling of specimens can be achieved through Structure-From-Motion photogrammetry technology and can be applied quickly and easily in routine laboratory practice. There are numerous advantages to the use of 3D photogrammetry in pathology, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printed specimen models.
The forensic value of X-linked markers in mixed-male DNA analysis.
He, HaiJun; Zha, Lagabaiyila; Cai, JinHong; Huang, Jian
2018-05-04
Autosomal genetic markers and Y chromosome markers have been widely applied by forensic scientists in the analysis of mixed stains at crime scenes. However, true genotype combinations are often difficult to distinguish using autosomal markers when similar amounts of DNA are contributed by multiple donors. In addition, specific individuals cannot be identified by Y chromosomal markers because male relatives share the same Y chromosome. X-linked markers, whose characteristics lie between those of autosomes and the Y chromosome, are less universally applied in criminal casework. In this paper, we propose applying X markers to male mixtures because their true genotypes can be recognized more easily and accurately than those of autosomal (AS) markers. In this study, an actual two-man mixed stain from a forensic case file and simulated male-mixed DNA were examined simultaneously with X markers and autosomal markers. The actual mixture was separated successfully by the X markers, although it was unresolved by AS-STRs, and the separation ratio of the simulated mixture was much higher using Chr X tools than with AS methods. We believe X-linked markers provide significant advantages in the individual discrimination of male mixtures and should be further applied in forensic work.
Control of biaxial strain in single-layer molybdenite using local thermal expansion of the substrate
NASA Astrophysics Data System (ADS)
Plechinger, Gerd; Castellanos-Gomez, Andres; Buscema, Michele; van der Zant, Herre S. J.; Steele, Gary A.; Kuc, Agnieszka; Heine, Thomas; Schüller, Christian; Korn, Tobias
2015-03-01
Single-layer MoS2 is a direct-gap semiconductor whose electronic band structure strongly depends on the strain applied to its crystal lattice. While uniaxial strain can be easily applied in a controlled way, e.g., by bending of a flexible substrate with the atomically thin MoS2 layer on top, experimental realization of biaxial strain is more challenging. Here, we exploit the large mismatch between the thermal expansion coefficients of MoS2 and a silicone-based substrate to apply a controllable biaxial tensile strain by heating the substrate with a focused laser. The effect of this biaxial strain is directly observable in optical spectroscopy as a redshift of the MoS2 photoluminescence. We also demonstrate the potential of this method to engineer more complex strain patterns by employing highly absorptive features on the substrate to achieve non-uniform heat profiles. By comparison of the observed redshift to strain-dependent band structure calculations, we estimate the biaxial strain applied by the silicone-based substrate to be up to 0.2%, corresponding to a band gap modulation of 105 meV per percentage of biaxial tensile strain.
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each of the segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of a penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an attractive alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
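The fusion model is built on the (probabilistic) Rand measure, whose basic quantity is easy to state in code. A minimal sketch (the tiny label fields are invented; a real implementation would use the pair-counting identity rather than an explicit O(n^2) loop):

```python
import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Rand index between two label fields: the fraction of pixel pairs on
    which the two segmentations agree (same-label vs different-label)."""
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    pairs = list(combinations(range(a.size), 2))
    agree = 0
    for i, j in pairs:
        same_a = a[i] == a[j]
        same_b = b[i] == b[j]
        agree += same_a == same_b   # count pairs where both decisions match
    return agree / len(pairs)

seg1 = np.array([[0, 0, 1], [0, 1, 1]])
seg2 = np.array([[1, 1, 0], [1, 0, 0]])   # same partition, labels swapped
seg3 = np.array([[0, 1, 1], [0, 1, 1]])   # one pixel moved to another region

print(rand_index(seg1, seg2))  # 1.0: identical partitions, labels irrelevant
print(rand_index(seg1, seg3))
```

Note that the measure compares partitions, not label values, which is what makes it usable as a pairwise-constraint energy over candidate fusions.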
Monitoring Microbial Numbers in Food by Density Centrifugation
Basel, Richard M.; Richter, Edward R.; Banwart, George J.
1983-01-01
Some foods contain low numbers of microbes that may be difficult to enumerate by the plate count method due to small food particles that interfere with the counting of colonies. Ludox colloidal silica was coated with reducing agents to produce a nontoxic density material. Food homogenates were applied to a layered 10 and 80% mixture of modified Ludox and centrifuged at low speed. The top and bottom of the tube contained the food material, and the Ludox-containing portion was evaluated by conventional pour plate techniques. Plate counts of the Ludox mixture agreed with plate counts of the food homogenate alone. The absence of small food particles from pour plates resulted in a plate that was more easily read than pour plates of the homogenate alone. Modified Ludox was evaluated for its effect on bacteria at 4°C during a 24-h incubation period. No inhibition was observed. This method is applicable to food products, such as doughnuts, spices, tomato products, and meat, in which small food particles often interfere with routine plate counts or low dilution may inhibit colony formation. Inhibitory substances can be removed from spices, resulting in higher counts. Ludox is more economical than similar products, such as Percoll. Modified Ludox is easily rendered nontoxic by the addition of common laboratory reagents. In addition, the mixture is compatible with microbiological media. PMID:6303217
Determination of intracellular nitrate.
Romero, J M; Lara, C; Guerrero, M G
1989-01-01
A sensitive procedure has been developed for the determination of intracellular nitrate. The method includes: (i) preparation of cell lysates in 2 M-H3PO4 after separation of cells from the outer medium by rapid centrifugation through a layer of silicone oil, and (ii) subsequent nitrate analysis by ion-exchange h.p.l.c. with, as mobile phase, a solution containing 50 mM-H3PO4 and 2% (v/v) tetrahydrofuran, adjusted to pH 1.9 with NaOH. The determination of nitrate is subject to interference by chloride and sulphate when these are present in the samples at high concentrations. Nitrite also interferes, but it is easily eliminated by treatment of the samples with sulphamic acid. The method has been successfully applied to the study of nitrate transport in the unicellular cyanobacterium Anacystis nidulans. PMID:2497740
An Efficient Method for Hair Containment During Head and Neck Surgery.
Zingaretti, Nicola; De Biasio, Fabrizio; Riccio, Michele; Marchesi, Andrea; Parodi, Pier Camillo
2017-11-01
The authors present a simple technique for operations around hair-bearing areas, such as during a rhytidectomy. Hair surrounding the surgical field is twisted into bundles and clipped with duckbill clips. The authors repeat the procedure for each strand of hair. Between 5 and 7 duckbill clips may be required per surgery. The clips are fast, easily applied, and perform well. They can be used with different hair lengths, and they do not require any additional trimming or shaving; the clips also keep the hair firmly in place and do not loosen during the procedure. This technical note explains a very simple, economical, and less time-consuming method to control hair located around the surgical site. It may be applied to all procedures within the field of the hair-bearing scalp, including craniofacial and maxillofacial surgery.
Decentralized stabilization of semi-active vibrating structures
NASA Astrophysics Data System (ADS)
Pisarski, Dominik
2018-02-01
A novel method of decentralized structural vibration control is presented. The control is assumed to be realized by a semi-active device. The objective is to stabilize a vibrating system with the optimal rates of decrease of the energy. The controller relies on an easily implemented decentralized switched state-feedback control law. It uses a set of communication channels to exchange the state information between the neighboring subcontrollers. The performance of the designed method is validated by means of numerical experiments performed for a double cantilever system equipped with a set of elastomers with controlled viscoelastic properties. In terms of the assumed objectives, the proposed control strategy significantly outperforms the passive damping cases and is competitive with a standard centralized control. The presented methodology can be applied to a class of bilinear control systems concerned with smart structural elements.
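The switching idea can be illustrated on a single-mass oscillator with a two-state semi-active damper. This is a generic sketch, not the paper's optimal decentralized law: the switching rule, plant constants, and damper limits are all invented for the example:

```python
import numpy as np

# Single-mass oscillator with a semi-active damper whose coefficient
# switches between c_min and c_max; the control input is the switching
# signal, not a force (the semi-active constraint).
m, k = 1.0, 10.0
c_min, c_max = 0.05, 2.0
dt, steps = 1e-3, 20000

x, v = 1.0, 0.0                                  # initial state
energy0 = 0.5 * k * x**2 + 0.5 * m * v**2

for _ in range(steps):
    # Illustrative state-feedback switching law: engage hard damping
    # on the quarters of the cycle where x and v share a sign.
    c = c_max if x * v > 0 else c_min
    a = (-k * x - c * v) / m
    v += a * dt                                  # semi-implicit Euler
    x += v * dt

energy = 0.5 * k * x**2 + 0.5 * m * v**2
print(energy0, energy)                           # stored energy decays
```

In the paper the switching decision is made locally by each subcontroller from neighboring state information; the single-mass case above only shows the semi-active mechanism itself.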
van Eck, Herman J; Vos, Peter G; Valkonen, Jari P T; Uitdewilligen, Jan G A M L; Lensing, Hellen; de Vetten, Nick; Visser, Richard G F
2017-03-01
The method of graphical genotyping is applied to a panel of tetraploid potato cultivars to visualize haplotype sharing. The method made it possible to map genes involved in virus and nematode resistance, and the physical coordinates of the linkage drag surrounding these genes are easily interpreted. Graphical genotyping is a visually attractive and easily interpretable method to represent genetic marker data. In this paper, the method is extended from diploids to a panel of tetraploid potato cultivars. Application of filters to select a subset of SNPs allows one to visualize haplotype sharing between individuals that also share a specific locus. The method is illustrated with cultivars resistant to Potato virus Y (PVY), while simultaneously selecting for the absence of the SNPs in susceptible clones. The SNP data then merge into an image which displays the coordinates of a distal genomic region on the northern arm of chromosome 11 where a specific haplotype is introgressed from the wild potato species S. stoloniferum (CPC 2093) carrying a gene (Ny(o,n)sto) conferring resistance to two PVY strains, PVYO and PVYNTN. Graphical genotyping was also successful in showing the haplotypes on chromosome 12 carrying Ry-fsto, another resistance gene derived from S. stoloniferum conferring broad-spectrum resistance to PVY, as well as chromosome 5 haplotypes from S. vernei, with the Gpa5 locus involved in resistance against Globodera pallida cyst nematodes. The image also shows shortening of linkage drag by meiotic recombination of the introgression segment in more recent breeding material. Identity-by-descent was found to be a requirement for using graphical genotyping, which is proposed as a non-statistical alternative method for gene discovery, as compared with genome-wide association studies. The potential and limitations of the method are discussed.
Opaque microfiche masthead permits easy reading
NASA Technical Reports Server (NTRS)
Lowe, E. M.
1965-01-01
White-pigmented backing applied to the reverse side of microfiche mastheads makes the area opaque and easily readable. This technique is of value for organizations involved in large volume information storage and retrieval.
NASA Astrophysics Data System (ADS)
Salleh, M. N. M.; Ishak, M.; Aiman, M. H.; Idris, S. R. A.; Romlay, F. R. M.
2017-09-01
AZ31B magnesium alloy has been widely applied in the aerospace, automotive, and electronic industries. However, welding thin-sheet AZ31B is challenging because the alloy evaporates easily, especially with conventional fusion welding methods such as metal inert gas (MIG) welding. Laser welding can be applied to this metal since it produces a lower heat input, and fiber laser welding in particular has been widely adopted because it produces better welds, especially in the automotive sector. Here, a low-power fiber laser operated in pulse wave (PW) mode was used to weld this non-ferrous metal. A double fillet lap joint was applied to weld AZ31B sheet as thin as 0.6 mm, and the effect of pulsed energy on joint strength was studied. Bond width, throat length, and penetration depth were also studied in relation to the pulsed energy affecting the joint. Higher pulsed energy contributes to a higher fracture load at irradiation angles below 3°.
Determination of Flux rope axis for GS reconstruction
NASA Astrophysics Data System (ADS)
Tian, A.; Shi, Q.; Bai, S.; Zhang, S.
2016-12-01
It is important to determine the axis direction and velocity of a magnetic flux rope before employing Grad-Shafranov (GS) reconstruction. The ability of single-satellite-based MVA methods (MVAB and CMVA) and the multi-satellite-based MDD method to find the invariant axis is tested with a model. The principal axis given by MVA along the aimed direction depends on the distance of the spacecraft path from the flux-rope axis. The MDD results are influenced by the ratio of the noise level/separation to the gradient of the structure; an accurate axial direction is obtained when this ratio is less than 1. A model example in which the HT method fails illustrates the importance of the STD method for obtaining the velocity of such a structure. The applicability of the trial-and-error method of Hu and Sonnerup (2012) was also tested and discussed. Finally, all of the above methods were applied to a flux rope observed by Cluster. The results show that the GS method can be easily carried out when the dimensionality and velocity are clearly known.
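The single-satellite MVA step amounts to an eigen-decomposition of the magnetic field covariance matrix. A minimal sketch on synthetic data (the model field is invented; real MVAB would use a measured B-field time series):

```python
import numpy as np

def minimum_variance_axis(B):
    """Minimum variance analysis (MVAB): the eigenvector of the field
    covariance matrix with the smallest eigenvalue estimates the
    invariant (axis) direction of the structure."""
    B = np.asarray(B)
    cov = np.cov(B.T)             # 3x3 covariance of the field components
    w, V = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return V[:, 0]                # minimum-variance eigenvector

# Synthetic time series: large variance in x and y, tiny variance in z,
# so the recovered invariant axis should be close to (0, 0, 1).
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 500)
B = np.column_stack([np.cos(t), np.sin(2 * t),
                     0.01 * rng.normal(size=t.size)])

axis = minimum_variance_axis(B)
print(axis)
```

The sign of the returned eigenvector is arbitrary, as with any eigen-direction; only the axis line is determined.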
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
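The three steps can be sketched with a toy objective standing in for the GCM evaluation metric (the function shape, parameter names, step sizes, and thresholds here are all invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy "model skill" to be minimized over three parameters; in the paper
# this would be a full GCM run scored by the evaluation metric.
def skill(params):
    a, b, c = params
    return (a - 2.0) ** 2 + 10 * (b - 0.5) ** 2 + 0.001 * c ** 2

defaults = np.array([0.0, 0.0, 0.0])

# Step 1: one-at-a-time sensitivity screening around the defaults.
sens = []
for i in range(3):
    p = defaults.copy()
    p[i] += 0.1
    sens.append(abs(skill(p) - skill(defaults)))
sensitive = [i for i, s in enumerate(sens) if s > 0.01]  # drops parameter c

# Step 2: coarse grid search for a good starting point of the
# sensitive parameters only.
grid = np.linspace(-1, 3, 9)
best = min(((ga, gb) for ga in grid for gb in grid),
           key=lambda g: skill([g[0], g[1], defaults[2]]))

# Step 3: downhill simplex (Nelder-Mead) from that starting point.
res = minimize(lambda p: skill([p[0], p[1], defaults[2]]),
               x0=np.array(best), method="Nelder-Mead")
print(sensitive, res.x)
```

Pruning the insensitive parameter and starting the simplex near the optimum is exactly what makes the final step cheap, which is the point of the two extra steps.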
Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting
NASA Astrophysics Data System (ADS)
Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang
2016-12-01
Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which focuses on addressing the vessel distortion issue of the vessel enhancement diffusion (VED) method. Furthermore, the enhanced results are easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Second, the stick tensor voting algorithm is employed to estimate the correct direction along each vessel. The estimated direction is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located at the vessel wall consistent with the vessel directions and enhances the tubular features of vessels. Finally, vessels are extracted from the enhanced image by applying a level-set segmentation method. Experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory vessel segmentations.
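The Frangi-type vesselness used in the first step can be sketched in 2D with plain NumPy (a minimal single-scale version; the real pipeline is 3D/4D, multi-scale, and followed by tensor voting, VED, and level sets):

```python
import numpy as np

def vesselness_2d(img, beta=0.5, c=0.5):
    """Frangi-style 2D vesselness (minimal sketch): eigenvalues of the
    image Hessian flag bright tubular structures (|l1| small, l2 << 0)."""
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tmp = np.sqrt(((gxx - gyy) / 2) ** 2 + gxy ** 2)
    mean = (gxx + gyy) / 2
    l1, l2 = mean + tmp, mean - tmp            # l1 >= l2 everywhere
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)     # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)             # second-order structureness
    v = (np.exp(-rb ** 2 / (2 * beta ** 2))
         * (1 - np.exp(-s ** 2 / (2 * c ** 2))))
    v[l2 > 0] = 0.0                            # keep bright-on-dark ridges only
    return v

# Synthetic image: one bright horizontal "vessel" on a dark background.
img = np.zeros((32, 32))
img[16, 4:28] = 1.0

v = vesselness_2d(img)
print(v[16, 16], v[4, 4])   # high on the vessel, ~0 in the background
```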
Health condition identification of multi-stage planetary gearboxes using a mRVM-based method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Liu, Zongyao; Wu, Xionghui; Li, Naipeng; Chen, Wu; Lin, Jing
2015-08-01
Multi-stage planetary gearboxes are widely applied in the aerospace, automotive and heavy industries. Their key components, such as gears and bearings, can easily suffer damage due to tough working environments. Health condition identification of planetary gearboxes aims to prevent accidents and save costs. This paper proposes a method based on the multiclass relevance vector machine (mRVM) to identify the health condition of multi-stage planetary gearboxes. In this method, an mRVM algorithm is adopted as a classifier, and two features, i.e. accumulative amplitudes of carrier orders (AACO) and energy ratio based on difference spectra (ERDS), are used as the input of the classifier to classify different health conditions of multi-stage planetary gearboxes. To test the proposed method, seven health conditions of a two-stage planetary gearbox are considered and vibration data are acquired from the planetary gearbox under different motor speeds and loading conditions. The results of three tests based on different data show that the proposed method obtains improved identification performance and robustness compared with the existing method.
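scikit-learn has no mRVM, so as a hedged illustration a multinomial logistic regression stands in for the probabilistic multiclass classifier, trained on two synthetic features playing the role of AACO and ERDS (all values invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Three hypothetical health conditions, each a cluster in the
# 2-D (AACO, ERDS) feature plane; cluster centers are made up.
centers = {0: (1.0, 1.0), 1: (3.0, 1.0), 2: (2.0, 3.0)}

X = np.vstack([rng.normal(c, 0.2, size=(60, 2)) for c in centers.values()])
y = np.repeat(list(centers), 60)

# Multinomial logistic regression as an mRVM stand-in: like the mRVM it
# outputs class probabilities, though without the sparse relevance vectors.
clf = LogisticRegression().fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```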
Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lin; Shao, Sihong; E, Weinan
2012-11-06
We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt2, Au2, TlF, and Bi2Se3 indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
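The underlying eigensolver is available in SciPy. A sketch of preconditioned LOBPCG on a stand-in sparse operator (a 1-D Laplacian, with a direct solve playing the role of the paper's filter F; no DKS physics here):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, factorized, lobpcg

# Smallest eigenpairs of a sparse 1-D Dirichlet Laplacian, computed
# blockwise without forming the full dense spectrum.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()

solve = factorized(A)                       # sparse LU, reused per apply
M = LinearOperator((n, n), matvec=solve)    # preconditioner M ~ A^-1

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 4))                 # random initial block of 4

vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-10, maxiter=200)

# Analytic eigenvalues of this Laplacian for comparison.
exact = 2 - 2 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
print(np.sort(vals), exact)
```

Using an exact inverse as preconditioner is deliberately heavy-handed; the point of the paper's filter F is to achieve a comparable effect cheaply while also steering the iteration away from the negative energy band.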
Solar Powered Liquid Desiccant Air Conditioner for Low-Electricity Humidity Control
2012-07-01
thermal comfort conditions. Liquid desiccants are hygroscopic solutions that can nevertheless be easily pumped and applied within heating, ventilating, and air conditioning (HVAC) equipment as necessary.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
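The l1 reformulation can be demonstrated on a tiny chain graph with an off-the-shelf LP solver standing in for the interior-point method (unary costs and weights are invented; the LP optimum of this total-variation form is integral, so thresholding recovers the cut):

```python
import numpy as np
from scipy.optimize import linprog

# Graph-cut energy E(x) = sum_i theta_i*x_i + sum_(i,j) w*|x_i - x_j|,
# relaxed to a linear program over x in [0,1] with edge slacks e >= |x_i - x_j|.
theta = np.array([-5.0, -5.0, -5.0, 5.0, 5.0, 5.0])  # unary label costs
n = theta.size
edges = [(i, i + 1) for i in range(n - 1)]           # chain graph
w = 1.0

c = np.concatenate([theta, w * np.ones(len(edges))])  # x's, then slacks
A_ub, b_ub = [], []
for k, (i, j) in enumerate(edges):
    row = np.zeros(n + len(edges))
    row[i], row[j], row[n + k] = 1.0, -1.0, -1.0      # x_i - x_j <= e_k
    A_ub.append(row)
    b_ub.append(0.0)
    row = -row.copy()
    row[n + k] = -1.0                                 # x_j - x_i <= e_k
    A_ub.append(row)
    b_ub.append(0.0)

bounds = [(0, 1)] * n + [(0, None)] * len(edges)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
labels = (res.x[:n] > 0.5).astype(int)
print(labels)   # [1 1 1 0 0 0]: one cut edge beats flipping any unary
```

The paper's contribution is to solve the equivalent unconstrained l1 problem via sparse Laplacian systems instead of a generic LP, but the energy being minimized is the same.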
One-step formation and sterilization of gellan and hyaluronan nanohydrogels using autoclave.
Montanari, Elita; De Rugeriis, Maria Cristina; Di Meo, Chiara; Censi, Roberta; Coviello, Tommasina; Alhaique, Franco; Matricardi, Pietro
2015-01-01
The sterilization of nanoparticles for biomedical applications is one of the challenges that must be faced in the development of nanoparticulate systems. Usually, autoclave sterilization cannot be applied because of stability concerns when polymeric nanoparticles are involved. This paper describes an innovative method which allows the preparation and, at the same time, the sterilization of self-assembling nanohydrogels (NHs) obtained with cholesterol-derivatized gellan and hyaluronic acid in a single autoclave step. Moreover, by using this approach, NHs can be easily loaded with drugs while they form in the autoclave. The obtained NHs dispersion can be lyophilized in the presence of a cryoprotectant, leading to the original NHs after re-dispersion in water.
Recovery of Pb-Sn Alloy and Copper from Photovoltaic Ribbon in Spent Solar Module
NASA Astrophysics Data System (ADS)
Lee, Jin-Seok; Ahn, Young-Soo; Kang, Gi-Hwan; Wang, Jei-Pil
2017-09-01
This research attempted to recover metal alloy and copper from the photovoltaic ribbon (PV ribbon) of spent solar modules by means of thermal treatment. A newly proposed thermal method was applied to remove the coating layer, composed of tin and lead, and to separate the copper substrate. Under a reducing atmosphere with CH4 gas, the coating layer was easily melted in the temperature range of 700 °C to 800 °C. As a result, metal alloy and copper substrate were successfully obtained, and their chemical compositions were examined by inductively coupled plasma mass spectrometry (ICP-MS), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDS).
NASA Astrophysics Data System (ADS)
Reichardt, Sven; Wirtz, Ludger
2017-05-01
We present the results of a diagrammatic, fully ab initio calculation of the G peak intensity of graphene. The flexibility and generality of our approach enables us to go beyond the previous analytical calculations in the low-energy regime. We study the laser and Fermi energy dependence of the G peak intensity and analyze the contributions from resonant and nonresonant electronic transitions. In particular, we explicitly demonstrate the importance of quantum interference and nonresonant states for the G peak process. Our method of analysis and computational concept is completely general and can easily be applied to study other materials as well.
Adopting software quality measures for healthcare processes.
Yildiz, Ozkan; Demirörs, Onur
2009-01-01
In this study, we investigated the adoptability of software quality measures for healthcare process measurement. The quality measures of ISO/IEC 9126 are redefined from a process perspective to build a generic healthcare process quality measurement model. The case study research method is used, and the model is applied to a public hospital's Entry to Care process. After the application, weak and strong aspects of the process can be easily observed. Access auditability, fault removal, completeness of documentation, and machine utilization are weak aspects, and these aspects are candidates for process improvement. On the other hand, functional completeness, fault ratio, input validity checking, response time, and throughput time are the strong aspects of the process.
Rui, Yu-kui; Luo, Yun-bo; Huang, Kun-lun; Wang, Wei-min; Zhang, Lu-da
2005-10-01
With the rapid development of GMOs, more and more GMO food has been pouring into the market, and much attention has been paid to GMO labeling amid the controversy over GMO safety. Transgenic corn and its parental lines were scanned by continuous-wave near-infrared diffuse reflectance spectroscopy over the range of 12000-4000 cm(-1); the resolution was 4 cm(-1), 64 scans were accumulated, and the BP (back-propagation) algorithm was applied for data processing. The GMO food was easily resolved. Near-infrared diffuse reflectance spectroscopy is unpolluting and inexpensive compared with PCR and ELISA, so it is a very promising detection method for GMO food.
Vidal-García, Marta; Bandara, Lashi; Keogh, J Scott
2018-05-01
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
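The rigid-rotation step is essentially the Kabsch algorithm for removing random translation and rotation between landmark configurations. A minimal NumPy sketch (not the ShapeRotator code itself; landmark values are random for illustration):

```python
import numpy as np

def rigid_align(A, B):
    """Remove translation and rotation: find R, t minimizing ||R A + t - B||
    via the Kabsch SVD. A, B are (n_landmarks, 3) coordinate arrays."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (A - ca) @ R.T + cb, R

# Hypothetical landmark set, then a rotated and translated copy of it.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz.T + np.array([2.0, -1.0, 0.5])

aligned, R = rigid_align(A, B)
rmsd = np.sqrt(np.mean(np.sum((aligned - B) ** 2, axis=1)))
print(f"RMSD after alignment: {rmsd:.2e}")
```

Because the method works directly on Cartesian coordinates, the same alignment can be applied independently to each articulated substructure before joint analysis, which is the point made in the abstract.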
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
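The core of a decision curve is the net-benefit formula NB(pt) = TP/N - (FP/N) * pt/(1 - pt), evaluated across threshold probabilities. A minimal sketch on a toy cohort (the prevalence and the model's predictions are invented):

```python
import numpy as np

def net_benefit(y_true, p_hat, pt):
    """Net benefit of treating everyone with predicted risk >= pt:
    NB = TP/N - (FP/N) * pt / (1 - pt)."""
    y_true = np.asarray(y_true)
    treat = np.asarray(p_hat) >= pt
    n = y_true.size
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

# Toy cohort: prevalence ~0.3, a model whose predictions track the truth.
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.3).astype(int)
p = np.clip(0.3 * y + 0.2 * rng.random(1000), 0, 1)   # hypothetical model

thresholds = np.linspace(0.05, 0.5, 10)
curve_model = [net_benefit(y, p, t) for t in thresholds]
curve_all = [net_benefit(y, np.ones_like(p), t) for t in thresholds]
print(curve_model[4], curve_all[4])   # model vs treat-all at pt = 0.25
```

The extensions discussed in the paper (cross-validation, confidence intervals, censored data) all wrap this same per-threshold computation.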
NASA Astrophysics Data System (ADS)
Singh, R. A.; Satyanarayana, N.; Kustandi, T. S.; Sinha, S. K.
2011-01-01
Micro/nano-electro-mechanical-systems (MEMS/NEMS) are miniaturized devices built at micro/nanoscales. At these scales, the surface/interfacial forces are extremely strong and they adversely affect the smooth operation and the useful operating lifetimes of such devices. When these forces manifest in severe forms, they lead to material removal and thereby reduce the wear durability of the devices. In this paper, we present a simple, yet robust, two-step surface modification method to significantly enhance the tribological performance of MEMS/NEMS materials. The two-step method involves oxygen plasma treatment of polymeric films and the application of a nanolubricant, namely perfluoropolyether. We apply the two-step method to the two most important MEMS/NEMS structural materials, namely silicon and SU8 polymer. On applying surface modification to these materials, their initial coefficient of friction reduces by ~4-7 times and the steady-state coefficient of friction reduces by ~2.5-3.5 times. Simultaneously, the wear durability of both the materials increases by >1000 times. The two-step method is time effective as each of the steps takes the time duration of approximately 1 min. It is also cost effective as the oxygen plasma treatment is a part of the MEMS/NEMS fabrication process. The two-step method can be readily and easily integrated into MEMS/NEMS fabrication processes. It is anticipated that this method will work for any kind of structural material from which MEMS/NEMS are or can be made.
Binary optimization for source localization in the inverse problem of ECG.
Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf
2014-09-01
The goal of ECG imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to the heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference of convex functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
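The Tikhonov baseline the paper contrasts against can be sketched in closed form; the smoothing-kernel forward operator and noise level below are invented stand-ins for the ECGI transfer matrix:

```python
import numpy as np

# Tikhonov regularization for an ill-posed linear problem A x = b:
# x_lam = argmin ||A x - b||^2 + lam ||x||^2, solved in closed form.
# Tiny lam amplifies noise; large lam over-smooths, which is the
# drawback motivating the paper's binary formulation.
rng = np.random.default_rng(0)
n = 60
t = np.linspace(0, 1, n)
# Severely ill-conditioned forward operator: a Gaussian smoothing kernel.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)

x_true = (np.abs(t - 0.5) < 0.1).astype(float)   # sharp, "binary" source
b = A @ x_true + 0.01 * rng.normal(size=n)

def tikhonov(A, b, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

errors = {}
for lam in (1e-8, 1e-3, 1e+1):
    x = tikhonov(A, b, lam)
    errors[lam] = np.linalg.norm(x - x_true)
    print(f"lam={lam:.0e}  error={errors[lam]:.3f}")
```

With near-zero regularization the noise blows up, while a well-chosen lam recovers only a smoothed version of the sharp source, never its crisp boundary.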
NASA Astrophysics Data System (ADS)
Afanasyeva, Natalia I.; Kolyakov, Sergei F.; Butvina, Leonid N.
1998-04-01
The new method of fiber-optical evanescent wave Fourier transform IR (FEW-FTIR) spectroscopy has been applied to the diagnostics of normal tissue, as well as precancerous and cancerous conditions. The FEW-FTIR technique is nondestructive and sensitive to changes in vibrational spectra in the IR region, without heating or damaging human and animal skin tissue. This technique is therefore an ideal diagnostic tool for tumor and cancer characterization at an early stage of development on a molecular level. The application of fiber optic technology in the middle IR region is relatively inexpensive and can be adapted easily to any commercially available tabletop FTIR spectrometer. This method of diagnostics is fast, remote, and can be applied to many fields. Noninvasive medical diagnostics of skin cancer and other skin diseases in vivo, ex vivo, and in vitro allow for the development of convenient, remote clinical applications in dermatology and related fields. The spectral variations from normal to pathological skin tissue and environmental influences on skin have been measured and assigned in the region of 850-4000 cm-1. The lipid structure changes are discussed. We are able to develop spectral histopathology as a fast and informative tool of analysis.
NASA Technical Reports Server (NTRS)
Didlake, Anthony C., Jr.; Heymsfield, Gerald M.; Tian, Lin; Guimond, Stephen R.
2015-01-01
The coplane analysis technique for mapping the three-dimensional wind field of precipitating systems is applied to the NASA High Altitude Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a dual-frequency Doppler radar system with two downward pointing and conically scanning beams. The coplane technique interpolates radar measurements to a natural coordinate frame, directly solves for two wind components, and integrates the mass continuity equation to retrieve the unobserved third wind component. This technique is tested using a model simulation of a hurricane and compared to a global optimization retrieval. The coplane method produced lower errors for the cross-track and vertical wind components, while the global optimization method produced lower errors for the along-track wind component. Cross-track and vertical wind errors were dependent upon the accuracy of the estimated boundary condition winds near the surface and at nadir, which were derived by making certain assumptions about the vertical velocity field. The coplane technique was then applied successfully to HIWRAP observations of Hurricane Ingrid (2013). Unlike the global optimization method, the coplane analysis allows for a transparent connection between the radar observations and specific analysis results. With this ability, small-scale features can be analyzed more adequately and erroneous radar measurements can be identified more easily.
Bolliger, Stephan A; Thali, Michael J; Bolliger, Michael J; Kneubuehl, Beat P
2010-11-01
By measuring the total crack lengths (TCL) along a gunshot wound channel simulated in ordnance gelatine, one can calculate the energy transferred by a projectile to the surrounding tissue along its course. Visual quantitative TCL analysis of cut slices in ordnance gelatine blocks is unreliable due to the poor visibility of cracks and the likely introduction of secondary cracks resulting from slicing. Furthermore, gelatine TCL patterns are difficult to preserve because of the deterioration of the internal structures of gelatine with age and the tendency of gelatine to decompose. By contrast, using computed tomography (CT) software for TCL analysis in gelatine, cracks on 1-cm thick slices can be easily detected, measured and preserved. In this experiment, CT TCL analyses were applied to gunshots fired into gelatine blocks by three different ammunition types (9-mm Luger full metal jacket, .44 Remington Magnum semi-jacketed hollow point and 7.62 × 51 RWS Cone-Point). The resulting TCL curves reflected the three projectiles' capacity to transfer energy to the surrounding tissue very accurately and showed clearly the typical energy transfer differences. We believe that CT is a useful tool in evaluating gunshot wound profiles using the TCL method and is indeed superior to conventional methods applying physical slicing of the gelatine.
Diagnostics of boundary layer transition by shear stress sensitive liquid crystals
NASA Astrophysics Data System (ADS)
Shapoval, E. S.
2016-10-01
Previous research indicates that the problem of boundary layer transition visualization on metal models in wind tunnels (WT), a fundamental question in experimental aerodynamics, has not yet been solved. At TsAGI, together with the Khristianovich Institute of Theoretical and Applied Mechanics (ITAM), a method of shear-stress-sensitive liquid crystals (LC) allowing flow visualization was proposed. This method allows testing several flow conditions in one wind tunnel run and does not require covering the investigated model with a special heat-insulating coating that spoils the model geometry. The LC coating is easily applied to the model surface by spray or even by brush. Its thickness is about 40 micrometers and it does not degrade the surface quality. Initially the coating has a definite color; under shear stress the LC coating changes color, and this change is proportional to the shear stress. The whole process can be observed visually and is recorded by camera during the tests. The findings of the research showed that it is possible to visualize boundary layer transition, flow separation, shock waves and the flow pattern as a whole. The proposed method of shear-stress-sensitive liquid crystals is thus promising for future research.
Kremen, Arie; Tsompanakis, Yiannis
2010-04-01
The slope-stability of a proposed vertical extension of a balefill was investigated in the present study, in an attempt to determine a geotechnically conservative design, compliant with New Jersey Department of Environmental Protection regulations, that maximizes the utilization of unclaimed disposal capacity. Conventional geotechnical analytical methods are generally limited to well-defined failure modes, which may not occur in landfills or balefills due to the presence of preferential slip surfaces. In addition, these models assume an a priori stress distribution to solve essentially indeterminate problems. In this work, a different approach has been applied, which avoids several of the drawbacks of conventional methods. Specifically, the analysis was performed in a two-stage process: (a) calculation of the stress distribution, and (b) application of an optimization technique to identify the most probable failure surface. The stress analysis was performed using a finite element formulation, and the failure surface was located by a dynamic programming optimization method. A sensitivity analysis was performed to evaluate the effect of the various waste strength parameters of the underlying mathematical model on the results, namely the factor of safety of the landfill. Although this study focuses on the stability investigation of an expanded balefill, the methodology presented can easily be applied to general geotechnical investigations.
Calvopiña, Manuel; Buendía-Sánchez, María; López-Abán, Julio; Vicente, Belén; Muro, Antonio
2018-01-01
Amphimeriasis, a fish-borne zoonotic disease caused by the liver fluke Amphimerus spp., has recently been reported as an emerging disease affecting an indigenous Amerindian group, the Chachi, living in Ecuador. The only method for diagnosing amphimeriasis has been the microscopic detection of parasite eggs in patients' stool samples, which has very low sensitivity. Our group developed an ELISA technique for detection of anti-Amphimerus IgG in human sera and a molecular method based on LAMP technology (named LAMPhimerus) for specific and sensitive parasite DNA detection. The LAMPhimerus method proved to be much more sensitive than classical parasitological methods for amphimeriasis diagnosis using human stool samples. The objective of this work is to demonstrate the feasibility of using dried stool samples on filter paper as a source of DNA, in combination with our previously designed LAMPhimerus assay, for successful Amphimerus sp. detection in clinical stool samples. A total of 102 untreated and undiluted stool samples collected from the Chachi population were spread as a thin layer onto common filter paper for easy transportation to our laboratory and stored at room temperature for one year until DNA extraction. When the LAMPhimerus method was applied for Amphimerus sp. DNA detection, a higher number of positive results was obtained (61/102; 59.80%) in comparison to parasitological methods (38/102; 37.25%), including 28/61 (45.90%) microscopy-confirmed Amphimerus sp. infections. The diagnostic sensitivity and specificity of our LAMPhimerus assay were calculated to be 79.17% and 65.98%, respectively. We demonstrate, for the first time, that common filter paper is useful for easy collection and long-term storage of human stool samples for later DNA extraction and molecular analysis of human-parasitic trematode eggs.
This simple, economical and easily handled method, combined with the specific and sensitive LAMPhimerus assay, has the potential to be used as an effective molecular large-scale screening test for amphimeriasis-endemic areas. PMID:29444135
Accurate determination of the geoid undulation N
NASA Astrophysics Data System (ADS)
Lambrou, E.; Pantazis, G.; Balodimos, D. D.
2003-04-01
This work, related to the activities of the CERGOP Study Group "Geodynamics of the Balkan Peninsula", presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components ξ, η of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components ξ, η of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods cannot give accurate and reliable results.
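The classical astrogeodetic relation behind such determinations can be sketched as follows: along an azimuth α the deflection component is ε = ξ cos α + η sin α, and between two nearby stations the geoid undulation change is approximately ΔN ≈ -((ε1 + ε2)/2)·s (trapezoidal integration over the baseline of length s). A small illustration of the generic formula (not the authors' reduction software; angles in radians, distance in metres):

```python
import math

def deflection_component(xi, eta, azimuth_rad):
    """Deflection of the vertical along azimuth alpha: eps = xi*cos(a) + eta*sin(a)."""
    return xi * math.cos(azimuth_rad) + eta * math.sin(azimuth_rad)

def delta_N(xi1, eta1, xi2, eta2, azimuth_rad, distance_m):
    """Astrogeodetic levelling between two stations (trapezoidal rule):
    dN = -0.5 * (eps1 + eps2) * s, with xi/eta in radians and s in metres."""
    e1 = deflection_component(xi1, eta1, azimuth_rad)
    e2 = deflection_component(xi2, eta2, azimuth_rad)
    return -0.5 * (e1 + e2) * distance_m
```

For example, a constant north-south deflection of 1 arcsec (about 4.848e-6 rad) over a 1 km baseline corresponds to a ΔN of roughly -4.8 mm, which matches the millimetre-level accuracy regime discussed in the abstract.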
CP-CHARM: segmentation-free image classification made accessible.
Uhlmann, Virginie; Singh, Shantanu; Carpenter, Anne E
2016-01-27
Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely-used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM's results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. The proposed method preserves the strengths of WND-CHARM - it extracts a wide variety of morphological features directly on whole images thereby avoiding the need for cell segmentation, but additionally, it makes the methods easily accessible for researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole image classification strategy.
Improved arrival-date estimates of Arctic-breeding Dunlin (Calidris alpina arcticola)
Doll, Andrew C.; Lanctot, Richard B.; Stricker, Craig A.; Yezerinac, Stephen M.; Wunder, Michael B.
2015-01-01
The use of stable isotopes in animal ecology depends on accurate descriptions of isotope dynamics within individuals. The prevailing assumption that laboratory-derived isotopic parameters apply to free-living animals is largely untested. We used stable carbon isotopes (δ13C) in whole blood from migratory Dunlin (Calidris alpina arcticola) to estimate an in situ turnover rate and individual diet-switch dates. Our in situ results indicated that turnover rates were higher in free-living birds, in comparison to the results of an experimental study on captive Dunlin and estimates derived from a theoretical allometric model. Diet-switch dates from all 3 methods were then used to estimate arrival dates to the Arctic; arrival dates calculated with the in situ turnover rate were later than those with the other turnover-rate estimates, substantially so in some cases. These later arrival dates matched dates when local snow conditions would have allowed Dunlin to settle, and agreed with anticipated arrival dates of Dunlin tracked with light-level geolocators. Our study presents a novel method for accurately estimating arrival dates for individuals of migratory species in which return dates are difficult to document. This may be particularly appropriate for species in which extrinsic tracking devices cannot easily be employed because of cost, body size, or behavioral constraints, and in habitats that do not allow individuals to be detected easily upon first arrival. Thus, this isotopic method offers an exciting alternative approach to better understand how species may be altering their arrival dates in response to changing climatic conditions.
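The arrival-date logic described above rests on inverting a one-compartment isotopic turnover model, δ(t) = δ_new + (δ_old - δ_new)·e^(-λt), for the elapsed time t since the diet switch. A minimal sketch of that standard model (illustrative only; the turnover rate λ and the end-member δ values below are placeholders, not the Dunlin estimates from the study):

```python
import math

def days_since_diet_switch(delta_obs, delta_old, delta_new, lam):
    """Invert the one-compartment turnover model
    delta(t) = delta_new + (delta_old - delta_new) * exp(-lam * t)
    for t, the number of days since the diet switch (lam in 1/day)."""
    frac = (delta_obs - delta_new) / (delta_old - delta_new)
    return -math.log(frac) / lam
```

The estimated diet-switch (arrival) date is then simply the blood-sampling date minus t days; a higher in situ λ, as the study reports, yields a smaller t and hence a later inferred arrival.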
Zhang, Y; Cedergren, R A; Nieuwenhuis, T J; Hollingsworth, R I
1993-02-01
A simple, sensitive method for the structural characterization of oligosaccharides by fast atom bombardment-mass spectrometry (FAB-MS) has been designed. Oligosaccharides are labeled with a uv chromophore (which also serves as a charge stabilizing group) and with a hydrophobic alkyl tail. The chromophore, a 2,4-dinitrophenyl group, aids uv detection during HPLC and stabilizes negative ion species formed during analysis by FAB-MS. The hydrophobic tail, provided by an octyl group, enhances the surface activity of the analytes and makes them amenable to separation by reverse-phase chromatography using a C18 bonded phase. This method was applied to the structural analysis of the components of a mixture of starch maltodextrins with a degree of polymerization 1-16, to the analysis of the structure of pure maltohexaose, and to a previously characterized oligosaccharide from a Rhizobium capsular polysaccharide. The method gave a good yield of [M-H]- anions for the derivatized compounds, which in most cases were detectable at a level of about 1 pmol. In the case of maltohexaose, four series of sequence anions corresponding to sequential loss of glycosyl residues from the reducing and nonreducing end by different mechanisms were observed. The mixture of derivatized malto-oligosaccharides could easily be separated by HPLC. Based on the relative proportions of the individual oligomers in the mixture calculated from HPLC analysis, even though the higher oligomers were present in amounts of about 0.1%, they could still be easily detected in mass spectra of the entire mixture.(ABSTRACT TRUNCATED AT 250 WORDS)
2014-01-01
Background: The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However, evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods: We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, allowing trials to have various sizes, e.g. 25% of the trials being 10 times larger than the smaller trials. We also compared the number of "positive" (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with ≥3 studies of interventions from the Cochrane Database of Systematic Reviews. Results: The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and to a far lesser extent HKSJ, had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions: Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. 
Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes. PMID:24548571
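The two estimators compared above can be sketched in a few lines from their standard textbook formulas: DL estimates the between-study variance τ² from Cochran's Q, and HKSJ replaces the usual variance of the pooled estimate with a weighted residual variance, referred to a t distribution with k-1 degrees of freedom. This is a generic illustration (effect estimates y with within-study variances v), not the authors' conversion procedure:

```python
import math

def dl_tau2(y, v):
    """DerSimonian-Laird between-study variance estimate from Cochran's Q."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sw
    Q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    k = len(y)
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (Q - (k - 1)) / c)

def hksj(y, v):
    """Random-effects pooled estimate with the HKSJ variance.
    Returns (mu, se); mu/se is compared to a t distribution with k-1 df."""
    tau2 = dl_tau2(y, v)
    w = [1.0 / (vi + tau2) for vi in v]
    sw = sum(w)
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sw
    k = len(y)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / ((k - 1) * sw)
    return mu, math.sqrt(q)
```

Because both methods use the same point estimate, "converting" a DL result to HKSJ only requires rescaling the standard error and switching from the normal to the t reference distribution, which is the simple conversion the abstract alludes to.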
Hoque, Sheikh Ariful; Hoshino, Hiroo; Anwar, Kazi Selim; Tanaka, Atsushi; Shinagawa, Masahiko; Hayakawa, Yuko; Okitsu, Shoko; Wada, Yuichi; Ushijima, Hiroshi
2013-02-01
The postnatal transmission of human immunodeficiency virus (HIV) from mothers to children occurs through breastfeeding. Although heat treatment of expressed breast milk is a promising approach to make breastfeeding safer, it is still not popular, mainly because the recommended procedures are difficult to follow, or time-consuming, or because mothers do not know which temperature is sufficient to inactivate HIV without destroying the nutritional elements of milk. To overcome these drawbacks, a simple and rapid method of heat treatment that a mother could perform with regular household materials applying her day-to-day art of cooking was examined. This structured experiment has demonstrated that both cell-free and cell-associated HIV type 1 (HIV-1) in expressed breast milk could be inactivated once the temperature of milk reached 65°C. Furthermore, a heating method as simple as heating the milk in a pan over a stove to 65°C inhibited HIV-1 transmission while retaining milk's key nutritional elements, for example, total protein, IgG, IgA, and vitamin B12. This study has highlighted a simple, handy, and cost-effective method of heat treatment of expressed breast milk that mothers infected with HIV could apply easily and with more confidence. Copyright © 2012 Wiley Periodicals, Inc.
Curuksu, Jeremy; Zacharias, Martin
2009-03-14
Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly
Mendes-Santos, Liana Chaves; Mograbi, Daniel; Spenciere, Bárbara; Charchat-Fichman, Helenice
2015-01-01
The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. Objective: The aims of this study were to analyze the performance of elderly on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). Methods: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. Results and Conclusion: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging. PMID:29213954
[Investigation of hypokalemia].
Lodin, Karin; Palmér, Mats
2015-12-15
Most causes of hypokalemia can be identified relatively easily through a thorough medical history and basic laboratory sampling. Difficult cases of hypokalemia, however, should be investigated systematically to identify the underlying cause so that successful long-term treatment can be applied.
An RBF-FD closest point method for solving PDEs on surfaces
NASA Astrophysics Data System (ADS)
Petras, A.; Ling, L.; Ruuth, S. J.
2018-10-01
Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.
Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model
NASA Astrophysics Data System (ADS)
Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.
2009-04-01
The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively, was applied. In order to avoid the well-known dependence of the Baum-Welch algorithm on its initial conditions, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this issue is complex and influences the whole analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was included. It tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
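The best-state-sequence step mentioned above is the standard Viterbi dynamic program. A minimal textbook implementation for a discrete-observation HMM (a generic illustration, not the GAMM code; the two-state model and probabilities in the usage example are toy assumptions):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for a discrete-observation HMM."""
    # V[t][s] = (best probability of any path ending in state s at time t,
    #            predecessor state at time t-1)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({s: max(((prev[ps][0] * trans_p[ps][s] * emit_p[s][o], ps)
                          for ps in states), key=lambda t: t[0])
                  for s in states})
    # Backtrack from the most likely final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for layer in reversed(V[1:]):
        path.append(layer[path[-1]][1])
    return path[::-1]

# Toy "sticky" two-state model: the decoded path segments the sequence.
states = ['A', 'B']
start = {'A': 0.9, 'B': 0.1}
trans = {'A': {'A': 0.9, 'B': 0.1}, 'B': {'A': 0.1, 'B': 0.9}}
emit = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.1, 'y': 0.9}}
```

In a left-to-right segmentation model as in GAMM, the transition matrix is additionally constrained so states can only advance, and the decoded state boundaries are the change points.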
Rapid large area fabrication of multiscale through-hole membranes.
Tahk, Dongha; Paik, Sang-Min; Lim, Jungeun; Bang, Seokyoung; Oh, Soojung; Ryu, Hyunryul; Jeon, Noo Li
2017-05-16
There are many proposed mechanisms by which single cells can be trapped; among them is the through-hole membrane for the characterization of individual microorganisms. Due to the small scale of the fabricated pores, the construction of through-hole membranes on a large scale and with relatively large areas faces many difficulties. This paper describes novel fabrication methods for a large-area, freestanding micro/nano through-hole membrane constructed from versatile membrane materials using through-hole membranes on a microfluidic chip (THMMC). This process can rapidly (<20 min) fabricate membranes with high-fidelity multiscale hole sizes without residual layers. The through-hole size was easily customizable from the micro- to the nanoscale, with low or high aspect ratios, yielding reliable membranes. Also, the rigidity and biocompatibility of the through-hole membrane are easily tunable by simple injection of versatile membrane materials to obtain a large area (up to 3600 mm2). Membranes produced in this manner were then applied as a proof of concept for the isolation, cultivation, and quantification of individual micro-algal cells for selection with respect to growth rate, while controlling quorum-sensing-mediated metabolic and proliferative changes.
Deep Learning for Image-Based Cassava Disease Detection.
Ramcharan, Amanda; Baranowski, Kelsee; McCloskey, Peter; Ahmed, Babuali; Legg, James; Hughes, David P
2017-01-01
Cassava is the third largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support improved control which will prevent this crisis. Image recognition offers both a cost effective and scalable technology for disease detection. New deep learning models offer an avenue for this technology to be easily deployed on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best trained model accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% for data not used in the training process. Our results show that the transfer learning approach for image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.
A review of plastic waste biodegradation.
Zheng, Ying; Yanful, Ernest K; Bassi, Amarjeet S
2005-01-01
With more and more plastics being employed in human lives and increasing pressure being placed on capacities available for plastic waste disposal, the need for biodegradable plastics and biodegradation of plastic wastes has assumed increasing importance in the last few years. This review looks at the technological advancement made in the development of more easily biodegradable plastics and the biodegradation of conventional plastics by microorganisms. Additives, such as pro-oxidants and starch, are applied in synthetic materials to modify and make plastics biodegradable. Recent research has shown that thermoplastics derived from polyolefins, traditionally considered resistant to biodegradation in the ambient environment, are biodegraded following photo-degradation and chemical degradation. Thermoset plastics, such as aliphatic polyester and polyester polyurethane, are easily attacked by microorganisms directly because of the potential hydrolytic cleavage of ester or urethane bonds in their structures. Some microorganisms have been isolated that utilize polyurethane as a sole source of carbon and nitrogen. Aliphatic-aromatic copolyesters have active commercial applications because of their good mechanical properties and biodegradability. Reviewing published and ongoing studies on plastic biodegradation, this paper attempts to draw conclusions on potentially viable methods to reduce the impacts of plastic waste on the environment.
Wang, Zhongshun; Feng, Lei; Xiao, Dongyang; Li, Ning; Li, Yao; Cao, Danfeng; Shi, Zuosen; Cui, Zhanchen; Lu, Nan
2017-11-09
The performance of surface-enhanced Raman scattering (SERS) for detecting trace amounts of analytes depends highly on the enrichment of the diluted analytes into a small region that can be detected. A super-hydrophobic delivery (SHD) process is an excellent way to enrich even femtomolar analytes for SERS detection. However, it is still challenging to easily fabricate a low-detection-limit, high-sensitivity and reproducible SHD-SERS substrate. In this article, we present a cost-effective method with fewer steps to fabricate an SHD-SERS substrate, named the "silver nanoislands on silica spheres" (SNOSS) platform. It is easily prepared via the thermal evaporation of silver onto a layer of super-hydrophobic paint, which contains single-scale surface-fluorinated silica spheres. The SNOSS platform performs reproducible detection, bringing the relative standard deviation down to 8.85% and 5.63% for detecting 10^-8 M R6G in one-spot and spot-to-spot set-ups, respectively. The coefficient of determination (R^2) is 0.9773 for R6G. The SNOSS platform can be applied to the quantitative detection of analytes whose concentrations range from sub-micromolar to femtomolar levels.
Nishi, Mineo; Makishima, Hideo
1996-01-01
A composition for forming anti-reflection film on resist surface which comprises an aqueous solution of a water soluble fluorine compound, and a pattern formation method which comprises the steps of coating a photoresist composition on a substrate; coating the above-mentioned composition for forming anti-reflection film; exposing the coated film to form a specific pattern; and developing the photoresist, are provided. Since the composition for forming anti-reflection film can be coated on the photoresist in the form of an aqueous solution, not only the anti-reflection film can be formed easily, but also, the film can be removed easily by rinsing with water or alkali development. Therefore, by the pattern formation method according to the present invention, it is possible to form a pattern easily with a high dimensional accuracy.
Atila, Alptug; Yilmaz, Bilal
2015-01-01
In this study, simple, fast and reliable cyclic voltammetry (CV), linear sweep voltammetry (LSV), square wave voltammetry (SWV) and differential pulse voltammetry (DPV) methods were developed and validated for the determination of bosentan in pharmaceutical preparations. The proposed methods were based on the electrochemical oxidation of bosentan at a platinum electrode in acetonitrile solution containing 0.1 M TBAClO4. A well-defined oxidation peak was observed at 1.21 V. The calibration curves were linear for bosentan over the concentration ranges of 5-40 µg/mL for LSV and 5-35 µg/mL for SWV and DPV. Intra- and inter-day precision values for bosentan were less than 4.92%, and accuracy (relative error) was better than 6.29%. The mean recovery of bosentan was 100.7% for pharmaceutical preparations. No interference was found from two tablet excipients under the selected assay conditions. The methods developed in this study are accurate and precise and can easily be applied to Tracleer and Diamond tablets as pharmaceutical preparations. PMID:25901151
A direct force model for Galilean invariant lattice Boltzmann simulation of fluid-particle flows
NASA Astrophysics Data System (ADS)
Tao, Shi; He, Qing; Chen, Baiman; Yang, Xiaoping; Huang, Simin
The lattice Boltzmann method (LBM) has been widely used in the simulation of particulate flows involving complex moving boundaries. Owing to the kinetic background of LBM, the bounce-back (BB) rule and the momentum exchange (ME) method can be easily applied to the solid boundary treatment and to the evaluation of the fluid-solid interaction force, respectively. However, it has recently been found that both the BB and ME schemes may violate the principle of Galilean invariance (GI). Some modified BB and ME methods have been proposed to reduce the GI error, but these remedies have subsequently been recognized to be inconsistent with Newton's Third Law. Therefore, in contrast to those corrections based on the BB and ME methods, a unified iterative approach is adopted to handle the solid boundary in the present study. Furthermore, a direct force (DF) scheme is proposed to evaluate the fluid-particle interaction force. The methods preserve the efficiency of the BB and ME schemes, and their accuracy and Galilean invariance are verified and validated in test cases of particulate flows with freely moving particles.
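As a minimal illustration of how naturally the bounce-back rule and the momentum-exchange force fit into LBM, here is a Python sketch for a D2Q9 lattice with a Ladd-style moving-wall term. This is a generic textbook scheme under assumed conventions, not the authors' unified iterative or DF method:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite-direction table
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def bounce_back(f_in, u_wall, rho_w=1.0, cs2=1/3):
    """Halfway bounce-back of all populations hitting a wall moving at
    u_wall (with a Ladd-style moving-wall correction), plus a
    momentum-exchange estimate of the fluid-solid force summed over
    the boundary links."""
    f_out = np.empty_like(f_in)
    force = np.zeros(2)
    for i in range(9):
        j = opp[i]
        corr = 2.0 * w[i] * rho_w * np.dot(c[i], u_wall) / cs2
        f_out[j] = f_in[i] - corr
        # momentum exchanged on this link (incoming minus reflected)
        force += f_in[i] * c[i] - f_out[j] * c[j]
    return f_out, force

# A wall at rest simply reverses each population: for the rest-state
# equilibrium distribution the net momentum-exchange force vanishes
f_ref, F = bounce_back(w.copy(), np.array([0.0, 0.0]))
```

For a moving wall the correction term makes the reflected populations, and hence the exchanged momentum, depend on the wall velocity, which is exactly where the Galilean-invariance subtleties discussed above enter.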
Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2011-04-01
In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems. © 2011 Society for Applied Spectroscopy
Jäkel, Evelyn; den Outer, Peter N; Tax, Rick B; Görts, Peter C; Reinen, Henk A J M
2007-07-10
To establish trends in surface ultraviolet radiation levels, accurate and stable long-term measurements are required. The accuracy of today's measurements has become high enough that even smaller effects influencing instrument sensitivity can be noticed. Laboratory measurements of the sensitivity of the entrance optics have shown a decrease of as much as 0.07-0.1% per degree of temperature increase. Since the entrance optics can heat to more than 45 degrees C in Dutch summers, corrections are necessary. A method is developed to estimate the entrance-optics temperature from pyranometer measurements and meteorological data. The method enables us to correct historic data records for which temperature information is not available. The temperature retrieval method has an uncertainty of less than 2.5 degrees C, resulting in a 0.3% uncertainty in the correction to be performed. The temperature correction improves the agreement between modeled and measured doses, as well as the instrument intercomparison performed within the Quality Assurance of Spectral Ultraviolet Measurements in Europe project. The retrieval method is easily transferable to other instruments.
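The correction described above amounts to dividing each reading by a temperature-dependent relative sensitivity. A hypothetical Python sketch, where the slope k and the reference temperature T_ref are assumed values (k sits mid-range in the reported 0.07-0.1%/deg):

```python
def correct_for_optics_temperature(E_measured, T_optics, T_ref=20.0, k=0.0008):
    """Divide a UV irradiance reading by the temperature-dependent relative
    sensitivity of the entrance optics. k is the fractional sensitivity
    loss per degree C (0.08%/deg, an assumed mid-range value of the
    reported 0.07-0.1%/deg); T_ref is an assumed calibration temperature."""
    sensitivity = 1.0 - k * (T_optics - T_ref)
    return E_measured / sensitivity

# Optics at 45 C: the instrument reads ~2% low, so the correction raises it
E_corr = correct_for_optics_temperature(1.0, 45.0)
```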
A new isometric quadriceps-strengthening exercise using EMG-biofeedback.
Kesemenli, Cumhur C; Sarman, Hakan; Baran, Tuncay; Memisoglu, Kaya; Binbir, Ismail; Savas, Yilmaz; Isik, Cengiz; Boyraz, Ismail; Koc, Bunyamin
2014-01-01
A new isometric contraction quadriceps-strengthening exercise was developed to restore more rapidly the quadriceps strength lost after knee surgery. This study evaluated the results of this new method. Patients were taught to perform the isometric quadriceps-strengthening exercise with the unaffected knee in the supine position, and then performed it with the affected knee. First, patients were taught the classical isometric quadriceps-strengthening exercise, and then our new alternative method: "pull the patella superiorly tightly and hold the leg in the same position for 10 seconds". Afterward, the quadriceps contraction was evaluated using a non-invasive Myomed 932 EMG-biofeedback device (Enraf-Nonius, The Netherlands) with gel-containing 48 mm electrodes (Türklab, Turkey) placed on both knees. The isometric quadriceps-strengthening exercise performed using our new method produced stronger contraction than the classical method (P < 0.01). The new method involving pulling the patella superiorly appears to be a better choice; it can be applied easily, leading to better patient compliance and greater quadriceps force after arthroscopic and other knee surgeries.
Line identification studies using traditional techniques and wavelength coincidence statistics
NASA Technical Reports Server (NTRS)
Cowley, Charles R.; Adelman, Saul J.
1990-01-01
Traditional line identification techniques result in the assignment of individual lines to an atomic or ionic species. These methods may be supplemented by wavelength coincidence statistics (WCS). The strengths and weaknesses of these methods are discussed using spectra of a number of normal and peculiar B and A stars that have been studied independently by both methods. The present results support the overall findings of some earlier studies. WCS would be most useful in a first survey, before traditional methods have been applied. WCS can quickly make a global search for all species and in this way may enable identification of an unexpected spectrum that could easily be omitted entirely from a traditional study. This is illustrated by O I. WCS is subject to the well-known weaknesses of any statistical technique; for example, a predictable number of spurious results are to be expected. The dangers of small-number statistics are illustrated. WCS is at its best relative to traditional methods in finding a line-rich atomic species that is only weakly present in a complicated stellar spectrum.
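The core of WCS can be sketched as counting coincidences between observed wavelengths and a species' laboratory line list, then comparing the count with a Monte Carlo estimate of chance coincidences. A hypothetical Python illustration (tolerance, trial count, and the sigma-based significance measure are assumptions, not the authors' exact procedure):

```python
import numpy as np

def coincidence_statistics(observed, linelist, tol=0.02, n_trials=2000, seed=0):
    """Count observed lines lying within tol (same wavelength units) of a
    laboratory line of the candidate species, then estimate the chance
    coincidence rate by scattering equally many fake lines uniformly over
    the observed interval. Returns (hits, significance in sigmas)."""
    observed = np.sort(np.asarray(observed, dtype=float))
    linelist = np.asarray(linelist, dtype=float)
    hits = sum(np.any(np.abs(linelist - x) <= tol) for x in observed)
    rng = np.random.default_rng(seed)
    lo, hi = observed[0], observed[-1]
    rand_hits = np.array([
        sum(np.any(np.abs(linelist - x) <= tol)
            for x in rng.uniform(lo, hi, size=observed.size))
        for _ in range(n_trials)])
    sigma = (hits - rand_hits.mean()) / (rand_hits.std() + 1e-12)
    return int(hits), float(sigma)

lab = np.arange(4000.0, 4101.0, 10.0)   # 11 laboratory lines of a candidate species
obs = lab + 0.005                        # synthetic "stellar" lines, all coincident
hits, sigma = coincidence_statistics(obs, lab)
```

A high sigma flags a species worth a traditional follow-up identification; the random-trial baseline is exactly where the predictable number of spurious coincidences mentioned above shows up.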
Thermally-Activated Metal-to-Glass Bonding
NASA Technical Reports Server (NTRS)
Gallagher, B. D.
1986-01-01
Hermetic seals formed easily by use of metallo-organic film. Metallo-organic film thermally bonded to glass and soldered or welded to form hermetic seal. Film applied as ink consisting of silver neodecanoate in xylene. Relative amounts of ingredients selected to obtain desired viscosity. Material applied by printing or even by scribing with pen. Sealing technique useful in making solar-cell modules, microelectronic packages, and other hermetic silicon devices.
ERIC Educational Resources Information Center
Gkioulekas, Eleftherios
2013-01-01
Many limits, typically taught as examples of applying the "squeeze" theorem, can be evaluated more easily using the proposed zero-bounded limit theorem. The theorem applies to functions defined as a product of a factor going to zero and a factor that remains bounded in some neighborhood of the limit. This technique is immensely useful…
Fixation and chemical analysis of single fog and rain droplets
NASA Astrophysics Data System (ADS)
Kasahara, M.; Akashi, S.; Ma, C.-J.; Tohno, S.
In the last decade, the importance of global environmental problems has been recognized worldwide. Acid rain is one of the most important of these problems, along with global warming. A grasp of the physical and chemical properties of fog and rain droplets is essential to clarify the physical and chemical processes of acid rain, as well as its effects on forests, materials and ecosystems. We examined the physical and chemical properties of single fog and rain droplets by applying a fixation technique. The sampling method and the treatment procedure for fixing liquid droplets as solid particles were investigated. Small liquid particles such as fog droplets could easily be fixed within a few minutes by exposure to cyanoacrylate vapor. Large liquid particles such as raindrops were also fixed in the same way, but some fixations were imperfect. A freezing method was therefore applied to fix large raindrops; frozen liquid particles remained stable during exposure to cyanoacrylate vapor after freezing. The particle size measurement and the elemental analysis of the fixed particles were performed on an individual basis using a microscope and SEM-EDX, particle-induced X-ray emission (PIXE) and micro-PIXE analyses, respectively. The concentration in raindrops depended on the droplet size and the elapsed time from the beginning of rainfall.
Qumquad: a UML-based approach for remodeling of legacy systems in health care.
Garde, Sebastian; Knaup, Petra; Herold, Ralf
2003-07-01
Health care information systems still comprise legacy systems to a certain extent. For reengineering legacy systems, a thorough remodeling is indispensable. Current modeling techniques like the Unified Modeling Language (UML) do not offer a systematic and comprehensive process-oriented method for remodeling activities. We developed a systematic method for remodeling legacy systems in health care, called Qumquad. Qumquad consists of three major steps: (i) modeling the actual state of the application system, (ii) systematically identifying weak points in this model, and (iii) developing a target concept for the reimplementation that addresses the identified weak points. We applied Qumquad to the remodeling of a documentation and therapy planning system for pediatric oncology (DOSPO). As a result of our remodeling activities we regained an abstract model of the system, an analysis of the current weak points of DOSPO, and possible (partly alternative) solutions to overcome them. Qumquad proved very helpful in the reengineering process of DOSPO, since we now have at our disposal a comprehensive model for the reimplementation of DOSPO that current users of the system agree on. Qumquad can easily be applied to other reengineering projects in health care.
Bi, Wentao; Wang, Man; Yang, Xiaodi; Row, Kyung Ho
2014-07-01
Poly(ionic liquid)-bonded magnetic nanospheres were easily synthesized and applied to the pretreatment and determination of phenolic compounds in water samples, which have detrimental effects on water quality and the health of living beings. The high affinity of poly(ionic liquid)s toward the target compounds as well as the magnetic behavior of Fe3O4 were combined in this material to provide an efficient and simple magnetic solid-phase extraction approach. The adsorption behavior of the poly(ionic liquid)-bonded magnetic nanospheres was examined to optimize the synthesis. Different parameters affecting the magnetic solid-phase extraction of phenolic compounds were assessed in terms of adsorption and recovery. Under the optimal conditions, the proposed method showed excellent detection sensitivity with limits of detection in the range of 0.3-0.8 ng/mL and precision in the range of 1.2-3.3%. This method was also applied successfully to the analysis of real water samples; good spiked recoveries over the range of 82.5-99.2% were obtained. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Selectivity in analytical chemistry: two interpretations for univariate methods.
Dorkó, Zsanett; Verbić, Tatjana; Horvai, George
2015-01-01
Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall in two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error based selectivity is very general and very safe but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint about the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration independent measure of selectivity. In contrast, the error based selectivity concept allows only yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
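For the linear case highlighted above, the concentration-independent selectivity measure reduces to a ratio of sensitivities. A tiny hypothetical Python sketch of that response-surface interpretation:

```python
def linear_selectivity(a, b):
    """For a linear response surface S(c_analyte, c_interferent) =
    a*c_analyte + b*c_interferent + offset, the sensitivity ratio a/b is
    a concentration-independent selectivity measure: how much more signal
    one unit of analyte produces than one unit of interferent."""
    if b == 0.0:
        return float("inf")      # interferent produces no signal at all
    return a / b

sel = linear_selectivity(a=2.0, b=0.1)   # hypothetical sensitivities
```

For a nonlinear response surface the local sensitivities, and hence this ratio, vary with composition, which is exactly why the abstract warns of counterintuitive results there.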
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, J. J., E-mail: johnjosephwilliamson@gmail.com; Evans, R. M. L.
We dynamically simulate fractionation (partitioning of particle species) during spinodal gas-liquid separation of a size-polydisperse colloid, using polydispersity up to ∼40% and a skewed parent size distribution. We introduce a novel coarse-grained Voronoi method to minimise size bias in measuring local volume fraction, along with a variety of spatial correlation functions which detect fractionation without requiring a clear distinction between the phases. These can be applied whether or not a system is phase separated, to determine structural correlations in particle size, and generalise easily to other kinds of polydispersity (charge, shape, etc.). We measure fractionation in both mean size and polydispersity between the phases, its direction differing between model interaction potentials which are identical in the monodisperse case. These qualitative features are predicted by a perturbative theory requiring only a monodisperse reference as input. The results show that intricate fractionation takes place almost from the start of phase separation, so can play a role even in nonequilibrium arrested states. The methods for characterisation of inhomogeneous polydisperse systems could in principle be applied to experiment as well as modelling.
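One of the spatial correlation functions alluded to above can be sketched as the correlation of particle-size fluctuations binned by pair separation: it detects fractionation with no phase assignment at all. A hypothetical Python estimator consistent with that idea (not the authors' code):

```python
import numpy as np

def size_correlation(positions, sizes, r_edges):
    """C(r) = <ds_i ds_j> over particle pairs with separation in
    [r_edges[k], r_edges[k+1]), where ds = size - mean size.
    Positive C at short range means neighbours have similar sizes --
    a fractionation signal that needs no distinction between phases."""
    ds = sizes - sizes.mean()
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    iu = np.triu_indices(len(sizes), k=1)       # each unordered pair once
    r, prod = d[iu], (ds[:, None] * ds[None, :])[iu]
    C = np.empty(len(r_edges) - 1)
    for k in range(len(C)):
        m = (r >= r_edges[k]) & (r < r_edges[k + 1])
        C[k] = prod[m].mean() if m.any() else np.nan
    return C

# Two well-separated clusters: small particles left, large particles right,
# mimicking a fully fractionated two-phase configuration
rng = np.random.default_rng(1)
pos = np.vstack([rng.normal(0.0, 1.0, (10, 3)),
                 rng.normal(0.0, 1.0, (10, 3)) + [100.0, 0.0, 0.0]])
siz = np.array([1.0] * 10 + [2.0] * 10)
C = size_correlation(pos, siz, np.array([0.0, 10.0, 200.0]))
```

In this toy configuration C is positive at short range (like sizes together) and negative at long range (unlike sizes apart), the qualitative signature of fractionation.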
NASA Astrophysics Data System (ADS)
Palazzi, E.
The evaluation of the atmospheric dispersion of a cloud arising from a sudden release of flammable or toxic material is an essential tool for properly designing flares, vents and other safety devices, and for quantifying the potential risk related to existing ones or arising from the various kinds of accidents which can occur in chemical plants. Among the methods developed to treat the important case of upward-directed jets, Hoehne's procedure for determining the behaviour and extent of the flammability zone is extensively used, particularly in petrochemical plants. In a previous study, a substantial simplification of this procedure was achieved by correlating the experimental data with an empirical formula, yielding a mathematical description of the boundaries of the flammable cloud. Following a theoretical approach, a more general model is developed in the present work, applicable to various kinds of design problems and/or risk evaluations regarding upward-directed releases from high-velocity sources. It is also demonstrated that the model gives conservative results if applied outside the range of Hoehne's experimental conditions. Moreover, with simple modifications, the same approach could easily be applied to the atmospheric dispersion of releases directed in any direction.
Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass
Paris, Michael T.; Lafleur, Benoit; Dubin, Joel A.
2017-01-01
Abstract Background Ultrasound is a non-invasive and readily available tool that can be prospectively applied at the bedside to assess muscle mass in clinical settings. The four-site protocol, which images two anatomical sites on each quadriceps, may be a viable bedside method, but its ability to predict musculature has not been compared against whole-body reference methods. Our primary objectives were to (i) compare the four-site protocol's ability to predict appendicular lean tissue mass from dual-energy X-ray absorptiometry; (ii) optimize the predictability of the four-site protocol with additional anatomical muscle thicknesses and easily obtained covariates; and (iii) assess the ability of the optimized protocol to identify individuals with low lean tissue mass. Methods This observational cross-sectional study recruited 96 university and community dwelling adults. Participants underwent ultrasound scans for assessment of muscle thickness and whole-body dual-energy X-ray absorptiometry scans for assessment of appendicular lean tissue. Ultrasound protocols included (i) the nine-site protocol, which images nine anterior and posterior muscle groups in supine and prone positions, and (ii) the four-site protocol, which images two anterior sites on each quadriceps muscle group in a supine position. Results The four-site protocol was strongly associated (R² = 0.72) with appendicular lean tissue mass, but Bland–Altman analysis displayed wide limits of agreement (−5.67, 5.67 kg). Incorporating the anterior upper arm muscle thickness, and covariates age and sex, alongside the four-site protocol, improved the association (R² = 0.91) with appendicular lean tissue and displayed narrower limits of agreement (−3.18, 3.18 kg). The optimized protocol demonstrated a strong ability to identify low lean tissue mass (area under the curve = 0.89).
Conclusions The four‐site protocol can be improved with the addition of the anterior upper arm muscle thickness, sex, and age when predicting appendicular lean tissue mass. This optimized protocol can accurately identify low lean tissue mass, while still being easily applied at the bedside. PMID:28722298
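Statistically, the optimized protocol is a multiple linear regression whose agreement with the reference method is summarized by Bland–Altman limits. A Python sketch on synthetic stand-in data (the predictors, coefficients, and noise level are invented for illustration, not the paper's values):

```python
import numpy as np

def fit_alm_model(X, alm):
    """Ordinary least squares of appendicular lean mass (ALM) on the
    predictor columns of X (e.g. muscle thicknesses, age, sex), with an
    intercept. Returns coefficients, R^2, and Bland-Altman 95% limits
    of agreement of predicted minus measured ALM."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, alm, rcond=None)
    pred = A @ beta
    r2 = 1.0 - ((alm - pred) ** 2).sum() / ((alm - alm.mean()) ** 2).sum()
    diff = pred - alm              # residuals have zero mean with an intercept
    loa = (diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std())
    return beta, r2, loa

# Synthetic data: three predictors with known effects plus measurement noise
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
alm = 20.0 + X @ np.array([2.0, 1.0, 0.5]) + rng.normal(scale=0.5, size=80)
beta, r2, loa = fit_alm_model(X, alm)
```

Adding an informative covariate raises R² and, equivalently, pulls the limits of agreement inward, which is the pattern reported above when arm thickness, age, and sex were added.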
Godbout, Julie; Tremblay, Laurence; Levasseur, Caroline; Lavigne, Patricia; Rainville, André; Mackay, John; Bousquet, Jean; Isabel, Nathalie
2017-01-01
Biological material is at the forefront of research programs, as well as application fields such as breeding, aquaculture, and reforestation. While sophisticated techniques are used to produce this material, all too often there is no strict monitoring during the "production" process to ensure that the specific varieties are the expected ones. Confidence, rather than evidence, is often relied upon when the time comes to start a new experiment or to deploy selected varieties in the field. During the last decade, genomics research has led to the development of important resources, which have created opportunities for easily developing tools to assess the conformity of the material along the production chains. In this study, we present a simple methodology that enables the development of a traceability system which is, in fact, a by-product of previous genomic projects. The plant production system in white spruce (Picea glauca) is used to illustrate the approach. In Quebec, one of the favored strategies to produce elite varieties is to use somatic embryogenesis (SE). In order to detect human errors both upstream and downstream of the white spruce production process, this project had two main objectives: (i) to develop methods that make it possible to trace the origin of plants produced, and (ii) to generate a unique genetic fingerprint that could be used to differentiate each embryogenic cell line and ensure its genetic monitoring. Such a system had to rely on a minimum number of low-cost DNA markers and be easy to use by non-specialists. An efficient marker selection process was operationalized by testing different classification methods on simulated datasets. These datasets were generated using in-house bioinformatics tools that simulated crosses involved in the breeding program, for which genotypes from hundreds of SNP markers were already available. The rate of misidentification was estimated and various sources of mishandling or contamination were identified.
The method can easily be applied to other production systems for which genomic resources are already available. PMID:28791035
Cryptosporidium as a testbed for single cell genome characterization of unicellular eukaryotes.
Troell, Karin; Hallström, Björn; Divne, Anna-Maria; Alsmark, Cecilia; Arrighi, Romanico; Huss, Mikael; Beser, Jessica; Bertilsson, Stefan
2016-06-23
Infections involving multiple genetically distinct populations of a pathogen are frequent, but difficult to detect or describe with current routine methodology. Cryptosporidium sp. is a widespread gastrointestinal protozoan of global significance in both animals and humans. It cannot be easily maintained in culture, and infections with multiple strains have been reported. To explore the potential use of single-cell genomics methodology for revealing genome-level variation in clinical samples from Cryptosporidium-infected hosts, we sorted individual oocysts for subsequent genome amplification and full-genome sequencing. Cells were identified with fluorescent antibodies, with an 80% success rate for the entire single-cell genomics workflow, demonstrating that the methodology can be applied directly to purified fecal samples. Ten amplified genomes from sorted single cells were selected for genome sequencing and compared both to the original population and to a reference genome in order to evaluate the accuracy and performance of the method. Single-cell genome coverage was on average 81% even with a moderate sequencing effort, and by combining the 10 single-cell genomes, the full genome was accounted for. By comparison to the original sample, biological variation could be distinguished and separated from noise introduced in the amplification. As a proof of principle, we have demonstrated the power of applying single-cell genomics to dissect infectious disease caused by closely related parasite species or subtypes. The workflow can easily be expanded and adapted to target other protozoans, and potential applications include mapping genome-encoded traits, virulence, pathogenicity, host specificity and resistance at the level of cells as truly meaningful biological units.
NASA Astrophysics Data System (ADS)
Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher
2015-07-01
Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
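The generalized Taylor series formula at the heart of the technique can be stated as follows (a standard form for Caputo fractional derivatives, quoted here as background rather than from the paper):

```latex
% Generalized (fractional) Taylor expansion about t_0, with Caputo
% derivatives of order k*alpha, 0 < alpha <= 1:
f(t) \;=\; \sum_{k=0}^{\infty}
    \frac{\bigl(D_{t_0}^{\,k\alpha} f\bigr)(t_0)}{\Gamma(k\alpha + 1)}\,
    (t - t_0)^{k\alpha}.
```

In the residual-error approach, the coefficients of a truncated series of this form are fixed by forcing fractional derivatives of the residual error function to vanish at t_0, which is what makes each component easily computable by symbolic software.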
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
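The inverse-cluster-size weighting can be sketched for a plain linear model: each cluster contributes to the estimating equations with weight 1/n_i, so clusters enter with equal total weight regardless of size. A hypothetical Python illustration (the paper's actual setting is additive transformation models for censored failure times, which is more involved):

```python
import numpy as np

def cluster_weighted_ls(clusters):
    """Solve the weighted normal equations
    sum_i (1/n_i) X_i'X_i beta = sum_i (1/n_i) X_i'y_i:
    each cluster's contribution is down-weighted by its size n_i, so that
    an informative cluster size cannot bias the estimate by letting large
    clusters dominate."""
    XtX, Xty = 0.0, 0.0
    for X, y in clusters:              # X: (n_i, p) covariates, y: (n_i,) outcomes
        w = 1.0 / len(y)               # inverse cluster size
        XtX = XtX + w * (X.T @ X)
        Xty = Xty + w * (X.T @ y)
    return np.linalg.solve(XtX, Xty)

# Clusters of very different sizes generated from one true coefficient vector
rng = np.random.default_rng(2)
beta_true = np.array([2.0, -1.0])
clusters = [(X, X @ beta_true)
            for X in (rng.normal(size=(n, 2)) for n in (2, 5, 9, 30))]
beta_hat = cluster_weighted_ls(clusters)
```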
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher
2016-10-01
An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers owing to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving any non-linear model, as they can deal easily with large amounts of data and sophisticated models. This paper reviews the AI approaches that have been applied in sediment modelling, focusing on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, complementary models have proven superior to classical modelling.
New method for blowup of the Euler-Poisson system
NASA Astrophysics Data System (ADS)
Kwong, Man Kam; Yuen, Manwai
2016-08-01
In this paper, we provide a new method for establishing the blowup of C² solutions of the pressureless Euler-Poisson system with attractive forces in R^N (N ≥ 2), with ρ(0, x_0) > 0 and Ω_{0ij}(x_0) = (1/2)[∂_i u_j(0, x_0) − ∂_j u_i(0, x_0)] = 0 at some point x_0 ∈ R^N. By applying the generalized Hubble transformation div u(t, x_0(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = −λ/a(t)^(N−1), a(0) = 1, ȧ(0) = div u(0, x_0)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
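The reduction makes the blowup easy to see numerically: integrating the Emden equation shows the scale function a(t) reaching zero in finite time. A Python sketch with a simple fixed-step integrator (threshold and parameters are illustrative; the paper's results are analytical):

```python
def emden_blowup_time(lam=1.0, N=2, a0=1.0, adot0=0.0, dt=1e-4, t_max=10.0):
    """Semi-implicit Euler integration of the Emden equation
    a'' = -lam / a**(N-1), a(0) = a0, a'(0) = adot0. Returns the first
    time a(t) drops below a small threshold, a numerical proxy for the
    finite-time collapse of a(t)."""
    a, adot, t = a0, adot0, 0.0
    while t < t_max:
        if a <= 1e-3:
            return t                  # scale function collapses: blowup
        adot += -lam / a ** (N - 1) * dt
        a += adot * dt
        t += dt
    return None                       # no collapse within t_max

# For N = 2, lam = 1, a'(0) = 0, energy conservation gives the exact
# collapse time sqrt(pi/2) ~ 1.25, which the integration reproduces
t_blow = emden_blowup_time()
```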
Stoica, Grigoreta M.; Stoica, Alexandru Dan; An, Ke; ...
2014-11-28
The problem of calculating the inverse pole figure (IPF) is analyzed from the perspective of applying time-of-flight neutron diffraction to in situ monitoring of the thermomechanical behavior of engineering materials. On the basis of a quasi-Monte Carlo (QMC) method, a consistent set of grain orientations is generated and used to compute the weighting factors for IPF normalization. The weighting factors are instrument dependent and were calculated for the engineering materials diffractometer VULCAN (Spallation Neutron Source, Oak Ridge National Laboratory). The QMC method is applied to face-centered cubic structures and can easily be extended to other crystallographic symmetries. Examples include 316LN stainless steel loaded in situ in tension at room temperature and an Al-2%Mg alloy, substantially deformed by cold rolling and annealed in situ up to 653 K.
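A quasi-Monte Carlo set of orientations can be sketched by pushing a low-discrepancy sequence through the standard uniform-random-quaternion construction. A hypothetical Python illustration (Halton points plus Shoemake's mapping; a generic QMC sketch, not the instrument-specific VULCAN calculation):

```python
import math

def halton(i, base):
    """Radical-inverse (van der Corput) value of index i in a prime base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_orientations(n):
    """Map a 3D Halton sequence through Shoemake's uniform-quaternion
    construction, giving a deterministic, low-discrepancy set of unit
    quaternions usable as grain orientations."""
    qs = []
    for i in range(1, n + 1):
        u1, u2, u3 = halton(i, 2), halton(i, 3), halton(i, 5)
        s1, s2 = math.sqrt(1.0 - u1), math.sqrt(u1)
        qs.append((s1 * math.sin(2 * math.pi * u2),
                   s1 * math.cos(2 * math.pi * u2),
                   s2 * math.sin(2 * math.pi * u3),
                   s2 * math.cos(2 * math.pi * u3)))
    return qs

orientations = qmc_orientations(100)
```

Because the set is deterministic and evenly spread over orientation space, the IPF weighting factors computed from it are reproducible, unlike those from a plain pseudo-random sample of the same size.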
Conrad, Catharina; Miller, Miles A; Bartsch, Jörg W; Schlomann, Uwe; Lauffenburger, Douglas A
2017-01-01
Proteolytic Activity Matrix Analysis (PrAMA) is a method for simultaneously determining the activities of specific Matrix Metalloproteinases (MMPs) and A Disintegrin and Metalloproteinases (ADAMs) in complex biological samples. In mixtures of unknown proteases, PrAMA infers selective metalloproteinase activities by using a panel of moderately specific FRET-based polypeptide protease substrates in parallel, typically monitored by a plate-reader in a 96-well format. Fluorescence measurements are then quantitatively compared to a standard table of catalytic efficiencies measured from purified mixtures of individual metalloproteinases and FRET substrates. Computational inference of specific activities is performed with an easily used Matlab program, which is provided herein. Thus, we describe PrAMA as a combined experimental and mathematical approach to determine real-time metalloproteinase activities, which has previously been applied to live-cell cultures, cellular lysates, cell culture supernatants, and body fluids from patients.
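The inference step described above amounts to solving a small linear system: observed substrate cleavage rates are modeled as a catalytic-efficiency matrix times a non-negative activity vector. A simplified Python sketch (PrAMA itself is a Matlab program and performs a more careful non-negative inference; the clipping here is a crude stand-in):

```python
import numpy as np

def prama_infer(V, K):
    """Infer metalloproteinase activities a from observed FRET-substrate
    cleavage rates V, given the table K of catalytic efficiencies
    (rows: substrates, columns: enzymes), by least squares on V = K a.
    Negative components are clipped to zero as a crude substitute for a
    proper non-negative least-squares solver."""
    a, *_ = np.linalg.lstsq(K, V, rcond=None)
    return np.clip(a, 0.0, None)

# Toy table: 3 substrates, 2 candidate enzymes, known activities (2, 3)
K = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
V = K @ np.array([2.0, 3.0])
a_hat = prama_infer(V, K)
```

Using more substrates than candidate enzymes, as in the real 96-well panels, overdetermines the system and makes the inferred activities robust to noise in individual wells.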
von Gunten, Konstantin; Alam, Md Samrat; Hubmann, Magdalena; Ok, Yong Sik; Konhauser, Kurt O; Alessi, Daniel S
2017-07-01
A modified Community Bureau of Reference (CBR) sequential extraction method was tested to assess the composition of untreated pyrogenic carbon (biochar) and oil sands petroleum coke. Wood biochar samples were found to contain lower concentrations of metals, but had higher fractions of easily mobilized alkaline earth and transition metals. Sewage sludge biochar was determined to be less recalcitrant and had higher total metal concentrations, with most of the metals found in the more resilient extraction fractions (oxidizable, residual). Petroleum coke was the most stable material, with a similar metal distribution pattern as the sewage sludge biochar. The applied sequential extraction method represents a suitable technique to recover metals from these materials, and is a valuable tool in understanding the metal retaining and leaching capability of various biochar types and carbonaceous petroleum coke samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sizing of single fluorescently stained DNA fragments by scanning microscopy
Laib, Stephan; Rankl, Michael; Ruckstuhl, Thomas; Seeger, Stefan
2003-01-01
We describe an approach to determine DNA fragment sizes based on the fluorescence detection of single adsorbed fragments on specifically coated glass cover slips. The brightness of single fragments stained with the DNA bisintercalation dye TOTO-1 is determined by scanning the surface with a confocal microscope. The brightness of adsorbed fragments is found to be proportional to the fragment length. The method needs only minute amounts of DNA, and inexpensive, easily available surface coatings such as poly-L-lysine, 3-aminopropyltriethoxysilane and polyornithine can be used. We performed DNA sizing of fragment lengths between 2 and 14 kb. Further, we resolved the size distribution before and after an enzymatic restriction digest, with no separation of buffers or enzymes required. DNA sizes were determined within an uncertainty of 7–14%. The proposed method is straightforward and can be applied to standardized microtiter plates. PMID:14602931
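Since brightness is proportional to fragment length, sizing reduces to a linear calibration. A minimal sketch with invented brightness values (not the paper's data):

```python
import numpy as np

# Calibration fragments of known length (kb) vs. measured integrated
# brightness (arbitrary counts); values are made up for illustration.
length_kb = np.array([2.0, 4.0, 7.0, 10.0, 14.0])
brightness = 350.0 * length_kb + np.array([5.0, -8.0, 3.0, -2.0, 4.0])  # noisy

# Fit length as a linear function of brightness
slope, intercept = np.polyfit(brightness, length_kb, 1)

# Size an unknown fragment from its measured brightness
unknown = 2450.0
estimated_kb = slope * unknown + intercept
print(round(estimated_kb, 2))
```

The same calibration-then-predict pattern applies whatever the proportionality constant of the dye and detector happens to be.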
Panel data analysis of cardiotocograph (CTG) data.
Horio, Hiroyuki; Kikuchi, Hitomi; Ikeda, Tomoaki
2013-01-01
Panel data analysis is a statistical method, widely used in econometrics, which deals with two-dimensional panel data collected over time and over individuals. The cardiotocograph (CTG), which monitors fetal heart rate (FHR) by Doppler ultrasound and uterine contractions by strain gage, is commonly used in the intrapartum care of pregnant women. Although the relationship between FHR waveform patterns and outcomes such as umbilical blood gas data at delivery has long been analyzed, no accumulated collection of FHR patterns from a large number of cases exists. Just as time-series economic fluctuations such as consumption trends are studied in econometrics using panel data consisting of time-series and cross-sectional data, we applied this method to CTG data. The panel data, composed of symbolized segments of the FHR pattern, can be easily handled, and a perinatologist can obtain a whole-pattern view of the FHR from the microscopic level of the time-series FHR data.
Effect Of Molecular Rotations On High Intensity Absorption In CO2
NASA Astrophysics Data System (ADS)
Bandrauk, Andre D.; Claveau, Lorraine
1986-10-01
In intense fields, the Rabi frequency ωR = pE/h can easily be of the order of rotational and vibrational energies of molecules. This means that rotations as well as vibrations become strongly perturbed, so that perturbative methods no longer apply. We will show that nonperturbative methods can be derived from the concept of the dressed molecule. This leads to coupled equations which are used to simulate numerically the multiphoton processes which occur at intensities > 10^8 W/cm^2. Furthermore, for multiphoton rotational transitions, one can derive analytical models which help one understand the temporal behaviour of energy flow in a molecule in terms of its dressed spectrum, such as chaotic or regular (nonchaotic) behaviour. These results are of relevance to the manifestation of multiphoton coherences in a CO2 spectrum at very high intensities (I ≈ 10^12 W/cm^2).
Finger-triggered portable PDMS suction cup for equipment-free microfluidic pumping
NASA Astrophysics Data System (ADS)
Lee, Sanghyun; Kim, Hojin; Lee, Wonhyung; Kim, Joonwon
2018-12-01
This study presents a finger-triggered portable polydimethylsiloxane suction cup that enables equipment-free microfluidic pumping. The key feature of this method is that its operation only involves a "pressing-and-releasing" action for the cup placed at the outlet of a microfluidic device, which transports the fluid at the inlet toward the outlet through a microchannel. This method is simple, but effective and powerful. The cup is portable and can easily be fabricated from a three-dimensional printed mold, used without any pre-treatment, reversibly bonded to microfluidic devices without leakage, and applied to various material-based microfluidic devices. The effect of the suction cup geometry and fabrication conditions on the pumping performance was investigated. Furthermore, we demonstrated the practical applications of the suction cup by conducting an equipment-free pumping of thermoplastic-based microfluidic devices and water-in-oil droplet generation.
A new method for E-government procurement using collaborative filtering and Bayesian approach.
Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong
2013-01-01
Nowadays, as Internet services increase faster than ever before, government systems are being reinvented as E-government services. Government procurement sectors therefore face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services and obtain the top-M recommendations, so that the computational load involved can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain, static values but are easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.
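One common way to compare trapezoidal fuzzy numbers is a component-wise distance-based similarity; the paper's exact algorithm may differ, so treat this as an illustrative sketch:

```python
def trapezoid_similarity(a, b):
    """Similarity of two trapezoidal fuzzy numbers given as 4-tuples
    (a1, a2, a3, a4) with components normalized to [0, 1]; 1.0 = identical."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 4.0

# Hypothetical fuzzy ratings of a service attribute, e.g. delivery speed
fast = (0.6, 0.7, 0.8, 0.9)
ok = (0.4, 0.5, 0.6, 0.7)
print(round(trapezoid_similarity(fast, ok), 3))  # 0.8
```

A similarity like this lets the collaborative-filtering step compare items whose attributes are given as linguistic or uncertain values rather than crisp numbers.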
Marty, Michael T.; Kuhnline Sloan, Courtney D.; Bailey, Ryan C.; Sligar, Stephen G.
2012-01-01
Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics. PMID:22686186
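The underlying idea — 1:1 binding at a sensor surface driven by a time-varying analyte concentration — can be sketched by integrating the standard rate equation dB/dt = ka·C(t)·(Bmax − B) − kd·B under a nonlinear (here quadratic) concentration ramp. All parameter values are illustrative, not the paper's:

```python
# 1:1 binding at a biosensor surface while the analyte concentration ramps
# nonlinearly; simple explicit Euler integration of the rate equation.
ka, kd = 1e5, 1e-3          # association (1/M/s) and dissociation (1/s) rates
Bmax, Cmax, T = 1.0, 1e-7, 600.0   # max signal, final concentration (M), ramp (s)

dt = 0.01
B = 0.0
trace = []
for i in range(int(T / dt)):
    C = Cmax * (i * dt / T) ** 2       # quadratic analyte gradient C(t)
    B += dt * (ka * C * (Bmax - B) - kd * B)
    trace.append(B)

print(round(trace[-1], 3))  # bound fraction at the end of the gradient
```

Fitting the recorded trace against this model for a known gradient shape is what lets a single experiment constrain both ka and kd.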
Target surface finding using 3D SAR data
NASA Astrophysics Data System (ADS)
Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.
2005-05-01
Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface-finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube which provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The sensitivity of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low signal-to-noise ratio (SNR) conditions.
Modeling of outgassing and matrix decomposition in carbon-phenolic composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1994-01-01
Work done in the period Jan. - June 1994 is summarized. Two threads of research have been followed. First, a thermodynamics approach was used to model the chemical and mechanical responses of composites exposed to high temperatures. The thermodynamics approach lends itself easily to the use of variational principles, and this thermodynamic-variational approach has been applied to the transpiration cooling problem. The second thread is the development of a better algorithm to solve the governing equations resulting from the modeling. An explicit finite difference method is explored for solving the governing nonlinear partial differential equations. The method allows detailed material models to be included and enables solution on massively parallel supercomputers. To demonstrate the feasibility of the explicit scheme in solving nonlinear partial differential equations, a transpiration cooling problem was solved. Some interesting transient behaviors were captured, such as stress waves and small spatial oscillations of the transient pressure distribution.
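The explicit finite-difference idea can be illustrated on the simplest relevant problem, 1-D transient heat conduction u_t = α·u_xx; the stability-limited time step (dt ≤ dx²/2α) is the key constraint of explicit schemes. A toy sketch, not the report's charring-composite model:

```python
import numpy as np

# Explicit finite-difference march for 1-D transient heat conduction.
alpha, L, nx = 1e-5, 0.01, 51      # diffusivity (m^2/s), thickness (m), nodes
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha         # satisfies stability limit dt <= dx^2/(2*alpha)

u = np.full(nx, 300.0)             # initial temperature (K)
u[0] = 1200.0                      # heated front face; back face held at 300 K
for _ in range(500):
    # interior update: u_new = u + r * (u_{i+1} - 2 u_i + u_{i-1})
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[-1] = 300.0

print(round(float(u[nx // 2]), 1))  # mid-plane temperature after 500 steps
```

Each node update uses only neighboring values from the previous step, which is exactly the locality that makes explicit schemes easy to distribute across massively parallel hardware.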
Research using qualitative, quantitative or mixed methods and choice based on the research.
McCusker, K; Gunaydin, S
2015-10-01
Research is fundamental to the advancement of medicine and critical to identifying the therapies best suited to particular societies. This is easily observed in the dynamics of pharmacology, surgical technique and the medical equipment used today versus just a few years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results, thus improving health science and clinical practice and advancing health policy. While advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored, and research methods themselves need to be evaluated to identify the most efficient approaches. The primary objective of this paper is to examine a specific research methodology as applied to clinical research, especially extracorporeal circulation, and its prognosis for the future. © The Author(s) 2014.
EDC-mediated DNA attachment to nanocrystalline CVD diamond films.
Christiaens, P; Vermeeren, V; Wenmackers, S; Daenen, M; Haenen, K; Nesládek, M; vandeVen, M; Ameloot, M; Michiels, L; Wagner, P
2006-08-15
Chemical vapour deposited (CVD) diamond is a very promising material for biosensor fabrication owing both to its chemical inertness and to the ability to make it electrically semiconducting, which allows for connection with integrated circuits. For biosensor construction, a biochemical method to immobilize nucleic acids on a diamond surface has been developed. Nanocrystalline diamond is grown using microwave plasma-enhanced chemical vapour deposition (MPECVD). After hydrogenation of the surface, 10-undecenoic acid, an omega-unsaturated fatty acid, is tethered by 254 nm photochemical attachment. This is followed by 1-ethyl-3-[3-dimethylaminopropyl]carbodiimide (EDC)-mediated attachment of amino (NH(2))-modified dsDNA. The functionality of the covalently bound dsDNA molecules is confirmed by fluorescence measurements, PCR and gel electrophoresis over 35 denaturation and rehybridisation steps. The linking method after the fatty acid attachment can easily be applied to other biomolecules such as antibodies and enzymes.
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-11-26
Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
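The core quantity of decision curve analysis is the net benefit at a threshold probability pt, NB(pt) = TP/n − (FP/n)·pt/(1 − pt), computed directly from predicted probabilities. A sketch on synthetic data, with the usual "treat all" reference line (data and model are simulated, not from the paper):

```python
import numpy as np

def net_benefit(y_true, p_hat, pt):
    """Net benefit of treating patients whose predicted risk exceeds pt."""
    n = len(y_true)
    treat = p_hat >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * pt / (1.0 - pt)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                      # outcomes
p = np.clip(0.5 * y + 0.25 + 0.2 * rng.standard_normal(1000), 0, 1)  # model risk

pts = np.arange(0.05, 0.95, 0.05)
curve = [net_benefit(y, p, pt) for pt in pts]
prev = y.mean()
treat_all = [prev - (1 - prev) * pt / (1 - pt) for pt in pts]
```

An informative model's curve lies above both the "treat all" line and zero ("treat none") over the clinically relevant threshold range; the paper's extensions (cross-validation, censoring, competing risks) build on this same computation.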
Bardos, Tamas; Farkas, Boglarka; Mezes, Beata; Vancsodi, Jozsef; Kvell, Krisztian; Czompoly, Tamas; Nemeth, Peter; Bellyei, Arpad; Illes, Tamas
2009-11-01
A focal cartilage lesion has limited capacity to heal, and the repair modalities used at present are still unable to provide a universal solution. Pure cartilage graft implantation appears to be a simple option, but it has not been applied widely as cartilage will not reattach easily to the subchondral bone. We used a multiple-incision technique (processed chondrograft) to increase cartilage graft surface. We hypothesized that pure cartilage graft with augmented osteochondral fusion capacity may be used for cartilage repair and we compared this method with other repair techniques. Controlled laboratory study. Full-thickness focal cartilage defects were created on the medial femoral condyle of 9-month-old pigs; defects were repaired using various methods including bone marrow stimulation, autologous chondrocyte implantation, and processed chondrograft. After the repair, at weeks 6 and 24, macroscopic and histologic evaluation was carried out. Compared with other methods, processed chondrograft was found to be similarly effective in cartilage repair. Defects without repair and defects treated with bone marrow stimulation appeared slightly irregular with fibrocartilage filling. Autologous chondrocyte implantation produced hyalinelike cartilage, although its cellular organization was distinguishable from the surrounding articular cartilage. Processed chondrograft demonstrated good osteochondral integration, and the resulting tissue appeared to be hyaline cartilage. The applied cartilage surface processing method allows acceptable osteochondral integration, and the repair tissue appears to have good macroscopic and histologic characteristics. If further studies confirm its efficacy, this technique could be considered for human application in the future.
Theoretical framework for analyzing structural compliance properties of proteins.
Arikawa, Keisuke
2018-01-01
We propose methods for directly analyzing structural compliance (SC) properties of elastic network models of proteins, and we also propose methods for extracting information about motion properties from the SC properties. The analysis of SC properties involves describing the relationships between the applied forces and the deformations. When decomposing the motion according to the magnitude of SC (SC mode decomposition), we can obtain information about the motion properties under the assumption that the lower SC mode motions or the softer motions occur easily. For practical applications, the methods are formulated in a general form. The parts where forces are applied and those where deformations are evaluated are separated from each other for enabling the analyses of allosteric interactions between the specified parts. The parts are specified not only by the points but also by the groups of points (the groups are treated as flexible bodies). In addition, we propose methods for quantitatively evaluating the properties based on the screw theory and the considerations of the algebraic structures of the basic equations expressing the SC properties. These methods enable quantitative discussions about the relationships between the SC mode motions and the motions estimated from two different conformations; they also help identify the key parts that play important roles for the motions by comparing the SC properties with those of partially constrained models. As application examples, lactoferrin and ATCase are analyzed. The results show that we can understand their motion properties through their lower SC mode motions or the softer motions.
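The SC mode decomposition described above can be illustrated on a toy elastic network: invert the stiffness matrix to get the compliance matrix, then take its eigenmodes; the largest-eigenvalue modes are the "softest" motions. A 1-D spring chain with anchored ends is a minimal stand-in for a protein elastic network model:

```python
import numpy as np

# Tiny 1-D elastic chain (both ends anchored): tridiagonal stiffness matrix.
n, k = 5, 1.0
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2.0 * k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

C = np.linalg.inv(K)                 # structural compliance matrix
eigvals, eigvecs = np.linalg.eigh(C) # ascending eigenvalues
softest = eigvecs[:, -1]             # largest-compliance ("softest") mode

print(np.round(eigvals[::-1], 3))    # compliances, softest mode first
```

For this chain the softest mode is the collective half-sine displacement of the whole chain, mirroring the observation that low SC modes capture large-scale, easily excited motions.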
Brown, Angus M
2006-04-01
The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the difference between the sum of the squares of the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
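The same multi-Gaussian least-squares fit can be reproduced outside Excel; here scipy.optimize.curve_fit plays SOLVER's role of minimizing the sum of squared residuals. Peak parameters below are invented test values, not optic nerve data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sum of two Gaussians, analogous to decomposing a compound action
# potential into components.
def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((x - mu1) / s1) ** 2 / 2)
            + a2 * np.exp(-((x - mu2) / s2) ** 2 / 2))

x = np.linspace(0, 10, 400)
true = (1.0, 3.0, 0.5, 0.6, 6.0, 0.8)
rng = np.random.default_rng(1)
y = two_gauss(x, *true) + 0.01 * rng.standard_normal(x.size)

# Initial guesses play the role of SOLVER's starting cell values
popt, _ = curve_fit(two_gauss, x, y, p0=(1, 2.5, 1, 1, 6.5, 1))
print(np.round(popt, 2))
```

As with SOLVER, reasonable starting guesses matter: the iterative minimizer can settle in a local minimum if the initial peaks are badly placed.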
Visualization of DNA in highly processed botanical materials.
Lu, Zhengfei; Rubinsky, Maria; Babajanian, Silva; Zhang, Yanjun; Chang, Peter; Swanson, Gary
2018-04-15
DNA-based methods have been gaining recognition as a tool for botanical authentication in herbal medicine; however, their application to processed botanical materials is challenging due to the low quality and quantity of DNA left after extensive manufacturing processes. The low amount of DNA recovered from processed materials, especially extracts, is "invisible" to current technology, which has cast doubt on the presence of amplifiable botanical DNA. A method using adapter ligation and PCR amplification was successfully applied to visualize the "invisible" DNA in botanical extracts. The size of the "invisible" DNA fragments in botanical extracts was around 20-220 bp, compared to fragments of around 600 bp for the more easily visualized DNA in botanical powders. This technique is the first to allow characterization and visualization of small fragments of DNA in processed botanical materials and will provide key information to guide the development of appropriate DNA-based botanical authentication methods in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.
Liu, Changhong; Liu, Wei; Lu, Xuzhong; Chen, Wei; Yang, Jianbo; Zheng, Lei
2014-06-15
Crop-to-crop transgene flow may affect the seed purity of non-transgenic rice varieties, resulting in unwanted biosafety consequences. The feasibility of a rapid and nondestructive determination of transgenic rice seeds from its non-transgenic counterparts was examined by using multispectral imaging system combined with chemometric data analysis. Principal component analysis (PCA), partial least squares discriminant analysis (PLSDA), least squares-support vector machines (LS-SVM), and PCA-back propagation neural network (PCA-BPNN) methods were applied to classify rice seeds according to their genetic origins. The results demonstrated that clear differences between non-transgenic and transgenic rice seeds could be easily visualized with the nondestructive determination method developed through this study and an excellent classification (up to 100% with LS-SVM model) can be achieved. It is concluded that multispectral imaging together with chemometric data analysis is a promising technique to identify transgenic rice seeds with high efficiency, providing bright prospects for future applications. Copyright © 2013 Elsevier Ltd. All rights reserved.
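A minimal stand-in for the chemometric pipeline: synthetic 19-band "spectra" for two seed classes, PCA via SVD, and a nearest-centroid rule in PCA space. The paper's PLSDA and LS-SVM models are more sophisticated; band count, offsets and noise levels here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
bands, n = 19, 100
gm = rng.standard_normal(bands) * 0.05        # per-band offset of class 1
X0 = rng.standard_normal((n, bands)) * 0.1            # "non-transgenic" spectra
X1 = rng.standard_normal((n, bands)) * 0.1 + gm + 0.15  # "transgenic" spectra
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# PCA via SVD of the mean-centered data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                        # first two principal components

# Nearest-centroid classification in PCA space
c0, c1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
print((pred == y).mean())                     # training-set accuracy
```

When the class difference dominates the within-class noise, even this crude rule separates the groups well, which is why the PCA score plot alone already visualizes the genetic origins in the paper.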
NASA Astrophysics Data System (ADS)
Lerner, Michael G.; Meagher, Kristin L.; Carlson, Heather A.
2008-10-01
Use of solvent mapping, based on multiple-copy minimization (MCM) techniques, is common in structure-based drug discovery. The minima of small-molecule probes define locations for complementary interactions within a binding pocket. Here, we present improved methods for MCM. In particular, a Jarvis-Patrick (JP) method is outlined for grouping the final locations of minimized probes into physical clusters. This algorithm has been tested through a study of protein-protein interfaces, showing the process to be robust, deterministic, and fast in the mapping of protein "hot spots." Improvements in the initial placement of probe molecules are also described. A final application to HIV-1 protease shows how our automated technique can be used to partition data too complicated to analyze by hand. These new automated methods may be easily and quickly extended to other protein systems, and our clustering methodology may be readily incorporated into other clustering packages.
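A Jarvis-Patrick clustering sketch: two points join the same cluster when each appears in the other's k-nearest-neighbour list and the two lists share at least kmin members. Parameters and data below are illustrative, not the probe-minima settings of the paper:

```python
import numpy as np

def jarvis_patrick(points, k=6, kmin=3):
    """Jarvis-Patrick clustering; returns a cluster label per point."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]    # k nearest neighbours (self excluded)
    labels = np.arange(n)                     # simple union-find over labels

    def find(i):
        while labels[i] != i:
            i = labels[i]
        return i

    for i in range(n):
        for j in nn[i]:
            mutual = i in nn[j]
            shared = len(set(nn[i]) & set(nn[j]))
            if mutual and shared >= kmin:
                labels[find(j)] = find(i)     # merge the two clusters
    return np.array([find(i) for i in range(n)])

rng = np.random.default_rng(3)
a = rng.standard_normal((20, 2)) * 0.2        # tight blob of probe minima
b = rng.standard_normal((20, 2)) * 0.2 + 5.0  # second, well-separated blob
labels = jarvis_patrick(np.vstack([a, b]))
print(len(set(labels)))                       # number of clusters found
```

Because membership is decided by shared neighbours rather than a distance cutoff, the result is deterministic for fixed k and kmin — the property the paper highlights for mapping protein hot spots.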
Non-invasive method for quantitative evaluation of exogenous compound deposition on skin.
Stamatas, Georgios N; Wu, Jeff; Kollias, Nikiforos
2002-02-01
Topical application of active compounds on skin is common to both pharmaceutical and cosmetic industries. Quantification of the concentration of a compound deposited on the skin is important in determining the optimum formulation to deliver the pharmaceutical or cosmetic benefit. The most commonly used techniques to date are either invasive or not easily reproducible. In this study, we have developed a noninvasive alternative to these techniques based on spectrofluorimetry. A mathematical model based on diffusion approximation theory is utilized to correct fluorescence measurements for the attenuation caused by endogenous skin chromophore absorption. The limitation is that the compound of interest has to be either fluorescent itself or fluorescently labeled. We used the method to detect topically applied salicylic acid. Based on the mathematical model a calibration curve was constructed that is independent of endogenous chromophore concentration. We utilized the method to localize salicylic acid in epidermis and to follow its dynamics over a period of 3 d.
Zhang, Chaosheng; Tang, Ya; Luo, Lin; Xu, Weilin
2009-11-01
Outliers in urban soil geochemical databases may imply potential contaminated land. Different methodologies which can be easily implemented for the identification of global and spatial outliers were applied for Pb concentrations in urban soils of Galway City in Ireland. Due to its strongly skewed probability feature, a Box-Cox transformation was performed prior to further analyses. The graphic methods of histogram and box-and-whisker plot were effective in identification of global outliers at the original scale of the dataset. Spatial outliers could be identified by a local indicator of spatial association of local Moran's I, cross-validation of kriging, and a geographically weighted regression. The spatial locations of outliers were visualised using a geographical information system. Different methods showed generally consistent results, but differences existed. It is suggested that outliers identified by statistical methods should be confirmed and justified using scientific knowledge before they are properly dealt with.
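The global-outlier step can be sketched as a Box-Cox transform (here with λ = 0, i.e. a log transform, a common choice for strongly right-skewed metal concentrations; in practice λ is estimated from the data) followed by the box-and-whisker 1.5·IQR fence. Concentrations are simulated, not the Galway data:

```python
import numpy as np

rng = np.random.default_rng(7)
pb = np.exp(rng.normal(3.5, 0.6, 200))     # synthetic Pb levels (mg/kg)
pb = np.append(pb, [900.0, 1200.0])        # two planted contaminated sites

z = np.log(pb)                             # Box-Cox with lambda = 0
q1, q3 = np.percentile(z, [25, 75])
fence = q3 + 1.5 * (q3 - q1)               # upper whisker of the box plot
outliers = pb[z > fence]
print(np.sort(outliers)[-2:])              # the two planted high values
```

Applying the fence on the transformed scale avoids flagging the long but legitimate right tail of a skewed distribution, which is why the transformation precedes the outlier tests in the paper.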
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of hybrid photon counting (HPC) pixel detectors. The approach is based on the photon transfer curve (PTC), the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to identify the settings that yield the best image quality from a commercial or R&D detector.
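The PTC idea can be sketched by simulating flat-field frames for a photon-counting array with a small per-pixel response spread: at low flux the Poisson term dominates the spatial noise, while at high flux the fixed pattern term PRNU·S takes over, so the relative noise plateaus at the PRNU. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix = 100_000
prnu_true = 0.02
gain = 1.0 + prnu_true * rng.standard_normal(n_pix)   # per-pixel response

means, sigmas = [], []
for flux in [10, 100, 1_000, 10_000, 100_000]:
    counts = rng.poisson(flux * gain)   # flat-field frame, photon counting
    means.append(counts.mean())
    sigmas.append(counts.std())

# At the highest flux the total relative spatial noise approximates the PRNU
prnu_est = sigmas[-1] / means[-1]
print(round(prnu_est, 3))
```

Plotting sigma against mean on log axes shows the Poisson slope-1/2 regime bending into the FPN plateau, which is the feature the paper uses to grade threshold-trimming and flat-fielding quality.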
Liebig, J P; Göken, M; Richter, G; Mačković, M; Przybilla, T; Spiecker, E; Pierron, O N; Merle, B
2016-12-01
A new method for the preparation of freestanding thin film samples for mechanical testing in transmission electron microscopes is presented. It is based on a combination of focused ion beam (FIB) milling and electron-beam-assisted etching with xenon difluoride (XeF2) precursor gas. The use of the FIB allows for the targeted preparation of microstructural defects and enables well-defined sample geometries which can be easily adapted to meet the requirements of various testing setups. In contrast to existing FIB-based preparation approaches, the area of interest is never exposed to ion beam irradiation, which preserves a pristine microstructure. The method can be applied to a wide range of thin film material systems compatible with XeF2 etching. Its feasibility is demonstrated for gold and alloyed copper thin films and its practical application is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Detection of hazardous cavities with combined geophysical methods
NASA Astrophysics Data System (ADS)
Hegymegi, Cs.; Nyari, Zs.; Pattantyus-Abraham, M.
2003-04-01
Unknown near-surface cavities often cause problems for municipal communities all over the world. This is the situation in many towns and villages in Hungary, too. Inhabitants and owners of real estate (houses, cottages, land) are responsible for the safety and stability of their properties, while the safety of public sites belongs to the local municipal community. Both the owner and the community are interested in preventing accidents. Near-surface cavities (unknown caves or earlier built and forgotten cellars) can usually be detected easily by surface geophysical methods. Traditional and recently developed measuring techniques in seismics, geoelectrics and georadar are suitable for the economical investigation of hazardous, potentially collapsing cavities prior to excavation and reinforcement. This poster shows some examples of the detection of cellars and caves that endanger the public through possible collapse under public sites (roads, yards, playgrounds, agricultural territory, etc.). The applied and presented methods are ground penetrating radar, seismic surface tomography and analysis of single traces, and geoelectric 2D and 3D resistivity profiling. The technology and processing procedure will be presented.
Filtration Isolation of Nucleic Acids: A Simple and Rapid DNA Extraction Method.
McFall, Sally M; Neto, Mário F; Reed, Jennifer L; Wagner, Robin L
2016-08-06
FINA, filtration isolation of nucleic acids, is a novel extraction method which utilizes vertical filtration via a separation membrane and absorbent pad to extract cellular DNA from whole blood in less than 2 min. The blood specimen is treated with detergent, mixed briefly and applied by pipet to the separation membrane. The lysate wicks into the blotting pad due to capillary action, capturing the genomic DNA on the surface of the separation membrane. The extracted DNA is retained on the membrane during a simple wash step wherein PCR inhibitors are wicked into the absorbent blotting pad. The membrane containing the entrapped DNA is then added to the PCR reaction without further purification. This simple method does not require laboratory equipment and can be easily implemented with inexpensive laboratory supplies. Here we describe a protocol for highly sensitive detection and quantitation of HIV-1 proviral DNA from 100 µl whole blood as a model for early infant diagnosis of HIV that could readily be adapted to other genetic targets.
Can we recognize horses by their ocular biometric traits using deep convolutional neural networks?
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Szadkowski, Mateusz
2017-08-01
This paper aims at determining the viability of horse recognition by means of ocular biometrics and deep convolutional neural networks (deep CNNs). Fast and accurate identification of race horses before racing is crucial for ensuring that exactly the horses that were declared are participating, using methods that are non-invasive and friendly to these delicate animals. As typical iris recognition methods require a lot of fine-tuning of the method parameters and high-quality data, CNNs seem like a natural candidate to be applied for recognition thanks to their potentially excellent abilities in describing texture, combined with ease of implementation in an end-to-end manner. Also, with such an approach we can easily utilize both iris and periocular features without constructing complicated algorithms for each. We thus present a simple CNN classifier, able to correctly identify almost 80% of the samples in an identification scenario, and giving an equal error rate (EER) of less than 10% in a verification scenario.
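The verification-scenario EER quoted above can be computed from comparison scores alone. A minimal sketch of the metric (not the paper's CNN), using synthetic genuine and impostor score distributions:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the equal error rate (EER) from comparison scores.

    genuine  -- similarity scores for same-identity comparisons
    impostor -- similarity scores for different-identity comparisons
    Higher scores are assumed to mean "more similar".
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    i = np.argmin(np.abs(frr - far))   # threshold where the two rates cross
    return (frr[i] + far[i]) / 2.0

# Toy example with well-separated score distributions:
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)
impostor = rng.normal(0.3, 0.1, 500)
print(equal_error_rate(genuine, impostor))  # small, since the distributions barely overlap
```

In practice the genuine/impostor scores would come from pairwise comparisons of the CNN's outputs on the horse eye images.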
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, Longxiao; Gu, Hanming
2018-03-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expressions are concise and convenient to use, they have certain limitations. For example, they are valid only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small. In addition, the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave. In the construction of the objective function for inversion, we use a Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in the baseline survey and monitor survey, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not need certain assumptions, can estimate more parameters simultaneously, and has broader applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its calculation cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results prove the availability and anti-noise-interference ability of our method. We also apply the inversion to actual field data and prove the feasibility of our method in actual situations.
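The generalized linear method described above amounts to repeatedly linearizing the forward model with a first-order Taylor expansion and solving the resulting linear least-squares problem. A minimal Gauss-Newton sketch, using a simple stand-in forward model rather than the exact Zoeppritz equations (the structure of the iteration is the same):

```python
import numpy as np

def gauss_newton(forward, m0, d_obs, n_iter=10, eps=1e-6):
    """Generalized linear inversion: linearize the forward model around the
    current model with a first-order Taylor expansion (finite-difference
    Jacobian) and apply the linear least-squares update, repeatedly."""
    m = np.array(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - forward(m)                     # data residual
        J = np.empty((len(d_obs), len(m)))         # Jacobian, column by column
        for j in range(len(m)):
            dm = np.zeros_like(m)
            dm[j] = eps
            J[:, j] = (forward(m + dm) - forward(m)) / eps
        m = m + np.linalg.lstsq(J, r, rcond=None)[0]
    return m

# Stand-in forward model (NOT the exact Zoeppritz equations): a smooth,
# mildly nonlinear map from (vp, vs, rho) to angle-dependent amplitudes.
angles = np.deg2rad(np.arange(0.0, 40.0, 5.0))

def forward(m):
    vp, vs, rho = m
    return vp * np.cos(vs * angles) + rho * np.sin(angles)

m_true = np.array([3.0, 1.5, 2.3])
d_obs = forward(m_true)
m_est = gauss_newton(forward, [2.5, 1.2, 2.0], d_obs)
print(np.round(m_est, 3))  # should approach m_true
```

In the paper's setting the forward model is the exact Zoeppritz PP reflection coefficient evaluated for both surveys, and the model vector stacks the baseline parameters and their time-lapse changes.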
Competing magnetic ground states and their coupling to the crystal lattice in CuFe2Ge2
NASA Astrophysics Data System (ADS)
May, Andrew; Calder, Stuart; Parker, David; Sales, Brian; McGuire, Michael
CuFe2Ge2 has been identified as a system with competing magnetic ground states that are strongly coupled to the crystal lattice and easily manipulated by temperature or applied magnetic field. Powder neutron diffraction data reveal the emergence of antiferromagnetic (AFM) order near TN = 175 K, as well as a transition into an incommensurate AFM spin structure below approximately 125 K. Together with refined moments of approximately 1 Bohr magneton per iron, the incommensurate structure supports an itinerant picture of magnetism in CuFe2Ge2, which is consistent with theoretical calculations. Bulk magnetization measurements suggest that the spin structures are easily manipulated with an applied field, which further demonstrates the near-degeneracy of different magnetic configurations. Interestingly, the thermal expansion is found to be very anisotropic, and the c lattice parameter has anomalous temperature dependence near TN. These results show that the ground state of CuFe2Ge2 is easily manipulated by external forces, making it a potential parent compound for a rich phase diagram of emergent phenomena. Research supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and Scientific User Facilities Division.
Phase-locked-loop interferometry applied to aspheric testing with a computer-stored compensator.
Servin, M; Malacara, D; Rodriguez-Vera, R
1994-05-01
A recently developed technique for continuous-phase determination of interferograms with a digital phase-locked loop (PLL) is applied to the null testing of aspheres. Although this PLL demodulating scheme is also a synchronous or direct interferometric technique, a separate unwrapping process is not explicitly required. The unwrapping and phase-detection processes are achieved simultaneously within the PLL. The proposed method uses a computer-generated holographic compensator. The holographic compensator does not need to be printed out by any means; it is calculated and used from the computer. This computer-stored compensator is used as the reference signal to phase demodulate a sample interferogram obtained from the asphere being tested. Consequently the demodulated phase contains information about the wave-front departures from the ideal computer-stored aspheric interferogram. Wave-front differences of ~1λ are handled easily by the proposed PLL scheme. The maximum recorded frequency in the template's interferogram, as well as in the sampled interferogram, is assumed to be below the Nyquist frequency.
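The idea of demodulating a fringe signal against a computer-stored reference can be illustrated without the full PLL machinery. A minimal sketch of quadrature synchronous detection, a simpler relative of the paper's PLL scheme (unlike the PLL, this form would still need unwrapping for phase excursions beyond ±π); the carrier frequency and phase value below are illustrative:

```python
import numpy as np

fs, f0 = 1000.0, 50.0            # sample rate and fringe carrier frequency
n = np.arange(1000)
phi_true = 0.7                   # phase to recover (radians)
fringe = np.cos(2 * np.pi * f0 * n / fs + phi_true)  # measured fringe signal

# Quadrature references, playing the role of the computer-stored template.
ref = 2 * np.pi * f0 * n / fs
i_raw = 2 * fringe * np.cos(ref)
q_raw = -2 * fringe * np.sin(ref)

# Low-pass filter: a moving average over exactly one carrier period
# removes the 2*f0 mixing product, leaving cos(phi) and sin(phi).
win = np.ones(int(fs / f0)) / (fs / f0)
i_lp = np.convolve(i_raw, win, mode="valid")
q_lp = np.convolve(q_raw, win, mode="valid")

phase = np.arctan2(q_lp, i_lp)   # demodulated phase
print(phase.mean())              # ≈ 0.7
```

A digital PLL replaces the fixed references with a feedback loop whose tracked phase is continuous by construction, which is why no separate unwrapping step is needed.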
Quintar, S E; Santagata, J P; Cortinez, V A
2005-10-15
A chemically modified electrode (CME) was prepared and studied as a potentiometric sensor for end-point detection in the automatic titration of vanadium(V) with EDTA. The CME was constructed with a paste prepared by mixing spectral-grade graphite powder, Nujol oil and N-2-naphthoyl-N-p-tolylhydroxamic acid (NTHA). Buffer systems, pH effects and the concentration range were studied. Interfering ions were separated by applying a liquid-liquid extraction procedure. The CME did not require any special conditioning before use. The electrode was constructed from very inexpensive materials and was easily made. It could be used continuously for at least two months without removing the paste. Automatic potentiometric titration curves were obtained for V(V) within 5 × 10⁻⁵ to 2 × 10⁻³ M with acceptable accuracy and precision. The developed method was applied to V(V) determination in alloys for hip prostheses.
Efficient utilization of graphics technology for space animation
NASA Technical Reports Server (NTRS)
Panos, Gregory Peter
1989-01-01
Efficient utilization of computer graphics technology has become a major investment in the work of aerospace engineers and mission designers. These new tools are having a significant impact in the development and analysis of complex tasks and procedures which must be prepared prior to actual space flight. Design and implementation of useful methods in applying these tools has evolved into a complex interaction of hardware, software, network, video and various user interfaces. Because few people can understand every aspect of this broad mix of technology, many specialists are required to build, train, maintain and adapt these tools to changing user needs. Researchers have set out to create systems where an engineering designer can easily work to achieve goals with a minimum of technological distraction. This was accomplished with high-performance flight simulation visual systems and supercomputer computational horsepower. Control throughout the creative process is judiciously applied while maintaining generality and ease of use to accommodate a wide variety of engineering needs.
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
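The chain-rule machinery that ADIFOR applies source-to-source can be illustrated in miniature with forward-mode dual numbers. A conceptual sketch only: ADIFOR itself generates FORTRAN derivative code rather than using operator-overloaded types, but the exactness of the result (no divided-difference truncation error) is the same.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers: each value
    carries its derivative, and arithmetic applies the chain rule exactly."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # d/dx = 6x + 2

x = Dual(2.0, 1.0)   # seed the independent variable's derivative with 1
y = f(x)
print(y.val, y.der)  # 17.0 14.0 -- exact, not a divided difference
```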
NASA Astrophysics Data System (ADS)
Chartosias, Marios
Acceptance of Carbon Fiber Reinforced Polymer (CFRP) structures requires a robust surface preparation method with improved process controls capable of ensuring high bond quality. Surface preparation in a production clean room environment prior to applying adhesive for bonding would minimize the risk of contamination and reduce cost. Plasma treatment is a robust surface preparation process capable of being applied in a production clean room environment with process parameters that are easily controlled and documented. Repeatable and consistent processing is enabled through the development of a process parameter window utilizing techniques such as Design of Experiments (DOE) tailored to specific adhesive and substrate bonding applications. Insight from the respective plasma treatment Original Equipment Manufacturers (OEMs), together with screening tests, distinguished critical process factors from non-factors and set the associated factor levels prior to execution of the DOE. Results from mode I Double Cantilever Beam (DCB) testing per the ASTM D 5528 [1] standard and DOE statistical analysis software are used to produce a regression model and determine appropriate optimum settings for each factor.
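A full-factorial DOE of the kind described enumerates every combination of factor levels as one experimental run. A minimal sketch with hypothetical factor names and two levels each (the real factors and settings come from the OEM insight and screening tests, not from this example):

```python
from itertools import product

# Hypothetical plasma-treatment factors and two-level settings
# (illustrative names and values only).
factors = {
    "power_W":       (200, 400),
    "speed_mm_s":    (10, 30),
    "nozzle_gap_mm": (2, 6),
}

# Full-factorial design: every combination of factor levels is one run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for r in runs:
    print(r)
print(len(runs))  # 2**3 = 8 runs
```

Each run would then be bonded and DCB-tested, with the measured fracture toughness regressed against the factor settings to locate the optimum process window.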
Marcondes, Freddy Beretta; de Vasconcelos, Rodrigo Antunes; Marchetto, Adriano; de Andrade, André Luis Lugnani; Filho, Américo Zoppi; Etchebehere, Maurício
2015-01-01
Objective: The aim of this study was to translate and culturally adapt the modified Rowe score for overhead athletes. Methods: The translation and cultural adaptation process initially involved the stages of translation, synthesis, back-translation, and revision by the Translation Group. The pre-final version of the questionnaire was then created; the “function” and “pain” sections were applied to 20 athletes who perform overhead movements and who had suffered SLAP lesions in the dominant shoulder, and the “active compression test and anterior apprehension test” and “motion” sections were applied to 15 health professionals. Results: During the translation process, small modifications were made to the questionnaire in order to adapt it to Brazilian culture, without changing the semantics or the idiomatic concepts originally described. Conclusion: The questionnaire was easily understood by the subjects of the study, making it possible to obtain the Brazilian version of the modified Rowe score for overhead athletes who underwent surgical treatment of a SLAP lesion. PMID:27047903
NASA Astrophysics Data System (ADS)
Kotan, Muhammed; Öz, Cemil
2017-12-01
An inspection system is proposed that uses estimated three-dimensional (3-D) surface information to detect and classify faults, increasing quality control of frequently used industrial components. Shape from shading (SFS) is one of the basic and classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot and Chellappa SFS method, based on minimization over a selected set of basis functions. First, a specialized image acquisition system captured images of the component. To eliminate noise, a wavelet transform was applied to the captured images. Then, the estimated gradients were used to obtain depth and surface profiles. The depth information was used to determine and classify surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products and the results indicated that using SFS approaches is useful and that various types of defects can easily be detected in a short period of time.
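The Frankot-Chellappa step, recovering depth from estimated gradients by projecting the (possibly non-integrable) gradient field onto integrable Fourier basis functions, can be sketched directly. A minimal version, verified here on a smooth periodic test surface with analytic gradients (in the paper the gradients come from SFS on real images):

```python
import numpy as np

def frankot_chellappa(p, q):
    """Frankot-Chellappa surface reconstruction from the gradient field
    (p, q) = (dz/dx, dz/dy): least-squares projection onto integrable
    Fourier basis functions, inverted to give the depth map z."""
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2 * np.pi        # angular frequency grids
    wy = np.fft.fftfreq(rows) * 2 * np.pi
    wx, wy = np.meshgrid(wx, wy)
    d = wx**2 + wy**2
    d[0, 0] = 1.0                                 # avoid divide-by-zero at DC
    z_hat = (-1j * wx * np.fft.fft2(p) - 1j * wy * np.fft.fft2(q)) / d
    z_hat[0, 0] = 0.0                             # mean height is unrecoverable
    return np.real(np.fft.ifft2(z_hat))

# Check on a smooth periodic test surface with analytic gradients.
n = 64
x, y = np.meshgrid(np.arange(n), np.arange(n))
z = np.sin(2 * np.pi * x / n) + np.cos(2 * np.pi * y / n)
p = (2 * np.pi / n) * np.cos(2 * np.pi * x / n)    # dz/dx
q = -(2 * np.pi / n) * np.sin(2 * np.pi * y / n)   # dz/dy
z_rec = frankot_chellappa(p, q)
print(np.abs(z_rec - (z - z.mean())).max())        # tiny reconstruction error
```

Surface defects then show up as local deviations of the reconstructed depth map from the expected component profile.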
Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Hathaway, A. W.
1978-01-01
Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
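The "easily inverted matrices" produced by each one-dimensional sweep of the approximately factored scheme are tridiagonal, solvable in O(n) by the Thomas algorithm. A minimal sketch (a generic tridiagonal solve, not the flow solver itself):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d in O(n). a[0] and c[-1]
    are unused."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy 1-D implicit system: -x[i-1] + 4*x[i] - x[i+1] = d[i]
n = 6
a = -np.ones(n); b = 4 * np.ones(n); c = -np.ones(n)
d = np.arange(1.0, n + 1)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))  # True
```

The approximate factorization reduces the three-dimensional implicit update to a sequence of such solves along each grid direction, which is what removes the restrictive stability condition at small grid spacing.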
NASA Astrophysics Data System (ADS)
Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.
2015-11-01
Computer aided engineering is based on models of phenomena which are expressed as algorithms. The implementations of the algorithms are usually software applications which process a large volume of numerical data, regardless of the size of the input data. For instance, finite element method applications used to have an input data generator which created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes, meant to minimize the bandwidth of the system of equations to be solved; computation of the equivalent nodal forces; computation of the element stiffness matrices; assembly of the system of equations; solving the system of equations; and computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to handle the information easily. Beyond this example, CAE applications use various stages of complex computation, and the accuracy of the final results is of particular interest. Over time, the development of CAE applications has been a constant concern of the authors, and the accuracy of the results has been a very important target. The paper presents the various computing techniques which were devised and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended precision data types was one of the solutions, the limitations being imposed by the size of the memory which may be allocated. To avoid memory-related problems, the data was stored in files. To minimize the execution time, part of the file was accessed using dynamic memory allocation facilities.
One of the most important consequences of the paper is the design of a library which includes the previously tested optimized solutions, which may be used for the easy development of original cross-platform CAE applications. Last but not least, besides the generality of the data type solutions, the development of a software library is targeted which may be used for the easy development of node-based CAE applications, each node having several known or unknown parameters, with the system of equations being automatically generated and solved.
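The bandwidth-minimizing node renumbering listed among the classic processing stages is typically done with a (reverse) Cuthill-McKee ordering; the abstract does not name the specific algorithm used, so this is an illustrative stand-in. A minimal sketch on a small path graph with scrambled labels:

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee node renumbering: breadth-first traversal from a
    low-degree node, visiting neighbours in order of increasing degree, then
    reversing the order -- a classic way to shrink matrix bandwidth before
    solving. `adj` maps each node to the set of its neighbours."""
    order = []
    visited = set()
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:          # handles disconnected components
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(w)
                queue.append(w)
    return order[::-1]                # the "reverse" in RCM

# Path graph 0-2-4-3-1 presented with scrambled labels:
adj = {0: {2}, 1: {3}, 2: {0, 4}, 3: {1, 4}, 4: {2, 3}}
print(reverse_cuthill_mckee(adj))    # [1, 3, 4, 2, 0]: consecutive along the path
```

Renumbering the nodes in this order makes every matrix entry lie next to the diagonal, so the banded solver touches far fewer stored coefficients.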
Valeriani, Federica; Agodi, Antonella; Casini, Beatrice; Cristina, Maria Luisa; D'Errico, Marcello Mario; Gianfranceschi, Gianluca; Liguori, Giorgio; Liguori, Renato; Mucci, Nicolina; Mura, Ida; Pasquarella, Cesira; Piana, Andrea; Sotgiu, Giovanni; Privitera, Gaetano; Protano, Carmela; Quattrocchi, Annalisa; Ripabelli, Giancarlo; Rossini, Angelo; Spagnolo, Anna Maria; Tamburro, Manuela; Tardivo, Stefano; Veronesi, Licia; Vitali, Matteo; Romano Spica, Vincenzo
2018-02-01
Reprocessing of endoscopes is key to preventing cross-infection after colonoscopy. Culture-based methods are recommended for monitoring, but alternative and rapid approaches are needed to improve surveillance and reduce turnover times. A molecular strategy based on detection of residual traces from gut microbiota was developed and tested using a multicenter survey. A simplified sampling and DNA extraction protocol using nylon-tipped flocked swabs was optimized. A multiplex real-time polymerase chain reaction (PCR) test was developed that targeted 6 bacteria genes that were amplified in 3 mixes. The method was validated by interlaboratory tests involving 5 reference laboratories. Colonoscopy devices (n = 111) were sampled in 10 Italian hospitals. Culture-based microbiology and metagenomic tests were performed to verify PCR data. The sampling method was easily applied in all 10 endoscopy units and the optimized DNA extraction and amplification protocol was successfully performed by all of the involved laboratories. This PCR-based method allowed identification of both contaminated (n = 59) and fully reprocessed endoscopes (n = 52) with high sensitivity (98%) and specificity (98%), within 3-4 hours, in contrast to the 24-72 hours needed for a classic microbiology test. Results were confirmed by next-generation sequencing and classic microbiology. A novel approach for monitoring reprocessing of colonoscopy devices was developed and successfully applied in a multicenter survey. The general principle of tracing biological fluids through microflora DNA amplification was successfully applied and may represent a promising approach for hospital hygiene. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Defining the wheat gluten peptide fingerprint via a discovery and targeted proteomics approach.
Martínez-Esteso, María José; Nørgaard, Jørgen; Brohée, Marcel; Haraszi, Reka; Maquet, Alain; O'Connor, Gavin
2016-09-16
Accurate, reliable and sensitive detection methods for gluten are required to support current EU regulations. The enforcement of legislative levels requires that measurement results are comparable over time and between methods. This is not a trivial task for gluten, which comprises a large number of protein targets. This paper describes a strategy for defining a set of specific analytical targets for wheat gluten. A comprehensive proteomic approach was applied by fractionating wheat gluten using RP-HPLC (reversed phase high performance liquid chromatography) followed by a multi-enzymatic digestion (LysC, trypsin and chymotrypsin) with subsequent mass spectrometric analysis. This approach identified 434 peptide sequences from gluten. Peptides were grouped based on two criteria: uniqueness to a single gluten protein sequence, and the presence of known immunogenic and toxic sequences in the context of coeliac disease. An LC-MS/MS method based on selected reaction monitoring (SRM) was developed on a triple quadrupole mass spectrometer for the specific detection of the target peptides. The SRM-based screening approach was applied to gluten-containing cereals (wheat, rye, barley and oats) and non-gluten-containing flours (corn, soy and rice). A unique set of wheat gluten marker peptides were identified and are proposed as wheat-specific markers. The measurement of gluten in processed food products in support of regulatory limits is performed routinely. Mass spectrometry is emerging as a viable alternative to ELISA-based methods. Here we outline a set of peptide markers that are representative of gluten and consider the end user's needs in protecting those with coeliac disease. The approach taken has been applied to wheat but can be easily extended to include other species, potentially enabling the MS quantification of different gluten-containing species from the identified markers. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-18
... possess skills that are not easily transferable. 3. The competitive conditions within the workers... Products, Kessler Marketing Group, Youngstown, Ohio: April 30, 2010 TA-W-80,181; L'Oreal, USA Products...
Smielik, Ievgen; Hütwohl, Jan-Marco; Gierszewski, Stefanie; Witte, Klaudia; Kuhnert, Klaus-Dieter
2017-01-01
Animal behavior researchers often face problems regarding standardization and reproducibility of their experiments. This has led to the partial substitution of live animals with artificial virtual stimuli. In addition to standardization and reproducibility, virtual stimuli open new options for researchers since they are easily changeable in morphology and appearance, and their behavior can be defined. In this article, a novel toolchain to conduct behavior experiments with fish is presented by a case study in sailfin mollies Poecilia latipinna. As the toolchain holds many different and novel features, it offers new possibilities for studies in behavioral animal research and promotes the standardization of experiments. The presented method includes options to design, animate, and present virtual stimuli to live fish. The designing tool offers an easy and user-friendly way to define size, coloration, and morphology of stimuli and moreover it is able to configure virtual stimuli randomly without any user influence. Furthermore, the toolchain brings a novel method to animate stimuli in a semiautomatic way with the help of a game controller. These created swimming paths can be applied to different stimuli in real time. A presentation tool combines models and swimming paths regarding formerly defined playlists, and presents the stimuli onto 2 screens. Experiments with live sailfin mollies validated the usage of the created virtual 3D fish models in mate-choice experiments. PMID:29491963
Zhu, Xiang; Stephens, Matthew
2017-01-01
Bayesian methods for large-scale multiple regression provide attractive approaches to the analysis of genome-wide association studies (GWAS). For example, they can estimate heritability of complex traits, allowing for both polygenic and sparse models; and by incorporating external genomic data into the priors, they can increase power and yield new biological insights. However, these methods require access to individual genotypes and phenotypes, which are often not easily available. Here we provide a framework for performing these analyses without individual-level data. Specifically, we introduce a “Regression with Summary Statistics” (RSS) likelihood, which relates the multiple regression coefficients to univariate regression results that are often easily available. The RSS likelihood requires estimates of correlations among covariates (SNPs), which also can be obtained from public databases. We perform Bayesian multiple regression analysis by combining the RSS likelihood with previously proposed prior distributions, sampling posteriors by Markov chain Monte Carlo. In a wide range of simulations RSS performs similarly to analyses using the individual data, both for estimating heritability and detecting associations. We apply RSS to a GWAS of human height that contains 253,288 individuals typed at 1.06 million SNPs, for which analyses of individual-level data are practically impossible. Estimates of heritability (52%) are consistent with, but more precise than, previous results using subsets of these data. We also identify many previously unreported loci that show evidence for association with height in our analyses. Software is available at https://github.com/stephenslab/rss. PMID:29399241
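The identity underlying the RSS likelihood, that for standardized covariates the expected univariate (summary-statistic) coefficients equal the correlation (LD) matrix times the multiple-regression coefficients, can be checked in a small simulation. A minimal sketch of that identity only; the actual method combines it with priors and MCMC:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 3

# Correlated, standardized covariates (conceptually, SNP genotypes).
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.4, 1.0]])
X = rng.normal(size=(n, p)) @ L.T
X = (X - X.mean(0)) / X.std(0)

b_true = np.array([0.3, 0.0, -0.2])   # multiple-regression coefficients
y = X @ b_true + rng.normal(size=n)
y = y - y.mean()

beta_marginal = X.T @ y / n           # univariate ("summary") estimates
R = X.T @ X / n                       # correlation (LD) matrix

# Core relation behind the RSS likelihood: E[beta_marginal] = R @ b
print(beta_marginal)
print(R @ b_true)                     # close to beta_marginal
```

Because both `beta_marginal` and estimates of `R` are available from public summary data, the multiple-regression coefficients can be inferred without the individual genotypes.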
The applied technologies to access clean water for remote communities
NASA Astrophysics Data System (ADS)
Rabindra, I. B.
2018-01-01
A lot of research has been done on helping remote communities to access clean water, yet very little of it is utilized and implemented by those communities. Various reasons can be given for this, one being that the application of the research results is considered impractical. The aim of this paper is to seek a practical approach: how to establish design criteria that can be applied more easily, at the proper locations, with simple construction, effectively producing an adequate volume and quality of clean water. The method used in this paper is an assessment of the clean-water treatment/filtering technologies produced by a variety of previous research, in order to establish a model of appropriate technology for remote communities. Various research results were collected through a literature study, while the opportunities for and threats to their application were identified using a SWOT analysis. The discussion in this article examines alternative models of clean-water filtration technology from previous research results, to be selected as appropriate technology that is easily applied and brings many benefits to remote communities. The conclusions resulting from the discussion in this paper are expected to be used as the basic design criteria of a clean-water filtration technology model that can be accepted and applied effectively by remote communities.
Does genomic selection have a future in plant breeding?
Jonas, Elisabeth; de Koning, Dirk-Jan
2013-09-01
Plant breeding largely depends on phenotypic selection in plots and uses genetic markers only for some traits, often disease-resistance-related ones. The more recently developed concept of genomic selection, using a black-box approach with no need for prior knowledge about the effect or function of individual markers, has also been proposed as a great opportunity for plant breeding. Several empirical and theoretical studies have focused on the possibility of implementing this as a novel molecular method across various species. Although we do not question the potential of genomic selection in general, in this Opinion we emphasize that genomic selection approaches from dairy cattle breeding cannot be easily applied to complex plant breeding. Copyright © 2013 Elsevier Ltd. All rights reserved.
Gibbs Ensembles for Nearly Compatible and Incompatible Conditional Models
Chen, Shyh-Huei; Wang, Yuchung J.
2010-01-01
The Gibbs sampler has been used exclusively for compatible conditionals that converge to a unique invariant joint distribution. However, conditional models are not always compatible. In this paper, a Gibbs sampling-based approach, the Gibbs ensemble, is proposed to search for a joint distribution that deviates least from a prescribed set of conditional distributions. The algorithm is easily scalable, so it can handle large data sets of high dimensionality. Using simulated data, we show that the proposed approach provides joint distributions that are less discrepant from the incompatible conditionals than those obtained by other methods discussed in the literature. The ensemble approach is also applied to a data set regarding geno-polymorphism and response to chemotherapy in patients with metastatic colorectal cancer. PMID:21286232
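The compatible-conditionals case that the classical Gibbs sampler handles can be sketched with the textbook bivariate-normal example; the paper's contribution concerns the incompatible case, which this sketch does not cover.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=20000, burn=1000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    built from its two compatible full conditionals:
    x|y ~ N(rho*y, 1-rho^2) and y|x ~ N(rho*x, 1-rho^2)."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    out = np.empty((n_samples, 2))
    s = np.sqrt(1 - rho**2)
    for i in range(n_samples + burn):
        x = rng.normal(rho * y, s)   # draw from p(x | y)
        y = rng.normal(rho * x, s)   # draw from p(y | x)
        if i >= burn:
            out[i - burn] = (x, y)
    return out

samples = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(samples.T)[0, 1])  # close to 0.8
```

Because these two conditionals are compatible, the chain converges to the unique joint distribution; with incompatible conditionals no such joint exists, which is the gap the Gibbs ensemble addresses.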
A linear quadratic regulator approach to the stabilization of uncertain linear systems
NASA Technical Reports Server (NTRS)
Shieh, L. S.; Sunkel, J. W.; Wang, Y. J.
1990-01-01
This paper presents a linear quadratic regulator approach to the stabilization of uncertain linear systems. The uncertain systems under consideration are described by state equations with the presence of time-varying unknown-but-bounded uncertainty matrices. The method is based on linear quadratic regulator (LQR) theory and Liapunov stability theory. The robust stabilizing control law for a given uncertain system can be easily constructed from the symmetric positive-definite solution of the associated augmented Riccati equation. The proposed approach can be applied to matched and/or mismatched systems with uncertainty matrices in which only their matrix norms are bounded by some prescribed values and/or their entries are bounded by some prescribed constraint sets. Several numerical examples are presented to illustrate the results.
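The construction of a stabilizing gain from the symmetric positive-definite Riccati solution can be sketched for the nominal (uncertainty-free) LQR case; the paper's augmented Riccati equation additionally encodes the uncertainty bounds. A minimal sketch via the Hamiltonian-matrix method, on a double-integrator example:

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation via the stable
    invariant subspace of the Hamiltonian matrix, then form the LQR gain
    K = R^{-1} B^T P, where P is symmetric positive-definite."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, v = np.linalg.eig(H)
    stable = v[:, w.real < 0]                 # eigenvectors of the n stable modes
    P = np.real(stable[n:] @ np.linalg.inv(stable[:n]))
    return Rinv @ B.T @ P

# Double integrator: unstable open loop, stabilized by u = -K x.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))
poles = np.linalg.eigvals(A - B @ K)
print(K)            # the optimal gain (here [1, sqrt(3)])
print(poles.real)   # all negative: the closed loop is stable
```

For this classic example the Riccati solution gives K = [1, √3], placing both closed-loop poles in the left half-plane.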
a 3d GIS Method Applied to Cataloging and Restoring: the Case of Aurelian Walls at Rome
NASA Astrophysics Data System (ADS)
Canciani, M.; Ceniccola, V.; Messi, M.; Saccone, M.; Zampilli, M.
2013-07-01
The project involves architecture, archaeology, restoration, graphic documentation and computer imaging. The objective is development of a method for documentation of an architectural feature, based on a three-dimensional model obtained through laser scanning technologies, linked to a database developed in GIS environment. The case study concerns a short section of Rome's Aurelian walls, including the Porta Latina. The city walls are Rome's largest single architectural monument, subject to continuous deterioration, modification and maintenance since their original construction beginning in 271 AD. The documentation system provides a flexible, precise and easily-applied instrument for recording the full appearance, materials, stratification palimpsest and conservation status, in order to identify restoration criteria and intervention priorities, and to monitor and control the use and conservation of the walls over time. The project began with an analysis and documentation campaign integrating direct, traditional recording methods with indirect, topographic instrument and 3D laser scanning recording. These recording systems permitted development of a geographic information system based on three-dimensional modelling of separate, individual elements, linked to a database and related to the various stratigraphic horizons, the construction techniques, the component materials and their state of degradation. The investigations of the extant wall fabric were further compared to historic documentation, from both graphic and descriptive sources. The resulting model constitutes the core of the GIS system for this specific monument. The methodology is notable for its low cost, precision, practicality and thoroughness, and can be applied to the entire Aurelian wall and to other monuments.
Heat flux measurements on ceramics with thin film thermocouples
NASA Technical Reports Server (NTRS)
Holanda, Raymond; Anderson, Robert C.; Liebert, Curt H.
1993-01-01
Two methods were devised to measure heat flux through a thick ceramic using thin film thermocouples. The thermocouples were deposited on the front and back faces of a flat ceramic substrate. The heat flux was applied to the front surface of the ceramic using an arc lamp Heat Flux Calibration Facility. Silicon nitride and mullite ceramics were used; two thicknesses of each material were tested, with ceramic temperatures to 1500 C. Heat flux ranged from 0.05 to 2.5 MW/m^2. One method for heat flux determination used an approximation technique to calculate instantaneous values of heat flux vs time; the other method used an extrapolation technique to determine the steady state heat flux from a record of transient data. Neither method measures heat flux in real time, but the techniques may easily be adapted for quasi-real-time measurement. In cases where a significant portion of the transient heat flux data is available, the calculated transient heat flux is seen to approach the extrapolated steady state heat flux value as expected.
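At steady state, the heat flux inferred from front- and back-face thermocouple readings on a flat slab follows Fourier's law of conduction. The sketch below illustrates that relation with hypothetical numbers (the 3 mm thickness, the 25 W/(m*K) conductivity and the 120 K temperature drop are our assumptions, chosen only to land inside the 0.05 to 2.5 MW/m^2 range reported above).

```python
def steady_heat_flux(t_front, t_back, thickness, k):
    """Steady-state heat flux (W/m^2) through a flat slab via Fourier's law:
    q = k * (T_front - T_back) / L."""
    return k * (t_front - t_back) / thickness

# Hypothetical values (assumptions for illustration): a 3 mm ceramic slab with
# thermal conductivity 25 W/(m*K) and a 120 K face-to-face temperature drop.
q = steady_heat_flux(1500.0, 1380.0, 0.003, 25.0)  # -> 1.0e6 W/m^2 = 1 MW/m^2
```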
Kirchhoff and Ohm in action: solving electric currents in continuous extended media
NASA Astrophysics Data System (ADS)
Dolinko, A. E.
2018-03-01
In this paper we show a simple and versatile computational simulation method for determining electric currents and electric potential in 2D and 3D media with arbitrary distributions of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity and the points of externally applied voltage is introduced by means of digital images or bitmaps, which makes it easy to simulate any phenomenon involving distributions of resistivity. The simulation is based on Kirchhoff's laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also at researchers in the area of continuous electric media, who may find it a simple and powerful tool for investigation.
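A minimal sketch of this kind of iterative Kirchhoff solver is shown below: on a 2D grid of resistivity values, every free node's potential is repeatedly set to the conductance-weighted average of its neighbours (zero net current into the node). The grid size, the harmonic-mean link conductance and the Gauss-Seidel iteration are our assumptions about a plausible discretisation, not the paper's exact scheme.

```python
def solve_potential(res, fixed, n_sweeps=2000):
    """Iteratively solve Kirchhoff's current law on a 2D resistivity grid.
    res[i][j]: cell resistivity; fixed[(i, j)]: imposed potential (electrode).
    Each free node's potential is updated to the conductance-weighted average
    of its neighbours, i.e. Gauss-Seidel sweeps over the resistor network."""
    ni, nj = len(res), len(res[0])
    v = [[fixed.get((i, j), 0.0) for j in range(nj)] for i in range(ni)]
    for _ in range(n_sweeps):
        for i in range(ni):
            for j in range(nj):
                if (i, j) in fixed:
                    continue
                num = den = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < ni and 0 <= b < nj:
                        g = 2.0 / (res[i][j] + res[a][b])  # link conductance
                        num += g * v[a][b]
                        den += g
                v[i][j] = num / den
    return v

# Uniform 5x5 medium with 1 V applied on the left edge and 0 V on the right;
# the converged potential should fall off linearly across the columns.
res = [[1.0] * 5 for _ in range(5)]
fixed = {(i, 0): 1.0 for i in range(5)}
fixed.update({(i, 4): 0.0 for i in range(5)})
v = solve_potential(res, fixed)
```

Replacing entries of `res` (e.g. with very high values along a line, standing in for a soil crack) distorts the potential field, which is the effect the tomography example in the paper reproduces.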
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
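The core idea of driving a search by cross-validated predictive accuracy can be illustrated with a deliberately small toy, which is not the KODAMA algorithm itself: starting from random labels on two hypothetical clusters, random label flips are accepted only when they do not decrease the leave-one-out 1-nearest-neighbour accuracy. The data, the 1-NN classifier and the acceptance rule are all our assumptions for illustration.

```python
import random

def loo_1nn_accuracy(points, labels):
    """Leave-one-out 1-nearest-neighbour accuracy: the cross-validated
    score that the Monte Carlo search tries to maximise."""
    correct = 0
    for i, p in enumerate(points):
        best, best_d = None, float("inf")
        for j, q in enumerate(points):
            if i == j:
                continue
            d = sum((a - b) ** 2 for a, b in zip(p, q))
            if d < best_d:
                best, best_d = labels[j], d
        correct += best == labels[i]
    return correct / len(points)

rng = random.Random(1)
# Two well-separated hypothetical clusters (assumption, for illustration).
points = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(15)] + \
         [(rng.gauss(3, 0.3), rng.gauss(3, 0.3)) for _ in range(15)]
labels = [rng.randint(0, 1) for _ in points]  # start from random labels

init_acc = loo_1nn_accuracy(points, labels)
acc = init_acc
for _ in range(500):                          # Monte Carlo label flips
    i = rng.randrange(len(labels))
    labels[i] ^= 1
    new_acc = loo_1nn_accuracy(points, labels)
    if new_acc >= acc:                        # keep non-worsening moves only
        acc = new_acc
    else:
        labels[i] ^= 1                        # revert the flip
```

Because only non-worsening flips are kept, the cross-validated accuracy is monotonically non-decreasing, and on clustered data the surviving label assignment tends to align with the cluster structure.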
a Numerical Method for Stability Analysis of Pinned Flexible Mechanisms
NASA Astrophysics Data System (ADS)
Beale, D. G.; Lee, S. W.
1996-05-01
A technique is presented to investigate the stability of mechanisms with pin-jointed flexible members. The method relies on a special floating frame from which elastic link co-ordinates are defined. Energies are easily developed for use in a Lagrange equation formulation, leading to a set of non-linear and mixed ordinary differential-algebraic equations of motion with constraints. Stability and bifurcation analysis is handled using a numerical procedure (generalized co-ordinate partitioning) that avoids the tedious and difficult task of analytically reducing the system of equations to a number equalling the system degrees of freedom. The proposed method was then applied to (1) a slider-crank mechanism with a flexible connecting rod and crank of constant rotational speed, and (2) a four-bar linkage with a flexible coupler with a constant speed crank. In both cases, a single pinned-pinned beam bending mode is employed to develop resonance curves and stability boundaries in the crank length-crank speed parameter plane. Flip and fold bifurcations are common occurrences in both mechanisms. The accuracy of the proposed method was also verified by comparison with previous experimental results [1].
Harden, Angela; Garcia, Jo; Oliver, Sandy; Rees, Rebecca; Shepherd, Jonathan; Brunton, Ginny; Oakley, Ann
2004-09-01
Methods for systematic reviews are well developed for trials, but not for non-experimental or qualitative research. This paper describes the methods developed for reviewing research on people's perspectives and experiences ("views" studies) alongside trials within a series of reviews on young people's mental health, physical activity, and healthy eating. Reports of views studies were difficult to locate; could not easily be classified as "qualitative" or "quantitative"; and often failed to meet seven basic methodological reporting standards used in a newly developed quality assessment tool. Synthesising views studies required the adaptation of qualitative analysis techniques. The benefits of bringing together views studies in a systematic way included gaining a greater breadth of perspectives and a deeper understanding of public health issues from the point of view of those targeted by interventions. A systematic approach also aided reflection on study methods that may distort, misrepresent, or fail to pick up people's views. This methodology is likely to create greater opportunities for people's own perspectives and experiences to inform policies to promote their health. PMID:15310807
Pan, Ting-Tiao; Sun, Da-Wen; Pu, Hongbin; Wei, Qingyi
2018-03-07
A simple method based on surface-enhanced Raman scattering (SERS) was developed for the rapid determination of alternariol (AOH) in pear fruits using an easily prepared silver-nanoparticle (AgNP) substrate. The AgNP substrate was modified by pyridine to circumvent the weak affinity of the AOH molecules for the silver surface and to improve the sensitivity of detection. Quantitative analysis was performed in AOH solutions at concentrations ranging from 3.16 to 316.0 μg/L, and the limit of detection was 1.30 μg/L. The novel method was also applied to the detection of AOH residues in pear fruits purchased from the market and in pear fruits that were artificially inoculated with Alternaria alternata. AOH was not found in any of the fresh fruit, but was present in the rotten and inoculated fruits. Finally, the SERS method was cross-validated against HPLC. The results indicate that the SERS method has great potential for the rapid detection of AOH in pear fruits and other agricultural products.
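A quantitative SERS assay like this one rests on a calibration line of signal versus concentration, from which a limit of detection can be estimated. The sketch below fits such a line and applies the common 3.3*sigma/slope convention; the intensity values and the blank noise level are hypothetical numbers we made up for illustration, not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical calibration data (assumed, for illustration): SERS intensity
# versus AOH concentration in ug/L, spanning the paper's working range.
conc = [3.16, 10.0, 31.6, 100.0, 316.0]
intensity = [160.0, 505.0, 1580.0, 5010.0, 15790.0]
slope, intercept = linear_fit(conc, intensity)

# Common LOD convention: 3.3 * (standard deviation of blank) / slope.
sigma_blank = 20.0  # assumed blank noise level (hypothetical)
lod = 3.3 * sigma_blank / slope
```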
McDermott, Jason E.; Bruillard, Paul; Overall, Christopher C.; ...
2015-03-09
There are many examples of groups of proteins that have similar function, but the determinants of functional specificity may be hidden by a lack of sequence similarity, or by large groups of similar sequences with different functions. Transporters are one such protein group in that the general function, transport, can be easily inferred from the sequence, but the substrate specificity can be impossible to predict from sequence with current methods. In this paper we describe a linguistic-based approach to identify functional patterns from groups of unaligned protein sequences and its application to predict multi-drug resistance transporters (MDRs) from bacteria. We first show that our method can recreate known patterns from PROSITE for several motifs from unaligned sequences. We then show that the method, MDRpred, can predict MDRs with greater accuracy and positive predictive value than a collection of currently available family-based models from the Pfam database. Finally, we apply MDRpred to a large collection of protein sequences from an environmental microbiome study to make novel predictions about drug resistance in a potential environmental reservoir.
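PROSITE motifs, which the method above is benchmarked against, are themselves a small pattern language over protein sequences. As a sketch of how such a pattern maps onto ordinary string matching, the converter below translates a PROSITE-style motif into a Python regular expression; the motif used in the example is hypothetical, not a real PROSITE entry, and the converter covers only the basic syntax elements.

```python
import re

def prosite_to_regex(pattern):
    """Convert a PROSITE-style motif into a Python regular expression.
    Handles: residue letters, 'x' wildcards, (n)/(n,m) repeat counts,
    [...] allowed-residue sets and {...} forbidden-residue sets."""
    out = []
    for token in pattern.split("-"):
        m = re.fullmatch(r"(\[.+\]|\{.+\}|[A-Zx])(?:\((\d+)(?:,(\d+))?\))?",
                         token)
        core, lo, hi = m.groups()
        if core == "x":
            core = "."                         # wildcard position
        elif core.startswith("{"):
            core = "[^" + core[1:-1] + "]"     # forbidden residues
        if lo:
            core += ("{%s}" % lo) if hi is None else ("{%s,%s}" % (lo, hi))
        out.append(core)
    return "".join(out)

# Hypothetical motif (illustration only, not a real PROSITE entry):
regex = prosite_to_regex("C-x(2)-C-x(3)-[LIVM]-H")
```

Running `re.search(regex, sequence)` then reports whether an unaligned sequence contains the motif, which is the basic matching step underlying pattern-based annotation.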
Single-scale renormalisation group improvement of multi-scale effective potentials
NASA Astrophysics Data System (ADS)
Chataignier, Leonardo; Prokopec, Tomislav; Schmidt, Michael G.; Świeżewska, Bogumiła
2018-03-01
We present a new method for renormalisation group improvement of the effective potential of a quantum field theory with an arbitrary number of scalar fields. The method amounts to solving the renormalisation group equation for the effective potential with the boundary conditions chosen on the hypersurface where quantum corrections vanish. This hypersurface is defined through a suitable choice of a field-dependent value for the renormalisation scale. The method can be applied to any order in perturbation theory and it is a generalisation of the standard procedure valid for the one-field case. In our method, however, the choice of the renormalisation scale does not eliminate individual logarithmic terms but rather the entire loop corrections to the effective potential. It allows us to evaluate the improved effective potential for arbitrary values of the scalar fields using the tree-level potential with running coupling constants as long as they remain perturbative. This opens the possibility of studying various applications which require an analysis of multi-field effective potentials across different energy scales. In particular, the issue of stability of the scalar potential can be easily studied beyond tree level.
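Schematically, the boundary-condition choice described above can be written as follows (our paraphrase in generic notation, not the paper's exact formulas):

```latex
% Choose a field-dependent renormalisation scale \mu_*(\phi) on the
% hypersurface where the loop corrections vanish:
\Delta V\bigl(\phi, \lambda_i(\mu), \mu\bigr)\Big|_{\mu = \mu_*(\phi)} = 0 ,
% so that the RG-improved effective potential reduces to the tree-level
% form with running couplings evaluated at \mu_*(\phi):
V_{\mathrm{eff}}(\phi) \simeq
  V_{\mathrm{tree}}\bigl(\phi, \lambda_i(\mu_*(\phi))\bigr) .
```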
The Surface-Tension Method of Visually Inspecting Honeycomb-Core Sandwich Plates
NASA Technical Reports Server (NTRS)
Katzoff, Samuel
1960-01-01
When one face of a metal-honeycomb-core sandwich plate is heated or cooled relative to the other, heat transfer through the core causes the temperature on each face at the lines of contact with the core to be slightly different from that on the rest of the face. If a thin liquid film is applied to the face, the variation of surface tension with temperature causes the liquid to move from warmer to cooler areas and thus to develop a pattern corresponding to the temperature pattern on the face. Irregularities in the pattern identify the locations where the core is not adequately bonded to the face sheet. The pattern is easily observed when a fluorescent liquid is used and illumination is by means of ultraviolet light. Observation in ordinary light is also possible when a very deeply colored liquid is used. A method based on the use of a thermographic phosphor to observe the temperature pattern was found to be less sensitive than the surface-tension method. A sublimation method was found to be not only less sensitive but also far more troublesome.
NASA Astrophysics Data System (ADS)
Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.
2005-12-01
This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary with the initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source, to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
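A drastically simplified sketch of GA-based hypocenter search is given below: instead of the paper's layered model with two-point ray tracing, it assumes straight rays in a homogeneous half-space (an assumed 6 km/s velocity) and evolves candidate (x, y, z, origin-time) solutions against synthetic arrival times. The GA operators (tournament selection, blend crossover, Gaussian mutation, elitism) are generic choices of ours, not the paper's implementation.

```python
import math
import random

def travel_time(params, station, v=6.0):
    """Straight-ray travel time in a homogeneous half-space (assumed v km/s)."""
    x, y, z, t0 = params
    sx, sy = station
    return t0 + math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2) / v

def misfit(params, stations, obs):
    """Sum of squared travel-time residuals over all stations."""
    return sum((travel_time(params, s) - t) ** 2 for s, t in zip(stations, obs))

def ga_locate(stations, obs, seed=0, pop_size=60, n_gen=200):
    """Minimal genetic algorithm: tournament selection, blend crossover,
    Gaussian mutation, with elitism so the best misfit never worsens."""
    rng = random.Random(seed)
    bounds = [(0, 100), (0, 100), (0, 30), (-5, 5)]  # x, y, z (km), t0 (s)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=lambda p: misfit(p, stations, obs))
    for _ in range(n_gen):
        new = [best[:]]                               # elitism
        while len(new) < pop_size:
            a = min(rng.sample(pop, 3), key=lambda p: misfit(p, stations, obs))
            b = min(rng.sample(pop, 3), key=lambda p: misfit(p, stations, obs))
            w = rng.random()
            child = [w * u + (1 - w) * v for u, v in zip(a, b)]
            for k, (lo, hi) in enumerate(bounds):     # Gaussian mutation
                if rng.random() < 0.2:
                    child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 2)))
            new.append(child)
        pop = new
        best = min(pop, key=lambda p: misfit(p, stations, obs))
    return best

# Synthetic test: a hypothetical event at (40, 55, 10 km) with a 1.0 s shift.
true = [40.0, 55.0, 10.0, 1.0]
stations = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 10), (20, 80)]
obs = [travel_time(true, s) for s in stations]
est = ga_locate(stations, obs)
```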
The use of FDEM in hydrogeophysics: A review
NASA Astrophysics Data System (ADS)
Boaga, Jacopo
2017-04-01
Hydrogeophysics is a rapidly evolving discipline emerging from geophysical methods. Geophysical methods are nowadays able to illustrate not only the fabric and structure of the underground, but also the subsurface processes that occur within it, such as fluid dynamics and biogeochemical reactions. This is a growing, wide inter-disciplinary field, specifically dedicated to revealing soil properties and monitoring processes of change due to soil/bio/atmosphere interactions. The discipline involves environmental, hydrological and agricultural research, and finds application for several engineering purposes. The most frequently used techniques in the hydrogeophysical framework are the electric and electromagnetic methods, because they are highly sensitive to soil physical properties such as texture, salinity, mineralogy, porosity and water content. Non-invasive techniques are applied in a number of problems related to the characterization of subsurface hydrology and groundwater dynamic processes. Ground-based methods, such as electrical tomography, have proved to achieve considerable resolution, but they are difficult to extend to wider exploration purposes due to their logistical limitations. Methods that do not need electrical contact with the soil can, on the contrary, be easily applied to broad areas. Among these methods, a rapidly growing role is played by the frequency domain electromagnetic (FDEM) survey. This is thanks to the improvement of multi-frequency and multi-coil instrumentation, simple time-lapse repeatability, cheap and accurate topographical referencing, and the emerging development of inversion codes. From a raw terrain apparent-conductivity meter, FDEM surveying is becoming a key tool for 3D soil characterization and the observation of dynamics in near-surface hydrological studies. Dozens of papers are summarized and presented here, in order to describe the promising potential of the technique.
Accuracy of cancellous bone volume fraction measured by micro-CT scanning.
Ding, M; Odgaard, A; Hvid, I
1999-03-01
Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
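Once a threshold is chosen, the volume fraction itself is a simple voxel count over the segmented 3D reconstruction. The sketch below shows that calculation on a tiny hypothetical grey-level volume, and how raising the threshold lowers the measured fraction, which is the sensitivity the study quantifies.

```python
def volume_fraction(volume, threshold):
    """Bone volume fraction (BV/TV) of a 3D grey-level image: the share of
    voxels whose value is at or above the segmentation threshold."""
    bone = total = 0
    for plane in volume:
        for row in plane:
            for v in row:
                total += 1
                bone += v >= threshold
    return bone / total

# Hypothetical 2x2x2 volume (illustration only): grey values on a 0-255 scale.
vol = [[[200, 40], [180, 60]], [[30, 220], [90, 210]]]
bvtv_low = volume_fraction(vol, 100)    # threshold 100 -> 0.5
bvtv_high = volume_fraction(vol, 205)   # a higher threshold lowers BV/TV
```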
A volumetric pulmonary CT segmentation method with applications in emphysema assessment
NASA Astrophysics Data System (ADS)
Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.
2006-03-01
A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea as well as the primary bronchi, and then the pulmonary region is identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high resolution CT exams, due to the presence of aerial and vascular structures. Nevertheless, the average error is inferior to the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method in pulmonary emphysema that also classifies emphysema according to its severity degree. Two clinically proved thresholds are applied which identify regions with severe emphysema, and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed offering new display paradigms concerning the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related with texture-based segmentation, as is often the case with interstitial diseases.
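The intensity-discrimination step of such a pipeline amounts to thresholding the volume and labelling connected low-intensity regions as lung candidates. The sketch below does exactly that on a tiny synthetic "CT" volume with two cavities; the grid, the intensity values and the 6-connected flood fill are our assumptions for illustration, not the paper's algorithm, which adds trachea removal and morphologic refinement.

```python
from collections import deque

def segment_regions(volume, threshold):
    """Threshold a 3D grey-level volume (voxels below `threshold` are
    candidate air/lung voxels) and label the 6-connected components,
    mimicking the intensity-discrimination step of lung segmentation."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    mask = [[[volume[z][y][x] < threshold for x in range(nx)]
             for y in range(ny)] for z in range(nz)]
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    sizes, cur = {}, 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and labels[z][y][x] == 0:
                    cur += 1
                    queue, sizes[cur] = deque([(z, y, x)]), 0
                    labels[z][y][x] = cur
                    while queue:                 # flood fill one component
                        a, b, c = queue.popleft()
                        sizes[cur] += 1
                        for da, db, dc in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            p, q, r = a + da, b + db, c + dc
                            if (0 <= p < nz and 0 <= q < ny and 0 <= r < nx
                                    and mask[p][q][r] and labels[p][q][r] == 0):
                                labels[p][q][r] = cur
                                queue.append((p, q, r))
    return labels, sizes

# Hypothetical 4x6x6 volume: dense "body" (value 80) containing two separate
# low-intensity cavities (value 10) standing in for the lungs.
vol = [[[80] * 6 for _ in range(6)] for _ in range(4)]
for z in range(1, 3):
    for y in range(1, 5):
        vol[z][y][1] = vol[z][y][2] = 10   # "left lung"
        vol[z][y][4] = 10                  # "right lung"
labels, sizes = segment_regions(vol, 50)
```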
Synthesis method for ultrananocrystalline diamond in powder employing a coaxial arc plasma gun
NASA Astrophysics Data System (ADS)
Naragino, Hiroshi; Tominaga, Aki; Hanada, Kenji; Yoshitake, Tsuyoshi
2015-07-01
A new method for synthesizing ultrananocrystalline diamond (UNCD) in powder form is proposed. Highly energetic carbon species ejected from the graphite cathode of a coaxial arc plasma gun were deposited on a quartz plate at a high density by repeated arc discharge in a compact vacuum chamber, and the resultant films, which automatically peeled from the plate, were aggregated and powdered. The grain size was easily controlled from 2.4 to 15.0 nm by changing the arc discharge energy. It was experimentally demonstrated that the proposed approach is a promising route for synthesizing UNCD powder easily and controllably.
The Chemophytostabilisation Process of Heavy Metal Polluted Soil
Grobelak, Anna; Napora, Anna
2015-01-01
Industrial areas are characterised by soil degradation processes that are related primarily to the deposition of heavy metals. Areas contaminated with metals are a serious source of risk due to secondary pollutant emissions and metal leaching and migration in the soil profile and into the groundwater. Consequently, the optimal solution for these areas is to apply methods of remediation that create conditions for the restoration of plant cover and ensure the protection of groundwater against pollution. Remediation activities that are applied to large-scale areas contaminated with heavy metals should mainly focus on decreasing the degree of metal mobility in the soil profile and metal bioavailability to levels that are not phytotoxic. Chemophytostabilisation is a process in which soil amendments and plants are used to immobilise metals. The main objective of this research was to investigate the effects of different doses of organic amendments (after aerobic sewage sludge digestion in the food industry) and inorganic amendments (lime, superphosphate, and potassium phosphate) on changes in the metals fractions in soils contaminated with Cd, Pb and Zn during phytostabilisation. In this study, the contaminated soil was amended with sewage sludge and inorganic amendments and seeded with grass (tall fescue) to increase the degree of immobilisation of the studied metals. The contaminated soil was collected from the area surrounding a zinc smelter in the Silesia region of Poland (pH 5.5, Cd 12 mg kg-1, Pb 1100 mg kg-1, Zn 700 mg kg-1). A plant growth experiment was conducted in a growth chamber for 5 months. Before and after plant growth, soil subsamples were subjected to chemical and physical analyses. To determine the fractions of the elements, a sequential extraction method was used according to Zeien and Brümmer. 
Research confirmed that the most important impacts on the Zn, Cd and Pb fractions resulted from the combined application of sewage sludge from the food industry and the addition of lime and potassium phosphate. Certain doses of inorganic additives decreased the easily exchangeable fraction from 50% to 1%. The addition of sewage sludge caused a decrease in fraction I for Cd and Pb; in combination with the inorganic additives, the mobile fraction was not detected and the easily mobilisable fraction was reduced by half. For certain combinations, these metal fractions were detected at levels of only a few percent. The application of sewage sludge resulted in a slight decrease in the mobile (water soluble and easily exchangeable metals) fraction of Zn, but when inorganic additives were applied, this fraction was not detected. The highest degree of immobilisation of the tested heavy metals relative to the control was achieved when using both sewage sludge and inorganic additives at an experimentally determined dose, as confirmed by the sequential extraction results. In addition, the results support the use of the phytostabilisation process on contaminated soils. PMID:26115341
A Modified Gibson Assembly Method for Cloning Large DNA Fragments with High GC Contents.
Li, Lei; Jiang, Weihong; Lu, Yinhua
2018-01-01
The Gibson one-step isothermal assembly method (Gibson assembly) can be used to efficiently assemble large DNA molecules by in vitro recombination involving a 5'-exonuclease, a DNA polymerase and a DNA ligase. In the past few years, this robust DNA assembly method has been widely applied to seamlessly construct genes, genetic pathways and even entire genomes. Here, we expand this method to clone large DNA fragments with high GC contents, such as antibiotic biosynthetic gene clusters from Streptomyces. Because of the low isothermal temperature (50 °C) of the Gibson reaction, complementary overlaps with high GC contents readily form mismatched linker pairings, which leads to low assembly efficiencies, mainly due to vector self-ligation. We therefore modified the classic method in two steps. First, a pair of universal terminal single-stranded DNA overhangs with high AT contents are added to the ends of the BAC vector. Second, two restriction enzyme sites are introduced on the respective sides of the designed overlaps to achieve hierarchical assembly of large DNA molecules. The optimized Gibson assembly method facilitates fast acquisition of large DNA fragments with high GC contents from Streptomyces.
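The GC content that makes overlaps problematic at the 50 °C Gibson step is easy to compute in advance. The sketch below flags overlap sequences above an assumed GC limit of 70% (the cutoff and the example sequences are our illustrative assumptions, not values from the paper).

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_risky_overlaps(overlaps, gc_limit=0.70):
    """Flag assembly overlaps whose GC content exceeds `gc_limit`; such
    GC-rich overlaps are the ones prone to mispairing in the isothermal
    Gibson reaction, motivating AT-rich universal overhangs instead."""
    return {seq: gc_content(seq) for seq in overlaps
            if gc_content(seq) > gc_limit}

# Hypothetical overlap sequences (illustration only):
overlaps = ["ATGCATTTAAGCAATTCAGA",   # AT-rich, likely fine
            "GCGGCCGCGGGCCCGGCGCC"]   # very GC-rich, likely problematic
risky = flag_risky_overlaps(overlaps)
```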
Sun, Yu-Yo; Yang, Dianer; Kuan, Chia-Yi
2011-01-01
A simple method to quantify cerebral infarction has great value for mechanistic and therapeutic studies in experimental stroke research. Immersion staining of unfixed brain slices with 2,3,5-triphenyltetrazolium chloride (TTC) is a popular method to determine cerebral infarction in preclinical studies. However, it is often difficult to apply immersion TTC-labeling to severely injured or soft newborn brains in rodents. Here we report an in vivo TTC perfusion-labeling method based on osmotic opening of the blood-brain barrier with mannitol pretreatment. This new method delineates cortical infarction correlated with the boundary of morphological cell injury, differentiates the induction or subcellular redistribution of apoptosis-related factors between viable and damaged areas, and easily determines the size of cerebral infarction in both adult and newborn mice. Using this method, we confirmed that administration of lipopolysaccharide 72 h before hypoxia-ischemia increases the damage in neonatal mouse brains, in contrast to its effect of protective preconditioning in adults. These results demonstrate a fast and inexpensive method that simplifies the task of quantifying cerebral infarction in small or severely injured brains and assists biochemical analysis of experimental cerebral ischemia. PMID:21982741
Iida, Shoko; Takushima, Akihiko; Ohura, Norihiko; Sato, Suguru; Kurita, Masakazu; Harii, Kiyonori
2013-08-01
Although bleaching treatment using all-trans retinoic acid (RA) and hydroquinone (HQ) improves epidermal melanosis, the need to apply two medications and the irritant dermatitis induced by RA inconvenience patients. To overcome these problems, we developed a silicone sheet containing RA and HQ, and compared its efficacy with that of conventional bleaching treatment. Silicone sheets containing 1% RA and 5% HQ were applied at night during a 4-week bleaching phase, followed by sheets containing 5% HQ during a 4-week healing phase. Hemifacial epidermal melanosis treated with the sheets was compared with the contralateral side of the face, which was treated conventionally with RA and HQ. Twenty-four Japanese women who were enrolled in this study and followed up for more than 6 months were analyzed. RA/HQ sheets improved epidermal melanosis as well as the conventional bleaching method did, but irritant dermatitis occurred less often in patients treated with the silicone sheets. RA/HQ sheets, which are easily applied to facial skin, can improve epidermal melanosis to the same extent as conventional bleaching. © 2013 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.
Osathanunkul, Maslin; Suwannapoom, Chatmongkon; Khamyong, Nuttaluck; Pintakum, Danupol; Lamphun, Santisuk Na; Triwitayakorn, Kanokporn; Osathanunkul, Kitisak; Madesis, Panagiotis
2016-01-01
Andrographis paniculata Nees is a medicinal plant with multiple pharmacological properties that has been used for many centuries as a household remedy. A. paniculata products are sold on the market in processed forms, making them difficult to authenticate; buyers of such herbal products therefore run a high risk of acquiring counterfeit, substituted and/or adulterated goods, so a reliable authentication method is needed. High resolution melting analysis coupled with DNA barcoding (Bar-HRM) was applied to detect adulteration in commercial herbal products. The rbcL barcode was selected for primer design, and HRM analysis was used to produce a standard melting profile for the A. paniculata species. DNA from the tested commercial products was isolated, and their melting profiles were generated and compared with the A. paniculata standard. The rbcL amplicon melting profiles of the three closely related herbal species (A. paniculata, Acanthus ebracteatus and Rhinacanthus nasutus) are clearly separated, so the developed method can distinguish them. The method was then used to authenticate commercial herbal products: the HRM curves of all 10 tested samples were similar to that of A. paniculata, indicating that all tested products contained the species stated on the label. The method described in this study has proved useful for identifying and authenticating A. paniculata, allowing us to detect the species easily in herbal products on the market even in processed form. We propose the use of DNA barcoding combined with high resolution melting analysis for authenticating A. paniculata products. The developed method can be used regardless of the type of DNA template (fresh or dried tissue, leaf, or stem); the rbcL region was chosen for the analysis and worked well with our samples.
Abbreviations used: bp: Base pair, Tm: Melting temperature.
Wong, Sienna; Jin, J-P
2017-01-01
Study of the folded structure of proteins provides insights into their biological functions, conformational dynamics and molecular evolution. Current methods for elucidating protein folded structure are laborious, low-throughput, and constrained by various limitations. There is therefore a need for a sensitive, quantitative, rapid and high-throughput method not only for analysing the folded structure of proteins but also for monitoring dynamic changes under physiological or experimental conditions. In this focused review, we outline the foundations and limitations of current structure-determination methods before discussing the advantages of an emerging antibody epitope analysis for structural, conformational and evolutionary studies of proteins. We discuss the application of this method, using representative examples, to monitoring the allosteric conformation of regulatory proteins and determining the evolutionary lineage of related proteins and protein isoforms. The versatility of the method is demonstrated by the ability to modulate a variety of assay parameters to meet the user's needs in monitoring protein conformation. Furthermore, the assay has been used to clarify the lineage of troponin isoforms beyond what sequence homology alone can depict, demonstrating the nonlinear evolutionary relationship between the primary and tertiary structures of proteins. Antibody epitope analysis is a highly adaptable technique for elucidating protein conformation that can be applied without specialized equipment or technical expertise. Applied in a systematic and strategic manner, this method has the potential to reveal novel and biomedically meaningful information on the structure-function relationships and evolutionary lineage of proteins. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
OpenCL based machine learning labeling of biomedical datasets
NASA Astrophysics Data System (ADS)
Amoros, Oscar; Escalera, Sergio; Puig, Anna
2011-03-01
In this paper, we propose a two-stage labeling method for large biomedical datasets using a parallel approach on a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all these cases, an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to labeling or segmenting biomedical datasets have been proposed to discriminate anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation allow biomedical datasets to be analyzed easily by non-expert users, but they still have practical problems, such as slow learning and testing speeds. Meanwhile, recent technological developments have made multi-core CPUs and GPUs widely available, along with new software languages such as NVIDIA's CUDA and OpenCL, making it possible to apply parallel programming paradigms on conventional personal computers. Adaboost is one of the most widely applied labeling methods in the machine learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers, each of which is a simple decision function estimated on a single feature value. At the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier and use it to define a new GPU-based parallelized Adaboost testing stage using OpenCL.
We provide numerical experiments based on large available datasets and compare our results to CPU-based strategies in terms of time and labeling speed.
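The Adaboost testing stage described in the abstract, a weighted combination of single-feature weak classifiers (decision stumps), can be sketched as follows. This is an illustrative NumPy version of the generic Adaboost decision rule, not the authors' OpenCL code; the feature indices, thresholds, polarities and weights below are hypothetical.

```python
import numpy as np

def adaboost_predict(X, features, thresholds, polarities, alphas):
    """Apply a trained Adaboost binary classifier to samples X.

    Each weak classifier i is a decision stump on a single feature:
        h_i(x) = polarity_i * sign(x[feature_i] - threshold_i)
    The strong classifier is sign(sum_i alpha_i * h_i(x)).
    """
    X = np.asarray(X, dtype=float)
    # Evaluate every weak classifier on every sample, vectorized over samples
    stumps = np.sign(X[:, features] - thresholds) * polarities  # (n_samples, n_stumps)
    stumps[stumps == 0] = 1.0   # break exact-threshold ties toward +1
    scores = stumps @ alphas    # weighted combination of weak decisions
    return np.where(scores >= 0, 1, -1)

# Hypothetical 3-stump classifier over 2 features
features   = np.array([0, 1, 0])
thresholds = np.array([0.5, 1.0, -0.2])
polarities = np.array([1.0, -1.0, 1.0])
alphas     = np.array([0.6, 0.3, 0.1])

labels = adaboost_predict([[1.0, 0.2], [-1.0, 2.0]],
                          features, thresholds, polarities, alphas)
```

Because each sample's score is independent, this inner loop parallelizes naturally across GPU work items, which is the property the paper's OpenCL testing stage exploits.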
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazur, T; Wang, Y; Fischer-Valuck, B
2015-06-15
Purpose: To develop a novel, rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors both to partition cine images into respiratory states and to track regions across frames. Methods: For a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. We use these traces to define respiratory bins for sorting images, which facilitates robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames, we utilize a recently developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We applied these methods to a collection of lung, abdominal, and breast patients. We evaluated the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigated whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compared the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluated the algorithm for obtaining pixel correspondences between frames by tracking contours in a set of breast patients. As an initial case, we tracked easily identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments.
While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.
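The respiratory-binning step, sorting frames by the amplitude of a tracked feature's motion trace, can be sketched independently of the SIFT machinery. This is a minimal NumPy illustration under an assumed quantile-based binning scheme (the abstract does not specify how bin edges are chosen); the sinusoidal trace is synthetic.

```python
import numpy as np

def respiratory_bins(trace, n_bins=4):
    """Sort frame indices into amplitude-based respiratory bins.

    trace: 1-D array of a tracked feature's position per frame.
    Bin edges are amplitude quantiles, so each bin receives a
    similar number of frames.
    """
    trace = np.asarray(trace, dtype=float)
    edges = np.quantile(trace, np.linspace(0, 1, n_bins + 1))
    # Assign each frame to the amplitude bin its value falls in
    idx = np.clip(np.digitize(trace, edges[1:-1]), 0, n_bins - 1)
    return [np.flatnonzero(idx == b) for b in range(n_bins)]

# Hypothetical breathing trace: a sinusoid sampled over 40 cine frames
t = np.arange(40)
trace = np.sin(2 * np.pi * t / 20.0)
bins = respiratory_bins(trace, n_bins=4)
```

Images within one bin can then be averaged or registered with less motion blur, which is what makes the subsequent pixel-by-pixel tracking robust.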
Mirante, Clara; Clemente, Isabel; Zambu, Graciette; Alexandre, Catarina; Ganga, Teresa; Mayer, Carlos; Brito, Miguel
2016-09-01
Helminth intestinal parasitoses are responsible for high levels of child mortality and morbidity. The capacity to diagnose these parasitoses, and consequently to ensure due treatment, is therefore of great importance. The main objective of this study was to compare two concentration methods, parasitrap and Kato-Katz, for the diagnosis of intestinal parasitoses in faecal samples. We collected a total of 610 stool samples from pre-school and school-age children and processed each sample with both concentration methods: the commercial parasitrap® method and the Kato-Katz method. Helminth parasites were detected in 32.8% of the samples with the parasitrap method and in 32.3% with the Kato-Katz method. We detected a relatively high percentage of samples testing positive for two or more species of helminth parasites. We would highlight that, in searching for larvae, the Kato-Katz method does not prove as appropriate as the parasitrap method. Both techniques are easily applicable even in field conditions and return mutually consistent results. This study concludes in favour of the need for deworming programs and greater public awareness among the rural populations of Angola.
NASA Astrophysics Data System (ADS)
Schauberger, Bernhard; Rolinski, Susanne; Müller, Christoph
2016-12-01
Variability of crop yields is detrimental for food security. Under climate change its amplitude is likely to increase, so it is essential to understand the underlying causes and mechanisms. Crop models are the primary tool to project future changes in crop yields under climate change. A systematic overview of drivers and mechanisms of crop yield variability (YV) can thus inform crop model development and facilitate improved understanding of climate change impacts on crop yields. Yet there is a vast body of literature on crop physiology and YV, which makes a prioritization of mechanisms for implementation in models challenging. This paper therefore takes a novel approach to systematically mine and organize existing knowledge from the literature. The aim is to identify important mechanisms lacking in models, which can help to set priorities in model improvement. We structure knowledge from the literature in a semi-quantitative network. This network consists of complex interactions between growing conditions, plant physiology and crop yield. We utilize the resulting network structure to assign relative importance to causes of YV and related plant physiological processes. As expected, our findings confirm existing knowledge, in particular on the dominant role of temperature and precipitation, but also highlight other important drivers of YV. More importantly, our method allows us to identify the relevant physiological processes that transmit variability in growing conditions to variability in yield, and to identify explicit targets for the improvement of crop models. The network can additionally guide model development by outlining complex interactions between processes and by easily retrieving quantitative information for each of the 350 interactions. We show the validity of our network method as a structured, consistent and scalable dictionary of the literature. The method can easily be applied to many other research fields.
Wang, Qi; Huang, Lijie; Yu, Panfeng; Wang, Jianchang; Shen, Shun
2013-01-01
In this paper, we present a magnetic solid-phase extraction (MSPE) method based on C(18)-functionalized magnetic silica nanoparticles for the analysis of puerarin in rat plasma. The approach involves two steps: synthesis of the magnetic solid-phase sorbents and bioanalysis. The synthesized magnetic silica microspheres modified with chloro(dimethyl)octylsilane (Fe(3)O(4)@SiO(2)-C(18)) provide an efficient way to extract puerarin through C(18) hydrophobic interaction. Puerarin could be enriched easily using milligram-level Fe(3)O(4)@SiO(2)-C(18) sorbents with vibration for 10 min. By means of a magnet, puerarin adsorbed on the Fe(3)O(4)@SiO(2)-C(18) sorbents was easily isolated from the matrix and desorbed with acetonitrile (ACN). No carryover was observed, and the sorbents could be recycled in our study. Method recoveries ranged from 85.2% to 92.3%. A limit of quantification of 0.1 μg mL(-1) and a limit of detection of 0.05 μg mL(-1) were achieved. Precision was 8.1-13.7% for intra-day and 9.4-15.2% for inter-day measurements; accuracy ranged from 94.7 to 106.3% intra-day and from 93.3 to 107.8% inter-day. The MSPE method was applied to the analysis of puerarin in rat plasma samples, and the results indicated that it is convenient and efficient for the determination of puerarin in biosamples. Copyright © 2012 Elsevier B.V. All rights reserved.
Numerical simulation of sloshing with large deforming free surface by MPS-LES method
NASA Astrophysics Data System (ADS)
Pan, Xu-jie; Zhang, Huai-xin; Sun, Xue-yao
2012-12-01
The moving particle semi-implicit (MPS) method is a fully Lagrangian particle method that can easily solve problems with violent free surfaces. Although it has demonstrated its advantages in ocean engineering applications, it still has defects to be remedied. In this paper, the MPS method is extended to large eddy simulation (LES) by coupling it with a sub-particle-scale (SPS) turbulence model. The SPS turbulence model enters as the Reynolds stress terms in the filtered momentum equation, and the Smagorinsky model is introduced to describe these terms. Although the MPS method is well suited to free-surface flow, many non-free-surface particles are treated as free-surface particles in the original MPS model. In this paper, we use a new free-surface tracing method whose key notion is the "neighbor particle". In this method, the zone around each particle is divided into eight parts, and a particle is treated as a free-surface particle as long as there are no neighbor particles in any two parts of the zone. Because the number-density parameter judging method traces free-surface particles very efficiently, we combine it with the neighbor-detection method: first, we select the particles most likely to be misclassified using the number-density parameter judging method, and then examine those particles with the neighbor-detection method. The resulting mixed free-surface tracing method reduces misclassification efficiently. Serious pressure fluctuation is an obvious defect of the MPS method, so an area-time average technique is used in this paper to remove the pressure fluctuation, with quite good results. With these improvements, the modified MPS-LES method is applied to simulate liquid sloshing problems with large deforming free surfaces. Results show that the modified MPS-LES method can simulate a large deforming free surface easily.
It not only captures the large impact pressure on the rolling tank wall accurately but also reproduces all physical phenomena successfully. The good agreement between numerical and experimental results proves that the modified MPS-LES method is a good CFD methodology for free-surface flow simulations.
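The eight-sector "neighbor particle" test described above can be sketched in a few lines. This is an illustrative 2D NumPy version under one reading of the criterion, a particle is flagged as free-surface when at least two of the eight angular sectors around it contain no neighbor within the interaction radius; the particle arrangement and radius are hypothetical, and the paper's exact rule may differ in detail.

```python
import numpy as np

def is_free_surface(p, particles, radius, n_sectors=8, empty_needed=2):
    """Eight-sector neighbor test for free-surface detection (2D sketch).

    p: (x, y) position of the candidate particle.
    particles: (N, 2) array of all particle positions.
    Returns True if at least `empty_needed` of the angular sectors
    around p contain no neighbor within `radius`.
    """
    d = particles - np.asarray(p, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])
    mask = (r > 1e-12) & (r <= radius)        # neighbors, excluding p itself
    ang = np.arctan2(d[mask, 1], d[mask, 0])  # angle of each neighbor
    # Map angles in (-pi, pi] to sector indices 0..n_sectors-1
    sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    occupied = np.zeros(n_sectors, dtype=bool)
    occupied[sector] = True
    return (n_sectors - occupied.sum()) >= empty_needed

# Hypothetical 5x5 particle block: an interior particle is surrounded in
# all sectors, while a particle on the top edge has empty sectors above it
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
particles = np.column_stack([xs.ravel(), ys.ravel()])
interior = is_free_surface((2.0, 2.0), particles, radius=1.5)
edge = is_free_surface((2.0, 4.0), particles, radius=1.5)
```

In the paper's mixed scheme this geometric test is only run on the candidates pre-selected by the cheap number-density criterion, keeping the cost low.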
Okamura, Yukio; Kondo, Satoshi; Sase, Ichiro; Suga, Takayuki; Mise, Kazuyuki; Furusawa, Iwao; Kawakami, Shigeki; Watanabe, Yuichiro
2000-01-01
A set of fluorescently labeled DNA probes that hybridize with a target RNA and produce fluorescence resonance energy transfer (FRET) signals can be used to detect specific RNAs. We developed probe sets to detect and discriminate single-stranded RNA molecules of a plant viral genome, and sought a way to improve the FRET signals for in vivo applications. We found that a double-labeled donor probe carrying Bodipy dye yielded a remarkable increase in fluorescence intensity compared with the single-labeled donor probe used in ordinary FRET. This double-labeled donor system can easily be applied to improve various FRET probes, since the enhancement depends only loosely on sequence and label position. Furthermore, the method could be applied to other nucleic acid substances, such as oligo RNAs and phosphorothioate oligonucleotides (S-oligos), to enhance the FRET signal. Although double-labeled donor probes labeled with a variety of fluorophores had unexpected properties compared with single-labeled ones (strange UV-visible absorption spectra, decreased intensity and faster decay of donor fluorescence), these had no relation to the FRET enhancement. The signal amplification mechanism cannot be explained simply from our current results and knowledge of FRET. Nonetheless, this double-labeled donor system can be used in various FRET applications as a simple signal-enhancement method. PMID:11121494
Continuous-annealing method for producing a flexible, curved, soft magnetic amorphous alloy ribbon
NASA Astrophysics Data System (ADS)
Francoeur, Bruno; Couture, Pierre
2012-04-01
A method has been developed for continuous annealing of an amorphous alloy ribbon moving forward at several meters per second, giving a curved shape to the ribbon that remains flexible afterward and can be easily wound into a toroidal core with excellent soft magnetic properties. A heat pulse was applied by a compact system on a Metglas 2605HB1 ribbon moving forward at 5 m/s to initiate a thermal treatment at 460 °C, near crystallization onset. The treatment duration was less than 0.1 s, and the heating and cooling rates were above 10 000 °C/s, which helped preserve most of the alloy as-cast ductility state. Such high temperature rates were achieved by forcing a static contact between the moving ribbon and a temperature-controlled roller. A tensile stress and a series of bending configurations were applied on the moving ribbon during the treatment to induce the development of magnetic anisotropy and to obtain the desired natural curvature radius. The core losses at 60 Hz of a toroidal test core wound with the resulting ribbon are lower than the specific values reported by the alloy manufacturer. This method can be implemented at the casting plant for supplying a low-cost, ready-to-use ribbon, easy to handle and cut, for mass production of toroidal cores for distribution transformer kernels (core and coil only), pulse power cores, etc.
2010-01-01
Background The agitated behaviours that accompany dementia (e.g. pacing, aggression, calling out) are stressful to both nursing home residents and their carers and are difficult to treat. Increasing attention is being paid to alternative interventions that carry fewer risks than pharmacology. Lavandula angustifolia (lavender) has been thought, for centuries, to have soothing properties, but the existing evidence is limited and shows mixed results. The aim of the current study is to test the effectiveness of topically applied pure lavender oil in reducing actual counts of challenging behaviours in nursing home residents. Methods/Design We will use a blinded repeated measures design with random cross-over between lavender oil and placebo oil. Persons with moderate to severe dementia and associated behavioural problems living in aged care facilities will be included in the study. Consented, willing participants will be assigned in random order to lavender or placebo blocks for one week, then switched to the other condition for the following week. In each week the oils will be applied on three days, with at least a two-day wash-out period between conditions. Trained observers will note the presence of target behaviours and the predominant type of affect displayed during the 30 minutes before and the 60 minutes after application of the oil. Nursing staff will apply 1 ml of 30% high-strength essential lavender oil to reduce the risk of missing a true effect through under-dosing. The placebo will comprise jojoba oil only. The oils will be identical in appearance and texture, but can easily be identified by smell. For blinding purposes, all staff involved in applying the oil or observing the resident will apply a masking cream containing a mixture of lavender and other essential oils to their upper lip. In addition, nursing staff will wear a nose clip during the few minutes it takes to massage the oil into the resident's forearms.
Discussion If our results show that the use of lavender oil is effective in reducing challenging behaviours in individuals with dementia, it will potentially provide a safer alternative to reliance on pharmacology alone. The study's findings will translate easily to other countries and cultures. Trial Registration Australian New Zealand Clinical Trials Registry - ACTRN 12609000569202 PMID:20649945
NASA Astrophysics Data System (ADS)
Vitório, Paulo Cezar; Leonel, Edson Denner
2017-12-01
Structural design must ensure suitable working conditions by satisfying safety and economy criteria. The optimal solution, however, is not easily found, because these conditions depend on the bodies' dimensions, material strength and structural system configuration. In this regard, topology optimization aims to achieve the optimal structural geometry, i.e. the shape that requires the minimum amount of material while respecting constraints on the stress state at each material point. The present study applies an evolutionary approach to determine the optimal geometry of 2D structures by coupling the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM: internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns remeshing: because the structural boundary moves at each iteration, the body's geometry changes and a new mesh has to be defined. The proposed algorithm, based on the direct coupling of these approaches, introduces internal cavities automatically during the optimization process according to the intensity of the von Mises stress. The developed optimization model was applied to two benchmarks available in the literature, with good agreement among the results, which demonstrates its efficiency and accuracy.
Online in-tube microextractor coupled with UV-Vis spectrophotometer for bisphenol A detection.
Poorahong, Sujittra; Thammakhet, Chongdee; Thavarungkul, Panote; Kanatharana, Proespichaya
2013-01-01
A simple online in-tube microextractor (ITME) with high extraction efficiency was developed for bisphenol A (BPA) detection in water samples. The ITME was fabricated by stepwise electrodeposition of a polyaniline, polyethylene glycol and polydimethylsiloxane composite (CPANI) inside a silico-steel tube, and was coupled with UV-Vis detection at 278 nm. With this method, the extraction and pre-concentration of BPA in water are carried out in a single step. Under optimum conditions, the system provided a linear dynamic range of 0.1 to 100 μM with a limit of detection of 20 nM (S/N ≥3). A single in-tube microextractor showed good stability over more than 60 consecutive injections of 10.0 μM BPA, with a relative standard deviation of less than 4%, and good tube-to-tube reproducibility and precision were obtained. The system was applied to detect BPA in water samples from six brands of baby bottles, and the results showed good agreement with those obtained from the conventional GC-MS method. Acceptable recoveries from the spiked water samples were obtained, ranging from 83-102% for the new method compared with 73-107% for the GC-MS standard method. This new in-tube CPANI microextractor provides excellent extraction efficiency and good reproducibility, and it can easily be applied to the analysis of other polar organic contaminants in water samples.
Mapping shorelines to subpixel accuracy using Landsat imagery
NASA Astrophysics Data System (ADS)
Abileah, Ron; Vignudelli, Stefano; Scozzari, Andrea
2013-04-01
A promising method to accurately map the shorelines of oceans, lakes, reservoirs, and rivers is proposed and verified in this work. The method is applied to multispectral satellite imagery in two stages. The first stage classifies each image pixel into land/water categories using the conventional 'dark pixel' method. The approach presented here uses a single shortwave IR (SWIR) image band, if available. SWIR has the least water-leaving radiance and relatively little sensitivity to water pollutants and suspended sediments; it is generally the darkest band over water and the most reliable single band for land-water discrimination. The boundary of the water cover map determined in stage 1 underestimates the water cover and often misses the true shoreline by up to one pixel. A more accurate shoreline is obtained by connecting the center points of pixels with an exactly 50-50 mix of water and land, and stage 2 finds these 50-50 mix points. The image data are interpolated and up-sampled to ten times the original resolution, and the local radiance gradient is used to find the direction to the shore; the algorithm then searches along that path for the interpolated pixel closest to a 50-50 mix. Landsat images with 30 m resolution, processed by this method, may thus provide the shoreline to an accuracy of 3 m. Compared with similar approaches in the literature, the proposed method discriminates sub-pixels crossed by the shoreline using a criterion based on the absolute value of the radiance rather than its gradient. Preliminary experimentation with the algorithm shows that 10 m accuracy is easily achieved and is often better than 5 m. The proposed method can be used to study long-term shoreline changes by exploiting the 30 years of archived, world-wide Landsat imagery, which is free and easily accessible for downloading.
Some applications that exploit the Landsat dataset and the new method are discussed in the companion poster: "Case-studies of potential applications for highly resolved shorelines."
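The stage-2 idea, up-sample the radiance by 10x and locate the sample closest to the 50-50 land/water mix, can be illustrated on a 1-D profile across the shoreline. This is a simplified sketch under a linear-mixing assumption (the 50-50 pixel has the midpoint radiance of pure water and pure land); the SWIR radiance values below are made up.

```python
import numpy as np

def subpixel_crossing(profile, water_level, land_level, upsample=10):
    """Locate the 50-50 land/water point along a 1-D radiance profile.

    profile: SWIR radiance sampled at pixel centers across the shoreline.
    The profile is linearly interpolated at `upsample` times the pixel
    resolution, and the sample closest to the midpoint radiance (a 50-50
    land/water mix under linear mixing) is returned in pixel units.
    """
    profile = np.asarray(profile, dtype=float)
    x = np.arange(len(profile))
    xf = np.linspace(0, len(profile) - 1, (len(profile) - 1) * upsample + 1)
    fine = np.interp(xf, x, profile)
    target = 0.5 * (water_level + land_level)   # 50-50 mix radiance
    return xf[np.argmin(np.abs(fine - target))]

# Hypothetical 30 m Landsat SWIR profile: dark water, bright land,
# and one mixed pixel at the transition
profile = [5.0, 5.0, 5.0, 30.0, 95.0, 95.0]
pos = subpixel_crossing(profile, water_level=5.0, land_level=95.0)
```

With a 30 m pixel, a position resolved to 0.1 pixel corresponds to the roughly 3 m shoreline accuracy claimed in the abstract; the full 2D method searches along the local gradient direction instead of a fixed row.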
2012-01-01
Background Existing methods for predicting protein solubility on overexpression in Escherichia coli improve performance by using ensemble classifiers, such as two-stage support vector machine (SVM) based classifiers, and many feature types, such as physicochemical properties and amino acid and dipeptide composition, accompanied by feature selection. A simple and easily interpretable method for predicting protein solubility is desirable, compared with the existing complex SVM-based methods. Results This study proposes a novel scoring card method (SCM) that uses dipeptide composition only to estimate solubility scores of sequences for predicting protein solubility. SCM calculates the propensity of each of the 400 dipeptides to be soluble using statistical discrimination between the soluble and insoluble proteins of a training data set. The propensity scores of all dipeptides are then optimized using an intelligent genetic algorithm. The solubility score of a sequence is the sum of the dipeptide propensity scores weighted by the sequence's dipeptide composition. To evaluate SCM by performance comparison, four data sets with different sizes and degrees of variation in experimental conditions were used. The results show that the simple SCM, with interpretable dipeptide propensities, performs promisingly compared with existing SVM-based ensemble methods using many feature types. Furthermore, the dipeptide propensities and sequence solubility scores provide insights into protein solubility; for example, analysis of the dipeptide scores shows a high propensity of α-helix structures and thermophilic proteins to be soluble. Conclusions The propensities of individual dipeptides to be soluble vary for proteins under different experimental conditions.
For accurate prediction of protein solubility using SCM, it is better to customize the scoring card of dipeptide propensities using a training data set obtained under the same experimental conditions. The proposed SCM, with its solubility scores and dipeptide propensities, can easily be applied to protein function prediction problems in which dipeptide composition features play an important role. Availability The data sets, source code of SCM, and supplementary files are available at http://iclab.life.nctu.edu.tw/SCM/. PMID:23282103
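The core of SCM, a dipeptide-composition-weighted sum of propensity scores, is simple enough to sketch directly. This is an illustrative version with a tiny hypothetical scoring card over a two-letter alphabet; a real card assigns GA-optimized propensities to all 400 dipeptides, as the abstract describes.

```python
def scm_score(seq, propensity):
    """Solubility score of a protein sequence under the scoring card method:
    the average propensity score over all overlapping dipeptides, which
    equals the dipeptide-composition-weighted sum of the card's scores.

    propensity: dict mapping dipeptides to propensity scores
    (hypothetical values here; real cards cover 400 dipeptides).
    """
    dipeptides = [seq[i:i + 2] for i in range(len(seq) - 1)]
    # Unknown dipeptides contribute 0 in this sketch
    return sum(propensity.get(dp, 0.0) for dp in dipeptides) / len(dipeptides)

# Hypothetical card: 'AA' strongly soluble, 'WW' strongly insoluble
card = {'AA': 0.9, 'AW': 0.5, 'WA': 0.5, 'WW': 0.1}
soluble_like = scm_score('AAAAA', card)     # every pair is 'AA'
insoluble_like = scm_score('WWWWW', card)   # every pair is 'WW'
```

A sequence is then predicted soluble if its score exceeds a threshold chosen on the training set, which is what makes the method interpretable: each dipeptide's contribution is visible directly on the card.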
Revitalizing Space Operations through Total Quality Management
NASA Technical Reports Server (NTRS)
Baylis, William T.
1995-01-01
The purpose of this paper is to show the reader what total quality management (TQM) is and how to apply TQM in the space systems and management arena. TQM is easily understood, can be implemented in any type of business organization, and works.
An integrated study of pervious concrete mixture design for wearing course applications.
DOT National Transportation Integrated Search
2011-07-01
This report presents the results of the largest and most comprehensive study to date on portland cement pervious concrete (PCPC). It is designed to be widely accessible and easily applied by designers, producers, contractors, and owners. : The projec...
Medical Information Management System
NASA Technical Reports Server (NTRS)
Alterescu, S.; Hipkins, K. R.; Friedman, C. A.
1979-01-01
On-line interactive information processing system easily and rapidly handles all aspects of data management related to patient care. General purpose system is flexible enough to be applied to other data management situations found in areas such as occupational safety data, judicial information, or personnel records.
Electromagnetic bonding of plastics to aluminum
NASA Technical Reports Server (NTRS)
Sheppard, A. T.; Silbert, L.
1980-01-01
Electromagnetic curing is used to bond strain gage to aluminum tensile bar. Electromagnetic energy heats only plastic/metal interface by means of skin effect, preventing degradation of heat-treated aluminum. Process can be easily applied to other metals joined by high-temperature-curing plastic adhesives.
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2001-04-01
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations Joseph W. Rudmin (Physics Dept, James Madison University) A new method for solving systems of differential equations will be presented, developed by J. Edgar Parker and James Sochacki of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The method yields high-degree solutions: 20th degree is easily obtainable. It is conceptually simple, fast, and extremely general. It has been applied to over a hundred systems of differential equations, some of which were previously unsolved, and has yet to fail on any system for which the Maclaurin series converges. The method is non-recursive: each coefficient in the series is calculated just once, in closed form, and its accuracy is limited only by the digital accuracy of the computer. Although the original differential equations may include any mathematical functions, the computational method includes ONLY the operations of addition, subtraction, and multiplication. Furthermore, it is perfectly suited to parallel-processing computer languages. Those who learn this method will never use Runge-Kutta or predictor-corrector methods again. Examples will be presented, including the classical many-body problem.
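A minimal sketch of the Parker-Sochacki idea for the scalar case y' = y**2, y(0) = 1, whose exact solution is 1/(1-t): each Maclaurin coefficient is produced once, in closed form, from a Cauchy product. (The division by n+1 could equally be a multiplication by a precomputed reciprocal, keeping only addition and multiplication.)

```python
def parker_sochacki_y_squared(degree, y0=1.0):
    """Maclaurin coefficients for y' = y**2 via Cauchy products:
    (n+1)*a[n+1] = sum_{k=0..n} a[k]*a[n-k].  Each coefficient is
    computed exactly once, non-recursively."""
    a = [y0]
    for n in range(degree):
        a.append(sum(a[k] * a[n - k] for k in range(n + 1)) / (n + 1))
    return a

def eval_series(a, t):
    """Horner evaluation of the truncated Maclaurin series at t."""
    s = 0.0
    for c in reversed(a):
        s = s * t + c
    return s

coeffs = parker_sochacki_y_squared(20)   # degree-20 solution
approx = eval_series(coeffs, 0.5)        # exact solution: 1/(1 - 0.5) = 2
```

For y(0) = 1 every coefficient is 1, so the degree-20 partial sum at t = 0.5 agrees with the exact value 2 to about six decimal places.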
Phage therapy dosing: The problem(s) with multiplicity of infection (MOI).
Abedon, Stephen T
2016-01-01
The concept of bacteriophage multiplicity of infection (MOI) - ratios of phages to bacteria - historically has been applied less easily than many phage workers would prefer or, perhaps, are aware. Here, toward clarification of the concept, I discuss multiplicity of infection in terms of semantics, history, mathematics, pharmacology, and actual practice. For phage therapy and other biocontrol purposes it is desirable, especially, not to rely solely on MOI to describe what phage quantities have been applied during dosing. Why? Bacterial densities can change between bacterial challenge and phage application, may not be easily determined immediately prior to phage dosing, and/or target bacterial populations may not be homogeneous with regard to phage access and are thereby inconsistent in terms of what MOI individual bacteria experience. Toward experimental reproducibility, and as practiced generally for antibacterial application, phage dosing should instead be described in terms of concentrations of formulations (phage titers) as well as volumes applied and, in many cases, absolute numbers of phages delivered. Such an approach typically will be far more desirable from a pharmacological perspective than solely indicating ratios of agents to bacteria. This essay was adapted, with permission, from an appendix of the 2011 monograph, Bacteriophages and Biofilms, Nova Science Publishers.
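The arithmetic behind the argument is simple; a small sketch with purely illustrative numbers shows how one absolute dose, fully specified by titer and volume, corresponds to very different MOIs as the bacterial density changes:

```python
def moi(phages, bacteria):
    """Multiplicity of infection: ratio of phages to bacteria."""
    return phages / bacteria

def phages_delivered(titer_pfu_per_ml, volume_ml):
    """Absolute phage dose from titer and applied volume."""
    return titer_pfu_per_ml * volume_ml

# Illustrative dosing: 0.1 ml of a 1e9 PFU/ml stock -> 1e8 phages delivered
dose = phages_delivered(1e9, 0.1)

# The same dose yields very different MOIs if bacteria grow between
# challenge and phage application:
moi_early = moi(dose, 1e7)   # 10 phages per bacterium
moi_late = moi(dose, 1e9)    # 0.1 phages per bacterium
```

The titer-and-volume description stays fixed while the MOI drifts two orders of magnitude, which is the reproducibility problem the essay highlights.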
Binding Isotherms and Time Courses Readily from Magnetic Resonance.
Xu, Jia; Van Doren, Steven R
2016-08-16
Evidence is presented that binding isotherms, simple or biphasic, can be extracted directly from uninterpreted, complex 2D NMR spectra using principal component analysis (PCA) to reveal the largest trend(s) across the series. This approach renders peak picking unnecessary for tracking population changes. In 1:1 binding, the first principal component captures the binding isotherm from NMR-detected titrations in fast, slow, and even intermediate and mixed exchange regimes, as illustrated for phospholigand associations with proteins. Although the sigmoidal shifts and line broadening of intermediate exchange distort binding isotherms constructed conventionally, applying PCA directly to these spectra, along with Pareto scaling, overcomes the distortion. Applying PCA to time-domain NMR data also yields binding isotherms from titrations in fast or slow exchange. The algorithm also readily extracts time courses such as breathing and heart rate from magnetic resonance imaging movies of the chest. Similarly, two-step binding processes detected by NMR are easily captured by principal components 1 and 2. PCA obviates the customary focus on specific peaks or regions of images. Applying it directly to a series of complex data will easily delineate binding isotherms, equilibrium shifts, and time courses of reactions or fluctuations.
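A toy illustration of the approach, with synthetic 1D "spectra" standing in for real 2D NMR data: PCA via SVD of the mean-centered titration series recovers a score that tracks the 1:1 bound fraction without any peak picking. All spectral shapes and constants here are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_titr = 200, 12

# Two synthetic "pure" spectra: free and bound states
grid = np.arange(n_points)
free = np.exp(-((grid - 60) / 10.0) ** 2)
bound = np.exp(-((grid - 130) / 10.0) ** 2)

# Bound fraction from a 1:1 binding isotherm: f = L / (Kd + L)
Kd = 5.0
ligand = np.linspace(0.0, 50.0, n_titr)
f = ligand / (Kd + ligand)

# Data matrix: one spectrum per titration point, plus a little noise
X = np.outer(1 - f, free) + np.outer(f, bound)
X += 0.01 * rng.standard_normal(X.shape)

# PCA via SVD of the mean-centered series; PC1 scores track the isotherm
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1_scores = U[:, 0] * S[0]

corr = np.corrcoef(pc1_scores, f)[0, 1]  # |corr| near 1 (sign is arbitrary)
```

Because the series varies along a single spectral direction (free minus bound), the first principal component captures essentially the entire binding isotherm.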
A novel torsional exciter for modal vibration testing of large rotating machinery
NASA Astrophysics Data System (ADS)
Sihler, Christof
2006-10-01
A novel exciter for applying a dynamic torsional force to a rotating structure is presented in this paper. It has been developed at IPP in order to perform vibration tests with shaft assemblies of large flywheel generators (synchronous machines). The electromagnetic exciter (shaker) needs no fixture to the rotating shaft because the torque is applied by means of the stator winding of an electrical machine. Therefore, the exciter can most easily be applied in cases where a three-phase electrical machine (a motor or generator) is part of the shaft assembly. The oscillating power for the shaker is generated in a separate current-controlled DC circuit with an inductor acting as a buffer storage of magnetic energy. An AC component with adjustable frequency is superimposed on the inductor current in order to generate pulsating torques acting on the rotating shaft with the desired waveform and frequency. Since this torsional exciter does not require an external power source, can easily be installed (without contact to the rotating structure) and provides dynamic torsional forces which are sufficient for multi-megawatt applications, it is best suited for on-site tests of large rotating machinery.
Stereolithography of perfluoropolyethers for the microfabrication of robust omniphobic surfaces
NASA Astrophysics Data System (ADS)
Credi, Caterina; Levi, Marinella; Turri, Stefano; Simeone, Giovanni
2017-05-01
In this work, we provide a simple and straightforward method for the fabrication of stable, highly hydrophobic and oleophobic surfaces by applying stereolithography (SL) to perfluoropolyethers (PFPEs). Inspired by the liquid repellency widely shown in nature, our approach makes it easy to mimic the interplay between chemistry and physics by microtexturing low-surface-tension PFPEs. To this end, UV-curable resins suitable for SL processing were formulated by blending multifunctional (meth-)acrylate PFPE oligomers with a photoinitiator and visible dyes whose content was tuned to tailor the resin SL sensitivities. Photocalorimetric studies were also performed to investigate the curing behavior of the different formulations upon SL light exposure. As this is the first example of stereolithography applied to PFPEs, the stereolithographic processability of the newly developed PFPE photopolymers was compared with a standard photoresist taken as benchmark (DL260®). Optimized formulations were characterized by reduced laser penetration depth (<75 μm) and small critical energies, thus enabling fast printing of micrometric structures. Arrays of cylindrical pillars (85 μm diameter, 400 μm height) with varied pillar spacing (200-350 μm) were rapidly printed with high fidelity, as attested by SEM examination. Contact angle measurements in static and dynamic conditions were performed to investigate the surface properties of textured samples using water and oil as the probing liquids. PFPE liquid-repellent performance was compared with that of DL260® textured surfaces arrayed by SL. High water contact angles coupled with low hysteresis showed that highly hydrophobic surfaces were successfully obtained, and the best-performing textured surfaces were also characterized by high oil repellency. Finally, this study demonstrated that omniphobic surfaces can be easily realized via a single-step, cost-effective, and time-saving process.
Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A
1995-06-01
PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to the wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal-distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
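The prevalence sensitivity of Cohen's kappa that motivates the model-based alternative is easy to demonstrate with two invented 2x2 rater tables that share the same 90% observed agreement:

```python
def cohens_kappa(table):
    """Cohen's kappa from a 2x2 agreement table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                      # observed agreement
    # chance agreement from the marginal totals
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (po - pe) / (1 - pe)

# Both tables have 90% observed agreement, but different prevalence:
balanced = [[45, 5], [5, 45]]   # 50/50 split of categories
skewed = [[85, 5], [5, 5]]      # one category dominates
k_bal = cohens_kappa(balanced)  # 0.8
k_skew = cohens_kappa(skewed)   # about 0.44
```

Identical raw agreement, very different kappa: this is the prevalence problem the proposed ordinal mixed-model measure is designed to avoid.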
User-friendly solutions for microarray quality control and pre-processing on ArrayAnalysis.org
Eijssen, Lars M. T.; Jaillard, Magali; Adriaens, Michiel E.; Gaj, Stan; de Groot, Philip J.; Müller, Michael; Evelo, Chris T.
2013-01-01
Quality control (QC) is crucial for any scientific method producing data. Applying adequate QC introduces new challenges in the genomics field, where large amounts of data are produced with complex technologies. For DNA microarrays, specific algorithms for QC and pre-processing, including normalization, have been developed by the scientific community, especially for expression chips of the Affymetrix platform. Many of these have been implemented in the statistical scripting language R and are available from the Bioconductor repository. However, application is hampered by the lack of integrative tools that can be used by users of any experience level. To fill this gap, we developed a freely available tool for QC and pre-processing of Affymetrix gene expression results, extending, integrating and harmonizing the functionality of Bioconductor packages. The tool can be easily accessed through a wizard-like web portal at http://www.arrayanalysis.org or downloaded for local use in R. The portal provides extensive documentation, including user guides, interpretation help with real output illustrations and detailed technical documentation. It assists newcomers to the field in performing state-of-the-art QC and pre-processing while offering data analysts an integral open-source package. Providing the scientific community with this easily accessible tool will help improve data quality, data reuse and the adoption of standards. PMID:23620278
Desbois, Nicolas; Pacquelet, Sandrine; Dubois, Adrien; Michelin, Clément; Gros, Claude P
2015-01-01
The Cu(I)-catalysed Huisgen cycloaddition, known as "click" reaction, has been applied to the synthesis of a range of triazole-linked porphyrin/corrole to DOTA/NOTA derivatives. Microwave irradiation significantly accelerates the reaction. The synthesis of heterobimetallic complexes was easily achieved in up to 60% isolated yield. Heterobimetallic complexes were easily prepared as potential MRI/PET (SPECT) bimodal contrast agents incorporating one metal (Mn, Gd) for the enhancement of contrast for MRI applications and one "cold" metal (Cu, Ga, In) for future radionuclear imaging applications. Preliminary relaxivity measurements showed that the reported complexes are promising contrast agents (CA) in MRI.
Desbois, Nicolas; Pacquelet, Sandrine; Dubois, Adrien; Michelin, Clément
2015-01-01
Summary The Cu(I)-catalysed Huisgen cycloaddition, known as “click” reaction, has been applied to the synthesis of a range of triazole-linked porphyrin/corrole to DOTA/NOTA derivatives. Microwave irradiation significantly accelerates the reaction. The synthesis of heterobimetallic complexes was easily achieved in up to 60% isolated yield. Heterobimetallic complexes were easily prepared as potential MRI/PET (SPECT) bimodal contrast agents incorporating one metal (Mn, Gd) for the enhancement of contrast for MRI applications and one “cold” metal (Cu, Ga, In) for future radionuclear imaging applications. Preliminary relaxivity measurements showed that the reported complexes are promising contrast agents (CA) in MRI. PMID:26664643
Anti-reflection coatings on large area glass sheets
NASA Technical Reports Server (NTRS)
Pastirik, E.
1980-01-01
Antireflective coatings which may be suitable for use on the covers of photovoltaic solar modules can be easily produced by a dipping process. The coatings are applied to glass by drawing sheets of glass vertically out of dilute aqueous sodium silicate solutions at a constant speed, allowing the adherent liquid film to dry, then exposing the dried film to concentrated sulfuric acid, followed by a water rinse and dry. The process produces coatings of good optical performance (96.7 percent peak transmission at 0.540 μm wavelength) combined with excellent stain and soil resistance, and good resistance to abrasion. The process is reproducible and easily controlled.
NASA Astrophysics Data System (ADS)
Cicone, Antonio; Zhou, Haomin; Piersanti, Mirko; Materassi, Massimo; Spogli, Luca
2017-04-01
Nonlinear and nonstationary signals are ubiquitous in real life. Their decomposition and analysis is of crucial importance in many research fields. Traditional techniques, like the Fourier and wavelet transforms, have proved to be limited in this context. In the last two decades, new kinds of nonlinear methods have been developed which are able to unravel hidden features of these kinds of signals. In this talk we will review the state of the art and present a new method, called Adaptive Local Iterative Filtering (ALIF). This method, developed originally to study mono-dimensional signals, unlike any other technique proposed so far, can be easily generalized to study two- or higher-dimensional signals. Furthermore, unlike most similar methods, it does not require any a priori assumption on the signal itself, so the method can be applied as it is to any kind of signal. Applications of the ALIF algorithm to real-life signal analysis will be presented, including the behavior of the water level near the coastline in the presence of a tsunami, the length-of-day signal, the temperature and pressure measured at ground level on a global grid, and the radio power scintillation from GNSS signals.
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects and for the application of the preconditioner. As an extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
A unified tensor level set for image segmentation.
Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2010-06-01
This paper presents a new region-based unified tensor level set model for image segmentation. This model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages over the traditional representative method, as follows. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering the local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on a weighted distance definition, the model possesses the capacity to cope with data varying from scalar to vector, then to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set method.
Determination of plasma volume in anaesthetized piglets using the carbon monoxide (CO) method.
Heltne, J K; Farstad, M; Lund, T; Koller, M E; Matre, K; Rynning, S E; Husby, P
2002-07-01
Based on measurements of the circulating red blood cell volume (V(RBC)) in seven anaesthetized piglets using carbon monoxide (CO) as a label, the plasma volume (PV) was calculated for each animal. The increase in carboxyhaemoglobin (COHb) concentration following administration of a known amount of CO into a closed-circuit re-breathing system was determined by diode-array spectrophotometry. Simultaneously measured haematocrit (HCT) and haemoglobin (Hb) values were used for the PV calculation. The PV values were compared with simultaneously measured PVs determined using the Evans blue technique. Mean values (SD) for PV were 1708.6 (287.3) ml and 1738.7 (412.4) ml with the CO method and the Evans blue technique, respectively. Comparison of PVs determined with the two techniques demonstrated good correlation (r = 0.995). The mean difference between PV measurements was -29.9 ml and the limits of agreement (mean difference +/-2SD) were -289.1 ml and 229.3 ml. In conclusion, the CO method can be applied easily under general anaesthesia and controlled ventilation with a simple administration system. The agreement between the compared methods was satisfactory. Plasma volume determination with the CO method is safe and accurate, with no signs of major side effects.
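The step from measured red-cell volume and haematocrit to plasma volume rests on the standard relation that total blood volume is V_RBC/Hct and plasma is the non-cell remainder. A sketch with illustrative numbers (not values from the study, and ignoring refinements such as the whole-body/venous haematocrit ratio):

```python
def plasma_volume(v_rbc_ml, hct):
    """Plasma volume from circulating red-cell volume and haematocrit:
    total blood volume = V_RBC / Hct, so PV = V_RBC * (1 - Hct) / Hct."""
    return v_rbc_ml * (1.0 - hct) / hct

# Illustrative numbers only: V_RBC = 600 ml, Hct = 0.26
pv = plasma_volume(600.0, 0.26)   # roughly 1708 ml
```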
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have gained a lot of interest, and developments in low-thrust propulsion technology make complex deep space exploration missions possible. A mission using a low-thrust electric propulsion system to depart from low-Earth orbit, rendezvous with a near-Earth asteroid and bring a sample back is investigated. The complex mission is divided into five segments that are solved separately, and different methods are used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption required to escape from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching moment and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize trajectories for the different segments, and they can be easily extended to other missions and more precise dynamic models.
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-03-27
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to form the feature vector. Since this vector is generally of high dimensionality, simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in the intelligent diagnosis of rotating machinery faults. The comparative analysis validates that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
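A compact toy version of the later stages of this pipeline (singular values as features, PCA reduction, KNN vote). For brevity, the learned dictionary is replaced here by a simple segment matrix built from the raw signal, and the signals and impulsive "fault" signature are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_signal(faulty, n=1000):
    """Toy vibration signal; the 'fault' adds periodic impacts."""
    t = np.arange(n) / 1000.0
    x = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(n)
    if faulty:
        x[::50] += 5.0            # impulsive fault signature
    return x

def svd_features(x, n_seg=10):
    """Singular-value sequence of the segment matrix as the feature vector."""
    mat = x[: len(x) // n_seg * n_seg].reshape(n_seg, -1)
    return np.linalg.svd(mat, compute_uv=False)

# Training set: 10 healthy + 10 faulty signals
X = np.array([svd_features(make_signal(lbl)) for lbl in [0] * 10 + [1] * 10])
y = np.array([0] * 10 + [1] * 10)

# PCA: project the feature vectors onto the top-2 principal axes
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

def knn_predict(x_feat, k=3):
    """Plain k-nearest-neighbour vote in the reduced feature space."""
    z = (x_feat - X.mean(axis=0)) @ Vt[:2].T
    idx = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
    votes = list(y[idx])
    return max(set(votes), key=votes.count)

pred = knn_predict(svd_features(make_signal(1)))   # expected: class 1
```

The impulsive fault adds a second strong direction to the segment matrix, so its trailing singular values grow sharply, which is what makes the two classes separable in feature space.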
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. This method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
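A schematic of the described chain (noisy IC curve, Gaussian smoothing, FOI peak position, linear regression to capacity) on synthetic data. The Gaussian filter is hand-rolled, and all curve shapes, drift rates and capacities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
v = np.linspace(3.4, 4.2, 400)                  # voltage grid [V]

def gaussian_smooth(x, sigma):
    """Gaussian filter as a normalized convolution (sigma in samples)."""
    m = int(4 * sigma)
    k = np.exp(-0.5 * (np.arange(-m, m + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, m, mode="edge"), k, mode="valid")

def ic_curve(peak_v):
    """Toy IC curve: one ageing-sensitive peak plus measurement noise."""
    return 40.0 * np.exp(-((v - peak_v) / 0.05) ** 2) \
        + 2.0 * rng.standard_normal(v.size)

# Simulated ageing: the FOI (peak position) drifts linearly as capacity fades
capacities = np.linspace(20.0, 16.0, 8)         # Ah
peak_true = 3.7 + 0.02 * (20.0 - capacities)    # V

# Smooth each noisy curve, then locate the FOI as the peak position
peaks = [v[np.argmax(gaussian_smooth(ic_curve(pv), sigma=5))]
         for pv in peak_true]

# Linear regression between FOI position and capacity (the SoH proxy)
slope, intercept = np.polyfit(peaks, capacities, 1)
soh_estimate = slope * peaks[3] + intercept     # close to capacities[3]
```

Smoothing is what makes the peak localization stable: on the raw noisy curve the argmax jumps around, while on the filtered curve the FOI position degrades linearly with capacity, exactly the relationship the regression exploits.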
TBDQ: A Pragmatic Task-Based Method to Data Quality Assessment and Improvement
Vaziri, Reza; Mohsenzadeh, Mehran; Habibi, Jafar
2016-01-01
Organizations are increasingly accepting data quality (DQ) as a major key to their success. In order to assess and improve DQ, methods have been devised. Many of these methods attempt to raise DQ by directly manipulating low quality data. Such methods operate reactively and are suitable for organizations with highly developed integrated systems. However, there is a lack of a proactive DQ method for businesses with weak IT infrastructure where data quality is largely affected by tasks that are performed by human agents. This study aims to develop and evaluate a new method for structured data, which is simple and practical so that it can easily be applied to real world situations. The new method detects the potentially risky tasks within a process, and adds new improving tasks to counter them. To achieve continuous improvement, an award system is also developed to help with the better selection of the proposed improving tasks. The task-based DQ method (TBDQ) is most appropriate for small and medium organizations, and simplicity in implementation is one of its most prominent features. TBDQ is case studied in an international trade company. The case study shows that TBDQ is effective in selecting optimal activities for DQ improvement in terms of cost and improvement. PMID:27192547
Predictive failure analysis: planning for the worst so that it never happens!
Hipple, Jack
2008-01-01
This article reviews an alternative approach to failure analysis involving a deliberate "saboteur" approach rather than a checklist approach to disaster and emergency preparedness. The process takes the form of an algorithm that is easily applied to any planning situation.
Building Effective Afterschool Programs.
ERIC Educational Resources Information Center
Fashola, Olatokunbo S.
Through a comprehensive review of various afterschool programs across the United States, this resource provides a practical overview of the research and best practices that can be easily adapted and applied in the development of highly effective afterschool programs. Chapters focus on: (1) "Why Afterschool Programs?" (benefits, challenges, and…
An easily regenerable enzyme reactor prepared from polymerized high internal phase emulsions.
Ruan, Guihua; Wu, Zhenwei; Huang, Yipeng; Wei, Meiping; Su, Rihui; Du, Fuyou
2016-04-22
A large-scale, high-efficiency enzyme reactor based on a polymerized high internal phase emulsion monolith (polyHIPE) was prepared. First, a porous cross-linked polyHIPE monolith was prepared by in-situ thermal polymerization of a high internal phase emulsion containing styrene, divinylbenzene and polyglutaraldehyde. The enzyme TPCK-trypsin was then immobilized on the monolithic polyHIPE. The performance of the resultant enzyme reactor was assessed according to its ability to convert Nα-benzoyl-l-arginine ethyl ester to Nα-benzoyl-l-arginine, and the protein digestibility of bovine serum albumin (BSA) and cytochrome c (Cyt-C). The results showed that the prepared enzyme reactor exhibited high enzyme immobilization efficiency and fast, easily controlled protein digestion. BSA and Cyt-C could be digested in 10 min with sequence coverages of 59% and 78%, respectively. The peptides and residual protein could be easily rinsed out of the reactor, and the reactor could be regenerated easily with 4 M HCl without any structural destruction. Its multiple interconnected chambers, good permeability, fast digestion and easy regenerability indicate that the polyHIPE enzyme reactor is well suited for potential application in proteomics and catalysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Kawamoto, Hiroshi; Masuda, Kyoko; Nagano, Seiji; Maeda, Takuya
2018-03-01
Recent advances in adoptive immunotherapy using cytotoxic T lymphocytes (CTLs) have led to moderate therapeutic anti-cancer effects in clinical trials. However, a critical issue, namely that CTLs collected from patients are easily exhausted during expansion culture, has yet to be solved. To address this issue, we have been developing a strategy which utilizes induced pluripotent stem cell (iPSC) technology. This strategy is based on the idea that when iPSCs are produced from antigen-specific CTLs, CTLs regenerated from such iPSCs should show the same antigen specificity as the original CTLs. Pursuing this idea, we previously succeeded in regenerating melanoma antigen MART1-specific CTLs, and more recently in producing potent CTLs expressing CD8αβ heterodimer. We are now developing a novel method by which non-T derived iPSCs are transduced with exogenous T cell receptor genes. If this method is applied to Human Leukocyte Antigen (HLA) haplotype-homozygous iPSC stock, it will be possible to prepare "off-the-shelf" T cells. As a first-in-human trial, we are planning to apply our strategy to relapsed acute myeloid leukemia patients by targeting the WT1 antigen.
Skołucka-Szary, Karolina; Ramięga, Aleksandra; Piaskowska, Wanda; Janicki, Bartosz; Grala, Magdalena; Rieske, Piotr; Bartczak, Zbigniew; Piaskowski, Sylwester
2016-03-01
Chitin dihexanoate (DHCH) is a novel, biocompatible and technologically friendly, highly substituted chitin diester. Here we describe the optimization of the synthesis conditions (temperature and reaction time) for DHCH and chitin dibutyrate (dibutyryl chitin, DBC) to obtain the desired polymers with high reaction yield, high substitution degree (close to 2) and appropriately high molecular weights. A two-step procedure, employing acidic anhydrides (hexanoic or butyric anhydride) as the acylation agent and methanesulfonic acid both as the catalyst and the reaction medium, was applied. The chemical structures of DBC and DHCH were confirmed by NMR ((1)H and (13)C) and IR investigations. Mechanical properties, thermogravimetric analysis, differential scanning calorimetry and biocompatibility (neutral red uptake assay, skin sensitization and irritation tests) were assessed. Both polymers proved highly biocompatible (non-cytotoxic in vitro, non-irritating and non-allergenic to skin) and soluble in several organic solvents (dimethylformamide, N,N-dimethylacetamide, dimethyl sulfoxide, acetone, ethanol and others). It is worth emphasizing that DHCH and DBC can be easily processed by the solvent casting and salt-leaching methods, which makes it possible to prepare highly porous structures that can be successfully applied as materials for wound dressings and scaffolds for tissue engineering. Copyright © 2015 Elsevier B.V. All rights reserved.
Huang, Chaozhang; Hu, Bin
2008-03-01
A new method was developed for the speciation of inorganic tellurium species in seawater by inductively coupled plasma-MS (ICP-MS) following selective magnetic SPE (MSPE) separation. Within the pH range of 2-9, tellurite (Te(IV)) could be quantitatively adsorbed on gamma-mercaptopropyltrimethoxysilane (gamma-MPTMS)-modified silica-coated magnetic nanoparticles (MNPs), while tellurate (Te(VI)) was not retained and remained in solution. Without filtration or centrifugation, the tellurite-loaded MNPs could be separated easily from the aqueous solution by simply applying an external magnetic field. The Te(IV) adsorbed on the MNPs could be recovered quantitatively using a solution containing 2 mol/L HCl and 0.03 mol/L K2Cr2O7. Te(VI) was reduced to Te(IV) by L-cysteine prior to the determination of total tellurium, and its assay was based on subtracting Te(IV) from total tellurium. The parameters affecting the separation were investigated systematically and the optimal separation conditions were established. Under the optimal conditions, the LOD obtained for Te(IV) was 0.079 ng/L, while the precision was 7.0% (C = 10 ng/L, n = 7). The proposed method was successfully applied to the speciation of inorganic tellurium in seawater.