Neural network based automatic limit prediction and avoidance system and method
NASA Technical Reports Server (NTRS)
Calise, Anthony J. (Inventor); Prasad, Jonnalagadda V. R. (Inventor); Horn, Joseph F. (Inventor)
2001-01-01
A method for performance envelope boundary cueing for a vehicle control system comprises the steps of formulating a prediction system for a neural network and training the neural network to predict values of limited parameters as a function of current control positions and current vehicle operating conditions. The method further comprises the steps of applying the neural network to the control system of the vehicle, where the vehicle has capability for measuring current control positions and current vehicle operating conditions. The neural network generates a map of current control positions and vehicle operating conditions versus the limited parameters in a pre-determined vehicle operating condition. The method estimates the critical control deflections from the current control positions required to drive the vehicle to a performance envelope boundary. Finally, the method comprises the steps of communicating the critical control deflections to the vehicle control system and driving the vehicle control system to provide a tactile cue to an operator of the vehicle as the control positions approach the critical control deflections.
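A minimal Python sketch of the prediction idea, assuming a scikit-learn MLP as the predictor; the feature set (stick position, airspeed, altitude), the surrogate load-factor model, and the 3.5 g limit are illustrative assumptions, not details from the patent.

```python
# Sketch: train a network to map (control position, operating conditions) to a
# limited parameter, then sweep deflections to find the critical one.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical training data: [stick_position, airspeed, altitude] -> load factor
X = rng.uniform([-1.0, 50.0, 0.0], [1.0, 150.0, 3000.0], size=(5000, 3))
y = 1.0 + 2.5 * X[:, 0] * (X[:, 1] / 100.0) ** 2     # surrogate flight model

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)

def critical_deflection(state, limit=3.5, grid=np.linspace(-1, 1, 201)):
    """Smallest deflection that drives the predicted parameter to its limit."""
    candidates = np.column_stack([grid,
                                  np.full_like(grid, state[1]),
                                  np.full_like(grid, state[2])])
    pred = net.predict(candidates)
    over = grid[pred >= limit]
    return over.min() if over.size else None   # None: limit unreachable from here

print(critical_deflection((0.0, 120.0, 1000.0)))   # deflection to cue against
```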
Comparison of Several Methods for Determining the Internal Resistance of Lithium Ion Cells
Schweiger, Hans-Georg; Obeidi, Ossama; Komesker, Oliver; Raschke, André; Schiemann, Michael; Zehner, Christian; Gehnen, Markus; Keller, Michael; Birke, Peter
2010-01-01
The internal resistance is the key parameter for determining the power, energy efficiency and heat losses of a lithium ion cell. Precise knowledge of this value is vital for designing battery systems for automotive applications. The internal resistance of a cell was determined by current step methods, AC (alternating current) methods, electrochemical impedance spectroscopy and thermal loss methods, and the outcomes of these measurements were compared with each other. If the charge or discharge of the cell is limited, current step methods provide the same results as energy loss methods. PMID:22219678
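As a worked illustration of the current step method, the DC internal resistance follows from the voltage jump across a load step, R = ΔV/ΔI. The sampling rate, averaging window and 5 mΩ cell below are assumed values, not data from the paper.

```python
# Current step method: R = (mean V after step - mean V before) / (ΔI).
import numpy as np

def internal_resistance(t, v, i, t_step, window=0.5):
    before = (t > t_step - window) & (t < t_step)
    after = (t > t_step) & (t < t_step + window)
    dv = v[after].mean() - v[before].mean()
    di = i[after].mean() - i[before].mean()
    return dv / di

# Example: 10 A discharge step at t = 0.5 s on an ideal 5 mOhm cell
t = np.linspace(0, 1, 1000)
i = np.where(t < 0.5, 0.0, -10.0)
v = 3.7 + 0.005 * i
print(internal_resistance(t, v, i, 0.5))   # ~0.005 Ohm
```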
Method of Simulating Flow-Through Area of a Pressure Regulator
NASA Technical Reports Server (NTRS)
Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)
2011-01-01
The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
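A minimal sketch of this relaxation loop, with an assumed linear downstream-pressure model and a secant-style projection standing in for the patented non-linear projection function.

```python
# Relaxation update: A(next) = A(cur) + rate * (A(projected) - A(cur)),
# iterated until the downstream pressure reaches the target.
def simulate_regulator(a0, p_target, project, pressure, rate=0.3,
                       tol=1e-3, max_steps=1000):
    a_prev = a_cur = a0
    p_prev = p_cur = pressure(a0)
    for _ in range(max_steps):
        if abs(p_cur - p_target) <= tol:
            break
        a_proj = project(p_target, a_cur, a_prev, p_cur, p_prev)  # projection step
        a_prev, a_cur = a_cur, a_cur + rate * (a_proj - a_cur)    # rate-limited move
        p_prev, p_cur = p_cur, pressure(a_cur)
    return a_cur

# Toy stand-ins: downstream pressure rises linearly with area; secant projection.
pressure = lambda a: 100.0 * a
def project(p_t, a, a_p, p, p_p):
    dpda = (p - p_p) / (a - a_p) if a != a_p else 100.0
    return a + (p_t - p) / dpda

print(simulate_regulator(0.1, 55.0, project, pressure))   # ~0.55
```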
A Selection Method That Succeeds!
ERIC Educational Resources Information Center
Weitman, Catheryn J.
Provided a structured selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-09
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents, which are solved with the auxiliary differential equation (ADE) method and then incorporated into the one-step leapfrog ADI-FDTD update. The final equations are presented in a form similar to that of the conventional FDTD method but with a second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) the transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polariton (SPP) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and results obtained with other methods.
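For orientation, the sketch below shows the equivalent-polarization-current (ADE) idea in its simplest setting: a conventional explicit 1D leapfrog FDTD with a Drude slab. It is not the paper's one-step leapfrog ADI scheme, and the grid, material and source parameters are arbitrary.

```python
# 1D FDTD with an ADE polarization current J for a Drude medium:
#   dJ/dt + gamma*J = eps0*wp^2*E  ->  J update folded into the E update.
import numpy as np

nz, nt = 400, 1000
c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
dz = 1e-3
dt = 0.5 * dz / c0                       # CFL-limited step (explicit scheme)
wp, gamma = 2 * np.pi * 50e9, 1e10       # Drude plasma and collision frequencies

E = np.zeros(nz); H = np.zeros(nz - 1); J = np.zeros(nz)
drude = slice(250, 350)                  # dispersive slab
a = (1 - gamma * dt / 2) / (1 + gamma * dt / 2)
b = eps0 * wp**2 * dt / (1 + gamma * dt / 2)

for n in range(nt):
    H += dt / mu0 * (E[1:] - E[:-1]) / dz                # leapfrog H update
    J[drude] = a * J[drude] + b * E[drude]               # ADE polarization current
    E[1:-1] += dt / eps0 * ((H[1:] - H[:-1]) / dz - J[1:-1])
    E[50] += np.exp(-((n - 100) / 30.0) ** 2)            # soft Gaussian source
```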
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement of the previous scheme that the particle collision time be less than the time step for the BGK Navier-Stokes solution to be valid is removed. The applicable regime of the current method is therefore much enlarged, and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, valid only under that limiting condition. This paper also presents the appropriate implementation of boundary conditions for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
NASA Astrophysics Data System (ADS)
Yoneda, Makoto; Dohmeki, Hideo
A position control system with large torque, low vibration, and high resolution can be obtained by applying constant-current microstep drive to a hybrid stepping motor. Its losses are large, however, because the current is held constant regardless of the load torque. Sensorless control, as used for permanent magnet motors, is one technique for realizing a high-efficiency position control system, but the sensorless control methods proposed so far regulate speed rather than position. This paper therefore proposes switching between the microstep drive and the sensorless drive. The switchover was verified by simulation and experiment. At no load, setting the electrical angle and zero-resetting the integrator produced no large speed change at the switchover; under load, a large speed change arose. The proposed system avoids this speed change by initializing the integrator with the estimated value. With this technique, a low-loss position control system that retains the advantages of the hybrid stepping motor has been built.
Vail, W.B. III.
1991-12-24
Methods of operation are described for an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well. 6 figures.
Vail, III, William B.
1991-01-01
Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report describes the current state of development of methods for calculating optimal orbital transfers with large numbers of burns. The first is the homotopy-motivated, so-called direction-correction method. So far this method has been partially tested with one solver; the final step has yet to be implemented. The second is the patched transfer method, which is rooted in simplifying approximations made to the original optimal control problem. The transfer is broken up into single-burn segments; each single burn is solved as a predictor step, and the whole problem is then solved with a corrector step.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root; in these regions, however, the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations; the current method, which does not require the solution of linear equations, instead requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
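A hedged sketch of the fallback behavior described above: steepest descent on g(x) = ½‖F(x)‖² with a backtracking (adaptive) step size. The eigenvector-based acceleration near the root is not reproduced here, and the test system is illustrative.

```python
# Steepest descent with adaptive (backtracking) step size for F(x) = 0.
import numpy as np

def solve(F, J, x, tol=1e-10, max_iter=500):
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        grad = J(x).T @ r                  # gradient of 0.5*||F||^2
        g0 = 0.5 * r @ r
        step = 1.0
        while 0.5 * np.sum(F(x - step * grad) ** 2) >= g0 and step > 1e-12:
            step *= 0.5                    # adaptive step: backtrack until descent
        x = x - step * grad
    return x

F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(solve(F, J, np.array([5.0, -3.0])))   # converges toward a root, near [sqrt(2), sqrt(2)]
```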
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive calculation of a cost function. To regulate the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses only non-zero voltage vectors as finite control resources, excluding the zero voltage vectors, which produce CMVs in the VSI of up to ±Vdc/2. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating a cost function. The two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as closely as possible. Thus, low CMV, rapid current-following capability and satisfactory output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
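A rough sketch of direct two-vector selection without a cost-function sweep, under assumed load parameters (L, Ts, Vdc) and a deadbeat reference-voltage calculation; this is one interpretation of the scheme, not the authors' implementation.

```python
# Pick the two active vectors adjacent in phase to a deadbeat reference voltage
# and split the sampling period between them (zero vectors excluded).
import numpy as np

Vdc, L, Ts = 400.0, 5e-3, 1e-4
active = 2 / 3 * Vdc * np.exp(1j * np.pi / 3 * np.arange(6))   # V1..V6

def select_vectors(i_ref_next, i_now, e_back):
    v_ref = L * (i_ref_next - i_now) / Ts + e_back     # deadbeat reference voltage
    ang = np.angle(v_ref) % (2 * np.pi)
    k = int(ang // (np.pi / 3))                        # sector index 0..5
    v_a, v_b = active[k], active[(k + 1) % 6]
    # Solve d_a*v_a + d_b*v_b = v_ref as a real 2x2 system, then saturate
    A = np.array([[v_a.real, v_b.real], [v_a.imag, v_b.imag]])
    d = np.linalg.solve(A, [v_ref.real, v_ref.imag])
    d = np.clip(d, 0.0, 1.0)
    if d.sum() > 1.0:
        d /= d.sum()                                   # full-step saturation
    return (v_a, v_b), d

vecs, duties = select_vectors(10 + 0j, 9 + 0.5j, 150 * np.exp(1j * 0.2))
print(duties)
```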
Method for fabricating carbon/lithium-ion electrode for rechargeable lithium cell
NASA Technical Reports Server (NTRS)
Attia, Alan I. (Inventor); Halpert, Gerald (Inventor); Huang, Chen-Kuo (Inventor); Surampudi, Subbarao (Inventor)
1995-01-01
The method includes steps for forming a carbon electrode composed of graphitic carbon particles adhered by an ethylene propylene diene monomer binder. An effective binder composition is disclosed for achieving a carbon electrode capable of subsequent intercalation by lithium ions. The method also includes steps for reacting the carbon electrode with lithium ions to incorporate lithium ions into the graphitic carbon particles of the electrode. An electrical current is repeatedly applied to the carbon electrode to initially cause a surface reaction between the lithium ions and the carbon and subsequently cause intercalation of the lithium ions into the crystalline layers of the graphitic carbon particles. With repeated application of the electrical current, intercalation is achieved to near the theoretical maximum. Two differing multi-stage intercalation processes are disclosed. In the first, a fixed current is repeatedly applied. In the second, a high current is initially applied, followed by a single subsequent lower-current stage. The resulting carbon/lithium-ion electrodes are well suited for use as anodes in reversible, ambient-temperature lithium cells.
A novel frequency analysis method for assessing K(ir)2.1 and Na (v)1.5 currents.
Rigby, J R; Poelzing, S
2012-04-01
Voltage clamping is an important tool for measuring individual currents from an electrically active cell. However, it is difficult to isolate individual currents without pharmacological or voltage inhibition. Herein, we present a technique that involves inserting a noise function into a standard voltage step protocol, which allows one to characterize the unique frequency response of an ion channel at different step potentials. Specifically, we compute the fast Fourier transform for a family of current traces at different step potentials for the inward rectifying potassium channel, K(ir)2.1, and the channel encoding the cardiac fast sodium current, Na(v)1.5. Each individual frequency magnitude, as a function of voltage step, is correlated to the peak current produced by each channel. The correlation coefficient vs. frequency relationship reveals that these two channels are associated with some unique frequencies with high absolute correlation. The individual IV relationship can then be recreated using only the unique frequencies with magnitudes of high absolute correlation. Thus, this study demonstrates that ion channels may exhibit unique frequency responses.
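A minimal sketch of the analysis pipeline on synthetic data: FFT each current trace in the family of steps, then correlate each frequency bin's magnitude with the peak current across step potentials. The toy current model is an assumption for illustration, not the recorded channel data.

```python
# Frequency-by-frequency correlation of FFT magnitude with peak current.
import numpy as np

def frequency_correlation(traces, fs):
    """traces: (n_steps, n_samples) currents, one row per step potential."""
    mags = np.abs(np.fft.rfft(traces, axis=1))       # magnitude per frequency bin
    peaks = traces.max(axis=1)                       # peak current per step
    r = np.array([np.corrcoef(mags[:, k], peaks)[0, 1]
                  for k in range(mags.shape[1])])
    freqs = np.fft.rfftfreq(traces.shape[1], d=1 / fs)
    return freqs, r                                  # correlation vs. frequency

# Synthetic family: exponential activation whose amplitude scales with the step
steps = np.linspace(-60e-3, 40e-3, 11)               # step potentials (V)
t = np.arange(2048) / 10e3
traces = np.outer(steps * 50e-9, 1 - np.exp(-t / 2e-3))   # toy currents
freqs, r = frequency_correlation(traces, fs=10e3)
```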
USDA-ARS?s Scientific Manuscript database
The current methods of euthanizing neonatal piglets are raising concerns from the public and scientists. Our experiment tests the use of a two-step euthanasia method using nitrous oxide (N2O) for six minutes and then carbon dioxide (CO2) as a more humane way to euthanize piglets compared to just usi...
ERIC Educational Resources Information Center
Ipek, Hava; Calik, Muammer
2008-01-01
Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…
Over-current carrying characteristics of rectangular-shaped YBCO thin films prepared by MOD method
NASA Astrophysics Data System (ADS)
Hotta, N.; Yokomizu, Y.; Iioka, D.; Matsumura, T.; Kumagai, T.; Yamasaki, H.; Shibuya, M.; Nitta, T.
2008-02-01
A fault current limiter (FCL) may be manufactured with competitive quality and price by using rectangular-shaped YBCO films prepared by the metal-organic deposition (MOD) method, because the MOD method can produce large-size elements with a low-cost, non-vacuum technique. Prior to constructing a superconducting FCL (SFCL), AC over-current carrying experiments were conducted on 120 mm long elements in which YBCO thin film about 200 nm thick was coated on a sapphire substrate with a cerium oxide (CeO2) interlayer. In the experiments, only a single cycle of a damped 50 Hz AC current was applied to the bare YBCO element, without protective metal coating or parallel resistor, and the magnitude of the current was increased step by step until breakdown occurred in the element. In each experiment, the current waveform through the YBCO element and the voltage waveform across it were measured to obtain the voltage-current characteristics. The allowable over-current and generated voltage were successfully estimated for the bare YBCO films. A lower n-value tends to bring about a higher allowable over-current and a higher withstand voltage of more than tens of volts; a YBCO film with a higher n-value is more sensitive to over-current. Thus, protective measures such as a metal coating should be employed in fault current limiter applications.
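The n-value sensitivity can be illustrated with the standard power-law E-J model, E = Ec(I/Ic)^n; the parameters below are generic, not the measured film values.

```python
# Power-law V-I model for a superconducting element: V = Ec*length*(I/Ic)^n.
Ec, Ic, length = 1e-4, 20.0, 0.12     # V/m criterion, critical current (A), length (m)

def element_voltage(i, n):
    return Ec * length * (i / Ic) ** n

for n in (10, 30):
    i = 1.5 * Ic                       # 50% over-current
    print(f"n = {n:2d}: V = {element_voltage(i, n):.3e} V")
# The low-n element develops far less voltage (and heat) at the same
# over-current, consistent with its higher allowable over-current.
```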
Development and current applications of assisted fertilization.
Palermo, Gianpiero D; Neri, Queenie V; Monahan, Devin; Kocent, Justin; Rosenwaks, Zev
2012-02-01
Since the very early establishment of in vitro insemination, it became clear that one of the limiting steps is the achievement of fertilization. Among the different assisted fertilization methods, intracytoplasmic sperm injection emerged as the ultimate technique to allow fertilization with ejaculated, epididymal, and testicular spermatozoa. This work describes the early steps that brought forth the development of intracytoplasmic sperm injection and its role in assisted reproductive techniques. The current methods to select the preferential male gamete will be elucidated and the concerns related to the offspring of severe male factor couples will be discussed. Copyright © 2012. Published by Elsevier Inc.
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
Guthold, Regina; Cowan, Melanie; Savin, Stefan; Bhatti, Lubna; Armstrong, Timothy; Bonita, Ruth
2016-01-01
Objectives. We sought to outline the framework and methods used by the World Health Organization (WHO) STEPwise approach to noncommunicable disease (NCD) surveillance (STEPS), describe the development and current status, and discuss strengths, limitations, and future directions of STEPS surveillance. Methods. STEPS is a WHO-developed, standardized but flexible framework for countries to monitor the main NCD risk factors through questionnaire assessment and physical and biochemical measurements. It is coordinated by national authorities of the implementing country. The STEPS surveys are generally household-based and interviewer-administered, with scientifically selected samples of around 5000 participants. Results. To date, 122 countries across all 6 WHO regions have completed data collection for STEPS or STEPS-aligned surveys. Conclusions. STEPS data are being used to inform NCD policies and track risk-factor trends. Future priorities include strengthening these linkages from data to action on NCDs at the country level, and continuing to develop STEPS’ capacities to enable a regular and continuous cycle of risk-factor surveillance worldwide. PMID:26696288
Next generation system modeling of NTR systems
NASA Technical Reports Server (NTRS)
Buksa, John J.; Rider, William J.
1993-01-01
The topics are presented in viewgraph form and include the following: nuclear thermal rocket (NTR) modeling challenges; current approaches; shortcomings of current analysis methods; future needs; and present steps toward these goals.
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities in the matrices. Next, we regard the mobility model of an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former computes the probabilities of arriving at target locations along simple paths, which include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm are verified. PMID:27508502
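A minimal sketch of the matrix machinery: row-normalize single-step rule counts into a transition matrix and raise it to the power n for n-step probabilities (the stationary-process assumption made above). The counts are placeholders.

```python
# n-step transition probabilities from single-step rule frequencies.
import numpy as np

def n_step_matrix(counts, n):
    """counts[i, j]: frequency of single-step rule (location i -> location j)."""
    P = counts / counts.sum(axis=1, keepdims=True)   # row-normalized probabilities
    return np.linalg.matrix_power(P, n)              # n-step transition matrix

counts = np.array([[0, 8, 2],
                   [5, 0, 5],
                   [1, 9, 0]], dtype=float)
P3 = n_step_matrix(counts, 3)
print(P3[0])   # rough prediction: 3-step arrival probabilities from location 0
```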
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In current submillimeter and millimeter cosmology experiments, the focal planes are populated by kilopixel arrays of transition edge sensors (TESes). Varying incoming power loads require frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays, and the resulting transient heating of the bath, reduce the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. It exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current, together with knowledge of the shape of the detector transition R(T,I). The method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method: the estimate of the TES operating point as a percentage of its normal resistance (%Rn).
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
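A toy version of the stochastic-sampling leg of such a comparison, with an assumed two-parameter cross-section covariance and a linear surrogate standing in for the core simulator.

```python
# Stochastic sampling: draw correlated cross-section perturbations, propagate
# through a (surrogate) model, report the output uncertainty.
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([1.0, 0.5])                        # nominal cross sections (arbitrary)
cov = np.array([[0.02**2, 0.5 * 0.02 * 0.01],
                [0.5 * 0.02 * 0.01, 0.01**2]])     # assumed covariance

def k_eff(xs):                                     # surrogate for the core simulator
    return 1.0 + 0.3 * (xs[0] - 1.0) - 0.2 * (xs[1] - 0.5)

samples = rng.multivariate_normal(mean, cov, size=2000)
k = np.apply_along_axis(k_eff, 1, samples)
print(f"k-eff = {k.mean():.5f} +/- {k.std(ddof=1):.5f}")
```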
Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je
2014-09-07
Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. Measurement noise in Bz complicates recovery of magnetic flux density maps, resulting in lower quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density (∇Bz). The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step is to determine a representative magnetic flux density gradient from the multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz. The third step is to remove a random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model with included regions of low MRI signal-to-noise ratio ('defects') verified the proposed method. Experimental results from a real phantom containing three different kinds of anomalies demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. The quality of the conductivity maps recovered using the denoised ∇Bz data showed that the proposed method reduced the conductivity noise level by a factor of 3-4 in each anomaly region in comparison with the conventional method.
Fault current limiter with shield and adjacent cores
Darmann, Francis Anthony; Moriconi, Franco; Hodge, Eoin Patrick
2013-10-22
In a fault current limiter (FCL) of a saturated core type having at least one coil wound around a high permeability material, a method of suppressing the time derivative of the fault current at the zero current point includes the following step: utilizing an electromagnetic screen or shield around the AC coil to suppress the time derivative current levels during zero current conditions.
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star centroid plays a vital role in celestial navigation. Star images acquired during the daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in that background, which makes centroid localization difficult. Traditional methods such as the moment method and weighted centroid calculation are simple but have large errors, especially at low SNR; Gaussian fitting has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area, then applies interpolation within that narrow area to subdivide pixels. Exploiting the symmetry of the stellar energy distribution, it tentatively assumes the current pixel is the centroid and computes the difference between the sums of energy in symmetric directions (here, the transverse and longitudinal directions) over an equal step length (here, 9); the centroid position in each direction is the one at which the minimum difference appears. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it calculates centroids well under low-SNR conditions. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
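A hedged sketch of the symmetric energy-difference search on a pixel grid (the interpolation-based sub-pixel refinement is omitted); the window follows the step length of 9 mentioned above, and `img` is assumed to be a 2D NumPy array with the star near (y0, x0), away from the image border.

```python
# Energy-difference search: score candidate centers by the imbalance of summed
# intensity on opposite sides along x and y; keep the minimum-imbalance pixel.
import numpy as np

def energy_difference_centroid(img, y0, x0, half=9, search=3):
    best, best_score = (y0, x0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            left = img[y - half:y + half + 1, x - half:x].sum()
            right = img[y - half:y + half + 1, x + 1:x + half + 1].sum()
            up = img[y - half:y, x - half:x + half + 1].sum()
            down = img[y + 1:y + half + 1, x - half:x + half + 1].sum()
            score = abs(left - right) + abs(up - down)
            if score < best_score:
                best, best_score = (y, x), score
    return best

# A coarse start (y0, x0) can come from a plain weighted centroid of the patch:
# ys, xs = np.indices(img.shape); y0 = int((img*ys).sum()/img.sum()), etc.
```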
Electrosynthesis of nanofibers and nano-composite films
Lin, Yuehe; Liang, Liang; Liu, Jun
2006-10-17
A method for producing an array of oriented nanofibers that involves forming a solution that includes at least one electroactive species. An electrode substrate is brought into contact with the solution. A current density is applied to the electrode substrate that includes at least a first step of applying a first substantially constant current density for a first time period and a second step of applying a second substantially constant current density for a second time period. The first and second time periods are of sufficient duration to electrically deposit on the electrode substrate an array of oriented nanofibers produced from the electroactive species. Also disclosed are films that include arrays or networks of oriented nanofibers and a method for amperometrically detecting or measuring at least one analyte in a sample.
Tu, Xijuan; Ma, Shuangqin; Gao, Zhaosheng; Wang, Jing; Huang, Shaokang; Chen, Wenbin
2017-11-01
Flavonoids are frequently found as glycosylated derivatives in plant materials. To determine the contents of flavonoid aglycones in these matrices, procedures for the extraction and hydrolysis of flavonoid glycosides are required, and the current sample preparation method is both labour and time consuming. We therefore developed a modified matrix solid phase dispersion (MSPD) procedure as an alternative methodology for the one-step extraction and hydrolysis of flavonoid glycosides. HPLC-DAD was applied to demonstrate the one-step extraction and hydrolysis of flavonoids in rape bee pollen. The contents of the flavonoid aglycones obtained (quercetin, kaempferol, isorhamnetin) were used for the optimisation and validation of the method. The extraction and hydrolysis were accomplished in one step. The procedure completes in 2 h with silica gel as dispersant, a 1:2 ratio of sample to dispersant, and 60% aqueous ethanol with 0.3 M hydrochloric acid as the extraction solution. The relative standard deviations (RSDs) of repeatability were less than 5%, and the recoveries at two fortified levels were between 88.3 and 104.8%. The proposed methodology is simple and highly efficient, with good repeatability and recovery. Compared with currently available methods, the present work has the advantages of using less time and labour, higher extraction efficiency, and less consumption of the acid catalyst. This method may have applications for the one-step extraction and hydrolysis of bioactive compounds from plant materials. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Utama, M. Iqbal Bakti; Lu, Xin; Zhan, Da; Ha, Son Tung; Yuan, Yanwen; Shen, Zexiang; Xiong, Qihua
2014-10-01
Patterning two-dimensional materials into specific spatial arrangements and geometries is essential for both fundamental studies of materials and practical applications in electronics. However, the currently available patterning methods generally require etching steps that rely on complicated and expensive procedures. We report here a facile patterning method for atomically thin MoSe2 films using stripping with an SU-8 negative resist layer exposed to electron beam lithography. Additional steps of chemical and physical etching were not necessary in this SU-8 patterning method. The SU-8 patterning was used to define a ribbon channel from a field effect transistor of MoSe2 film, which was grown by chemical vapor deposition. The narrowing of the conduction channel area with SU-8 patterning was crucial in suppressing the leakage current within the device, thereby allowing a more accurate interpretation of the electrical characterization results from the sample. An electrical transport study, enabled by the SU-8 patterning, showed a variable range hopping behavior at high temperatures. Electronic supplementary information (ESI) available: further experiments on patterning and additional electrical characterization data. See DOI: 10.1039/c4nr03817g
New prior sampling methods for nested sampling - Development and testing
NASA Astrophysics Data System (ADS)
Stokes, Barrie; Tuyl, Frank; Hudson, Irene
2017-06-01
Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
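A minimal sketch of that sampling step, using plain rejection sampling from a flat prior subject to L(θ) > L*; adequate for illustration, though inefficient at high compression, which is what the new methods aim to improve.

```python
# Draw from the prior restricted to the region where the likelihood exceeds
# the current threshold (the "central problem" of nested sampling).
import numpy as np

rng = np.random.default_rng(2)

def constrained_prior_sample(loglike, lo, hi, logl_star, max_tries=100000):
    for _ in range(max_tries):
        theta = rng.uniform(lo, hi)          # sample from the flat prior box
        if loglike(theta) > logl_star:
            return theta                     # inside the likelihood-restricted region
    raise RuntimeError("restricted region too small for rejection sampling")

loglike = lambda th: -0.5 * np.sum((th - 0.3) ** 2) / 0.05**2
theta = constrained_prior_sample(loglike, np.zeros(2), np.ones(2), -2.0)
print(theta)
```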
Riley, Leanne; Guthold, Regina; Cowan, Melanie; Savin, Stefan; Bhatti, Lubna; Armstrong, Timothy; Bonita, Ruth
2016-01-01
We sought to outline the framework and methods used by the World Health Organization (WHO) STEPwise approach to noncommunicable disease (NCD) surveillance (STEPS), describe the development and current status, and discuss strengths, limitations, and future directions of STEPS surveillance. STEPS is a WHO-developed, standardized but flexible framework for countries to monitor the main NCD risk factors through questionnaire assessment and physical and biochemical measurements. It is coordinated by national authorities of the implementing country. The STEPS surveys are generally household-based and interviewer-administered, with scientifically selected samples of around 5000 participants. To date, 122 countries across all 6 WHO regions have completed data collection for STEPS or STEPS-aligned surveys. STEPS data are being used to inform NCD policies and track risk-factor trends. Future priorities include strengthening these linkages from data to action on NCDs at the country level, and continuing to develop STEPS' capacities to enable a regular and continuous cycle of risk-factor surveillance worldwide.
Non-pulsed electrochemical impregnation of flexible metallic battery plaques
Maskalick, Nicholas J.
1982-01-01
A method of loading active battery material into porous, flexible, metallic battery plaques, comprises the following steps: precipitating nickel hydroxide active material within the plaque, by making the plaque cathodic, at a high current density, in an electro-precipitation cell also containing a consumable nickel anode and a solution comprising nickel nitrate, having a pH of between 2.0 and 2.8; electrochemically oxidizing the precipitate in caustic formation solution; and repeating the electro-precipitation step at a low current density.
The current role of on-line extraction approaches in clinical and forensic toxicology.
Mueller, Daniel M
2014-08-01
In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors, and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.
Method of making a current collector for a sodium/sulfur battery
Tischer, R.P.; Winterbottom, W.L.; Wroblowa, H.S.
1987-03-10
This specification is directed to a method of making a current collector for a sodium/sulfur battery. The current collector so-made is electronically conductive and resistant to corrosive attack by sulfur/polysulfide melts. The method includes the step of forming the current collector for the sodium/sulfur battery from a composite material formed of aluminum filled with electronically conductive fibers selected from the group of fibers consisting essentially of graphite fibers having a diameter up to 10 microns and silicon carbide fibers having a diameter in a range of 500-1,000 angstroms. 2 figs.
Method of making a current collector for a sodium/sulfur battery
Tischer, Ragnar P.; Winterbottom, Walter L.; Wroblowa, Halina S.
1987-01-01
This specification is directed to a method of making a current collector (14) for a sodium/sulfur battery (10). The current collector so-made is electronically conductive and resistant to corrosive attack by sulfur/polysulfide melts. The method includes the step of forming the current collector for the sodium/sulfur battery from a composite material (16) formed of aluminum filled with electronically conductive fibers selected from the group of fibers consisting essentially of graphite fibers having a diameter up to 10 microns and silicon carbide fibers having a diameter in a range of 500-1000 angstroms.
Strengthening the revenue cycle: a 4-step method for optimizing payment.
Clark, Jonathan J
2008-10-01
Four steps for enhancing the revenue cycle to ensure optimal payment are: *Establish key performance indicator dashboards in each department that compare current with targeted performance; *Create proper organizational structures for each department; *Ensure that high-performing leaders are hired in all management and supervisory positions; *Implement efficient processes in underperforming operations.
Mycotoxin analysis: an update.
Krska, Rudolf; Schubert-Ullrich, Patricia; Molinelli, Alexandra; Sulyok, Michael; MacDonald, Susan; Crews, Colin
2008-02-01
Mycotoxin contamination of cereals and related products used for feed can cause intoxication, especially in farm animals. Therefore, efficient analytical tools for the qualitative and quantitative analysis of toxic fungal metabolites in feed are required. Current methods usually include an extraction step, a clean-up step to reduce or eliminate unwanted co-extracted matrix components and a separation step with suitably specific detection ability. Quantitative methods of analysis for most mycotoxins use immunoaffinity clean-up with high-performance liquid chromatography (HPLC) separation in combination with UV and/or fluorescence detection. Screening of samples contaminated with mycotoxins is frequently performed by thin layer chromatography (TLC), which yields qualitative or semi-quantitative results. Nowadays, enzyme-linked immunosorbent assays (ELISA) are often used for rapid screening. A number of promising methods, such as fluorescence polarization immunoassays, dipsticks, and even newer methods such as biosensors and non-invasive techniques based on infrared spectroscopy, have shown great potential for mycotoxin analysis. Currently, there is a strong trend towards the use of multi-mycotoxin methods for the simultaneous analysis of several of the important Fusarium mycotoxins, which is best achieved by LC-MS/MS (liquid chromatography with tandem mass spectrometry). This review focuses on recent developments in the determination of mycotoxins with a special emphasis on LC-MS/MS and emerging rapid methods.
An optimized rapid bisulfite conversion method with high recovery of cell-free DNA.
Yi, Shaohua; Long, Fei; Cheng, Juanbo; Huang, Daixin
2017-12-19
Methylation analysis of cell-free DNA is an encouraging tool for tumor diagnosis, monitoring and prognosis. The sensitivity of methylation analysis is very important because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis are based on the difference in bisulfite-mediated deamination of cytosine between cytosine and 5-methylcytosine. However, the recovery of bisulfite-converted DNA with current methods is very poor for the methylation analysis of cell-free DNA. We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA. A rapid deamination step and alkaline desulfonation were combined with purification of the DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were investigated by droplet digital PCR. The optimized reaction achieves complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than that of current methods. The method allows high recovery from low levels of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) and Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
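A hedged one-dimensional illustration of prediction-correction versus correction-only tracking for f(x; t) = ½(x − r(t))² with r(t) = sin t; the finite-difference drift estimate below stands in for the paper's iso-residual prediction.

```python
# Track the minimizer of a time-varying quadratic: correction-only vs.
# prediction (finite-difference drift) plus correction.
import numpy as np

h, T, gamma = 0.1, 200, 0.5
t = h * np.arange(T)
r = np.sin(t)                                   # optimizer trajectory x*(t) = r(t)

x_c = x_pc = r[0]
err_c, err_pc = [], []
for k in range(2, T):
    x_c += gamma * (r[k] - x_c)                 # correction-only gradient step
    x_pc += r[k - 1] - r[k - 2]                 # prediction: past optimizer drift
    x_pc += gamma * (r[k] - x_pc)               # correction on the new sample
    err_c.append(abs(x_c - r[k])); err_pc.append(abs(x_pc - r[k]))

print(np.mean(err_c[-50:]), np.mean(err_pc[-50:]))   # prediction shrinks the error
```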
Method for technology-delivered healthcare measures.
Kramer-Jackman, Kelli Lee; Popkess-Vawter, Sue
2011-12-01
Current healthcare literature lacks development and evaluation methods for research and practice measures administered by technology. Researchers with varying levels of informatics experience are developing technology-delivered measures because of the numerous advantages they offer. Hasty development of technology-delivered measures can present issues that negatively influence administration and psychometric properties. The Method for Technology-delivered Healthcare Measures is designed to systematically guide the development and evaluation of technology-delivered measures. The five-step Method for Technology-delivered Healthcare Measures includes establishment of content, e-Health literacy, technology delivery, expert usability, and participant usability. Background information and Method for Technology-delivered Healthcare Measures steps are detailed.
Reisner, Sari L; Biello, Katie; Rosenberger, Joshua G; Austin, S Bryn; Haneuse, Sebastien; Perez-Brumer, Amaya; Novak, David S; Mimiaga, Matthew J
2014-11-01
Few comparative data are available internationally to examine health differences by transgender identity. A barrier to monitoring the health and well-being of transgender people is the lack of inclusion of measures to assess natal sex/gender identity status in surveys. Data were from a cross-sectional anonymous online survey of members (n > 36,000) of a sexual networking website targeting men who have sex with men in Spanish- and Portuguese-speaking countries/territories in Latin America/the Caribbean, Portugal, and Spain. Natal sex/gender identity status was assessed using a two-step method (Step 1: assigned birth sex, Step 2: current gender identity). Male-to-female (MTF) and female-to-male (FTM) participants were compared to non-transgender males in age-adjusted regression models on socioeconomic status (SES) (education, income, sex work), masculine gender conformity, psychological health and well-being (lifetime suicidality, past-week depressive distress, positive self-worth, general self-rated health, gender related stressors), and sexual health (HIV-infection, past-year STIs, past-3 month unprotected anal or vaginal sex). The two-step method identified 190 transgender participants (0.54%; 158 MTF, 32 FTM). Of the 12 health-related variables, six showed significant differences between the three groups: SES, masculine gender conformity, lifetime suicidality, depressive distress, positive self-worth, and past-year genital herpes. A two-step approach is recommended for health surveillance efforts to assess natal sex/gender identity status. Cognitive testing to formally validate assigned birth sex and current gender identity survey items in Spanish and Portuguese is encouraged.
Reisner, Sari L.; Biello, Katie; Rosenberger, Joshua G.; Austin, S. Bryn; Haneuse, Sebastien; Perez-Brumer, Amaya; Novak, David S.; Mimiaga, Matthew J.
2014-01-01
Few comparative data are available internationally to examine health differences by transgender identity. A barrier to monitoring the health and well-being of transgender people is the lack of inclusion of measures to assess natal sex/gender identity status in surveys. Data were from a cross-sectional anonymous online survey of members (n > 36,000) of a sexual networking website targeting men who have sex with men in Spanish- and Portuguese-speaking countries/ territories in Latin America/the Caribbean, Portugal, and Spain. Natal sex/gender identity status was assessed using a two-step method (Step 1: assigned birth sex, Step 2: current gender identity). Male-to-female (MTF) and female-to-male (FTM) participants were compared to non-transgender males in age-adjusted regression models on socioeconomic status (SES) (education, income, sex work), masculine gender conformity, psychological health and well-being (lifetime suicidality, past-week depressive distress, positive self-worth, general self-rated health, gender related stressors), and sexual health (HIV-infection, past-year STIs, past-3 month unprotected anal or vaginal sex). The two-step method identified 190 transgender participants (0.54%; 158 MTF, 32 FTM). Of the 12 health-related variables, six showed significant differences between the three groups: SES, masculine gender conformity, lifetime suicidality, depressive distress, positive self-worth, and past-year genital herpes. A two-step approach is recommended for health surveillance efforts to assess natal sex/gender identity status. Cognitive testing to formally validate assigned birth sex and current gender identity survey items in Spanish and Portuguese is encouraged. PMID:25030120
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) Runge-Kutta (RK) time-stepping schemes, which increases numerical stability with respect to the time step size and thereby decreases computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain, a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. The domain partitioning method in this work also shows potential for breaking the computational domain into manageable sizes, such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solvers rather than the standard iterative methods currently used.
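A hedged 1D illustration of the IMEX splitting idea (first-order here, not the paper's 6-stage ARK schemes or the overlapping-partition machinery): viscous Burgers' equation with explicit advection and implicit diffusion, so the stiff term does not limit the time step.

```python
# IMEX split for u_t + u*u_x = nu*u_xx: explicit central advection, implicit
# backward-Euler diffusion on a periodic grid (dense solve for clarity).
import numpy as np

nx, nu, dt, nt = 128, 0.05, 2e-3, 500
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)

# (I - dt*nu*D2) with periodic second differences
D2 = (np.roll(np.eye(nx), 1, axis=1) - 2 * np.eye(nx)
      + np.roll(np.eye(nx), -1, axis=1)) / dx**2
A = np.eye(nx) - dt * nu * D2

for _ in range(nt):
    adv = -u * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # explicit term
    u = np.linalg.solve(A, u + dt * adv)                     # implicit term
```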
Digital enhancement of X-rays for NDT
NASA Technical Reports Server (NTRS)
Butterfield, R. L.
1980-01-01
Report is "cookbook" for digital processing of industrial X-rays. Computer techniques, previously used primarily in laboratory and developmental research, have been outlined and codified into step by step procedures for enhancing X-ray images. Those involved in nondestructive testing should find report valuable asset, particularly is visual inspection is method currently used to process X-ray images.
Estimating psychiatric manpower requirements based on patients' needs.
Faulkner, L R; Goldman, C R
1997-05-01
To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
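A hypothetical worked example of how such assumptions drive the estimate; every number below is illustrative only, not a figure from the study.

```python
# Five-step style estimate: population -> patients -> service hours -> FTEs.
population = 330e6
prevalence = 0.05                    # assumed fraction needing psychiatric care
in_treatment = 0.5                   # assumed fraction of those who seek care
hours_per_patient = 14.1             # hours of psychiatric service per year
clinical_hours_per_psychiatrist = 1500   # assumed clinical hours per FTE per year

patients = population * prevalence * in_treatment
fte_needed = patients * hours_per_patient / clinical_hours_per_psychiatrist
print(f"{fte_needed:,.0f} FTE psychiatrists")   # sensitive to every assumption
```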
Direct current power delivery system and method
Zhang, Di; Garces, Luis Jose; Dai, Jian; Lai, Rixin
2016-09-06
A power transmission system includes a first unit for carrying out the steps of receiving high voltage direct current (HVDC) power from an HVDC power line, generating an alternating current (AC) component indicative of a status of the first unit, and adding the AC component to the HVDC power line. Further, the power transmission system includes a second unit for carrying out the steps of generating a direct current (DC) voltage to transfer the HVDC power on the HVDC power line, wherein the HVDC power line is coupled between the first unit and the second unit, detecting a presence or an absence of the added AC component in the HVDC power line, and determining the status of the first unit based on the added AC component.
Temperature- and field-dependent characterization of a conductor on round core cable
NASA Astrophysics Data System (ADS)
Barth, C.; van der Laan, D. C.; Bagrets, N.; Bayer, C. M.; Weiss, K.-P.; Lange, C.
2015-06-01
The conductor on round core (CORC) cable is one of the major high temperature superconductor cable concepts combining scalability, flexibility, mechanical strength, ease of fabrication and high current density, making it a possible candidate as conductor for large, high field magnets. To simulate the boundary conditions of such magnets as well as the temperature dependence of CORC cables, a 1.16 m long sample consisting of 15 SuperPower REBCO tapes, each 4 mm wide, was characterized using the ‘FBI’ (force-field-current) superconductor test facility of the Institute for Technical Physics of the Karlsruhe Institute of Technology. In a five-step investigation, the CORC cable’s performance was determined at different transverse mechanical loads, magnetic background fields and temperatures, as well as its response to swift current changes. In the first step, the sample’s 77 K, self-field current was measured in a liquid nitrogen bath. In the second step, the temperature dependence was measured at self-field condition and compared with extrapolated single tape data. In the third step, the magnetic background field was repeatedly cycled while measuring the current carrying capabilities to determine the impact of transverse Lorentz forces on the CORC cable sample’s performance. In the fourth step, the sample’s current carrying capabilities were measured at different background fields (2-12 T) and surface temperatures (4.2-51.5 K). Through finite element method simulations, the surface temperatures are converted into average sample temperatures, and the resulting field and temperature dependence is compared with extrapolated single tape data. In the fifth step, the response of the CORC cable sample to rapid current changes (8.3 kA/s) was observed with a fast data acquisition system. During these tests, the sample performance remained constant; no degradation was observed. The sample’s measured current carrying capabilities correlate to those of single tapes, assuming the field and temperature dependence published by the manufacturer.
NASA Astrophysics Data System (ADS)
Ming, Bin
Josephson junctions are at the heart of any superconductor device application. A SQUID (Superconducting Quantum Interference Device), which consists of two Josephson junctions, is by far the most important example. Unfortunately, in the case of high-Tc superconductors (HTS), the quest for a robust, flexible, and high performance junction technology is yet far from the end. Currently, the only proven method to make HTS junctions is the SrTiO3(STO)-based bicrystal technology. In this thesis we concentrate on the fabrication of YBCO step-edge junctions and SQUIDs on sapphire. The step-edge method provides complete control of device locations and facilitates sophisticated, high-density layout. We select CeO2 as the buffer layer, as the key step to make device quality YBCO thin films on sapphire. With an "overhang" shadow mask produced by a novel photolithography technique, a steep step edge was fabricated on the CeO2 buffer layer by Ar+ ion milling with optimized parameters for minimum ion beam divergence. The step angle was determined to be in excess of 80° by atomic force microscopy (AFM). Josephson junctions patterned from those step edges exhibited resistively shunted junction (RSJ) like current-voltage characteristics. IcRn values in the 200-500 mV range were measured at 77 K. Shapiro steps were observed under microwave irradiation, reflecting the true Josephson nature of those junctions. The magnetic field dependence of the junction Ic indicates a uniform current distribution. These results suggest that all fabrication processes are well controlled and the step edge is relatively straight and free of microstructural defects. The SQUIDs made from the same process exhibit large voltage modulation in a varying magnetic field. At 77 K, our sapphire-based step-edge SQUID has a low white noise level of 3 μΦ0/√Hz, as compared to typically >10 μΦ0/√Hz from the best bicrystal STO SQUIDs. Our effort at device fabrication is chiefly motivated by the scanning SQUID microscopy (SSM) application. A scanning SQUID microscope is a non-contact, non-destructive imaging tool that can resolve weak currents beneath the sample surface by detecting their magnetic fields. Our low-noise sapphire-based step-edge SQUIDs should be particularly suitable for such an application. An earlier effort to make SNS trench junctions using focused ion beam (FIB) is reviewed in a separate chapter. (Abstract shortened by UMI.)
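For readers unfamiliar with the RSJ characteristics mentioned above, the overdamped resistively shunted junction model has a closed-form time-averaged I-V curve, sketched below. The critical current and normal resistance are hypothetical placeholders, not the thesis' measured values.

```python
import numpy as np

# Overdamped RSJ (resistively shunted junction) model: time-averaged
# voltage of a Josephson junction with critical current Ic and normal
# resistance Rn. V = 0 below Ic; V = Rn*sqrt(I^2 - Ic^2) above it.
# Parameter values are hypothetical, for illustration only.

def rsj_voltage(i_bias, ic, rn):
    v = np.zeros_like(i_bias)
    above = np.abs(i_bias) > ic
    v[above] = rn * np.sign(i_bias[above]) * np.sqrt(i_bias[above]**2 - ic**2)
    return v

ic, rn = 150e-6, 2.0                       # assumed: 150 uA, 2 ohm
i = np.linspace(-400e-6, 400e-6, 801)      # bias current sweep
v = rsj_voltage(i, ic, rn)                 # characteristic I-V curve
```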
Cassette less SOFC stack and method of assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meinhardt, Kerry D
2014-11-18
A cassette-less SOFC assembly and a method for creating such an assembly. The SOFC stack is characterized by an electrically isolated stack current path which allows welded interconnection between frame portions of the stack. In one embodiment, electrically isolating a current path comprises the step of sealing an interconnect plate to an interconnect plate frame with an insulating seal. This enables the current path portion to be isolated from the structural frame and enables the cell frames to be welded together.
Ingham, Richard J; Battilocchio, Claudio; Fitzpatrick, Daniel E; Sliwinski, Eric; Hawkins, Joel M; Ley, Steven V
2015-01-01
Performing reactions in flow can offer major advantages over batch methods. However, laboratory flow chemistry processes are currently often limited to single steps or short sequences due to the complexity involved with operating a multi-step process. Using new modular components for downstream processing, coupled with control technologies, more advanced multi-step flow sequences can be realized. These tools are applied to the synthesis of 2-aminoadamantane-2-carboxylic acid. A system comprising three chemistry steps and three workup steps was developed, having sufficient autonomy and self-regulation to be managed by a single operator. PMID:25377747
Cryptosporidium is an important protozoan parasite that continues to cause waterborne disease outbreaks worldwide. Current methods to monitor for Cryptosporidium oocysts in water are microscopy-based USEPA Methods 1622 and 1623. These methods assess total levels of oocysts in s...
Gao, Min; Gu, Ming; Liu, Chun-Zhao
2006-07-11
Scutellarin, a flavone glycoside widely applied for the treatment of cardiopathy, was purified by two-step high-speed counter-current chromatography (HSCCC) from Erigeron breviscapus (vant.) Hand. Mazz. (Deng-zhan-hua in Chinese), a well-known traditional Chinese medicinal plant for heart disease. Two solvent systems, n-hexane-ethyl acetate-methanol-acetic acid-water (1:6:1.5:1:4, v/v/v/v/v) and ethyl acetate-n-butanol-acetonitrile-0.1% HCl (5:2:5:10, v/v/v/v), were used for the two-step purification. The purity of the collected scutellarin fraction was 95.6%. This study supplies a new alternative method for the purification of scutellarin.
Step-height measurement with a low coherence interferometer using continuous wavelet transform
NASA Astrophysics Data System (ADS)
Jian, Zhang; Suzuki, Takamasa; Choi, Samuel; Sasaki, Osami
2013-12-01
With the development of electronic technology in recent years, electronic components have become increasingly miniaturized, and a more accurate measurement method has become indispensable. For current nanometer-level measurement, the Michelson interferometer with a laser diode is widely used; the method can measure an object accurately without touching it. However, it cannot measure a step height that is larger than the half-wavelength. In this study, we improve the conventional Michelson interferometer by using a superluminescent diode and the continuous wavelet transform, which can detect the time that maximizes the amplitude of the interference signal. We can accurately measure the surface position of the object with this time. The method was used in this experiment to measure a step height of 20 microns.
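A minimal sketch of the envelope-peak idea follows, assuming a simulated low-coherence correlogram and the PyWavelets library with a Morlet wavelet; the coherence length, wavelength, and scale range are illustrative, not the authors' instrument parameters.

```python
import numpy as np
import pywt  # PyWavelets

# Locate the scan position that maximizes the fringe amplitude of a
# low-coherence interferogram via a continuous wavelet transform.
# All parameters below are illustrative stand-ins.

z = np.linspace(-30, 30, 2000)                 # scan position, microns
z0 = 10.0                                      # true surface position
lc, lam = 6.0, 0.8                             # coherence length, wavelength
signal = (np.exp(-((z - z0) / lc) ** 2)
          * np.cos(2 * np.pi * (z - z0) / (lam / 2)))

scales = np.arange(5, 60)
coef, _ = pywt.cwt(signal, scales, 'morl')
envelope = np.abs(coef).max(axis=0)            # peak CWT amplitude per position
z_peak = z[np.argmax(envelope)]                # estimated surface position

# Repeating this at a second pixel and differencing the two z_peak
# estimates yields the step height, without half-wavelength ambiguity.
```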
2015-05-16
synthesis of iron magnetic nanoparticles is being investigated (Appendix A; Scheme IV). In the first step, precursor iron(III) chloride nanoparticles...and other methods. Currently, we are developing a two-step scheme for the synthesis of esters that will require distillation and/or column...recognize the link between them. We are developing for the above purpose, the microwave-assisted, two-step synthesis of high boiling point esters. The
A novel method to accurately locate and count large numbers of steps by photobleaching
Tsekouras, Konstantinos; Custer, Thomas C.; Jashnsaz, Hossein; Walter, Nils G.; Pressé, Steve
2016-01-01
Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20–30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. PMID:27654946
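To illustrate the problem setting (not the authors' Bayesian algorithm), the toy sketch below simulates a trace whose noise shrinks as fluorophores bleach and counts steps with a simple median-filter-and-threshold rule, the kind of filtering approach the abstract notes breaks down at low signal-to-noise.

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy illustration only -- NOT the paper's Bayesian method. Simulates a
# photobleaching trace whose noise level tracks the number of active
# fluorophores, then counts downward jumps after median filtering.

rng = np.random.default_rng(0)
n_fluor, step = 5, 100.0
bleach_times = np.sort(rng.integers(50, 900, size=n_fluor))
t = np.arange(1000)
active = n_fluor - np.searchsorted(bleach_times, t)   # fluorophores left
intensity = step * active
trace = intensity + rng.normal(0, 0.2 * np.sqrt(np.maximum(intensity, 1.0)))

smooth = median_filter(trace, size=31)
jumps = np.diff(smooth)
cand = np.where(jumps < -0.5 * step)[0]               # candidate drop points
# Merge candidates closer than the filter width into single events.
events = cand[np.insert(np.diff(cand) > 31, 0, True)] if cand.size else cand
print(f"true steps: {n_fluor}, detected: {len(events)}")
```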
Changes in the dielectric properties of a plant stem produced by the application of voltage steps
NASA Astrophysics Data System (ADS)
Hart, F. X.
1983-03-01
Time Domain Dielectric Spectroscopy (TDDS) provides a useful method for monitoring the physiological state of a biological system which may be changing with time. A voltage step is applied to a sample and the Fourier Transform of the resulting current yields the variations of the conductance, capacitance and dielectric loss of the sample with frequency (dielectric spectrum). An important question is whether the application of the voltage step itself can produce changes which obscure those of interest. Long term monitoring of the dielectric properties of plant stems requires the use of needle electrodes with relatively large current densities and field strengths at the electrode-stem interface. Steady currents on the order of those used in TDDS have been observed to modify the distribution of plant growth hormones, to produce wounding at electrode sites, and to cause stem collapse. This paper presents the preliminary results of an investigation into the effects of the application of voltage steps on the observed dielectric spectrum of the stem of the plant Coleus.
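A minimal numerical sketch of the TDDS principle described above: apply a voltage step, Fourier-transform the current transient, and divide by the transform of the step to obtain the complex admittance, from which conductance and capacitance spectra follow. The sample here is an ideal series RC stand-in, not a plant stem, and all component values are illustrative.

```python
import numpy as np

# TDDS sketch: admittance Y(w) = I(w) / V(w), where V(w) is the
# one-sided Fourier transform of the applied voltage step V0/(j*w).

fs, n = 1e5, 2**16
t = np.arange(n) / fs
V0, R, C = 1.0, 1e6, 1e-9                      # step height, ohms, farads
i_t = (V0 / R) * np.exp(-t / (R * C))          # ideal series-RC step response

w = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs)[1:] # angular frequency, skip DC
I_w = np.fft.rfft(i_t)[1:] / fs                # continuous-time FT estimate
V_w = V0 / (1j * w)                            # FT of the voltage step
Y = I_w / V_w
G = Y.real                                     # conductance spectrum
Cap = Y.imag / w                               # capacitance spectrum
```

For this stand-in circuit the result can be checked analytically: Y = jωC/(1 + jωRC), so the capacitance spectrum tends to C at low frequency and rolls off above 1/(RC).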
Directional solidification processing of alloys using an applied electric field
NASA Technical Reports Server (NTRS)
McKannan, Eugene C. (Inventor); Schmidt, Deborah D. (Inventor); Ahmed, Shaffiq (Inventor); Bond, Robert W. (Inventor)
1992-01-01
A method is provided for obtaining an alloy having an ordered microstructure which comprises the steps of heating the central portion of the alloy under uniform temperature so that it enters a liquid phase while the outer portions remain solid, applying a constant electric current through the alloy during the heating step, and solidifying the liquid central portion of the alloy by subjecting it to a temperature-gradient zone so that cooling occurs in a directional manner and at a given rate of speed while maintaining the application of the constant electric current through the alloy. The method is particularly suitable for use with nickel-based superalloys. The method of the present invention produces an alloy having superior characteristics such as reduced segregation. After subsequent precipitation by heat-treatment, the alloys produced by the present invention will have excellent strength and high-temperature resistance.
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
Accurate step-FMCW ultrasound ranging and comparison with pulse-echo signaling methods
NASA Astrophysics Data System (ADS)
Natarajan, Shyam; Singh, Rahul S.; Lee, Michael; Cox, Brian P.; Culjat, Martin O.; Grundfest, Warren S.; Lee, Hua
2010-03-01
This paper presents a method for high-frequency ultrasound ranging based on stepped frequency-modulated continuous waves (FMCW), potentially capable of producing a higher signal-to-noise ratio (SNR) compared to traditional pulse-echo signaling. In current ultrasound systems, the use of higher frequencies (10-20 MHz) to enhance resolution lowers signal quality due to frequency-dependent attenuation. The proposed ultrasound signaling format, step-FMCW, is well known in the radar community, and features lower peak power, wider dynamic range, lower noise figure and simpler electronics in comparison to pulse-echo systems. In pulse-echo ultrasound ranging, distances are calculated using the transit times between a pulse and its subsequent echoes. In step-FMCW ultrasonic ranging, the phase and magnitude differences at stepped frequencies are used to sample the frequency domain. Thus, by taking the inverse Fourier transform, a comprehensive range profile is recovered that has increased immunity to noise over conventional ranging methods. Step-FMCW and pulse-echo waveforms were created using custom-built hardware consisting of an arbitrary waveform generator and a dual-channel superheterodyne receiver, providing high SNR and, in turn, accuracy in detection.
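The range-profile recovery can be sketched in a few lines: synthesize the complex channel response at stepped frequencies for a couple of point reflectors, then inverse-FFT it. The carrier, step size, target positions, and tissue sound speed below are assumed illustrative values, not the authors' hardware settings.

```python
import numpy as np

# Step-FMCW ranging sketch: sample the channel at stepped frequencies,
# then inverse-FFT the complex response to recover a range profile.

c = 1540.0                                   # m/s, nominal sound speed in tissue
f = 10e6 + 20e3 * np.arange(512)             # 512 frequency steps from 10 MHz
targets = [(0.010, 1.0), (0.013, 0.5)]       # (range in m, reflectivity)

# Each reflector contributes a phase ramp set by its round-trip delay.
H = sum(a * np.exp(-2j * np.pi * f * (2 * r / c)) for r, a in targets)
profile = np.abs(np.fft.ifft(H))

bw = f[-1] - f[0] + 20e3                     # swept bandwidth
dr = c / (2 * bw)                            # range-bin size
ranges = dr * np.arange(512)
print("peaks near:", np.sort(ranges[np.argsort(profile)[-2:]]))
```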
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiuchi, T.; Yasuoka, A.
1988-05-24
A method is described of controlling the solenoid current of a solenoid valve which controls suction air in an internal combustion engine, comprising the steps of: calculating a solenoid current control value as a function of engine operating conditions; detecting an engine coolant temperature corresponding to the solenoid temperature; determining a temperature correction value in accordance with the solenoid temperature; and calculating a driving signal for controlling the operation of the solenoid as a function of the solenoid current control value and the temperature correction value.
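A minimal sketch of the control flow the claim describes, with made-up lookup relations standing in for calibrated ECU maps; the coefficients and the form of the correction are hypothetical, not taken from the patent.

```python
# Sketch of the claimed control flow: a base solenoid-current control
# value from engine operating conditions, scaled by a temperature
# correction inferred from coolant temperature. All maps are stand-ins.

def solenoid_drive_signal(rpm, load, coolant_temp_c):
    # Base control value vs. operating conditions (hypothetical map;
    # a real ECU would interpolate calibrated tables).
    base = 0.5 + 0.002 * load + 0.00001 * rpm
    # Temperature correction: coil resistance rises with temperature,
    # so more drive is needed to hold the same current (assumed slope).
    correction = 1.0 + 0.004 * (coolant_temp_c - 20.0)
    return base * correction  # e.g. a duty-cycle command

print(solenoid_drive_signal(rpm=2500, load=40.0, coolant_temp_c=85.0))
```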
Computationally optimized ECoG stimulation with local safety constraints.
Guler, Seyhmus; Dannhauer, Moritz; Roig-Solvas, Biel; Gkogkidis, Alexis; Macleod, Rob; Ball, Tonio; Ojemann, Jeffrey G; Brooks, Dana H
2018-06-01
Direct stimulation of the cortical surface is used clinically for cortical mapping and modulation of local activity. Future applications of cortical modulation and brain-computer interfaces may also use cortical stimulation methods. One common method to deliver current is through electrocorticography (ECoG) stimulation in which a dense array of electrodes are placed subdurally or epidurally to stimulate the cortex. However, proximity to cortical tissue limits the amount of current that can be delivered safely. It may be desirable to deliver higher current to a specific local region of interest (ROI) while limiting current to other local areas more stringently than is guaranteed by global safety limits. Two commonly used global safety constraints bound the total injected current and individual electrode currents. However, these two sets of constraints may not be sufficient to prevent high current density locally (hot-spots). In this work, we propose an efficient approach that prevents current density hot-spots in the entire brain while optimizing ECoG stimulus patterns for targeted stimulation. Specifically, we maximize the current along a particular desired directional field in the ROI while respecting three safety constraints: one on the total injected current, one on individual electrode currents, and the third on the local current density magnitude in the brain. This third set of constraints creates a computational barrier due to the huge number of constraints needed to bound the current density at every point in the entire brain. We overcome this barrier by adopting an efficient two-step approach. In the first step, the proposed method identifies the safe brain region, which cannot contain any hot-spots solely based on the global bounds on total injected current and individual electrode currents. In the second step, the proposed algorithm iteratively adjusts the stimulus pattern to arrive at a solution that exhibits no hot-spots in the remaining brain. We report on simulations on a realistic finite element (FE) head model with five anatomical ROIs and two desired directional fields. We also report on the effect of ROI depth and desired directional field on the focality of the stimulation. Finally, we provide an analysis of optimization runtime as a function of different safety and modeling parameters. Our results suggest that optimized stimulus patterns tend to differ from those used in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
Solidification processing of alloys using an applied electric field
NASA Technical Reports Server (NTRS)
Mckannan, Eugene C. (Inventor); Schmidt, Deborah D. (Inventor); Ahmed, Shaffiq (Inventor); Bond, Robert W. (Inventor)
1990-01-01
A method is provided for obtaining an alloy having an ordered microstructure which comprises the steps of heating the central portion of the alloy under uniform temperature so that it enters a liquid phase while the outer portions remain solid, applying a constant electric current through the alloy during the heating step, and solidifying the liquid central portion of the alloy by subjecting it to a temperature-gradient zone so that cooling occurs in a directional manner and at a given rate of speed while maintaining the application of the constant electric current through the alloy. The method of the present invention produces an alloy having superior characteristics such as reduced segregation. After subsequent precipitation by heat-treatment, the alloys produced by the present invention will have excellent strength and high-temperature resistance.
Nibber, Anjan; Thomas, Mike; Thomas, Vicky; van Aalderen, Wim; Bleecker, Eugene; Campbell, Jonathan; Roche, Nicolas; Haughney, John; Van Ganse, Eric; Park, Hye-Yun; Rhee, Chin Kook; Skinner, Derek; Chisholm, Alison; van Boven, Job FM; Soriano, Joan B.; Price, David
2016-01-01
Background Questionnaire-based surveys report that uncontrolled asthma is common in Europe, and associated with high healthcare costs. The relationship between treatment step and control is less well described. The aim was to quantify the asthma burden within routine primary care in the UK, specifically the distribution of asthma control across guideline-recommended management steps and the association between patients’ control and smoking status. Methods Patients were retrospectively identified using the Optimum Patient Care Research Database and prospectively followed up for at least 1 year. Patients’ routine clinical data and self-reports were used to assess GINA control status; clinical records were used to categorise current treatment by GINA management steps and patients’ smoking status. Results A total of 105,018 eligible asthma patients were identified, mean (SD) age 45 (23) years; 55% female; 15% current and 24% ex-smokers. Only 20% of patients were controlled, 59% were partially controlled and 21% were uncontrolled. Control was only weakly correlated to GINA management steps (Spearman’s rho=0.15, P<0.001), with 27.5%, 21.5%, 20.3%, 15.1% and 12.1% achieving control across Steps 1 to 5, respectively. Similarly, the proportion with uncontrolled asthma rose across higher GINA steps (12.6%, 18.2%, 19.6%, 29.2% and 36.6%). About 13% of patients experienced at least one exacerbation in the 1-year follow-up period. Frequent exacerbations (2 or more per year) were very uncommon at lower treatment steps (step 1 11.6%, step 2 12.8%) but were significantly more common at steps 3 and 4 at 18.8% and 28.2% respectively (P<0.001 for trend with ascending treatment step). Conclusions In this cohort of UK primary care asthma patients, the majority failed to achieve GINA-defined control. GINA management step was only weakly correlated with control status, but higher step management was associated with a greater risk of exacerbation.
NASA Astrophysics Data System (ADS)
Kamitani, A.; Takayama, T.; Tanaka, A.; Ikuno, S.
2010-11-01
The inductive method for measuring the critical current density jC in a high-temperature superconducting (HTS) thin film has been investigated numerically. In order to simulate the method, a non-axisymmetric numerical code has been developed for analyzing the time evolution of the shielding current density. In the code, the governing equation of the shielding current density is spatially discretized with the finite element method and the resulting first-order ordinary differential system is solved by using the 5th-order Runge-Kutta method with an adaptive step-size control algorithm. By using the code, the threshold current IT is evaluated for various positions of a coil. The results of computations show that, near a film edge, the accuracy of the estimating formula for jC is remarkably degraded. Moreover, even the proportional relationship between jC and IT will be lost there. Hence, the critical current density near a film edge cannot be estimated by using the inductive method.
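The adaptive step-size control mentioned above can be illustrated generically: take a step, estimate the local error (here by step doubling with classical RK4 rather than the paper's embedded 5th-order pair), and accept or retry with an adjusted step. The tolerance and the toy right-hand side are arbitrary choices.

```python
import numpy as np

# Generic adaptive step-size control of the kind used to integrate a
# stiff-ish ODE system such as the shielding-current equations.
# Error estimate by step doubling with classical RK4 (illustrative;
# the paper uses an embedded 5th-order Runge-Kutta pair).

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def adaptive(f, t, y, dt, tol=1e-8):
    while True:
        full = rk4_step(f, t, y, dt)                       # one big step
        half = rk4_step(f, t + dt / 2,
                        rk4_step(f, t, y, dt / 2), dt / 2) # two half steps
        err = np.max(np.abs(half - full))                  # local error proxy
        if err < tol:                                      # accept, grow dt
            return t + dt, half, dt * min(2.0, 0.9 * (tol / max(err, 1e-16)) ** 0.2)
        dt *= 0.9 * (tol / err) ** 0.2                     # shrink, retry

f = lambda t, y: -50 * y + np.sin(t)       # toy right-hand side
t, y, dt = 0.0, np.array([1.0]), 0.01
while t < 1.0:
    t, y, dt = adaptive(f, t, y, min(dt, 1.0 - t))
```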
Biolistics Transformation of Wheat
NASA Astrophysics Data System (ADS)
Sparks, Caroline A.; Jones, Huw D.
We present a complete, step-by-step guide to the production of transformed wheat plants using a particle bombardment device to deliver plasmid DNA into immature embryos and the regeneration of transgenic plants via somatic embryogenesis. Currently, this is the most commonly used method for transforming wheat and it offers some advantages. However, it will be interesting to see whether this position is challenged as facile methods are developed for delivering DNA by Agrobacterium tumefaciens or by the production of transformants via a germ-line process (see other chapters in this book).
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N
2014-06-20
The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of alternating phase elution steps has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: 1 - with one-step solute elution: the separation is conducted so that the sample is transferred forward and back with upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; 2 - with multi-step solute elution, when the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with solvent systems hexane/ethyl acetate/methanol/water. The experimental results are compared to the predictions of the theory. A good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
A Simple Method to Simultaneously Detect and Identify Spikes from Raw Extracellular Recordings.
Petrantonakis, Panagiotis C; Poirazi, Panayiota
2015-01-01
The ability to track when and which neurons fire in the vicinity of an electrode, in an efficient and reliable manner can revolutionize the neuroscience field. The current bottleneck lies in spike sorting algorithms; existing methods for detecting and discriminating the activity of multiple neurons rely on inefficient, multi-step processing of extracellular recordings. In this work, we show that a single-step processing of raw (unfiltered) extracellular signals is sufficient for both the detection and identification of active neurons, thus greatly simplifying and optimizing the spike sorting approach. The efficiency and reliability of our method is demonstrated in both real and simulated data.
Fabrication of porous anodic alumina using normal anodization and pulse anodization
NASA Astrophysics Data System (ADS)
Chin, I. K.; Yam, F. K.; Hassan, Z.
2015-05-01
This article reports on the fabrication of porous anodic alumina (PAA) by two-step anodizing of low-purity commercial aluminum sheets at room temperature. Different variations of the second-step anodization were conducted: normal anodization (NA) with a direct current potential difference; pulse anodization (PA) alternating between potential differences of 10 V and 0 V; and hybrid pulse anodization (HPA) alternating between potential differences of 10 V and -2 V. The method influenced the film homogeneity of the PAA, and the most homogeneous structure was obtained via PA. The morphological properties are further elucidated using measured current-transient profiles. The absence of a current-rise profile in PA indicates the anodization temperature and dissolution of the PAA structure were greatly reduced by the alternating potential differences.
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-16
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
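As a point of reference for the deconvolution step, the sketch below runs Richardson-Lucy on a synthetic widefield blur using scikit-image. It is not the paper's 2D-SIM reconstruction pipeline; the PSF, object, and iteration count are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# Minimal Richardson-Lucy illustration on a synthetic blurred image.
# The paper applies RL within a full 2D-SIM reconstruction; this is
# only the deconvolution building block.

rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[40:44, 60:64] = 1.0                       # small bright object

# Gaussian PSF, normalized to unit sum.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode='same')
noisy = rng.poisson(blurred * 200) / 200.0      # shot-noise-like corruption
restored = richardson_lucy(noisy, psf, 30)      # 30 RL iterations
```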
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution
NASA Astrophysics Data System (ADS)
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-01
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
Analysis of the U.S. geological survey streamgaging network
Scott, A.G.
1987-01-01
This paper summarizes the results from the first 3 years of a 5-year cost-effectiveness study of the U.S. Geological Survey streamgaging network. The objective of the study is to define and document the most cost-effective means of furnishing streamflow information. In the first step of this study, data uses were identified for 3,493 continuous-record stations currently being operated in 32 States. In the second step, an evaluation of alternative methods of providing streamflow information, flow-routing models and regression models were developed for estimating daily flows at 251 of the 3,493 stations analyzed. In the third step of the analysis, relationships were developed between the accuracy of the streamflow records and the operating budget. The weighted standard error for all stations, with current operating procedures, was 19.9 percent. By altering field activities, as determined by the analyses, this could be reduced to 17.8 percent. The existing streamgaging networks in four Districts were further analyzed to determine the impacts that satellite telemetry would have on the cost effectiveness. Satellite telemetry was not found to be cost effective on the basis of hydrologic data collection alone, given present cost of equipment and operation. Additional study results are discussed.
Calcium dependent current recordings in Xenopus laevis oocytes in microgravity
NASA Astrophysics Data System (ADS)
Wuest, Simon L.; Roesch, Christian; Ille, Fabian; Egli, Marcel
2017-12-01
Mechanical unloading by microgravity (or weightlessness) conditions triggers profound adaptation processes at the cellular and organ levels. Among other mechanisms, mechanosensitive ion channels are thought to play a key role in allowing cells to transduce mechanical forces. Previous experiments performed under microgravity have shown that gravity affects the gating properties of ion channels. Here, a method is described to record a calcium-dependent current in native Xenopus laevis oocytes under microgravity conditions during a parabolic flight. A 3-voltage-step protocol was applied to provoke a calcium-dependent current. This current increased with extracellular calcium concentration and could be reduced by applying extracellular gadolinium. The custom-made 'OoClamp' hardware was validated by comparing the results of the 3-voltage-step protocol to results obtained with a well-established two-electrode voltage clamp (TEVC). In the context of the 2nd Swiss Parabolic Flight Campaign, we tested the OoClamp and the method. The setup and experiment protocol worked well in parabolic flight. A tendency that the calcium-dependent current was smaller under microgravity than under 1 g condition could be observed. However, a conclusive statement was not possible due to the small size of the data base that could be gathered.
Guan, Zixuan; Chen, Di; Chueh, William C
2017-08-30
The oxygen incorporation reaction, which involves the transformation of an oxygen gas molecule to two lattice oxygen ions in a mixed ionic and electronic conducting solid, is a ubiquitous and fundamental reaction in solid-state electrochemistry. To understand the reaction pathway and to identify the rate-determining step, near-equilibrium measurements have been employed to quantify the exchange coefficients as a function of oxygen partial pressure and temperature. However, because the exchange coefficient contains contributions from both forward and reverse reaction rate constants and depends on both oxygen partial pressure and oxygen fugacity in the solid, unique and definitive mechanistic assessment has been challenging. In this work, we derive a current density equation as a function of both oxygen partial pressure and overpotential, and consider both near and far from equilibrium limits. Rather than considering specific reaction pathways, we generalize the multi-step oxygen incorporation reaction into the rate-determining step, preceding and following quasi-equilibrium steps, and consider the number of oxygen ions and electrons involved in each. By evaluating the dependence of current density on oxygen partial pressure and overpotential separately, one obtains the reaction orders for oxygen gas molecules and for solid-state species in the electrode. We simulated the oxygen incorporation current density-overpotential curves for praseodymium-doped ceria for various candidate rate-determining steps. This work highlights a promising method for studying the exchange kinetics far away from equilibrium.
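A sketch of the kind of current density expression the authors derive follows, written here as a generalized Butler-Volmer form in which an empirical pO2 reaction order and transfer coefficient encode the rate-determining step. All exponents and the exchange prefactor are hypothetical, not the paper's fitted values for praseodymium-doped ceria.

```python
import numpy as np

# Generalized Butler-Volmer sketch: current density as a function of
# both oxygen partial pressure and overpotential. The reaction orders
# (gamma, alpha) would be diagnostic of the rate-determining step;
# values here are hypothetical.

F, R, T = 96485.0, 8.314, 873.0            # C/mol, J/(mol K), ~600 C

def current_density(p_o2, eta, j0_ref=10.0, gamma=0.25, alpha=0.5, n=2):
    j0 = j0_ref * p_o2 ** gamma            # exchange current density, A/m^2
    return j0 * (np.exp(alpha * n * F * eta / (R * T))
                 - np.exp(-(1 - alpha) * n * F * eta / (R * T)))

# Reaction orders fall out of the slopes: d(ln j)/d(ln pO2) at fixed
# overpotential gives gamma; d(eta)/d(ln j) gives the Tafel slope.
print(current_density(p_o2=0.21, eta=0.05))
```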
Osterwald, C.R.; Emery, K.A.
1984-05-29
A laser scanning system for scanning the surface of photovoltaic cell in a precise, stepped raster pattern includes electric current detecting and measuring equipment for sensing the current response of the scanned cell to the laser beam at each stepped irradiated spot or pixel on the cell surface. A computer is used to control and monitor the raster position of the laser scan as well as monitoring the corresponding current responses, storing this data, operating on it, and for feeding the data to a graphical plotter for producing a visual, color-coded image of the current response of the cell to the laser scan. A translation platform driven by stepper motors in precise X and Y distances holds and rasters the cell being scanned under a stationary spot-focused laser beam.
Osterwald, Carl R.; Emery, Keith A.
1987-01-01
A laser scanning system for scanning the surface of a photovoltaic cell in a precise, stepped raster pattern includes electric current detecting and measuring equipment for sensing the current response of the scanned cell to the laser beam at each stepped irradiated spot or pixel on the cell surface. A computer is used to control and monitor the raster position of the laser scan as well as monitoring the corresponding current responses, storing this data, operating on it, and for feeding the data to a graphic plotter for producing a visual, color-coded image of the current response of the cell to the laser scan. A translation platform driven by stepper motors in precise X and Y distances holds and rasters the cell being scanned under a stationary spot-focused laser beam.
Modified conjugate gradient method for diagonalizing large matrices.
Jie, Quanlin; Liu, Dunhuan
2003-11-01
We present an iterative method to diagonalize large matrices. The basic idea is the same as the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient and avoiding reintroducing errors to the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector and the corresponding gradient of the Rayleigh quotient, as well as some previous trial vectors. The gradient, together with the previous trial vectors, plays a similar role to the conjugate gradient of the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
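A compact sketch of the iteration described, assuming a small random symmetric test matrix: each step diagonalizes the matrix in the subspace spanned by the current trial vector, the Rayleigh-quotient gradient, and the previous trial vector (a single history vector stands in for "some previous trial vectors").

```python
import numpy as np

# Subspace iteration for the lowest eigenpair, in the spirit of the
# paper: diagonalize A in span{x, grad(Rayleigh), previous x}.
# Toy random symmetric matrix for illustration.

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2                            # symmetrize

x = rng.standard_normal(200)
x /= np.linalg.norm(x)
prev = np.zeros_like(x)
for _ in range(100):
    rho = x @ A @ x                          # Rayleigh quotient
    g = 2 * (A @ x - rho * x)                # its gradient
    cols = [x, g, prev] if prev.any() else [x, g]
    Q, _ = np.linalg.qr(np.column_stack(cols))  # orthonormal subspace basis
    h = Q.T @ A @ Q                          # small projected matrix
    w, v = np.linalg.eigh(h)
    prev, x = x, Q @ v[:, 0]                 # lowest subspace eigenvector
print("lowest eigenvalue ~", x @ A @ x)
```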
Suba, Dávid; Urbányi, Zoltán; Salgó, András
2016-10-01
Capillary electrophoresis techniques are widely used in analytical biotechnology. Different electrophoretic techniques are very adequate tools to monitor size- and charge-heterogeneities of protein drugs. Method descriptions and development studies of capillary zone electrophoresis (CZE) have been described in the literature. Most of them are performed based on the classical one-factor-at-a-time (OFAT) approach. In this study a very simple method development approach is described for capillary zone electrophoresis: a "two-phase-four-step" approach is introduced which allows a rapid, iterative method development process and can be a good platform for CZE methods. In every step the current analytical target profile and an appropriate control strategy were established to monitor the current stage of development. A very good platform was established to investigate intact and digested protein samples. A commercially available monoclonal antibody was chosen as the model protein for the method development study. The CZE method was qualified after the development process and the results are presented. The analytical system stability was represented by the calculated RSD% values of area percentage and migration time of the selected peaks (<0.8% and <5%) during the intermediate precision investigation. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Qu, Zilian; Meng, Yonggang; Zhao, Qian
2015-03-01
This paper proposes a new eddy current method, named the equivalent unit method (EUM), for the thickness measurement of the top copper film of multilayer interconnects in the chemical mechanical polishing (CMP) process, which is an important step in integrated circuit (IC) manufacturing. The influence of the underneath circuit layers on the eddy current is modeled and treated as an equivalent film thickness. By subtracting this equivalent film component, the accuracy of the thickness measurement of the top copper layer with an eddy current sensor is improved, and the absolute error is 3 nm for sample measurement.
Ren, Xiaojie; Zhao, Xinhe; Turcotte, François; Deschênes, Jean-Sébastien; Tremblay, Réjean; Jolicoeur, Mario
2017-02-11
Microalgae have the potential to rapidly accumulate lipids of high interest for the food, cosmetics, pharmaceutical and energy (e.g. biodiesel) industries. However, current lipid extraction methods show efficiency limitations and, until now, extraction protocols have not been fully optimized for specific lipid compounds. The present study thus presents a novel lipid extraction method, consisting of the addition of a water treatment of biomass between the two-stage solvent extraction steps of current extraction methods. The resulting modified method not only enhances lipid extraction efficiency, but also yields a higher triacylglycerols (TAG) ratio, which is highly desirable for biodiesel production. Modification of four existing methods using acetone, chloroform/methanol (Chl/Met), chloroform/methanol/H2O (Chl/Met/H2O) and dichloromethane/methanol (Dic/Met) showed respective lipid extraction yield enhancements of 72.3, 35.8, 60.3 and 60.9%. The modified acetone method resulted in the highest extraction yield, with 68.9 ± 0.2% DW total lipids. Extraction of TAG was particularly improved with the water treatment, especially for the Chl/Met/H2O and Dic/Met methods. The acetone method with the water treatment led to the highest extraction level of TAG with 73.7 ± 7.3 µg/mg DW, which is 130.8 ± 10.6% higher than the maximum value obtained for the four classical methods (31.9 ± 4.6 µg/mg DW). Interestingly, the water treatment preferentially improved the extraction of intracellular fractions, i.e., TAG, sterols, and free fatty acids, compared to the lipid fractions of the cell membranes, which are constituted of phospholipids (PL), acetone mobile polar lipids and hydrocarbons. Finally, from the 32 fatty acids analyzed for both neutral lipids (NL) and polar lipids (PL) fractions, it is clear that the water treatment greatly improves the NL-to-PL ratio for the four standard methods assessed. Water treatment of biomass after the first solvent extraction step helps the subsequent release of intracellular lipids in the second extraction step, thus improving the global lipid extraction yield. In addition, the water treatment positively modifies the intracellular lipid class ratios of the final extract, in which the TAG ratio is significantly increased without changes in the fatty acid composition. The novel method thus provides an efficient way to improve the lipid extraction yield of existing methods, as well as selectively favoring TAG, a lipid of the utmost interest for biodiesel production.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
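The two schemes the authors recommend can be shown on a toy single-storage model dS/dt = p(t) - kS: a fixed-step implicit Euler update (closed-form for this linear store) and an adaptive explicit Heun step with a local error estimate. The storage coefficient, forcing, and tolerance below are illustrative, not from the paper's catchments.

```python
import numpy as np

# Toy single-storage hydrological model dS/dt = p(t) - k*S, stepped with
# the two schemes the paper recommends. Parameters are illustrative.

k = 2.0                                       # 1/day, storage coefficient
p = lambda t: 10.0 * (np.sin(t) > 0)          # intermittent "precipitation"

def implicit_euler(S, t, dt):
    # (S_new - S)/dt = p(t+dt) - k*S_new, solved for S_new (linear store).
    return (S + dt * p(t + dt)) / (1 + dt * k)

def heun_adaptive(S, t, dt, tol=1e-4):
    while True:
        f0 = p(t) - k * S
        pred = S + dt * f0                    # Euler predictor
        f1 = p(t + dt) - k * pred
        corr = S + dt * (f0 + f1) / 2         # Heun corrector
        err = abs(corr - pred) / 2            # local error estimate
        if err < tol:                         # accept, suggest next dt
            return corr, t + dt, dt * min(2.0, 0.9 * np.sqrt(tol / max(err, 1e-12)))
        dt *= 0.9 * np.sqrt(tol / err)        # reject, shrink, retry

S_imp = 0.0
for i in range(100):                          # fixed-step implicit Euler
    S_imp = implicit_euler(S_imp, i * 0.1, 0.1)

S, t, dt = 0.0, 0.0, 0.1                      # adaptive explicit Heun
while t < 10.0:
    S, t, dt = heun_adaptive(S, t, min(dt, 10.0 - t))
```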
NASA Astrophysics Data System (ADS)
Bolginov, V. V.; Rossolenko, A. N.; Shkarin, A. B.; Oboznov, V. A.; Ryazanov, V. V.
2018-03-01
We have implemented a trilayer technological approach to fabricate Nb-Cu0.47Ni0.53-Nb superconducting phase inverters (π-junctions) with enhanced critical current. Within this technique, all three layers of the superconductor-ferromagnet-superconductor junction are deposited in a single vacuum cycle, which has allowed us to obtain π-junctions with critical current density up to 20 kA/cm2. The value achieved is a factor of 10 higher than for the step-by-step method used in earlier works. Our additional experiments have shown that this difference is related to a bilayered CuNi/Cu barrier used in the case of the step-by-step technique and interlayer diffusion at the CuNi/Cu interface. We show that the interlayer diffusion can be utilized for fine tuning of the 0-π transition temperature of already fabricated junctions. The results obtained open new opportunities for CuNi-based phase inverters in digital and quantum Josephson electronics.
Prediction techniques for jet-induced effects in hover on STOVL aircraft
NASA Technical Reports Server (NTRS)
Wardwell, Douglas A.; Kuhn, Richard E.
1991-01-01
Prediction techniques for jet-induced lift effects during hover are available, relatively easy to use, and produce adequate results for preliminary design work. Although deficiencies of the current method were found, it is still the best way to estimate jet-induced lift effects short of using computational fluid dynamics, and its use is summarized. The new method summarized here represents the first step toward the use of surface pressure data in an empirical method, as opposed to just balance data in the current method, for calculating jet-induced effects. Although the new method is currently limited to flat-plate configurations having two circular jets of equal thrust, it has the potential of more accurately predicting jet-induced effects, including a means for estimating the pitching moment in hover. As this method was developed from a very limited amount of data, broader applications of the method require the inclusion of new data on additional configurations. However, within this small data base, the new method does a better job of predicting jet-induced effects in hover than the current method.
Method to prevent sulfur accumulation in membrane electrode assembly
Steimke, John L; Steeper, Timothy J; Herman, David T
2014-04-29
A method of operating a hybrid sulfur electrolyzer to generate hydrogen is provided that includes the steps of providing an anolyte with a concentration of sulfur dioxide, and applying a current. During steady state generation of hydrogen a plot of applied current density versus concentration of sulfur dioxide is below a boundary line. The boundary line may be linear and extend through the origin of the graph with a slope of 0.001 in which the current density is measured in mA/cm2 and the concentration of sulfur dioxide is measured in moles of sulfur dioxide per liter of anolyte.
A novel method to accurately locate and count large numbers of steps by photobleaching.
Tsekouras, Konstantinos; Custer, Thomas C; Jashnsaz, Hossein; Walter, Nils G; Pressé, Steve
2016-11-07
Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20-30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. © 2016 Tsekouras et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Teaching the Scientific Method Using Current News Articles
ERIC Educational Resources Information Center
Palmer, Laura K.; Mahan, Carolyn G.
2013-01-01
We describe a short (less than 50 minutes) activity using news articles from sources such as "Science Daily" to teach students the steps of the scientific method and the difference between primary and secondary literature sources. The flexibility in choosing news articles to examine allowed us to tailor the activity to the specific interests of…
Utama, M Iqbal Bakti; Lu, Xin; Zhan, Da; Ha, Son Tung; Yuan, Yanwen; Shen, Zexiang; Xiong, Qihua
2014-11-07
Patterning two-dimensional materials into specific spatial arrangements and geometries is essential for both fundamental studies of materials and practical applications in electronics. However, the currently available patterning methods generally require etching steps that rely on complicated and expensive procedures. We report here a facile patterning method for atomically thin MoSe2 films using stripping with an SU-8 negative resist layer exposed to electron beam lithography. Additional steps of chemical and physical etching were not necessary in this SU-8 patterning method. The SU-8 patterning was used to define a ribbon channel from a field effect transistor of MoSe2 film, which was grown by chemical vapor deposition. The narrowing of the conduction channel area with SU-8 patterning was crucial in suppressing the leakage current within the device, thereby allowing a more accurate interpretation of the electrical characterization results from the sample. An electrical transport study, enabled by the SU-8 patterning, showed a variable range hopping behavior at high temperatures.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two-space dimension. The model describes the charge transport in semiconductor devices. Mathematically, the models can be written as a convection-diffusion type system with a right hand side describing the relaxation effects and interaction with a self consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with the splitting scheme based on Nessyahu-Tadmor (NT) central scheme for convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low field mobility, device length, lattice temperature and voltages for one-space dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two dimensional simulation is also performed by CE/SE method for a MESFET device, producing results in good agreement with those obtained by NT-central scheme.
Uno, Megumi; Phansroy, Nichanan; Aso, Yuji; Ohara, Hitomi
2017-08-01
Shewanella oneidensis MR-1 generates electricity from lactic acid, but cannot utilize starch. On the other hand, Streptococcus bovis 148 metabolizes starch and produces lactic acid. Therefore, two methods were trialed for a starch-fueled microbial fuel cell (MFC) in this study. In the electric generation by two-step fermentation (EGT) method, starch was first converted to lactic acid by S. bovis 148. The S. bovis 148 cells were then removed by centrifugation, and the fermented broth was preserved for electricity generation by S. oneidensis MR-1. The other method was electric generation by parallel fermentation (EGP), in which the cultivation and subsequent fermentation processes of S. bovis 148 and S. oneidensis MR-1 were performed simultaneously. After 1, 2, and 3 terms (5-day intervals) of S. oneidensis MR-1 in the EGT fermented broth of S. bovis 148, the maximum currents at each term were 1.8, 2.4, and 2.8 mA, and the maximum power densities at each term were 41.0, 43.6, and 49.9 mW/m², respectively. In the EGP method, starch was also converted into lactic acid with electricity generation. The maximum current density was 140-200 mA/m², and the maximum power density of this method was 12.1 mW/m². Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Variability of Currents in Great South Channel and Over Georges Bank: Observation and Modeling
1992-06-01
Rizzoli motivated me to study the driving mechanism of stratified tidal rectification using diagnostic analysis methods. Conversations with Glen ... drifter trajectories in the 1988 and 1989 surveys give further encouragement that the analysis method yields an accurate picture of the nontidal flow ... harmonic truncation method. Scaling analysis argues that this method is not appropriate for a step topography because it is valid only when the
Design and control of the phase current of a brushless dc motor to eliminate cogging torque
NASA Astrophysics Data System (ADS)
Jang, G. H.; Lee, C. J.
2006-04-01
This paper presents a design and control method for the phase current to reduce the torque ripple of a brushless dc (BLDC) motor by eliminating cogging torque. The cogging torque is the main source of torque ripple, and consequently of speed error, and it is also an excitation source of motor vibration and noise. This research proposes a modified current waveform composed of main and auxiliary currents. The former is the conventional current that generates the commutating torque. The latter generates a torque with the same magnitude and opposite sign of the corresponding cogging torque at the given position, in order to eliminate the cogging torque. A time-stepping finite element simulation considering the pulse-width-modulation switching method has been performed to verify the effectiveness of the proposed method, and it shows that this proposed method reduces torque ripple by 36%. A digital-signal-processor-based controller is also developed to implement the proposed method, and it shows that the method reduces the speed ripple significantly.
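The compensation idea lends itself to a compact sketch: command an auxiliary current whose torque cancels the modeled cogging torque at the present rotor angle. Everything below (the constant torque coefficient, the 12th-harmonic cogging model) is a hypothetical stand-in for the paper's measured characteristics.

```python
import numpy as np

def commanded_current(theta, T_ref, Kt, T_cog):
    """Main current produces the commutating torque; the auxiliary current
    produces equal-and-opposite torque to the cogging torque at angle theta."""
    i_main = T_ref / Kt(theta)
    i_aux = -T_cog(theta) / Kt(theta)
    return i_main + i_aux

# Illustrative models (assumed, not from the paper):
Kt = lambda th: 0.08                          # torque constant, N*m/A
T_cog = lambda th: 0.02 * np.sin(12.0 * th)   # cogging torque, N*m
i_cmd = commanded_current(np.pi / 6.0, T_ref=0.5, Kt=Kt, T_cog=T_cog)
```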
NASA Astrophysics Data System (ADS)
Hans, Kerstin M.-C.; Gianella, Michele; Sigrist, Markus W.
2012-03-01
On-site drug tests have gained importance, e.g., for protecting society from impaired drivers. Since today's drug tests mostly give only positive/negative results, there is a great need for a reliable, portable and preferably quantitative drug test. In the project IrSens we aim to bridge this gap with the development of an optical sensor platform based on infrared spectroscopy, focusing on cocaine detection in saliva. We combine a one-step extraction method, a sample drying technique and infrared attenuated total reflection (ATR) spectroscopy. As a first step we have developed an extraction technique that allows us to extract cocaine from saliva into an almost infrared-transparent solvent and to record ATR spectra with a commercially available Fourier transform infrared spectrometer. To the best of our knowledge this is the first time that such a simple and easy-to-use one-step extraction method has been used to transfer cocaine from saliva into an organic solvent and detect it quantitatively. With this new method we are able to reach a current limit of detection of around 10 μg/ml. This new extraction method could also be applied to wastewater monitoring and controlling caffeine content in beverages.
Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.
Bauler, Patricia; Huber, Gary A; McCammon, J Andrew
2012-04-28
Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
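The median function mentioned above makes a monotonicity constraint easy to code. Below is one common median-based slope limiter of this family (equivalent to the monotonized-central constraint), shown as a generic sketch rather than the paper's exact constraint.

```python
def median3(a, b, c):
    """Middle value of three numbers."""
    return max(min(a, b), min(max(a, b), c))

def limited_slope(ul, uc, ur):
    """Clip the central slope into the monotone range implied by the
    one-sided differences, using two median operations."""
    dl, dr = uc - ul, ur - uc
    central = 0.5 * (dl + dr)
    s = median3(central, 0.0, 2.0 * dl)
    return median3(s, 0.0, 2.0 * dr)
```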
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed-point iteration and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed-point iteration and Newton's method both improve solver performance over the standard fixed-point method used for the solution of the nonlinear systems.
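For readers who want to experiment with the trade-off, SciPy ships an implicit Runge-Kutta solver (Radau IIA, order 5) that can stand in for the higher-order integrators discussed above; the stiff test problem below is a generic surrogate, not a dislocation dynamics force model.

```python
from scipy.integrate import solve_ivp

def stiff_rhs(t, y):
    # Van der Pol oscillator with large mu: a standard stiff test problem
    mu = 1000.0
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

sol = solve_ivp(stiff_rhs, (0.0, 3000.0), [2.0, 0.0],
                method="Radau", rtol=1e-6, atol=1e-8)
print(sol.t.size, "accepted steps with the implicit Runge-Kutta solver")
```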
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
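The per-cell nonlinear solve described above reduces, for a scalar load, to a one-dimensional root find at each time step. The sketch below assumes a diode-like load law; the load model and parameter values are illustrative, not from the paper.

```python
import numpy as np

def load_voltage(i_wire, g_cell, v0=0.0, Is=1e-12, Vt=0.026, tol=1e-12):
    """Newton's method for g_cell*V + Is*(exp(V/Vt) - 1) = i_wire,
    the kind of scalar equation solved in each FDTD cell holding a
    nonlinear load."""
    v = v0
    for _ in range(100):
        f = g_cell * v + Is * (np.exp(v / Vt) - 1.0) - i_wire
        df = g_cell + (Is / Vt) * np.exp(v / Vt)
        dv = f / df
        v -= dv
        if abs(dv) < tol:
            break
    return v
```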
NASA Astrophysics Data System (ADS)
Morais, A. P.; Pino, A. V.; Souza, M. N.
2016-08-01
This in vitro study evaluated the diagnostic performance of an alternative electric bioimpedance spectroscopy technique (BIS-STEP) to detect questionable occlusal carious lesions. Six specialists carried out the visual (V), radiography (R), and combined (VR) exams of 57 teeth that were sound or had non-cavitated occlusal carious lesions, classifying the occlusal surfaces as sound surface (H), enamel caries (EC), or dentinal caries (DC). Measurements were based on the current response to a step voltage excitation (BIS-STEP). A fractional electrical model was used to predict the current response in the time domain and to estimate the model parameters: Rs and Rp (resistive parameters), and C and α (fractional parameters). Histological analysis showed a caries prevalence of 33.3%, of which 15.8% were hidden caries. The combined examination obtained the best traditional diagnostic results, with specificity = 59.0%, sensitivity = 70.9%, and accuracy = 60.8%. There were statistically significant differences in bioimpedance parameters between the H and EC groups (p = 0.016) and between the H and DC groups (Rs, p = 0.006; Rp, p = 0.022; and α, p = 0.041). Using a suitable threshold for Rs, we obtained specificity = 60.7%, sensitivity = 77.9%, accuracy = 73.2%, and 100% detection of deep lesions. It can be concluded that the BIS-STEP method could be an important tool to improve the detection and management of occlusal non-cavitated primary caries and pigmented sites.
NASA Astrophysics Data System (ADS)
Omiya, Takuma; Tanaka, Akira; Shimomura, Masaru
2012-07-01
The structure of porous silicon carbide membranes that peeled off spontaneously during electrochemical etching was studied. They were fabricated from n-type 6H SiC(0001) wafers by a double-step electrochemical etching process in a hydrofluoric electrolyte. Nanoporous membranes were obtained after double-step etching with current densities of 10-20 and 60-100 mA/cm2 in the first and second steps, respectively. Microporous membranes were also fabricated after double-step etching with current densities of 100 and 200 mA/cm2. It was found that the pore diameter is influenced by the etching current in step 1, and that a higher current is required in step 2 when the current in step 1 is increased. During the etching processes in steps 1 and 2, vertical nanopore and lateral crack formations proceed, respectively. The influx pathway of hydrofluoric solution, expansion of generated gases, and transfer limitation of positive holes to the pore surface are the key factors in the peeling-off mechanism of the membrane.
A New Approach to Aircraft Robust Performance Analysis
NASA Technical Reports Server (NTRS)
Gregory, Irene M.; Tierno, Jorge E.
2004-01-01
A recently developed algorithm for nonlinear system performance analysis has been applied to an F16 aircraft to begin evaluating the suitability of the method for aerospace problems. The algorithm has the potential to be much more efficient than current methods of performance analysis for aircraft. This paper is the initial step in evaluating this potential.
Sun, Wanjie; Larsen, Michael D; Lachin, John M
2014-04-15
In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. They are particularly attractive when outcomes reach a plateau after intervention for various reasons. The methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. The methods proved to be robust to high dimensions, large amounts of censored data, and low within-subject correlation, and to settings where subjects receive a non-trial intervention to treat the underlying condition only (with high Y), or where treatment in the majority of subjects (with high Y) is combined with prevention in a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang
2017-05-01
This paper proposes an analytical method, based on the conformal mapping (CM) method, for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.
Recovery and purification process development for monoclonal antibody production
Ma, Junfen; Winter, Charles; Bayer, Robert
2010-01-01
Hundreds of therapeutic monoclonal antibodies (mAbs) are currently in development, and many companies have multiple antibodies in their pipelines. Current methodology used in recovery processes for these molecules is reviewed here. Basic unit operations such as harvest, Protein A affinity chromatography and additional polishing steps are surveyed. Alternative processes such as flocculation, precipitation and membrane chromatography are discussed. We also cover platform approaches to purification methods development and the use of high throughput screening methods, and offer a view on future developments in purification methodology as applied to mAbs. PMID:20647768
Numerical investigation of split flows by gravity currents into two-layered stratified water bodies
NASA Astrophysics Data System (ADS)
Cortés, A.; Wells, M. G.; Fringer, O. B.; Arthur, R. S.; Rueda, F. J.
2015-07-01
The behavior of a two-dimensional (2-D) gravity current impinging upon a density step in a two-layered stratified basin is analyzed using a high-resolution Reynolds-Averaged Navier-Stokes model. The gravity current splits at the density step, and the portion of the buoyancy flux becoming an interflow is largely controlled by the vertical distribution of velocity and density within the gravity current and the magnitude of the density step between the two ambient layers. This is in agreement with recent laboratory observations. The strongest changes in the ambient density profiles occur as a result of the impingement of supercritical currents with strong density contrasts, for which a large portion of the gravity current detaches from the bottom and becomes an interflow. We characterize the current partition process in the simulated experiments using the densimetric Froude number of the current (Fr) across the density step (upstream and downstream). When underflows are formed, more supercritical currents are observed downstream of the density step compared to upstream (Fru < Frd), and thus, stronger mixing of the current with the ambient water downstream. However, when split flows and interflows are formed, smaller Fr values are identified after the current crosses the density step (Fru > Frd), which indicates lower mixing between the current and ambient water after the impingement due to the significant stripping of interfacial material at the density step.
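The diagnostic quantity used above is the densimetric Froude number; a minimal definition with the standard reduced-gravity form is given below (the paper's exact layer-averaging convention is an assumption here).

```python
import numpy as np

def densimetric_froude(U, rho_current, rho_ambient, h, g=9.81):
    """Fr = U / sqrt(g' h), with reduced gravity g' based on the density
    contrast between the gravity current and the ambient layer."""
    g_prime = g * (rho_current - rho_ambient) / rho_ambient
    return U / np.sqrt(g_prime * h)
```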
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-01-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517
Receptor-mediated gene transfer vectors: progress towards genetic pharmaceuticals.
Molas, M; Gómez-Valadés, A G; Vidal-Alabró, A; Miguel-Turu, M; Bermudez, J; Bartrons, R; Perales, J C
2003-10-01
Although specific delivery to tissues and unique cell types in vivo has been demonstrated for many non-viral vectors, current methods are still inadequate for human applications, mainly because of limitations on their efficiencies. All the steps required for an efficient receptor-mediated gene transfer process may in principle be exploited to enhance targeted gene delivery. These steps are: DNA/vector binding, internalization, subcellular trafficking, vesicular escape, nuclear import, and unpacking either for transcription or other functions (i.e., antisense, RNA interference, etc.). The large variety of vector designs that are currently available, usually aimed at improving the efficiency of these steps, has complicated the evaluation of data obtained from specific derivatives of such vectors. The importance of the structure of the final vector and the consequences of design decisions at specific steps on the overall efficiency of the vector will be discussed in detail. We emphasize in this review that stability in serum and thus, proper bioavailability of vectors to their specific receptors may be the single greatest limiting factor on the overall gene transfer efficiency in vivo. We discuss current approaches to overcome the intrinsic instability of synthetic vectors in the blood. In this regard, a summary of the structural features of the vectors obtained from current protocols will be presented and their functional characteristics evaluated. Dissecting information on molecular conjugates obtained by such methodologies, when carefully evaluated, should provide important guidelines for the creation of effective, targeted and safe DNA therapeutics.
Regression Analysis of a Disease Onset Distribution Using Diagnosis Data
Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.
2008-01-01
We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832
SpaceNet: Modeling and Simulating Space Logistics
NASA Technical Reports Server (NTRS)
Lee, Gene; Jordan, Elizabeth; Shishko, Robert; de Weck, Olivier; Armar, Nii; Siddiqi, Afreen
2008-01-01
This paper summarizes the current state of the art in interplanetary supply chain modeling and discusses SpaceNet as one particular method and tool to address space logistics modeling and simulation challenges. Fundamental upgrades to the interplanetary supply chain framework such as process groups, nested elements, and cargo sharing, enabled SpaceNet to model an integrated set of missions as a campaign. The capabilities and uses of SpaceNet are demonstrated by a step-by-step modeling and simulation of a lunar campaign.
Towards enhanced automated elution systems for waterborne protozoa using megasonic energy.
Horton, B; Katzer, F; Desmulliez, M P Y; Bridle, H L
2018-02-01
Continuous and reliable monitoring of water sources for human consumption is imperative for public health. For protozoa, which cannot be multiplied efficiently in laboratory settings, concentration and recovery steps are key to a successful detection procedure. Recently, the use of megasonic energy was demonstrated to recover Cryptosporidium from commonly used water industry filtration procedures, thereby forming a basis for a simplified and cost-effective method of elution of pathogens. In this article, we report the benefits of incorporating megasonic sonication into the current methodologies of Giardia duodenalis elution from an internationally approved filtration and elution system used within the water industry, the Filta-Max®. Megasonic-energy-assisted elution has many benefits over current methods, since a smaller final volume of eluent allows removal of time-consuming centrifugation steps and reduces manual involvement, resulting in a potentially more consistent and more cost-effective method. We also show that megasonic sonication of G. duodenalis cysts provides the option of a less damaging elution method compared to the standard Filta-Max® operation, although the elution from filter matrices is not currently fully optimised. A notable decrease in recovery of damaged cysts was observed in megasonic processed samples, potentially improving the options for further genetic identification upon isolation of the parasite from a filter sample. This work paves the way for the development of a fully automated and more cost-effective elution method of Giardia from water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
2016-02-10
using bolt hole eddy current (BHEC) techniques. Data was acquired for a wide range of crack sizes and shapes, including mid-bore, corner and through ... to select the most appropriate VIC-3D surrogate model for the subsequent crack-sizing inversion step. Inversion results for select mid-bore, through and ... the flaw.
Stepwise and stagewise approaches for spatial cluster detection.
Xu, Jiale; Gangnon, Ronald E
2016-05-01
Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Smith, Rebecca K.; Lay, Donald C.
2018-01-01
Simple Summary: The current approved method of using carbon dioxide (CO2) to euthanize newborn piglets is raising animal welfare concerns about whether the method is truly humane. A new form of euthanasia that is humane, practical, and socially acceptable is needed. Nitrous oxide (N2O), also known as laughing gas, has been shown to induce narcosis in piglets. We used a novel two-step system of exposing compromised piglets for six minutes to N2O followed by carbon dioxide and compared it to using CO2 alone. After exposure to nitrous oxide, all piglets lost posture, a sign of the onset of loss of consciousness, before being exposed to CO2, when they showed behavioral distress. On-farm use of a two-step method reduced the amount of time the piglets were exposed to CO2 but did not reduce the amount of distressful behavior. Therefore, the results do not support the hypothesis that using N2O in a two-step system is more humane than CO2 alone. Abstract: Current methods of euthanizing piglets are raising animal welfare concerns. Our experiment used a novel two-step euthanasia method, using nitrous oxide (N2O) for six minutes and then carbon dioxide (CO2), on compromised 0- to 7-day-old piglets. A commercial euthanasia chamber was modified to deliver two euthanasia treatments: the two-step method using N2O then CO2 (N2O treatment) or only CO2 (CO2 treatment). In Experiment 1, 18 piglets were individually euthanized. In Experiment 2, 18 groups of four to six piglets were euthanized. In the N2O treatment, piglets lost posture, indicating the onset of losing consciousness, before going into CO2, where they showed heavy breathing and open-mouth breathing; whereas piglets in the CO2 treatment did not lose posture until after exhibiting these behaviors (p ≤ 0.004). However, piglets in the N2O treatment took longer to lose posture compared to the CO2 treatment (p < 0.001). Piglets in the N2O treatment displayed more behavioral signs of stress and aversion: squeals/minute (p = 0.004), escape attempts per pig (p = 0.021), and righting responses per pig (p = 0.084) in a group setting. In these regards, it cannot be concluded that euthanizing piglets for 6 min with N2O and then CO2 is more humane than euthanizing with CO2 alone. PMID:29617328
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
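The voxel-wise differencing step is simple to express; the sketch below assumes the two volumes are already co-registered and leaves the ray-casting display to a rendering library.

```python
import numpy as np

def change_map(vol_t0, vol_t1, threshold=0.0):
    """Signed voxel-wise change between two co-registered volumes,
    with small intensity changes suppressed."""
    diff = vol_t1.astype(np.float32) - vol_t0.astype(np.float32)
    diff[np.abs(diff) <= threshold] = 0.0
    return diff
```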
Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.
Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro
2018-06-13
The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.
Health technology assessment: Off-site sterilization
Dehnavieh, Reza; Mirshekari, Nadia; Ghasemi, Sara; Goudarzi, Reza; Haghdoost, AliAkbar; Mehrolhassani, Mohammad Hossain; Moshkani, Zahra; Noori Hekmat, Somayeh
2016-01-01
Background: Every year millions of dollars are expended to equip and maintain hospital sterilization centers, and our country is no exception in this matter. Accordingly, it is important to use more effective technologies and methods in the health system in order to achieve greater effectiveness and cost savings. This study was conducted with the aim of evaluating the technology of regional sterilization centers. Methods: This study was done in four steps. In the first step, the safety and effectiveness of the technology were studied via a systematic review of the evidence. The next step evaluated the economic aspect of off-site sterilization technology using data gathered from a systematic review of the literature related to the technology and the costs of off-site and on-site hospital sterilization. The third step collected the experiences of selected hospitals around the world in using the technology. In the last step, different aspects of the acceptance and use of this technology in Iran were evaluated. Results: Review of the selected articles indicated that the efficacy and effectiveness of this technology are confirmed. The results also showed that using this method is not economical in Iran. Conclusion: According to the available evidence and the cost analysis, due to the shortage of necessary infrastructure and for economic reasons, installing off-site sterilization technology in hospitals is not currently possible. However, this method can be used to provide sterilization services for clinics and outpatient centers. PMID:27390714
Superconducting magnetic shielding apparatus and method
Clem, John R.; Clem, John R.
1983-01-01
Disclosed are a method and apparatus for providing magnetic shielding around a working volume. The apparatus includes a hollow elongated superconducting shell or cylinder having an elongated low-magnetic-pinning central portion and two high-magnetic-pinning end regions. Transition portions of varying magnetic pinning properties are interposed between the central and end portions. The apparatus further includes a solenoid substantially coextensive with and overlying the superconducting cylinder, so as to be magnetically coupled therewith. The method includes the steps of passing a longitudinally directed current through the superconducting cylinder so as to depin magnetic reservoirs trapped in the cylinder. Next, a circumferentially directed current is passed through the cylinder, while the longitudinally directed current is maintained. Depinned magnetic reservoirs are moved to the end portions of the cylinder, where they are trapped.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, and on explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas that equate the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the last time step (or the initial conditions) and the advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we apply the explicit stabilized routine SERK2 to the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. Moreover, this new Eulerian-Lagrangian collocation scheme resolves all the aforementioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for large numbers of particles and other problems of Lagrangian methods.
Finally, numerical tests show that this approach yields not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information
NASA Technical Reports Server (NTRS)
Butts, Glenn
2007-01-01
Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic for Applications (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.
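In the spirit of the normalization step, here is a hedged Python sketch (the paper's model lives in Excel/VBA; the inflation index values and the 0.7 scaling exponent below are illustrative assumptions, not from the paper):

```python
def normalized_cost(hist_cost, hist_index, new_index, hist_mass, new_mass, b=0.7):
    """Escalate a historical cost to the target year, then scale it by a
    technical driver (here mass) with an assumed power-law exponent b."""
    escalated = hist_cost * (new_index / hist_index)
    return escalated * (new_mass / hist_mass) ** b

# Example: $12M historical point, price index 100 -> 128, 450 kg -> 600 kg
print(normalized_cost(12.0e6, 100.0, 128.0, 450.0, 600.0))
```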
Clegg, Paul S; Tavacoli, Joe W; Wilde, Pete J
2016-01-28
Multiple emulsions have great potential for application in food science as a means to reduce fat content or for controlled encapsulation and release of actives. However, neither production nor stability is straightforward. Typically, multiple emulsions are prepared via two emulsification steps and a variety of approaches have been deployed to give long-term stability. It is well known that multiple emulsions can be prepared in a single step by harnessing emulsion inversion, although the resulting emulsions are usually short lived. Recently, several contrasting methods have been demonstrated which give rise to stable multiple emulsions via one-step production processes. Here we review the current state of microfluidic, polymer-stabilized and particle-stabilized approaches; these rely on phase separation, the role of electrolyte and the trapping of solvent with particles respectively.
Optimization of the current potential for stellarator coils
NASA Astrophysics Data System (ADS)
Boozer, Allen H.
2000-02-01
Stellarator plasma confinement devices have no continuous symmetries, which makes the design of appropriate coils far more subtle than for axisymmetric devices such as tokamaks. The modern method for designing coils for stellarators was developed by Peter Merkel [P. Merkel, Nucl. Fusion 27, 867 (1987)]. Although his method has yielded a number of successful stellarator designs, Merkel's method has a systematic tendency to give coils with a larger current than that required to produce a stellarator plasma with certain properties. In addition, Merkel's method does not naturally lead to a coil set with the flexibility to produce a number of interesting plasma configurations. The issues of coil efficiency and flexibility are addressed in this paper by a new method of optimizing the current potential, the first step in Merkel's method. The new method also allows the coil design to be based on a freer choice for the plasma-coil separation and to be constrained so space is preserved for plasma access.
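At its core, optimizing a current potential of this kind can be posed as a regularized least-squares problem; the Tikhonov-style trade-off below is a generic sketch of that idea, not the paper's exact functional.

```python
import numpy as np

def solve_current_potential(A, b, K, lam):
    """Minimize |A x - b|^2 + lam |K x|^2 over Fourier coefficients x,
    where A maps coefficients to the normal-field error on the plasma
    surface and K penalizes surface-current magnitude."""
    lhs = A.T @ A + lam * (K.T @ K)
    return np.linalg.solve(lhs, A.T @ b)
```

Increasing lam trades field accuracy for smaller (more buildable) coil currents, which is the efficiency-versus-flexibility tension discussed above.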
[Current strategy in PCI for CTO].
Asakura, Yasushi
2011-02-01
Recently, CTO PCI has come into wide use all over the world, and it has been standardized. The first step is an antegrade approach using a single wire. The second strategy would be the parallel-wire technique, and the next would be a retrograde approach. In this method, retrograde wiring with a Corsair microcatheter is done first. If it is successful, externalization is established using a 300 cm wire, and this system is able to provide strong back-up support. If it fails, the reverse CART technique is the next step. IVUS-guided wiring is a last resort: the second wire is manipulated with IVUS guidance. Now, the initial success rate is more than 90% with these methods.
STEPPING - Smartphone-Based Portable Pedestrian Indoor Navigation
NASA Astrophysics Data System (ADS)
Lukianto, C.; Sternberg, H.
2011-12-01
Many current smartphones are fitted with GPS receivers which, in combination with a map application, form a pedestrian navigation system for outdoor purposes. However, once an area with insufficient satellite signal coverage is entered, these navigation systems cease to function. For indoor positioning, there are already several solutions available which are usually based on measured distances to reference points. These solutions can achieve resolutions down to the sub-millimetre range, depending on the complexity of the set-up. The STEPPING project, developed at HCU Hamburg, Germany, aims at designing an indoor navigation system consisting of a small inertial navigation system and a new, robust sensor fusion algorithm running on a current smartphone. As this system is theoretically able to integrate any available positioning method, it is independent of any particular method and can thus be realized on a smartphone without affecting user mobility. Potential applications include, but are not limited to: large trade fairs, airports, parking decks and shopping malls, as well as ambient assisted living scenarios.
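A one-dimensional caricature of such a fusion loop: dead reckoning advances the position with each detected step, and an occasional absolute fix (from any available positioning method) corrects the drift. The Kalman-style update below is an assumption for illustration; the project's actual algorithm is not described in this abstract.

```python
def fuse_step(x, P, step_length, q, z=None, r=None):
    """x, P: position estimate and its variance.
    Predict with a detected step (process noise q); if an absolute fix z
    with variance r is available, apply a scalar Kalman correction."""
    x, P = x + step_length, P + q
    if z is not None:
        K = P / (P + r)
        x, P = x + K * (z - x), (1.0 - K) * P
    return x, P
```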
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Small-signal models are derived for the power stage of the voltage step-up (boost) and the current step-up (buck) converters. The modeling covers operation in both the continuous-mmf mode and the discontinuous-mmf mode. The power stage in the regulated current step-up converter on board the Dynamics Explorer Satellite is used as an example to illustrate the procedures in obtaining the small-signal functions characterizing a regulated converter.
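For the continuous-conduction boost stage, the textbook small-signal control-to-output transfer function (with its right-half-plane zero) gives a feel for what such models look like; the component values are arbitrary, and this standard form is shown for illustration rather than as the paper's exact result.

```python
from scipy import signal

Vg, D, L, C, R = 12.0, 0.5, 100e-6, 470e-6, 10.0
Dp = 1.0 - D                                   # D' = 1 - D
# G_vd(s) = (Vg/D'^2) * (1 - s*L/(D'^2*R)) / (1 + s*L/(D'^2*R) + s^2*L*C/D'^2)
num = [-Vg * L / (Dp**4 * R), Vg / Dp**2]      # RHP zero at D'^2*R/L rad/s
den = [L * C / Dp**2, L / (Dp**2 * R), 1.0]
G_vd = signal.TransferFunction(num, den)
```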
NASA Astrophysics Data System (ADS)
Guthrey, Pierson Tyler
The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons can accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent to numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities; thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin (DG) method. One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires. The maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work, we overcome this difficulty by introducing a novel time-stepping strategy: the regionally-implicit discontinuous Galerkin (RIDG) method. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which previously had been shown to be equivalent (for linear constant-coefficient problems) to a predictor-corrector approach, where the prediction is computed by a space-time DG method (STDG). The corrector is an explicit method that uses the space-time reconstructed solution from the predictor step. In this work, we modify the predictor to include not just local information, but also neighboring information. With this modification, the stability is greatly enhanced: the polynomial-degree dependence of the maximum time-step is removed, and vastly improved time-steps are achieved in multiple spatial dimensions. After developing the general RIDG method, we apply it to the non-relativistic 1D1V Vlasov-Poisson equations and the relativistic 1D2V Vlasov-Maxwell equations. For each, we validate the high-order method on several test cases. In the final test case, we demonstrate the ability of the method to simulate the acceleration of electrons to relativistic speeds in a simplified setting.
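A commonly quoted estimate for the explicit RK-DG restriction mentioned above is dt <= CFL * h / (a * (2k + 1)) for polynomial degree k and wave speed a; below is a two-line check of how quickly the step shrinks with degree (a generic rule of thumb, not the thesis's precise bound).

```python
def max_dt(h, a, k, cfl=0.9):
    """Rule-of-thumb explicit RK-DG time-step limit for degree-k elements."""
    return cfl * h / (a * (2 * k + 1))

for k in range(5):
    print(k, max_dt(h=0.01, a=1.0, k=k))
```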
Yang, Zhi; Liu, Xiaoman; Wang, Kuiwu; Cao, Xiaoji; Wu, Shihua
2013-03-01
Dysosma versipellis (Hance) is a famous traditional Chinese medicine that has been used for thousands of years in the treatment of snakebite, weakness, condyloma acuminata, lymphadenopathy, and tumors. In this work, four podophyllotoxin-like lignans, including 4'-demethylpodophyllotoxin (1), α-peltatin (2), podophyllotoxin (3), and β-peltatin (4), as the major cytotoxic principles of D. versipellis, were successfully isolated and purified by several novel linear and step-gradient counter-current chromatography methods using the solvent systems hexane/ethyl acetate/methanol/water (4:6:3:7 and 4:6:4:6, v/v/v/v). Compared with isocratic elution, linear and step-gradient elution provided better resolution and saved time in the separation of podophyllotoxin and its congeners. Their cytotoxicities were further evaluated and their structures were validated by high-resolution electrospray TOF MS and nuclear magnetic resonance spectra. All components showed potent anticancer activity against human hepatoma HepG2 cells. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young-Cheol; Kim, Hyun-Jun; Lee, Hyo-Chang
In a plasma discharge system, the power loss in the powered line, matching network, and other transmission lines can affect discharge characteristics such as the power transfer efficiency, the voltage and current at the powered electrode, and the plasma density. In this paper, we propose a method to reduce power loss by using a step-down transformer mounted between the matching network and the powered electrode in a capacitively coupled argon plasma. This step-down transformer decreases the power loss by reducing the current flowing through the matching network and transmission line. As a result, the power transfer efficiency was increased by about 5%-10% by using the step-down transformer. Moreover, the plasma density was dramatically increased compared with the case without a transformer. This can be understood by the increase in ohmic heating and the decrease in dc self-bias. By simply mounting a transformer, improvement of discharge efficiency can be achieved in capacitively coupled plasmas.
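The mechanism can be sketched with an idealized, lossless transformer: for a step-down turns ratio $n:1$ delivering the same electrode current $I_e$, the current drawn through the matching network and line (series resistance $R_s$) is reduced, and so is the ohmic loss there:

$$I_{\text{line}}=\frac{I_e}{n},\qquad P_{\text{loss}}=I_{\text{line}}^{2}R_s=\frac{I_e^{2}R_s}{n^{2}}.$$

This is only the idealized scaling; the 5%-10% efficiency gain reported above is a measured result, not derived from this relation.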
NASA Astrophysics Data System (ADS)
Shah, Syed Afaq Ali; Sayyad, Muhammad Hassan; Abdulkarim, Salem; Qiao, Qiquan
2018-05-01
A step-by-step heat treatment was applied to ruthenium-based N719 dye solution for its potential application in dye-sensitized solar cells (DSSCs). The effects were analyzed and compared with standard untreated devices. A significant increase in short-circuit current density (Jsc) was observed by employing the step-by-step heating method for the dye solution in DSSCs. This increase in Jsc is attributed to the enhancement in dye adsorption by the surface of the semiconductor and the higher number of charge carriers generated. DSSCs fabricated with a heated dye solution achieved an overall power conversion efficiency of 8.41%, significantly higher than the 7.31% achieved with DSSCs fabricated without heated dye. Electrochemical impedance spectroscopy and capacitance-voltage studies were performed to understand the better performance of the device fabricated with heated dye. Furthermore, transient photocurrent and transient photovoltage measurements were also performed to gain insight into interfacial charge carrier recombination.
Cserpán, Dorottya; Meszéna, Domokos; Wittner, Lucia; Tóth, Kinga; Ulbert, István; Somogyvári, Zoltán
2017-01-01
Revealing the current source distribution along the neuronal membrane is a key step on the way to understanding neural computations; however, the experimental and theoretical tools to achieve sufficient spatiotemporal resolution for the estimation remain to be established. Here, we address this problem using extracellularly recorded potentials with arbitrarily distributed electrodes for a neuron of known morphology. We use simulations of models with varying complexity to validate the proposed method and to give recommendations for experimental applications. The method is applied to in vitro data from rat hippocampus. PMID:29148974
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
CFD and Thermo Mechanical Analysis on Effect of Curved vs Step Surface in IC Engine Cylinder Head
NASA Astrophysics Data System (ADS)
Balaji, S.; Ganesh, N.; Kumarasamy, A.
2017-05-01
Current research in IC engines mainly focuses on various methods to achieve higher efficiency and higher specific power. As a single design parameter, combustion chamber peak firing pressure has increased more than before. Apart from the structural aspects of withstanding these loads, the designer faces the challenge of resolving the thermal aspects of the cylinder head. Methods to enhance the heat transfer without compromising load-withstanding capability are being constantly explored. Conventional cylinder heads have a flat inner surface. In this paper we suggest a modification of the inner surface to enhance the heat transfer capability. To increase the heat transfer rate, the inner flame deck surface is configured as a curved or stepped surface instead of a flat one. We have reported the effectiveness of the extent of curvature of the inner flame deck surface in a different technical paper. Here, we make a direct comparison between stepped and curved surfaces only. From this analysis it has been observed that the curved surface reduces the flame deck temperature considerably, without compromising the structural strength factors, compared to the stepped and flat surfaces.
Comparative Study of Three Data Assimilation Methods for Ice Sheet Model Initialisation
NASA Astrophysics Data System (ADS)
Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier
2015-04-01
The current global warming has direct consequences on ice-sheet mass loss, contributing to sea level rise. This loss is generally driven by an acceleration of some coastal outlet glaciers, and reproducing these mechanisms is one of the major issues in ice-sheet and ice-flow modelling. The construction of an initial state, as close as possible to current observations, is required as a prerequisite before producing any reliable projection of the evolution of ice sheets. For this step, inverse methods are often used to infer badly known or unknown parameters. For instance, the adjoint inverse method has been implemented and applied with success by different authors in different ice flow models in order to infer the basal drag [Schafer et al., 2012; Gillet-Chaulet et al., 2012; Morlighem et al., 2010]. Other data fields, such as ice surface and bedrock topography, are easily measurable with more or less uncertainty, but only locally along tracks, and must be interpolated onto the finer model grid. All these approximations lead to errors in the digital elevation model and give rise to an ill-posed problem inducing non-physical anomalies in the flux divergence [Seroussi et al., 2011]. One solution to dissipate these flux-divergence anomalies is to conduct a surface relaxation step, at the expense of the accuracy of the modelled surface [Gillet-Chaulet et al., 2012]. Other solutions, based on the inversion of ice thickness and basal drag, have been proposed [Perego et al., 2014; Pralong & Gudmundsson, 2011]. In this study, we create a twin experiment to compare three different assimilation algorithms based on inverse methods and nudging to constrain the bedrock friction and the bedrock elevation: (i) cyclic inversion of the friction parameter and bedrock topography using the adjoint method, (ii) cycles coupling inversion of the friction parameter using the adjoint method with nudging of the bedrock topography, and (iii) one-step inversion of both parameters with the adjoint method. The three methods show a clear improvement in parameter knowledge, leading to a significant reduction of the flux-divergence anomalies of the model before forecasting.
Comparison of genomic-enhanced EPD systems using an external phenotypic database
USDA-ARS?s Scientific Manuscript database
The American Angus Association (AAA) is currently evaluating two methods to incorporate genomic information into their genetic evaluation program: 1) multi-trait incorporation of an externally produced molecular breeding value as an indicator trait (MT) and 2) single-step evaluation with an unweight...
USDA-ARS?s Scientific Manuscript database
Current wet chemical methods for biomass composition analysis using two-step sulfuric acid hydrolysis are time-consuming, labor-intensive, and unable to provide structural information about biomass. Infrared techniques provide fast, low-cost analysis, are non-destructive, and have shown promising re...
Briot, T; Robelet, A; Morin, N; Riou, J; Lelièvre, B; Lebelle-Dehaut, A-V
2016-07-01
In this study, a novel analytical method to quantify a prion-inactivating detergent in rinsing waters coming from the washer-disinfector of a hospital sterilization unit has been developed. The final aim was to obtain an easy and functional method for routine hospital use which does not depend on the cleaning product manufacturer's services. An ICP-MS method based on the potassium dosage of the washer-disinfector's rinsing waters was developed. Potassium hydroxide is present in the composition of the three prion-inactivating detergents currently on the French market. The detergent used in this study was Actanios LDI(®) (Anios laboratories). A Passing and Bablok regression was used to compare concentrations measured with this newly developed method and with the manufacturer's HPLC-UV method. According to the results obtained, the developed method is easy to use in a routine hospital process. The Passing and Bablok regression showed that there is no statistical difference between the two analytical methods during the second rinsing step. In addition, both methods were linear on the third rinsing step, with a 1.5 ppm difference between the concentrations measured for each method. This study shows that the ICP-MS method developed is nonspecific for the detergent, but specific for the potassium element, which is present in all prion-inactivating detergents currently on the French market. This method should be functional for all prion-inactivating detergents containing potassium, provided the sensitivity of the method is sufficient when the potassium concentration is very low in the detergent formulation. Copyright © 2016. Published by Elsevier Masson SAS.
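For readers unfamiliar with the comparison technique, a simplified Passing and Bablok fit can be sketched as the median of all pairwise slopes (the full procedure also applies a shift to the median, handles ties, and constructs confidence bands, all omitted here; this is an illustration, not the authors' code):

```python
import numpy as np
from itertools import combinations

def passing_bablok_simplified(x, y):
    """Simplified Passing-Bablok regression for method comparison.

    slope ~ median of all pairwise slopes, intercept ~ median residual.
    Agreement between two analytical methods corresponds to a slope
    close to 1 and an intercept close to 0.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    b = np.median(slopes)
    a = np.median(y - b * x)
    return b, a
```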
Lee, Hyungseok; Cho, Dong-Woo
2016-07-05
Although various types of organs-on-chips have been introduced recently as tools for drug discovery, the current studies are limited in terms of fabrication methods. The fabrication methods currently available not only need a secondary cell-seeding process and result in severe protein absorption due to the material used, but also have difficulties in providing various cell types and extracellular matrix (ECM) environments for spatial heterogeneity in the organs-on-chips. Therefore, in this research, we introduce a novel 3D bioprinting method for organ-on-a-chip applications. With our novel 3D bioprinting method, it was possible to prepare an organ-on-a-chip in a simple one-step fabrication process. Furthermore, protein absorption on the printed platform was very low, which will lead to accurate measurement of metabolism and drug sensitivity. Moreover, heterotypic cell types and biomaterials were successfully used and positioned at the desired position for various organ-on-a-chip applications, which will promote full mimicry of the natural conditions of the organs. The liver organ was selected for the evaluation of the developed method, and liver function was shown to be significantly enhanced on the liver-on-a-chip, which was prepared by 3D bioprinting. Consequently, the results demonstrate that the suggested 3D bioprinting method is easier and more versatile for production of organs-on-chips.
Tunneling calculations for GaAs-Al(x)Ga(1-x)As graded band-gap sawtooth superlattices. Thesis
NASA Technical Reports Server (NTRS)
Forrest, Kathrine A.; Meijer, Paul H. E.
1991-01-01
Quantum mechanical tunneling calculations for sawtooth (linearly graded band-gap) and step-barrier AlGaAs superlattices were performed by means of a transfer matrix method, within the effective mass approximation. The transmission coefficient and tunneling current versus applied voltage were computed for several representative structures. Particular consideration was given to effective mass variations. The tunneling properties of step and sawtooth superlattices show some qualitative similarities: both structures exhibit resonant tunneling. However, because they deform differently under applied fields, the J-V curves differ.
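Schematically, a transfer-matrix calculation with position-dependent effective mass relates the plane-wave amplitudes on the two sides of the structure through a product of layer matrices, and weights the transmitted flux by $k/m^{*}$ on each side (a generic sketch of the method, not the thesis's exact expressions):

$$\begin{pmatrix}A_N\\B_N\end{pmatrix}=\Bigl[\prod_{j=1}^{N-1}M_j\Bigr]\begin{pmatrix}A_1\\B_1\end{pmatrix},\qquad T(E)=\frac{k_N/m_N^{*}}{k_1/m_1^{*}}\,\Bigl|\frac{A_N}{A_1}\Bigr|^{2}\quad(B_N=0),$$

with the tunneling current then obtained by integrating $T(E)$ against the occupation difference of the two contacts.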
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edgar, Thomas W.; Hadley, Mark D.; Manz, David O.
This document provides the methods to secure routable control system communication in the electric sector. The approach of this document yields a long-term vision for a future of secure communication, while also providing near-term steps and a roadmap. The requirements for the future secure control system environment were spelled out to provide a final target. Additionally, a survey and evaluation of current protocols was used to determine whether any existing technology could achieve this goal. In the end, a four-step path is described that brings about increasing requirement completion and culminates in the realization of the long-term vision.
Planning energy-efficient bipedal locomotion on patterned terrain
NASA Astrophysics Data System (ADS)
Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad
2016-05-01
Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones, using energy-efficient primitives. A model of the Cornell Ranger (a passive-dynamics-inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT, defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, and ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the locations of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
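A minimal sketch of how such a TCOT table could be combined with a parameterized terrain is a shortest-path search over the stones (all names, the lookback window, and the maximum step length here are hypothetical; the paper's planner may differ):

```python
import numpy as np

def plan_steps(stones, tcot, speeds, weight, max_step=0.8):
    """Dynamic programming over stepping stones.

    stones : sorted 1-D array of stone positions [m] (hypothetical terrain)
    tcot   : function (step_length, step_speed) -> TCOT value, standing in
             for the optimization-derived lookup table
    Energy of one step = TCOT * weight * step length.
    """
    n = len(stones)
    cost = np.full(n, np.inf)           # best energy-to-reach for each stone
    prev = np.full(n, -1, dtype=int)
    cost[0] = 0.0
    for j in range(1, n):
        for i in range(max(0, j - 5), j):    # only consider nearby stones
            d = stones[j] - stones[i]
            if d > max_step or not np.isfinite(cost[i]):
                continue
            e = min(tcot(d, s) for s in speeds) * weight * d
            if cost[i] + e < cost[j]:
                cost[j], prev[j] = cost[i] + e, i
    path, j = [], n - 1                  # walk back to recover the footsteps
    while j >= 0:
        path.append(j)
        j = prev[j]
    return path[::-1], cost[-1]
```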
Compact mass spectrometer for plasma discharge ion analysis
Tuszewski, M.G.
1997-07-22
A mass spectrometer and methods are disclosed for mass spectrometry which are useful in characterizing a plasma. This mass spectrometer for determining type and quantity of ions present in a plasma is simple, compact, and inexpensive. It accomplishes mass analysis in a single step, rather than the usual two-step process comprised of ion extraction followed by mass filtering. Ions are captured by a measuring element placed in a plasma and accelerated by a known applied voltage. Captured ions are bent into near-circular orbits by a magnetic field such that they strike a collector, producing an electric current. Ion orbits vary with applied voltage and proton mass ratio of the ions, so that ion species may be identified. Current flow provides an indication of quantity of ions striking the collector. 7 figs.
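The single-step analysis rests on standard magnetic-sector physics, quoted here as background (the patent's exact geometry may differ): an ion of mass $m$ and charge $q$ accelerated through voltage $V$ satisfies $qV=\tfrac{1}{2}mv^{2}$ and is bent by the field $B$ into a near-circular orbit of radius

$$r=\frac{mv}{qB}=\frac{1}{B}\sqrt{\frac{2mV}{q}},$$

so, at fixed $B$, sweeping the applied voltage selects which mass-to-charge ratio lands on the collector.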
Compact mass spectrometer for plasma discharge ion analysis
Tuszewski, Michel G.
1997-01-01
A mass spectrometer and methods for mass spectrometry which are useful in characterizing a plasma. This mass spectrometer for determining type and quantity of ions present in a plasma is simple, compact, and inexpensive. It accomplishes mass analysis in a single step, rather than the usual two-step process comprised of ion extraction followed by mass filtering. Ions are captured by a measuring element placed in a plasma and accelerated by a known applied voltage. Captured ions are bent into near-circular orbits by a magnetic field such that they strike a collector, producing an electric current. Ion orbits vary with applied voltage and proton mass ratio of the ions, so that ion species may be identified. Current flow provides an indication of quantity of ions striking the collector.
Dong, Wenbo; Wang, Kaiyin; Chen, Yu; Li, Weiping; Ye, Yanchun; Jin, Shaohua
2017-07-28
An electrochemical detection biosensor was prepared with the chitosan-immobilized-enzyme (CTS-CAT) and β-cyclodextrin-included-ferrocene (β-CD-FE) complex for the determination of H₂O₂. Ferrocene (FE) was included in β-cyclodextrin (β-CD) to increase its stability. The structure of the β-CD-FE was characterized. The inclusion amount, inclusion rate, and electrochemical properties of the inclusion complexes were determined to optimize the reaction conditions for the inclusion. CTS-CAT was prepared by a step-by-step immobilization method, which overcame the disadvantages of the conventional preparation methods. The immobilization conditions were optimized to obtain the desired enzyme activity. CTS-CAT/β-CD-FE composite electrodes were prepared by compositing the CTS-CAT with the β-CD-FE complex on a glassy carbon electrode and used for the electrochemical detection of H₂O₂. It was found that the CTS-CAT could produce a strong reduction peak current in response to H₂O₂ and the β-CD-FE could amplify the current signal. The peak current exhibited a linear relationship with the H₂O₂ concentration in the range of 1.0 × 10⁻⁷ to 6.0 × 10⁻³ mol/L. Our work provided a novel method for the construction of electrochemical biosensors with a fast response, good stability, high sensitivity, and a wide linear response range based on the composite of chitosan and cyclodextrin.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
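One plausible reading of the first step, sketched with scikit-learn's FastICA and a simple empirical quantile mapping (the abstract does not give implementation details; the function names and the choice of mapping below are ours):

```python
import numpy as np
from sklearn.decomposition import FastICA

def quantile_map(model, obs):
    """Map a model series onto the empirical distribution of observations."""
    ranks = np.searchsorted(np.sort(model), model, side="right") / len(model)
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

def ica_bias_correct(gcm, obs):
    """Step 1 sketch: gcm and obs are (time, n_gridcells) arrays.

    Transform to statistically independent signals, bias correct each
    signal univariately, then back-transform to restore the observed
    spatial dependence. Step 2 (grid-scale correction) would follow.
    """
    ica = FastICA(n_components=obs.shape[1], random_state=0)
    src_obs = ica.fit_transform(obs)   # independent components of observations
    src_gcm = ica.transform(gcm)       # project model data onto same unmixing
    corrected = np.column_stack(
        [quantile_map(src_gcm[:, i], src_obs[:, i])
         for i in range(src_gcm.shape[1])]
    )
    return ica.inverse_transform(corrected)
```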
NASA Astrophysics Data System (ADS)
Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan
2016-12-01
For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from discontinuous piecewise-linear flow distributions around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has recently been proposed in the design of a fourth-order time-accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function, while a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time-accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations, due to the change of the governing equations from hyperbolic to parabolic type and the initial interface discontinuity; this problem is particularly acute for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. The time-dependent GKS flux function provides a dynamic process of evolution from kinetic-scale particle free transport to hydrodynamic-scale wave propagation, which provides the physics for constructing the non-equilibrium numerical shock structure through to the near-equilibrium NS solution. As a result, with the implementation of a fifth-order WENO initial reconstruction, in the smooth region the current two-stage GKS provides an accuracy of O((Δx)⁵, (Δt)⁴) for the Euler equations, and O((Δx)⁵, τ²Δt) for the NS equations, where τ is the time between particle collisions. Many numerical tests, including difficult ones for Navier-Stokes solvers, have been used to validate the current method. Accurate numerical solutions can be obtained from the high-Reynolds-number boundary layer to hypersonic viscous heat-conducting flow. Following the two-stage time-stepping framework, the third-order GKS flux function can be used as well to construct a fifth-order method with the use of both first-order and second-order time derivatives of the flux function. The use of a time-accurate flux function may have great advantages in the development of higher-order CFD methods.
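For reference, the two-stage fourth-order update for a semi-discrete system $w_t=\mathcal{L}(w)$ has the following generic form in the two-stage Lax-Wendroff literature this paper builds on (reproduced as a sketch):

$$w^{n+1/2}=w^{n}+\frac{\Delta t}{2}\,\mathcal{L}(w^{n})+\frac{\Delta t^{2}}{8}\,\partial_t\mathcal{L}(w^{n}),$$

$$w^{n+1}=w^{n}+\Delta t\,\mathcal{L}(w^{n})+\frac{\Delta t^{2}}{6}\Bigl[\partial_t\mathcal{L}(w^{n})+2\,\partial_t\mathcal{L}(w^{n+1/2})\Bigr],$$

where the flux time-derivatives $\partial_t\mathcal{L}$ are exactly what the GRP solver or the second-order GKS flux function supplies; two reconstructions per step replace the four of a classical four-stage Runge-Kutta scheme.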
Chemical methods for peptide and protein production.
Chandrudu, Saranya; Simerska, Pavla; Toth, Istvan
2013-04-12
Since the invention of solid phase synthetic methods by Merrifield in 1963, the number of research groups focusing on peptide synthesis has grown exponentially. However, the original step-by-step synthesis had limitations: the purity of the final product decreased with the number of coupling steps. After the development of Boc and Fmoc protecting groups, novel amino acid protecting groups and new techniques were introduced to provide high quality and quantity peptide products. Fragment condensation was a popular method for peptide production in the 1980s, but unfortunately the rate of racemization and reaction difficulties proved less than ideal. Kent and co-workers revolutionized peptide coupling by introducing the chemoselective reaction of unprotected peptides, called native chemical ligation. Subsequently, research has focused on the development of novel ligating techniques including the famous click reaction, ligation of peptide hydrazides, and the recently reported α-ketoacid-hydroxylamine ligations with 5-oxaproline. Several companies have been formed all over the world to prepare high quality Good Manufacturing Practice peptide products on a multi-kilogram scale. This review describes the advances in peptide chemistry including the variety of synthetic peptide methods currently available and the broad application of peptides in medicinal chemistry.
NASA Astrophysics Data System (ADS)
Hamilton, Jason S.; Aguilar, Roberto; Petros, Robby A.; Verbeck, Guido F.
2017-05-01
The cellular metabolome is considered to be a representation of cellular phenotype and of the cellular response to internal or external events. Methods to expand coverage of the expansive physicochemical properties that make up the metabolome currently utilize multi-step extractions and chromatographic separations prior to chemical detection, leading to lengthy analysis times. In this study, a single-step procedure for the extraction and separation of a sample, using a micro-capillary as a separatory funnel to achieve analyte partitioning within an organic/aqueous immiscible solvent system, is described. The separated analytes are then spotted for MALDI-MS imaging and distribution ratios are calculated. Initially, the method is applied to standard mixtures for proof of partitioning. The extraction of an individual cell is non-reproducible; therefore, a broad chemical analysis of metabolites is necessary and is illustrated with the one-cell analysis of a single Snu-5 gastric cancer cell taken from a cellular suspension. The method presented here shows a broad partitioning dynamic range as a single-step method for lipid analysis, demonstrating a decrease in the ion suppression often present in MALDI analysis of lipids.
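The distribution ratio computed for each analyte after partitioning is the standard liquid-liquid definition (a textbook definition, not a result specific to this study):

$$D=\frac{[\text{analyte}]_{\text{organic}}}{[\text{analyte}]_{\text{aqueous}}}.$$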
Fuzzy Filtering Method for Color Videos Corrupted by Additive Noise
Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Nino-de-Rivera, Luis
2014-01-01
A novel method for the denoising of color videos corrupted by additive noise is presented in this paper. The proposed technique consists of three principal filtering steps: spatial, spatiotemporal, and spatial postprocessing. In contrast to other state-of-the-art algorithms, during the first spatial step, the eight gradient values in different directions for pixels located in the vicinity of a central pixel, as well as the R, G, and B channel correlation between the analogous pixels in different color bands, are taken into account. These gradient values give information about the level of contamination; the designed fuzzy rules are then used to preserve the image features (textures, edges, sharpness, chromatic properties, etc.). In the second step, two neighboring video frames are processed together. Possible local motions between neighboring frames are estimated using a block matching procedure in eight directions to perform interframe filtering. In the final step, the edges and smoothed regions in a current frame are distinguished for final postprocessing filtering. Numerous simulation results confirm that this novel 3D fuzzy method performs better than other state-of-the-art techniques in terms of objective criteria (PSNR, MAE, NCD, and SSIM) as well as subjective perception via the human vision system on different color videos. PMID:24688428
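The first spatial step can be sketched as follows (an illustration of the eight-direction gradient idea only; the actual fuzzy rules and weights are in the paper):

```python
import numpy as np

# Offsets for the eight neighbors of a central pixel (NW, N, NE, ..., SE).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def directional_gradients(channel: np.ndarray, i: int, j: int) -> np.ndarray:
    """Absolute intensity differences between a central pixel and its
    eight neighbors; large values suggest either noise or an image
    feature, which the fuzzy rules then have to tell apart."""
    c = float(channel[i, j])
    return np.array([abs(float(channel[i + di, j + dj]) - c)
                     for di, dj in OFFSETS])
```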
Sanchez, Jason C; Toal, Sarah J; Wang, Zheng; Dugan, Regina E; Trogler, William C
2007-11-01
Detection of trace quantities of explosive residues plays a key role in military, civilian, and counter-terrorism applications. To advance explosives sensor technology, current methods will need to become cheaper and portable while maintaining sensitivity and selectivity. The detection of common explosives including trinitrotoluene (TNT), cyclotrimethylenetrinitramine, cyclotetramethylene-tetranitramine, pentaerythritol tetranitrate, 2,4,6-trinitrophenyl-N-methylnitramine, and trinitroglycerin may be carried out using a three-step process combining "turn-off" and "turn-on" fluorimetric sensing. This process first detects nitroaromatic explosives by their quenching of the green luminescence of polymetalloles (λem ≈ 400-510 nm). The second step places down a thin film of 2,3-diaminonaphthalene (DAN) while "erasing" the polymetallole luminescence. The final step completes the reaction of the nitramines and/or nitrate esters with DAN, resulting in the formation of a blue luminescent triazole complex (λem = 450 nm) and providing a "turn-on" response for nitramine and nitrate ester-based explosives. Detection limits as low as 2 ng are observed. Solid-state detection of production-line explosives demonstrates the applicability of this method to real-world situations. This method offers a sensitive and selective detection process for a diverse group of the most common high explosives used in military and terrorist applications today.
Using Key Performance Indicators to Drive Strategic Decision Making.
ERIC Educational Resources Information Center
Dolence, Michael G.; Norris, Donald M.
1994-01-01
A nine-step method for defining and pursuing key performance indicators (KPIs), derived from a strategic planning process, is outlined, and its applications at the University of Northern Colorado and Illinois Benedictine College are described and tabulated. A chart summarizes current and projected KPIs for Illinois Benedictine College for each…
Steps toward a Technology for the Diffusion of Innovations.
ERIC Educational Resources Information Center
Stolz, Stephanie B.
Research-based technologies for solving problems currently exist but are not being widely implemented. Although user variables, program effectiveness, and political considerations have been documented as correlates of implementation, general non-implementation of the technology still exists, due to a lack of methods. A technology of dissemination…
Current matrix element in HAL QCD's wavefunction-equivalent potential method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-04-01
We give a formula to calculate a matrix element of a conserved current in the effective quantum mechanics defined by the wavefunction-equivalent potentials proposed by the HAL QCD collaboration. As a first step, a non-relativistic field theory with two-channel coupling is considered as the original theory, for which a wavefunction-equivalent HAL QCD potential is obtained in a closed analytic form. The external field method is used to derive the formula by demanding that the result agree with the original theory. With this formula, the matrix element is obtained by sandwiching the effective current operator between the left and right eigenfunctions of the effective Hamiltonian associated with the HAL QCD potential. In addition to the naive one-body current, the effective current operator contains an additional two-body term emerging from the degrees of freedom which have been integrated out.
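Schematically (with hypothetical notation, since the abstract gives no formulas): if $H_{\text{eff}}$ is the effective Hamiltonian built from the HAL QCD potential, with right and left eigenfunctions $H_{\text{eff}}\psi_R=E\psi_R$ and $\psi_L^{\dagger}H_{\text{eff}}=E\psi_L^{\dagger}$ (which differ because $H_{\text{eff}}$ need not be Hermitian), the matrix element takes the sandwich form

$$\langle J\rangle=\int d^{3}r\,d^{3}r'\;\psi_L^{\dagger}(\mathbf r)\,J_{\text{eff}}(\mathbf r,\mathbf r')\,\psi_R(\mathbf r'),$$

with $J_{\text{eff}}$ comprising the naive one-body current plus the two-body term from the integrated-out degrees of freedom.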
Theory of step on leading edge of negative corona current pulse
NASA Astrophysics Data System (ADS)
Gupta, Deepak K.; Mahajan, Sangeeta; John, P. I.
2000-03-01
Theoretical models taking into account different feedback source terms (e.g., ion-impact electron emission, photo-electron emission, field emission, etc.) have been proposed for the existence and explanation of the shape of the negative corona current pulse, including the step on the leading edge. In the present work, a negative corona current pulse with the step on the leading edge is obtained in the presence of the ion-impact electron emission feedback source only. The step on the leading edge is explained in terms of the plasma formation process and enhancement of the feedback source. Ionization wave-like movement toward the cathode is observed after the step. The conditions for the existence of the current pulse, with and without the step on the leading edge, are also described. A qualitative comparison with earlier theoretical and experimental work is also included.
Caruccio, Nicholas
2011-01-01
DNA library preparation is a common entry point and bottleneck for next-generation sequencing. Current methods generally consist of distinct steps that often involve significant sample loss and hands-on time: DNA fragmentation, end-polishing, and adaptor-ligation. In vitro transposition with Nextera™ Transposomes simultaneously fragments and covalently tags the target DNA, thereby combining these three distinct steps into a single reaction. Platform-specific sequencing adaptors can be added, and the sample can be enriched and bar-coded using limited-cycle PCR to prepare di-tagged DNA fragment libraries. Nextera technology offers a streamlined, efficient, and high-throughput method for generating bar-coded libraries compatible with multiple next-generation sequencing platforms.
Method of manufacturing a niobium-aluminum-germanium superconductive material
Wang, J.L.F.; Pickus, M.R.; Douglas, K.E.
A method for manufacturing flexible Nb₃(Al,Ge) multifilamentary superconductive material in which a sintered porous Nb compact is infiltrated with an Al-Ge alloy. It is deformed and heat treated in a series of steps at successively higher temperatures, preferably below 1000 °C. Cladding material such as copper can be applied to facilitate a deformation step preceding the heat treatment and can remain in place through the heat treatment to serve as a temperature stabilizer for the superconductive material produced. These lower heat-treatment temperatures favor formation of filaments with reduced grain size and with more grain boundaries, which in turn increase the current-carrying capacity of the superconductive material.
Crystallization of Membrane Proteins by Vapor Diffusion
Delmar, Jared A.; Bolla, Jani Reddy; Su, Chih-Chia; Yu, Edward W.
2016-01-01
X-ray crystallography remains the most robust method to determine protein structure at the atomic level. However, the bottlenecks of protein expression and purification often discourage further study. In this chapter, we address the most common problems encountered at these stages. Based on our experience in expressing and purifying antimicrobial efflux proteins, we explain how a pure and homogeneous protein sample can be successfully crystallized by the vapor diffusion method. We present our current protocols and methodologies for this technique. Case studies show step-by-step how we have overcome problems related to expression and diffraction, eventually producing high-quality membrane protein crystals for structural determinations. It is our hope that a rational approach can be brought to the often anecdotal process of membrane protein crystallization. PMID:25950974
Machine Learning: A Crucial Tool for Sensor Design
Zhao, Weixiang; Bhushan, Abhinav; Santamaria, Anthony D.; Simon, Melinda G.; Davis, Cristina E.
2009-01-01
Sensors have been widely used for disease diagnosis, environmental quality monitoring, food quality control, industrial process analysis and control, and other related fields. As a key tool for sensor data analysis, machine learning is becoming a core part of novel sensor design. Dividing a complete machine learning process into three steps (data pre-treatment; feature extraction and dimension reduction; and system modeling), this paper provides a review of the methods that are widely used for each step. For each method, the principles and the key issues that affect modeling results are discussed. After reviewing the potential problems in machine learning processes, this paper gives a summary of current algorithms in this field and provides some feasible directions for future studies. PMID:20191110
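The three steps map naturally onto a pipeline; a minimal sketch in scikit-learn (an illustration of the workflow, not code from the paper):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# The review's three steps expressed as a single estimator:
sensor_model = Pipeline([
    ("pretreat", StandardScaler()),    # data pre-treatment
    ("reduce", PCA(n_components=10)),  # feature extraction / dimension reduction
    ("model", SVC(kernel="rbf")),      # system modeling
])
# Usage: sensor_model.fit(X_train, y_train); sensor_model.predict(X_new)
```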
NASA Technical Reports Server (NTRS)
Veldkamp, Ted; Wada, Yoshihide; Aerts, Jeroen; Ward, Phillip
2016-01-01
Water scarcity (driven by climate change, climate variability, and socioeconomic developments) is recognized as one of the most important global risks, both in terms of likelihood and impact. Whilst a wide range of studies have assessed the role of long-term climate change and socioeconomic trends in global water scarcity, the impact of variability is less well understood. Moreover, the interactions between different forcing mechanisms, and their combined effect on changes in water scarcity conditions, are often neglected. Therefore, we provide a first step towards a framework for global water scarcity risk assessments, applying probabilistic methods to estimate water scarcity risks for different return periods under current and future conditions while using multiple climate and socioeconomic scenarios.
The general alcoholics anonymous tools of recovery: the adoption of 12-step practices and beliefs.
Greenfield, Brenna L; Tonigan, J Scott
2013-09-01
Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step work have received minimal attention and even less is known about how step work predicts later substance use. The current study (1) compared endorsements of step work on a face-valid, direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step work, the General Alcoholics Anonymous Tools of Recovery (GAATOR); (2) evaluated the underlying factor structure of the GAATOR; (3) examined changes in the endorsement of step work over time; and (4) investigated how, if at all, 12-step work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake and at 3, 6, and 9 months. Significantly more participants endorsed step work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR, comprising behavioral step work and spiritual step work. Behavioral step work did not change over time but was predicted by having a sponsor, while spiritual step work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral step work did not prospectively predict substance use. In contrast, spiritual step work predicted percent days abstinent. Behavioral step work and spiritual step work appear to be conceptually distinct components of step work that have distinct predictors and unique impacts on outcomes. PsycINFO Database Record (c) 2013 APA, all rights reserved.
A simplified method to recover urinary vesicles for clinical applications, and sample banking.
Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry
2014-12-23
Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hamper their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g., for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference from soluble proteins and minimal vesicle loss. Large volumes of urine were concentrated to up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. Vesicle fractions were found suitable for any application, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms the conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous steps of ultracentrifugation are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method qualifies as a method for laboratories working with urinary vesicles and for biobanking.
A Simplified Method to Recover Urinary Vesicles for Clinical Applications, and Sample Banking
Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry
2014-01-01
Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hamper their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g., for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference from soluble proteins and minimal vesicle loss. Large volumes of urine were concentrated to up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. Vesicle fractions were found suitable for any application, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms the conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous steps of ultracentrifugation are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method qualifies as a method for laboratories working with urinary vesicles and for biobanking. PMID:25532487
A thioacidolysis method tailored for higher‐throughput quantitative analysis of lignin monomers
Foster, Cliff; Happs, Renee M.; Doeppke, Crissa; Meunier, Kristoffer; Gehan, Jackson; Yue, Fengxia; Lu, Fachuang; Davis, Mark F.
2016-01-01
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. The method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses. PMID:27534715
A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy.
A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.; ...
2016-09-14
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.
A modular computational framework for automated peak extraction from ion mobility spectra
2014-01-01
Background: An ion mobility (IM) spectrometer coupled with a multi-capillary column (MCC) measures volatile organic compounds (VOCs) in the air or in exhaled breath. This technique is utilized in several biotechnological and medical applications. Each peak in an MCC/IM measurement represents a certain compound, which may be known or unknown. For clustering and classification of measurements, the raw data matrix must be reduced to a set of peaks. Each peak is described by its coordinates (retention time in the MCC and reduced inverse ion mobility) and shape (signal intensity, further shape parameters). This fundamental step is referred to as peak extraction. It is the basis for identifying discriminating peaks, and hence putative biomarkers, between two classes of measurements, such as a healthy control group and a group of patients with a confirmed disease. Current state-of-the-art peak extraction methods require human interaction, such as hand-picking approximate peak locations, assisted by a visualization of the data matrix. In a high-throughput context, however, it is preferable to have robust methods for fully automated peak extraction. Results: We introduce PEAX, a modular framework for automated peak extraction. The framework consists of several steps in a pipeline architecture. Each step performs a specific sub-task and can be instantiated by different methods implemented as modules. We provide open-source software for the framework and several modules for each step. Additionally, an interface that allows easy extension by a new module is provided. Combining the modules in all reasonable ways leads to a large number of peak extraction methods. We evaluate all combinations using intrinsic error measures and by comparing the resulting peak sets with an expert-picked one. Conclusions: Our software PEAX is able to automatically extract peaks from MCC/IM measurements within a few seconds. The automatically obtained results keep up with the results provided by current state-of-the-art peak extraction methods. This opens a high-throughput context for the MCC/IM application field. Our software is available at http://www.rahmannlab.de/research/ims. PMID:24450533
A modular computational framework for automated peak extraction from ion mobility spectra.
D'Addario, Marianna; Kopczynski, Dominik; Baumbach, Jörg Ingo; Rahmann, Sven
2014-01-22
An ion mobility (IM) spectrometer coupled with a multi-capillary column (MCC) measures volatile organic compounds (VOCs) in the air or in exhaled breath. This technique is utilized in several biotechnological and medical applications. Each peak in an MCC/IM measurement represents a certain compound, which may be known or unknown. For clustering and classification of measurements, the raw data matrix must be reduced to a set of peaks. Each peak is described by its coordinates (retention time in the MCC and reduced inverse ion mobility) and shape (signal intensity, further shape parameters). This fundamental step is referred to as peak extraction. It is the basis for identifying discriminating peaks, and hence putative biomarkers, between two classes of measurements, such as a healthy control group and a group of patients with a confirmed disease. Current state-of-the-art peak extraction methods require human interaction, such as hand-picking approximate peak locations, assisted by a visualization of the data matrix. In a high-throughput context, however, it is preferable to have robust methods for fully automated peak extraction. We introduce PEAX, a modular framework for automated peak extraction. The framework consists of several steps in a pipeline architecture. Each step performs a specific sub-task and can be instantiated by different methods implemented as modules. We provide open-source software for the framework and several modules for each step. Additionally, an interface that allows easy extension by a new module is provided. Combining the modules in all reasonable ways leads to a large number of peak extraction methods. We evaluate all combinations using intrinsic error measures and by comparing the resulting peak sets with an expert-picked one. Our software PEAX is able to automatically extract peaks from MCC/IM measurements within a few seconds. The automatically obtained results keep up with the results provided by current state-of-the-art peak extraction methods. This opens a high-throughput context for the MCC/IM application field. Our software is available at http://www.rahmannlab.de/research/ims.
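The combinatorial evaluation described above can be sketched as an enumeration over module choices (the step and module names below are hypothetical, not PEAX's actual API):

```python
import itertools

# One candidate module set per pipeline step; every combination is one
# complete peak extraction method to be scored against the expert picks.
steps = {
    "preprocessing": ["none", "smoothing"],
    "candidate_detection": ["local_maxima", "cross_finding"],
    "peak_picking": ["clustering", "merging"],
    "peak_modeling": ["none", "shape_fit"],
}

for combo in itertools.product(*steps.values()):
    pipeline = dict(zip(steps.keys(), combo))
    # instantiate the chosen modules, run on the MCC/IM matrix,
    # then compare the resulting peak set with the expert-picked one
    print(pipeline)
```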
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-01-01
Objective: To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. Methods: For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. Results: The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert’s evaluation. Conclusion: The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems. PMID:28507994
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk) log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
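The core trick can be sketched numerically: approximate the maximum over elementwise products by a large-p p-norm, which turns max-convolution into an ordinary convolution computable by FFT in O(k log(k)) (a sketch of the idea; the published method adds normalization and accuracy refinements):

```python
import numpy as np
from scipy.signal import fftconvolve

def approx_max_convolve(u, v, p=32.0):
    """Estimate (u (*) v)[m] = max_{i+j=m} u[i]*v[j] for nonnegative u, v.

    sum_i (u[i]*v[m-i])**p is an ordinary convolution of u**p with v**p,
    and its p-th root approaches the true maximum as p grows; vectors are
    rescaled first so the powers stay representable in floating point.
    """
    su, sv = u.max(), v.max()
    conv_p = fftconvolve((u / su) ** p, (v / sv) ** p)   # O(k log k)
    conv_p = np.clip(conv_p, 0.0, None)   # FFT round-off can dip below zero
    return su * sv * conv_p ** (1.0 / p)

def exact_max_convolve(u, v):
    """Quadratic-time reference implementation for comparison."""
    out = np.zeros(len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        out[i:i + len(v)] = np.maximum(out[i:i + len(v)], ui * v)
    return out
```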
Detection of Only Viable Bacterial Spores Using a Live/Dead Indicator in Mixed Populations
NASA Technical Reports Server (NTRS)
Behar, Alberto E.; Stam, Christina N.; Smiley, Ronald
2013-01-01
This method uses a photoaffinity label that recognizes DNA and can be used to distinguish populations of bacterial cells from bacterial spores without the use of heat shocking during conventional culture, and live from dead bacterial spores using molecular-based methods. Biological validation of commercial sterility using traditional and alternative technologies remains challenging. Recovery of viable spores is cumbersome, as the process requires substantial incubation time, and the extended time to results limits the ability to quickly evaluate the efficacy of existing technologies. Nucleic acid amplification approaches such as PCR (polymerase chain reaction) have shown promise for improving time to detection for a wide range of applications. Recent real-time PCR methods are particularly promising, as these methods can be made at least semi-quantitative by correspondence to a standard curve. Nonetheless, PCR-based methods are rarely used for process validation, largely because the DNA from dead bacterial cells is highly stable and hence DNA-based amplification methods fail to discriminate between live and inactivated microorganisms. Currently, no published method has been shown to effectively distinguish between live and dead bacterial spores. This technology uses a DNA-binding photoaffinity label that can be used to distinguish between live and dead bacterial spores with detection limits ranging from 10⁹ to 10² spores/mL. An environmental sample suspected of containing a mixture of live and dead vegetative cells and bacterial endospores is treated with a photoaffinity label. This step will eliminate any vegetative cells (live or dead) and dead endospores present in the sample. To further determine bacterial spore viability, DNA is extracted from the spores and the total population is quantified by real-time PCR. The current NASA standard assay takes 72 hours for results. Part of this procedure requires a heat shock step at 80 °C for 15 minutes before the sample can be plated. Using a photoaffinity label would remove this step from the current assay, as the label readily penetrates both live and dead bacterial cells. Secondly, the photoaffinity label can only penetrate dead bacterial spores, leaving behind the viable spore population. This would allow for rapid bacterial spore detection in a matter of hours, compared to the several days required by the NASA standard assay.
Performance of the AOAC use-dilution method with targeted modifications: collaborative study.
Tomasino, Stephen F; Parker, Albert E; Hamilton, Martin A; Hamilton, Gordon C
2012-01-01
The U.S. Environmental Protection Agency (EPA), in collaboration with an industry work group, spearheaded a collaborative study designed to further enhance the AOAC use-dilution method (UDM). Based on feedback from laboratories that routinely conduct the UDM, improvements to the test culture preparation steps were prioritized. A set of modifications, largely based on culturing the test microbes on agar as specified in the AOAC hard surface carrier test method, was evaluated in a five-laboratory trial. The modifications targeted the preparation of the Pseudomonas aeruginosa test culture due to the difficulty in separating the pellicle from the broth in the current UDM. The proposed modifications (i.e., the modified UDM) were compared to the current UDM methodology for P. aeruginosa and Staphylococcus aureus. Salmonella choleraesuis was not included in the study. The goal was to determine if the modifications reduced method variability. Three efficacy response variables were statistically analyzed: the number of positive carriers, the log reduction, and the pass/fail outcome. The scope of the collaborative study was limited to testing one liquid disinfectant (an EPA-registered quaternary ammonium product) at two levels of presumed product efficacy, high and low. Test conditions included use of 400 ppm hard water as the product diluent and a 5% organic soil load (horse serum) added to the inoculum. Unfortunately, the study failed to support the adoption of the major modification (use of an agar-based approach to grow the test cultures) based on an analysis of the method's variability. The repeatability and reproducibility standard deviations for the modified method were equal to or greater than those for the current method across the various test variables. However, the authors propose retaining the frozen stock preparation step of the modified method and, based on the statistical equivalency of the control log densities, support its adoption as a procedural change to the current UDM. The current UDM displayed acceptable responsiveness to changes in product efficacy; acceptable repeatability across multiple tests in each laboratory for the control counts and log reductions; and acceptable reproducibility across multiple laboratories for the control log density values and log reductions. Although the data do not support the adoption of all modifications, the UDM collaborative study data are valuable for assessing sources of method variability and for reassessing the performance standard for the UDM.
Hughes Clarke, John E.
2016-01-01
Field observations of turbidity currents remain scarce, and thus there is continued debate about their internal structure and how they modify underlying bedforms. Here, I present the results of a new imaging method that examines multiple surge-like turbidity currents within a delta front channel as they pass over crescent-shaped bedforms. Seven discrete flows over a 2-h period vary in speed from 0.5 to 3.0 m s⁻¹. Only flows that exhibit a distinct acoustically attenuating layer at the base appear to cause bedform migration. That layer thickens abruptly downstream of the bottom of the lee slope of the bedform, and the upper surface of the layer fluctuates rapidly at that point. The basal layer is inferred to reflect a strong near-bed gradient in density, and the thickening is interpreted as a hydraulic jump. These results represent field-scale flow observations in support of a cyclic step origin of crescent-shaped bedforms. PMID:27283503
Kuniya, Toshikazu; Sano, Hideki
2016-05-10
In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of a threshold criterion for predicting increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015, and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than the accuracy of ARIMA models with different orders of the autoregressive, differencing, and moving-average components. In addition, a proposed method for estimating the number of reported cases, which is consistent with our prediction method, outperformed the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. In spite of its simplicity, the method provides a good prediction of the spread of influenza in Japan.
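A minimal sketch of the simplified threshold rule described above (predict a rise when the current report exceeds the previous one); the weekly counts are hypothetical:

```python
def predict_trend(cases):
    # Compare each report with the previous one, as in the simplified
    # criterion: more cases now than at the last step -> predict an increase.
    return ["increase" if cases[t] > cases[t - 1] else "decrease"
            for t in range(1, len(cases))]

weekly_reports = [12, 15, 14, 20, 25, 22]   # hypothetical sentinel counts
print(predict_trend(weekly_reports))
# Accuracy would be scored against the change actually observed next week.
```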
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
A high-throughput semi-automated preparation for filtered synaptoneurosomes.
Murphy, Kathryn M; Balsor, Justin; Beshara, Simon; Siu, Caitlin; Pinto, Joshua G A
2014-09-30
Synaptoneurosomes have become an important tool for studying synaptic proteins. The filtered synaptoneurosomes preparation originally developed by Hollingsworth et al. (1985) is widely used and is an easy method to prepare synaptoneurosomes. The hand processing steps in that preparation, however, are labor intensive and have become a bottleneck for current proteomic studies using synaptoneurosomes. For this reason, we developed new steps for tissue homogenization and filtration that transform the preparation of synaptoneurosomes into a high-throughput, semi-automated process. We implemented a standardized protocol with easy-to-follow steps for homogenizing multiple samples simultaneously using a FastPrep tissue homogenizer (MP Biomedicals, LLC) and then filtering all of the samples in centrifugal filter units (EMD Millipore, Corp). The new steps dramatically reduce the time to prepare synaptoneurosomes from hours to minutes, increase sample recovery, and nearly double enrichment for synaptic proteins. These steps are also compatible with biosafety requirements for working with pathogen-infected brain tissue. The new high-throughput semi-automated steps to prepare synaptoneurosomes are timely technical advances for studies of low-abundance synaptic proteins in valuable tissue samples.
Quasi-multi-pulse voltage source converter design with two control degrees of freedom
NASA Astrophysics Data System (ADS)
Vural, A. M.; Bayindir, K. C.
2015-05-01
In this article, the design details of a quasi-multi-pulse voltage source converter (VSC) switched at the line frequency of 50 Hz are given in a step-by-step process. The proposed converter comprises four 12-pulse converter units and is suitable for the simulation of single-/multi-converter flexible alternating current transmission system devices as well as high voltage direct current systems operating at the transmission level. The magnetic interface of the converter is originally designed, with all parameters given, for 100 MVA operation. The so-called two-angle control method is adopted to control the voltage magnitude and the phase angle of the converter independently. PSCAD simulation results verify both four-quadrant converter operation and closed-loop control of the converter operated as a static synchronous compensator (STATCOM).
Solid State Gas Sensor Research in Germany – a Status Report
Moos, Ralf; Sahner, Kathy; Fleischer, Maximilian; Guth, Ulrich; Barsan, Nicolae; Weimar, Udo
2009-01-01
This status report overviews activities of the German gas sensor research community. It highlights recent progress in the field of potentiometric, amperometric, conductometric, impedimetric, and field effect-based gas sensors. It is shown that, besides step-by-step improvements of conventional principles (e.g., by the application of novel materials), novel principles have turned out to enable new markets. In the field of mixed potential gas sensors, novel materials allow for selective detection of combustion exhaust components. The same goal can be reached by using zeolites for impedimetric gas sensors. Operando spectroscopy is a powerful tool to learn about the mechanisms in n-type and in p-type conductometric sensors and to design knowledge-based improved sensor devices. Novel deposition methods are applied to gain direct access to the material morphology as well as to obtain dense thick metal oxide films without high temperature steps. Since conductometric and impedimetric sensors have the disadvantage that a current has to pass through the gas-sensitive film, film morphology, electrode materials, and geometrical issues affect the sensor signal. Therefore, one tries to measure the Fermi level position directly, either by measuring the gas-dependent Seebeck coefficient at high temperatures or, at room temperature, by applying a modified miniaturized Kelvin probe method, where surface adsorption-based work function changes drive the drain-source current of a field effect transistor. PMID:22408529
Melendez, Johan H; Santaus, Tonya M; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A; Geddes, Chris D
2016-10-01
Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often involving polymerase chain reaction (PCR)-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (gonorrhea, GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the current study, we show that highly focused microwaves at 2.45 GHz, using 12.3-mm gold film equilateral triangles, are able to rapidly lyse both bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 min total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step toward the development of a point-of-care (POC) platform for detection of gonorrhea infections.
Prioritizing Scientific Data for Transmission
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Anderson, Robert; Estlin, Tara; DeCoste, Dennis; Gaines, Daniel; Mazzoni, Dominic; Fisher, Forest; Judd, Michele
2004-01-01
A software system has been developed for prioritizing newly acquired geological data onboard a planetary rover. The system has been designed to enable efficient use of limited communication resources by transmitting the data likely to have the most scientific value. This software operates onboard a rover by analyzing collected data, identifying potential scientific targets, and then using that information to prioritize data for transmission to Earth. Currently, the system is focused on the analysis of acquired images, although the general techniques are applicable to a wide range of data modalities. Image prioritization is performed using two main steps. In the first step, the software detects features of interest from each image. In its current application, the system is focused on visual properties of rocks. Thus, rocks are located in each image and rock properties, such as shape, texture, and albedo, are extracted from the identified rocks. In the second step, the features extracted from a group of images are used to prioritize the images using three different methods: (1) identification of key target signature (finding specific rock features the scientist has identified as important), (2) novelty detection (finding rocks we haven't seen before), and (3) representative rock sampling (finding the most average sample of each rock type). These methods use techniques such as K-means unsupervised clustering and a discrimination-based kernel classifier to rank images based on their interest level.
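A toy sketch of the novelty-detection flavor described above, using distances to K-means cluster centers as a novelty score (an illustration only; the feature vectors and cluster count are invented, not the flight system's):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-image rock feature vectors (e.g., shape, texture, albedo).
rng = np.random.default_rng(0)
features = rng.random((50, 3))

# Cluster previously seen rock types; distance to the nearest centroid
# serves as a novelty score for each image.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
dists = np.linalg.norm(features[:, None, :] - km.cluster_centers_[None, :, :],
                       axis=2)
novelty = dists.min(axis=1)

transmit_order = np.argsort(-novelty)   # most novel images first
print(transmit_order[:5])
```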
Ceramic Stereolithography: Additive Manufacturing for Ceramics by Photopolymerization
NASA Astrophysics Data System (ADS)
Halloran, John W.
2016-07-01
Ceramic stereolithography and related additive manufacturing methods involving photopolymerization of ceramic powder suspensions are reviewed in terms of the capabilities of current devices. The practical fundamentals of the cure depth, cure width, and cure profile are related to the optical properties of the monomer, ceramic, and photo-active components. Postpolymerization steps, including harvesting and cleaning the objects, binder burnout, and sintering, are discussed and compared with conventional methods. The prospects for practical manufacturing are discussed.
EPA is currently considering a quantitative polymerase chain reaction (qPCR) method, targeting Enterococcus spp., for beach monitoring. Improvements in the method’s cost-effectiveness may be realized by the use of newer instrumentation such as the Applied Biosystems StepOne™ a...
Teaching Poetry in Elementary Grades: A Review of Related Literature.
ERIC Educational Resources Information Center
Amann, Theresa N.
In order to assess current ideas, reveal their shortcomings, and suggest steps for future investigation, this review of the literature on teaching poetry discusses definitions of poetry, references on teaching poetry, teaching methods, poetic forms, experimental research, and the benefits of poetry. The paper concludes that the lack of empirical…
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
NASA Astrophysics Data System (ADS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-05-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
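A toy sketch of stochastic resonance in a bistable resonator (a simple Euler-discretized double-well system; all parameters are assumptions for illustration, whereas the paper designs its discrete resonator by principled optimisation):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, a, b = 0.05, 1.0, 1.0                          # assumed resonator parameters
n = 4000
t = np.arange(n) * dt
weak_signal = 0.25 * np.sin(2 * np.pi * 0.1 * t)   # sub-threshold component
noise = 0.8 * rng.standard_normal(n)               # tunable noise level

x = np.zeros(n)
for k in range(n - 1):
    # Euler step of the overdamped double-well dynamics: the added noise
    # helps the weak periodic signal drive hops between the two wells.
    x[k + 1] = x[k] + dt * (a * x[k] - b * x[k] ** 3
                            + weak_signal[k] + noise[k])

# Inter-well hopping now carries the signal frequency; in a monitoring
# pipeline one would tune (a, b, noise) to maximize the output SNR.
print("fraction of time in the right-hand well:", np.mean(x > 0))
```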
Low cost hydrogen/novel membrane technology for hydrogen separation from synthesis gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-02-01
To make the coal-to-hydrogen route economically attractive, improvements are being sought in each step of the process: coal gasification, the water-carbon monoxide shift reaction, and hydrogen separation. This report addresses the use of membranes in the hydrogen separation step. The separation of hydrogen from synthesis gas is a major cost element in the manufacture of hydrogen from coal. Separation by membranes is an attractive, new, and still largely unexplored approach to the problem. Membrane processes are inherently simple and efficient and often have lower capital and operating costs than conventional processes. In this report, current and future trends in hydrogen production and use are first summarized. Methods of producing hydrogen from coal are then discussed, with particular emphasis on the Texaco entrained flow gasifier and on current methods of separating hydrogen from this gas stream. The potential for membrane separations in the process is then examined. In particular, the use of membranes for H2/CO2, H2/CO, and H2/N2 separations is discussed. 43 refs., 14 figs., 6 tabs.
A high performance pMOSFET with two-step recessed SiGe-S/D structure for 32 nm node and beyond
NASA Astrophysics Data System (ADS)
Yasutake, Nobuaki; Azuma, Atsushi; Ishida, Tatsuya; Ohuchi, Kazuya; Aoki, Nobutoshi; Kusunoki, Naoki; Mori, Shinji; Mizushima, Ichiro; Morooka, Tetsu; Kawanaka, Shigeru; Toyoshima, Yoshiaki
2007-11-01
A novel SiGe-S/D structure for high performance pMOSFETs, called two-step recessed SiGe-source/drain (S/D), is developed with careful optimization of the recessed SiGe-S/D structure. With this method, hole mobility, short channel effect, and S/D resistance in the pMOSFET are improved compared with the conventional recessed SiGe-S/D structure. To enhance device performance, such as drain current drivability, the SiGe region has to be closer to the channel region. A conventional deep SiGe-S/D region, combined with a carefully optimized shallow SiGe SDE region, then showed additional device performance improvement without SCE degradation. As a result, a high performance 24 nm gate length pMOSFET was demonstrated with a drive current of 451 μA/μm at |Vdd| of 0.9 V and Ioff of 100 nA/μm (552 μA/μm at |Vdd| of 1.0 V). Furthermore, by combining with Vdd scaling, we indicate the extendability of the two-step recessed SiGe-S/D structure down to the 15 nm node generation.
Optimal design of neural stimulation current waveforms.
Halpern, Mark
2009-01-01
This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
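To make the formulation concrete, here is a hedged sketch of a minimax linear program for a stepped current waveform: minimize the peak electrode voltage while delivering a fixed charge in N steps. The series-RC electrode model and every parameter value below are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed series-RC electrode model and design targets (all hypothetical).
Rs, C = 1e3, 1e-7        # access resistance (ohm), double-layer capacitance (F)
N, dt = 8, 25e-6         # number of current steps, step duration (s)
Q = 20e-9                # total charge to deliver (C)

# Decision variables: [i_1 .. i_N, t]; minimize t = peak electrode voltage.
c = np.zeros(N + 1)
c[-1] = 1.0

# Voltage at the end of step m: Rs*i_m + (dt/C)*sum_{j<=m} i_j <= t
A_ub = np.zeros((N, N + 1))
for m in range(N):
    A_ub[m, :m + 1] = dt / C
    A_ub[m, m] += Rs
    A_ub[m, -1] = -1.0
b_ub = np.zeros(N)

A_eq = np.zeros((1, N + 1))
A_eq[0, :N] = dt                      # total delivered charge constraint
b_eq = [Q]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * N + [(None, None)])
print("current steps (A):", res.x[:N])
print("peak voltage (V):", res.x[-1])
```

Under this toy model the optimal steps shrink over time: as charge accumulates on the capacitance, later current steps must decrease to hold the peak voltage down, which mirrors the motivation for generalizing beyond a constant-current phase.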
How [NOT] to Measure a Solar Cell to Get the Highest Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, Keith
The multibillion-dollar photovoltaic (PV) industry sells products by the watt; the calibration labs measure this parameter at the cell and module level with the lowest possible uncertainty of 1-2 percent. The methods and procedures to achieve a measured 50 percent efficiency on a thin-film solar cell are discussed. This talk will describe methods that ignore procedures that increase the uncertainty. Your questions will be answered concerning 'Everything You Always Wanted to Know about Efficiency Enhancements But Were Afraid to Ask.' The talk will cover a step-by-step procedure, using examples found in the literature or encountered in customer samples by the National Renewable Energy Laboratory's (NREL's) PV Performance Characterization Group, on how to artificially enhance the efficiency. The procedures will describe methods that have been used to enhance the current, voltage, and fill factor.
NASA Astrophysics Data System (ADS)
Du, Xiaofeng; Song, William; Munro, Malcolm
Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, a lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.
Study on the Application of TOPSIS Method to the Introduction of Foreign Players in CBA Games
NASA Astrophysics Data System (ADS)
Zhongyou, Xing
The TOPSIS method is a multiple attribute decision-making method. This paper introduces the current situation of the introduction of foreign players in CBA games, presents the principles and calculation steps of the TOPSIS method in detail, and applies it to the quantitative evaluation of comprehensive competitive ability during the introduction of foreign players. Through analysis of practical application, we found that the TOPSIS method has relatively high rationality and applicability when used to evaluate comprehensive competitive ability during the introduction of foreign players.
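A compact sketch of the standard TOPSIS calculation steps (the decision matrix, criteria, and weights below are hypothetical, e.g. per-game statistics of candidate players):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; weights sum to 1;
    # benefit[j] is True when larger values of criterion j are better.
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # closeness: higher = better

# Hypothetical candidates scored on points, rebounds, assists, turnovers.
scores = np.array([[22.0, 9.5, 3.1, 2.8],
                   [18.4, 11.2, 4.6, 3.5],
                   [25.1, 7.8, 2.2, 2.1]])
closeness = topsis(scores,
                   weights=np.array([0.4, 0.25, 0.2, 0.15]),
                   benefit=np.array([True, True, True, False]))
print("ranking, best first:", np.argsort(-closeness))
```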
NASA Astrophysics Data System (ADS)
Wang, Zhan-zhi; Xiong, Ying
2013-04-01
A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise, and low hull vibration. Compared with a single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Compared with the experimental data, it shows that RANS with the sliding mesh method and the SST k-ω turbulence model achieves good precision in the open water performance prediction of contra-rotating propellers, and a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
Aqueous processing of composite lithium ion electrode material
Li, Jianlin; Armstrong, Beth L.; Daniel, Claus; Wood, III, David L.
2017-06-20
A method of making a battery electrode includes the steps of dispersing an active electrode material and a conductive additive in water with at least one dispersant to create a mixed dispersion; treating a surface of a current collector to raise the surface energy of the surface to at least the surface tension of the mixed dispersion; depositing the dispersed active electrode material and conductive additive on a current collector; and heating the coated surface to remove water from the coating.
One-Step Hydrothermal-Electrochemical Route to Carbon-Stabilized Anatase Powders
NASA Astrophysics Data System (ADS)
Tao, Ying; Yi, Danqing; Zhu, Baojun
2013-04-01
Black carbon-stabilized anatase particles were prepared by a simple one-step hydrothermal-electrochemical method using glucose and titanium citrate as the carbon and titanium source, respectively. Morphological, chemical, structural, and electrochemical characterizations of these powders were carried out by Raman spectroscopy, Fourier-transform infrared spectroscopy, x-ray diffraction, scanning electron microscopy, and cyclic voltammetry. It was revealed that the 200-nm carbon/anatase TiO2 particles were homogeneously dispersed, and the powders exhibited excellent cyclic performance at a high scan rate of 0.05 V/s. The powders are interesting potential materials that could be used as anodes for lithium-ion batteries.
An intraorganizational model for developing and spreading quality improvement innovations
Kellogg, Katherine C.; Gainer, Lindsay A.; Allen, Adrienne S.; O'Sullivan, Tatum; Singer, Sara J.
2017-01-01
Background: Recent policy reforms encourage quality improvement (QI) innovations in primary care, but practitioners lack clear guidance regarding spread inside organizations. Purpose: We designed this study to identify how large organizations can facilitate intraorganizational spread of QI innovations. Methodology/Approach: We conducted ethnographic observation and interviews in a large, multispecialty, community-based medical group that implemented three QI innovations across 10 primary care sites using a new method for intraorganizational process development and spread. We compared quantitative outcomes achieved through the group’s traditional versus new method, created a process model describing the steps in the new method, and identified barriers and facilitators at each step. Findings: The medical group achieved substantial improvement using its new method of intraorganizational process development and spread of QI innovations: standard work for rooming and depression screening, vaccine error rates and order compliance, and Pap smear error rates. Our model details nine critical steps for successful intraorganizational process development (set priorities, assess the current state, develop the new process, and measure and refine) and spread (develop support, disseminate information, facilitate peer-to-peer training, reinforce, and learn and adapt). Our results highlight the importance of utilizing preexisting organizational structures such as established communication channels, standardized roles, common workflows, formal authority, and performance measurement and feedback systems when developing and spreading QI processes inside an organization. In particular, we detail how formal process advocate positions in each site for each role can facilitate the spread of new processes. Practice Implications: Successful intraorganizational spread is possible and sustainable. Developing and spreading new QI processes across sites inside an organization requires creating a shared understanding of the necessary process steps, considering the barriers that may arise at each step, and leveraging preexisting organizational structures to facilitate intraorganizational process development and spread. PMID:27428788
NASA Astrophysics Data System (ADS)
Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio
2017-11-01
Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. This requires prevention of combustion instabilities, which, in turn, requires reliable and fast predictive tools. This work presents a three-step method to find stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment are performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The proposed method permits not only the unsteady frequencies of acoustic oscillations to be computed, but the amplitudes of such oscillations as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.
Kimoto, Minoru; Okada, Kyoji; Sakamoto, Hitoshi; Kondou, Takanori
2017-05-01
[Purpose] Improving walking efficiency could be useful for reducing fatigue and extending the possible period of walking in children with cerebral palsy (CP). For this purpose, the current study compared conventional parameters of gross motor performance, step length, and cadence in the evaluation of walking efficiency in children with CP. [Subjects and Methods] Thirty-one children with CP (21 boys, 10 girls; mean age, 12.3 ± 2.7 years) participated. Parameters of gross motor performance, including the maximum step length (MSL), maximum side step length, step number, lateral step up number, and single leg standing time, were measured on both the dominant and non-dominant sides. Spatio-temporal parameters of walking, including speed, step length, and cadence, were calculated. The total heart beat index (THBI), a parameter of walking efficiency, was also calculated from heartbeats and walking distance during 10 minutes of walking. To analyze the relationships between these parameters and the THBI, the coefficients of determination were calculated using stepwise analysis. [Results] The MSL of the dominant side best accounted for the THBI (R² = 0.759). [Conclusion] The MSL of the dominant side was the best explanatory parameter for walking efficiency in children with CP.
Development of a fast and efficient method for hepatitis A virus concentration from green onion.
Zheng, Yan; Hu, Yuan
2017-11-01
Hepatitis A virus (HAV) can cause serious liver disease and even death. HAV outbreaks are associated with the consumption of raw or minimally processed produce, making the virus a major public health concern. Infections have occurred despite the fact that an effective HAV vaccine has been available. Development of a rapid and sensitive HAV detection method is necessary for the investigation of an HAV outbreak. Detection of HAV is complicated by the lack of a reliable culture method. In addition, due to the low infectious dose of HAV, these methods must be very sensitive. Current methods rely on efficient sample preparation and concentration steps followed by sensitive molecular detection techniques. Using green onions, which have been involved in several recent HAV outbreaks, as a representative produce item, a method of capturing virus particles was developed in this study using carboxyl-derivatized magnetic beads. Carboxyl beads, like antibody-coated beads or cationic beads, detect HAV at a level as low as 100 pfu/25 g of green onions. RNA from virus concentrated in this manner can be released by heat shock (98 °C, 5 min) for molecular detection without sacrificing sensitivity. Bypassing the RNA extraction procedure saves time and removes multiple manipulation steps, which makes large-scale HAV screening possible. In addition, the inclusion of beef extract and pectinase rather than NP40 in the elution buffer improved HAV liberation from the food matrix over current methods by nearly 10-fold. The method proposed in this study provides a promising tool to improve food risk assessment and protect public health.
The General Alcoholics Anonymous Tools of Recovery: The Adoption of 12-Step Practices and Beliefs
Greenfield, Brenna L.; Tonigan, J. Scott
2013-01-01
Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step-work have received minimal attention and even less is known about how step-work predicts later substance use. The current study (1) compared endorsements of step-work on a face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step-work, the General Alcoholics Anonymous Tools of Recovery (GAATOR), (2) evaluated the underlying factor structure of the GAATOR, (3) examined changes in the endorsement of step-work over time, and (4) investigated how, if at all, 12-step-work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step-work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising Behavioral Step-Work and Spiritual Step-Work. Behavioral Step-Work did not change over time, but was predicted by having a sponsor, while Spiritual Step-Work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral Step-Work did not prospectively predict substance use. In contrast, Spiritual Step-Work predicted percent days abstinent, an effect that is consistent with recent work on the mediating effects of spiritual growth, AA, and increased abstinence. Behavioral and Spiritual Step-Work appear to be conceptually distinct components of step-work that have distinct predictors and unique impacts on outcomes. PMID:22867293
NASA Technical Reports Server (NTRS)
Chen, D. Y.; Owen, H. A., Jr.; Wilson, T. G.
1980-01-01
This paper presents an algorithm and equations for designing the energy-storage reactor for dc-to-dc converters which are constrained to operate in the discontinuous-reactor-current mode. This design procedure applies to the three widely used single-winding configurations: the voltage step-up, the current step-up, and the voltage-or-current step-up converters. A numerical design example is given to illustrate the use of the design algorithm and design equations.
Evaluation of standardized sample collection, packaging, and ...
Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures.
Dong, Wenbo; Wang, Kaiyin; Chen, Yu; Li, Weiping; Ye, Yanchun; Jin, Shaohua
2017-01-01
An electrochemical detection biosensor was prepared with the chitosan-immobilized-enzyme (CTS-CAT) and β-cyclodextrin-included-ferrocene (β-CD-FE) complex for the determination of H2O2. Ferrocene (FE) was included in β-cyclodextrin (β-CD) to increase its stability. The structure of the β-CD-FE was characterized. The inclusion amount, inclusion rate, and electrochemical properties of inclusion complexes were determined to optimize the reaction conditions for the inclusion. CTS-CAT was prepared by a step-by-step immobilization method, which overcame the disadvantages of the conventional preparation methods. The immobilization conditions were optimized to obtain the desired enzyme activity. CTS-CAT/β-CD-FE composite electrodes were prepared by compositing the CTS-CAT with the β-CD-FE complex on a glassy carbon electrode and used for the electrochemical detection of H2O2. It was found that the CTS-CAT could produce a strong reduction peak current in response to H2O2 and the β-CD-FE could amplify the current signal. The peak current exhibited a linear relationship with the H2O2 concentration in the range of 1.0 × 10⁻⁷ to 6.0 × 10⁻³ mol/L. Our work provided a novel method for the construction of electrochemical biosensors with a fast response, good stability, high sensitivity, and a wide linear response range based on the composite of chitosan and cyclodextrin. PMID:28773229
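A small sketch of how such a linear calibration is used in practice: fit peak current against concentration over the reported linear range, then invert the fit for an unknown sample. The calibration numbers below are hypothetical.

```python
import numpy as np

# Hypothetical calibration points inside the reported linear range
# (1.0e-7 to 6.0e-3 mol/L): peak reduction current vs H2O2 concentration.
conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 6e-3])            # mol/L
peak_current = np.array([0.02, 0.20, 2.1, 20.5, 204.0, 1218.0])  # microamps

slope, intercept = np.polyfit(conc, peak_current, 1)

def h2o2_concentration(i_peak):
    # Invert the linear calibration: i_peak = slope*conc + intercept.
    return (i_peak - intercept) / slope

print(f"estimated concentration: {h2o2_concentration(50.0):.3e} mol/L")
```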
Six-sigma application in tire-manufacturing company: a case study
NASA Astrophysics Data System (ADS)
Gupta, Vikash; Jain, Rahul; Meena, M. L.; Dangayach, G. S.
2017-09-01
Globalization, advancing technology, and increasing customer demands have changed the way companies do business. To meet these challenges, the six-sigma define-measure-analyze-improve-control (DMAIC) method is popular and useful. This method helps to trim down waste and generate potential improvements in process as well as service industries. In the current research, the DMAIC method was used to decrease the process variation of bead splicing, which caused wastage of material. This six-sigma DMAIC research was initiated by problem identification through the voice of the customer in the define step. The subsequent step consisted of gathering the specification data of the existing tire bead. This was followed by the analysis and improvement steps, where six-sigma quality tools such as the cause-effect diagram, statistical process control, and substantial analysis of the existing system were implemented for root cause identification and reduction in process variation. Process control charts were used for systematic observation and control of the process. Using the DMAIC methodology, the standard deviation was decreased from 2.17 to 1.69. The process capability index (Cp) value was enhanced from 1.65 to 2.95 and the process performance capability index (Cpk) value was enhanced from 0.94 to 2.66. A DMAIC methodology was established that can play a key role in reducing defects in the tire-manufacturing process in India.
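For reference, the capability indices quoted above follow the standard definitions Cp = (USL − LSL)/(6σ) and Cpk = min(USL − μ, μ − LSL)/(3σ). A small sketch with synthetic bead-splice data (the specification limits and sample values are invented, chosen so the output lands near the reported post-improvement figures):

```python
import numpy as np

def process_capability(samples, lsl, usl):
    # Cp = (USL - LSL) / (6*sigma); Cpk = min(USL - mu, mu - LSL) / (3*sigma)
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

rng = np.random.default_rng(7)
splice = rng.normal(loc=11.5, scale=1.69, size=200)  # synthetic measurements
cp, cpk = process_capability(splice, lsl=-5.0, usl=25.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # roughly 2.9 and 2.6 here
```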
Jia, Xianbo; Lin, Xinjian; Chen, Jichen
2017-11-02
Current genome walking methods are very time-consuming, and many produce non-specific amplification products. To amplify the flanking sequences adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method adds a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. A fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences adjacent to known sequences.
Fully implicit adaptive mesh refinement solver for 2D MHD
NASA Astrophysics Data System (ADS)
Philip, B.; Chacon, L.; Pernice, M.
2008-11-01
Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
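The core of a Jacobian-free Newton-Krylov solver is that the Krylov method only ever needs Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian matrix is never formed. A minimal sketch on a toy nonlinear system (the physics-based preconditioning the paper's solver relies on is omitted):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u):
    # One Newton update for F(u) = 0, with GMRES solving J*du = -F(u)
    # using only finite-difference Jacobian-vector products.
    r = F(u)

    def jac_vec(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        h = 1e-7 / nv
        return (F(u + h * v) - r) / h   # directional derivative of F at u

    J = LinearOperator((u.size, u.size), matvec=jac_vec)
    du, info = gmres(J, -r, atol=1e-12)
    return u + du

# Toy problem: F(u) = u**3 - 1 componentwise, root at u = 1.
F = lambda u: u ** 3 - 1.0
u = np.full(4, 2.0)
for _ in range(8):
    u = jfnk_step(F, u)
print(u)   # converges to ones
```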
Segmentation and determination of joint space width in foot radiographs
NASA Astrophysics Data System (ADS)
Schenk, O.; de Muinck Keizer, D. M.; Bernelot Moens, H. J.; Slump, C. H.
2016-03-01
Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are time-consuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution aims at foot radiographs. Starting from an earlier proposed automated segmentation method, we have developed a novel model-based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model compiles ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We have performed segmentation experiments using 24 foot radiographs, randomly selected from a large database from the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations were correctly determined. Segmentation was successful in only 14%. To improve results, a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75%; mean and standard deviation are 2.30 ± 0.36 mm. This is a first step towards automated determination of progression of RA and therapy response in feet using radiographs.
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Wang, Ye; Huang, Zhi Xiang; Shi, Yumeng; Wong, Jen It; Ding, Meng; Yang, Hui Ying
2015-01-01
Transition metal cobalt (Co) nanoparticles were designed as a catalyst to promote the conversion reaction of Sn to SnO2 during the delithiation process, which is deemed an irreversible reaction. The designed nanocomposite, named SnO2/Co3O4/reduced graphene oxide (rGO), was synthesized by a simple two-step method composed of hydrothermal (1st step) and solvothermal (2nd step) synthesis processes. Compared to the pristine SnO2/rGO and SnO2/Co3O4 electrodes, SnO2/Co3O4/rGO nanocomposites exhibit significantly enhanced electrochemical performance as the anode material of lithium-ion batteries (LIBs). The SnO2/Co3O4/rGO nanocomposites can deliver high specific capacities of 1038 and 712 mAh g⁻¹ at current densities of 100 and 1000 mA g⁻¹, respectively. In addition, the SnO2/Co3O4/rGO nanocomposites also exhibit 641 mAh g⁻¹ at a high current density of 1000 mA g⁻¹ after 900 cycles, indicating ultra-long cycling stability under high current density. Through ex-situ TEM analysis, the excellent electrochemical performance was attributed to the catalytic effect of the Co nanoparticles in promoting the conversion of Sn to SnO2 and the decomposition of Li2O during the delithiation process. Based on these results, herein we propose a new method of employing a catalyst to increase the capacity of an alloying-dealloying-type anode material beyond its theoretical value and enhance the electrochemical performance. PMID:25776280
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Mark W.
2015-07-28
In a manufacturing process, a need is identified and a product is created to fill this need. While design and engineering of the final product is important, the tools and fixtures that aid in the creation of the final product are just as important, if not more so. Power supplies assembled at the TA-55 PF-5 have been designed by an excellent engineering team. The task in PF-5 now is to ensure that all steps of the assembly and manufacturing process can be completed safely, reliably, and in a quality repeatable manner. One of these process steps involves soldering fine wires to an electrical connector. During the process development phase, the method of soldering included placing the power supply in a vice in order to manipulate it into a position conducive to soldering. This method is unacceptable from a reliability, repeatability, and ergonomic standpoint. To combat these issues, a fixture was designed to replace the current method. To do so, a twelve-step engineering design process was used to create the fixture that would provide a solution to a multitude of problems, and increase the safety and efficiency of production.
Method of Conjugate Radii for Solving Linear and Nonlinear Systems
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1999-01-01
This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point, an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of the linear system of equations. A quadratic form in N variables requires N projections. That is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients: the current method can be extended to nonlinear systems without modification. For nonlinear equations, the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
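Since the abstract notes that its projection sequence reduces to a special case of Gram-Schmidt orthogonalization, here is a generic sketch of solving a linear system by N Gram-Schmidt projection steps (a QR-style construction in that spirit; it is not the paper's exact iteration):

```python
import numpy as np

def solve_by_projections(A, b):
    # Build an orthogonal basis column by column (classical Gram-Schmidt),
    # one projection sweep per equation, then back-substitute R x = Q^T b.
    n = A.shape[0]
    Q = np.zeros((n, n))
    R = np.zeros((n, n))
    for j in range(n):
        q = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            q -= R[i, j] * Q[:, i]        # project out earlier directions
        R[j, j] = np.linalg.norm(q)
        Q[:, j] = q / R[j, j]
    y = Q.T @ b
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):        # back-substitution
        x[j] = (y[j] - R[j, j + 1:] @ x[j + 1:]) / R[j, j]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(solve_by_projections(A, b), np.linalg.solve(A, b))  # should agree
```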
Post, Richard F.
2001-01-01
An apparatus and method is disclosed for reducing inductive coupling between levitation and drive coils within a magnetic levitation system. A pole array has a magnetic field. A levitation coil is positioned so that in response to motion of the magnetic field of the pole array a current is induced in the levitation coil. A first drive coil having a magnetic field coupled to drive the pole array also has a magnetic flux which induces a parasitic current in the levitation coil. A second drive coil having a magnetic field is positioned to attenuate the parasitic current in the levitation coil by canceling the magnetic flux of the first drive coil which induces the parasitic current. Steps in the method include generating a magnetic field with a pole array for levitating an object; inducing current in a levitation coil in response to motion of the magnetic field of the pole array; generating a magnetic field with a first drive coil for propelling the object; and generating a magnetic field with a second drive coil for attenuating effects of the magnetic field of the first drive coil on the current in the levitation coil.
Method for surface treatment of a cadmium zinc telluride crystal
James, Ralph; Burger, Arnold; Chen, Kuo-Tong; Chang, Henry
1999-01-01
A method for treatment of the surface of a CdZnTe (CZT) crystal that reduces surface roughness (increases surface planarity) and provides an oxide coating to reduce surface leakage currents and thereby improve resolution. A two-step process is disclosed: etching the surface of a CZT crystal with a solution of lactic acid and bromine in ethylene glycol, following the conventional bromine/methanol etch treatment, and then, after attachment of electrical contacts, oxidizing the CZT crystal surface.
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three-dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field yields the Newton gradient term (analogous to ∇f · δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an Ar = 0 gauge. The equations are discretized by a hybrid spectral/offset-grid finite difference technique, and leading-order radial dependence is factored from Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
Accurate identification of RNA editing sites from primitive sequence with deep neural networks.
Ouyang, Zhangyi; Liu, Feng; Zhao, Chenghui; Ren, Chao; An, Gaole; Mei, Chuan; Bo, Xiaochen; Shu, Wenjie
2018-04-16
RNA editing is a post-transcriptional RNA sequence alteration. Current methods have identified editing sites and facilitated research but require sufficient genomic annotations and prior-knowledge-based filtering steps, resulting in a cumbersome, time-consuming identification process. Moreover, these methods have limited generalizability and applicability in species with insufficient genomic annotations or in conditions of limited prior knowledge. We developed DeepRed, a deep learning-based method that identifies RNA editing from primitive RNA sequences without prior-knowledge-based filtering steps or genomic annotations. DeepRed achieved 98.1% and 97.9% area under the curve (AUC) in training and test sets, respectively. We further validated DeepRed using experimentally verified U87 cell RNA-seq data, achieving 97.9% positive predictive value (PPV). We demonstrated that DeepRed offers better prediction accuracy and computational efficiency than current methods with large-scale, mass RNA-seq data. We used DeepRed to assess the impact of multiple factors on editing identification with RNA-seq data from the Association of Biomolecular Resource Facilities and Sequencing Quality Control projects. We explored developmental RNA editing pattern changes during human early embryogenesis and evolutionary patterns in Drosophila species and the primate lineage using DeepRed. Our work illustrates DeepRed's state-of-the-art performance; it may decipher the hidden principles behind RNA editing, making editing detection convenient and effective.
Study on the mechanism of the Si-glass-Si two-step anodic bonding process
NASA Astrophysics Data System (ADS)
Hu, Lifang; Wang, Hao; Xue, Yongzhi; Shi, Fangrong; Chen, Shaoping
2018-04-01
Si-glass-Si was successfully bonded together through a two-step anodic bonding process. The bonding current in each step of the two-step bonding process was investigated and found to be quite different. The first bonding current decreased quickly to a relatively small value, but for the second bonding step there were two current peaks; the current first decreased, then increased, and then decreased again. The second current peak occurred earlier at higher temperature and voltage. The two-step anodic bonding process was investigated in terms of bonding current. SEM and EDS tests were conducted to investigate the interfacial structure of the Si-glass-Si samples. The two bonding interfaces were almost the same, but after an etching process, transitional layers could be found in the bonding interfaces and a deeper trench with a thickness of ~1.5 µm could be found in the second bonding interface. Atomic force microscopy mapping results indicated that sodium precipitated from the back of the glass, which roughens the surface. Tensile tests indicated that the fracture occurred at the glass substrate and that the bonding strength increased with increasing bonding temperature and voltage, up to a maximum strength of 6.4 MPa.
Kim, Yu-Ri; Park, Sung Ha; Lee, Jong-Kwon; Jeong, Jayoung; Kim, Ja Hei; Meang, Eun-Ho; Yoon, Tae Hyun; Lim, Seok Tae; Oh, Jae-Min; An, Seong Soo A; Kim, Meyoung-Kon
2014-01-01
Currently, products made with nanomaterials are used widely, especially in biology, bio-technologies, and medical areas. However, limited investigations on potential toxicities of nanomaterials are available. Hence, diverse and systemic toxicological data with new methods for nanomaterials are needed. In order to investigate the nanotoxicology of nanoparticles (NPs), the Research Team for Nano-Associated Safety Assessment (RT-NASA) was organized in three parts and launched. Each part focused on different contents of research directions: investigators in part I were responsible for the efficient management and international cooperation on nano-safety studies; investigators in part II performed the toxicity evaluations on target organs such as assessment of genotoxicity, immunotoxicity, or skin penetration; and investigators in part III evaluated the toxicokinetics of NPs with newly developed techniques for toxicokinetic analyses and methods for estimating nanotoxicity. The RT-NASA study was carried out in six steps: need assessment, physicochemical property, toxicity evaluation, toxicokinetics, peer review, and risk communication. During the need assessment step, consumer responses were analyzed based on sex, age, education level, and household income. Different sizes of zinc oxide and silica NPs were purchased and coated with citrate, L-serine, and L-arginine in order to modify surface charges (eight different NPs), and each of the NPs were characterized by various techniques, for example, zeta potentials, scanning electron microscopy, and transmission electron microscopy. Evaluation of the “no observed adverse effect level” and systemic toxicities of all NPs were performed by thorough evaluation steps and the toxicokinetics step, which included in vivo studies with zinc oxide and silica NPs. A peer review committee was organized to evaluate and verify the reliability of toxicity tests, and the risk communication step was also needed to convey the current findings to academia, industry, and consumers. Several limitations were encountered in the RT-NASA project, and they are discussed for consideration for improvements in future studies. PMID:25565821
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed, four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times, and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
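The POD step lends itself to a compact illustration: collect state snapshots, take an SVD, and Galerkin-project the dynamics onto the leading modes. A hedged sketch on a toy linear system (the paper's PSA model and its PDEs are not reproduced here):

```python
import numpy as np

# Minimal POD sketch (assumed setup, not the paper's PSA model):
# integrate a full-order system, collect snapshots, keep the leading
# left singular vectors as a reduced basis, and project the operator.
rng = np.random.default_rng(0)
n, m, r = 200, 50, 5                    # full order, snapshots, reduced order
A = -np.diag(np.linspace(0.1, 5.0, n))  # toy stable dynamics dx/dt = A x

X = np.empty((n, m))                    # snapshot matrix, columns = states
x = rng.standard_normal(n)
dt = 0.05
for j in range(m):                      # explicit Euler just to generate data
    x = x + dt * (A @ x)
    X[:, j] = x

U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                          # POD basis: leading r modes
Ar = Phi.T @ A @ Phi                    # Galerkin-projected reduced operator

energy = s[:r].sum() / s.sum()
print(f"reduced order {r}, singular-value energy captured: {energy:.3f}")
```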
Failure modes and effects analysis for ocular brachytherapy.
Lee, Yongsook C; Kim, Yongbok; Huynh, Jason Wei-Yeong; Hamilton, Russell J
The aim of the study was to identify potential failure modes (FMs) having a high risk and to improve our current quality management (QM) program in Collaborative Ocular Melanoma Study (COMS) ocular brachytherapy by undertaking a failure modes and effects analysis (FMEA) and a fault tree analysis (FTA). Process mapping and FMEA were performed for COMS ocular brachytherapy. For all FMs identified in FMEA, risk priority numbers (RPNs) were determined by assigning and multiplying occurrence, severity, and lack of detectability values, each ranging from 1 to 10. FTA was performed for the major process that had the highest ranked FM. Twelve major processes, 121 sub-process steps, 188 potential FMs, and 209 possible causes were identified. For 188 FMs, RPN scores ranged from 1.0 to 236.1. The plaque assembly process had the highest ranked FM. The majority of FMs were attributable to human failure (85.6%), and medical physicist-related failures were the most numerous (58.9% of all causes). After FMEA, additional QM methods were included for the top 10 FMs and 6 FMs with severity values > 9.0. As a result, for these 16 FMs and the 5 major processes involved, quality control steps were increased from 8 (50%) to 15 (93.8%), and major processes having quality assurance steps were increased from 2 to 4. To reduce high risk in current clinical practice, we proposed QM methods. They mainly include a check or verification of procedures/steps and the use of checklists for both ophthalmology and radiation oncology staff, and intraoperative ultrasound-guided plaque positioning for ophthalmology staff. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
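The RPN bookkeeping described above is simple enough to sketch. The failure modes and scores below are illustrative placeholders, not values from the study:

```python
# Hedged sketch of the FMEA arithmetic: RPN is the product of
# occurrence, severity, and lack-of-detectability scores (each 1-10),
# and failure modes are ranked for quality-management attention.
failure_modes = [
    # (name, occurrence, severity, lack_of_detectability) - hypothetical
    ("plaque assembly: wrong seed loading", 4.0, 9.5, 6.2),
    ("treatment planning: wrong laterality", 1.5, 9.8, 3.0),
    ("plaque positioning: off-target",       3.0, 8.0, 5.0),
]

ranked = sorted(
    ((o * s * d, name) for name, o, s, d in failure_modes),
    reverse=True,
)
for rpn, name in ranked:
    high_sev = any(n == name and s > 9.0 for n, _, s, _ in failure_modes)
    flag = "  <-- high severity, kept regardless of rank" if high_sev else ""
    print(f"RPN {rpn:6.1f}  {name}{flag}")
```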
Automatic concrete cracks detection and mapping of terrestrial laser scan data
NASA Astrophysics Data System (ADS)
Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef
2013-12-01
Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space by laser scanning bear a great potential if combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for nondestructive testing. The crack information can be used to decide the appropriate rehabilitation method to fix the cracked structures and prevent any catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis. On the other hand, automatic crack detection is highly desirable for efficient and objective crack assessment. The current paper presents a method for automatic concrete crack detection and mapping from data obtained during a laser scanning survey. Crack detection and mapping is achieved in three steps, namely shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the referred coordinate system, reverse engineering is used. This is achieved by a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.
Methods and compositions for efficient nucleic acid sequencing
Drmanac, Radoje
2006-07-04
Disclosed are novel methods and compositions for rapid and highly efficient nucleic acid sequencing based upon hybridization with two sets of small oligonucleotide probes of known sequences. Extremely large nucleic acid molecules, including chromosomes and non-amplified RNA, may be sequenced without prior cloning or subcloning steps. The methods of the invention also solve various current problems associated with sequencing technology such as, for example, high noise to signal ratios and difficult discrimination, attaching many nucleic acid fragments to a surface, preparing many, longer or more complex probes and labelling more species.
Methods and compositions for efficient nucleic acid sequencing
Drmanac, Radoje
2002-01-01
Disclosed are novel methods and compositions for rapid and highly efficient nucleic acid sequencing based upon hybridization with two sets of small oligonucleotide probes of known sequences. Extremely large nucleic acid molecules, including chromosomes and non-amplified RNA, may be sequenced without prior cloning or subcloning steps. The methods of the invention also solve various current problems associated with sequencing technology such as, for example, high noise to signal ratios and difficult discrimination, attaching many nucleic acid fragments to a surface, preparing many, longer or more complex probes and labelling more species.
Han, Yaohui; Mou, Lan; Xu, Gengchi; Yang, Yiqiang; Ge, Zhenlin
2015-03-01
To construct a three-dimensional finite element model comparing one-step and two-step methods in torque control of anterior teeth during space closure. DICOM image data including the maxilla and upper teeth were obtained through cone-beam CT. A three-dimensional model was set up and the maxilla, upper teeth and periodontium were separated using Mimics software. The models were instantiated using Pro/Engineer software, and Abaqus finite element analysis software was used to simulate the sliding mechanics by loading a 1.47 N force on traction hooks with different heights (2, 4, 6, 8, 10, 12 and 14 mm, respectively) in order to compare the initial displacement between six maxillary anterior teeth (one-step method) and four maxillary anterior teeth (two-step method). When moving anterior teeth bodily, the initial displacements of the central incisors in the two-step and one-step methods were 29.26 × 10⁻⁶ mm and 15.75 × 10⁻⁶ mm, respectively. The initial displacements of the lateral incisors in the two-step and one-step methods were 46.76 × 10⁻⁶ mm and 23.18 × 10⁻⁶ mm, respectively. Under the same amount of light force, the initial displacement of the anterior teeth in the two-step method was doubled compared with that in the one-step method. The root and crown of the canine could not obtain the same amount of displacement in the one-step method. The two-step method produced more initial displacement than the one-step method; it therefore made it easier to achieve torque control of the anterior teeth during space closure.
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors, both in terms of quality and speed performance, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
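Of the solvers named above, the Kaczmarz algorithm is compact enough to sketch. Here a small random consistent system stands in for the tomography operator, so this shows only the shape of the iteration:

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, relax=1.0):
    """Plain Kaczmarz sketch: cycle through the rows of A x = b and
    project the iterate onto each row's hyperplane. A toy stand-in,
    not the E-ELT reconstruction code."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                            # consistent right-hand side
print(np.linalg.norm(kaczmarz(A, b) - x_true))   # small residual
```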
Label-Free Immuno-Sensors for the Fast Detection of Listeria in Food.
Morlay, Alexandra; Roux, Agnès; Templier, Vincent; Piat, Félix; Roupioz, Yoann
2017-01-01
Foodborne diseases are a major concern for both the food industry and health organizations due to the economic costs and potential threats to human lives. For these reasons, specific regulations mandate testing for pathogenic bacteria in food products. Nevertheless, current methods, both reference and alternative, take up to several days and require many handling steps. In order to improve pathogen detection in food, we developed an immuno-sensor, based on Surface Plasmon Resonance imaging (SPRi) and bacterial growth, which allows the detection of a very low number of Listeria monocytogenes in a food sample in one day. Adequate sensitivity is achieved by the deposition of several antibodies in a micro-array format allowing real-time detection. This label-free method thus reduces handling and time to result compared with current methods.
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
NASA Astrophysics Data System (ADS)
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, unsupervised detection, i.e., identifying defects without any prior knowledge, remains a difficult challenge. This paper presents a spatial-time-state feature fusion algorithm to obtain a full profile of the defects by directional scanning. The proposed method conducts feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal feature of each step is fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method on blind defect detection.
Ericson, M. Nance; Rochelle, James M.
1994-01-01
A logarithmic current measurement circuit for operating upon an input electric signal utilizes a quad, dielectrically isolated, well-matched, monolithic bipolar transistor array. One group of circuit components within the circuit cooperate with two transistors of the array to convert the input signal logarithmically to provide a first output signal which is temperature-dependant, and another group of circuit components cooperate with the other two transistors of the array to provide a second output signal which is temperature-dependant. A divider ratios the first and second output signals to provide a resultant output signal which is independent of temperature. The method of the invention includes the operating steps performed by the measurement circuit.
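The compensation idea, as described, amounts to dividing two logarithmic outputs so that the temperature-dependent kT/q factor cancels. A numeric illustration under assumed component values (the patent's actual circuit parameters are not given here):

```python
import numpy as np

# A BJT log converter gives V = (kT/q) * ln(I / I_ref), which drifts
# with temperature T; ratioing two such outputs from a matched pair
# cancels kT/q. Currents below are illustrative assumptions.
k, q = 1.380649e-23, 1.602176634e-19

def log_stage(i_in, i_ref, T):
    return (k * T / q) * np.log(i_in / i_ref)

i_meas, i_cal, i_ref = 2e-6, 1e-5, 1e-9   # measured, calibration, reference
for T in (250.0, 300.0, 350.0):           # kelvin
    v1 = log_stage(i_meas, i_ref, T)      # first temperature-dependent output
    v2 = log_stage(i_cal, i_ref, T)       # second output, same pair
    print(f"T={T:5.1f} K  v1={v1:.4f} V  v1/v2={v1 / v2:.6f}  (ratio is T-free)")
```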
NASA Astrophysics Data System (ADS)
Wang, Chunbai; Mitra, Ambar K.
2016-01-01
Any boundary surface evolving in a viscous fluid is driven by surface capillary currents. Using a step function defined for the fluid-structure interface, surface currents near a flat wall are found to take a logarithmic form. The general flat-plate boundary layer is demonstrated through the interface kinematics. The dynamics analysis elucidates the relationship of the surface currents with the adhering region as well as the no-slip boundary condition. The wall skin friction coefficient, displacement thickness, and the logarithmic velocity-defect law of the smooth flat-plate boundary-layer flow are derived with the advent of the forced evolving boundary method. This fundamental theory has wide applications in applied science and engineering.
Measurement Development in Reflective Supervision: History, Methods, and Next Steps
ERIC Educational Resources Information Center
Tomlin, Angela M.; Heller, Sherryl Scott
2016-01-01
This issue of the "ZERO TO THREE" journal provides a snapshot of the current state of measurement of reflective supervision within the infant-family field. In this article, the authors introduce the issue by providing a brief history of the development of reflective supervision in the field of infant mental health, with a specific focus…
Post Viking planetary protection requirements study
NASA Technical Reports Server (NTRS)
Wolfson, R. P.
1977-01-01
Past planetary quarantine requirements were reviewed in the light of present Viking data to determine the steps necessary to prevent contamination of the Martian surface on future missions. The currently used term planetary protection reflects a broader scope of understanding of the problems involved. Various methods of preventing contamination are discussed in relation to proposed projects, specifically the 1984 Rover Mission.
An optimized method for measuring methylmalonic acid in low volumes of serum using UPLC-MS/MS
USDA-ARS?s Scientific Manuscript database
Background: Methylmalonic acid (MMA) is a metabolic intermediate which is transformed to succinic acid (SA) by a vitamin B12-dependent catalytic step. MMA is broadly used as a clinical biomarker of functional vitamin B12 status. However, currently validated protocols use between 100 and 1000 µL of se...
Overview of radar intra-pulse modulation recognition
NASA Astrophysics Data System (ADS)
Zang, Hanlin; Li, Yanling
2018-05-01
This paper introduces current radar intra-pulse modulation methods, describes the status quo and development direction of intentional and unintentional modulation in the pulse, summarizes the existing problems, looks ahead to future developments, and provides a reference direction for the next steps in research on radar signal recognition.
Extraction of high-quality mRNA from Cryptosporidium parvum is a key step in PCR detection of viable oocysts in environmental samples. Current methods for monitoring oocysts are limited to water samples; therefore, the goal of this study was to develop a rapid and sensitive proce...
Method of monolithic module assembly
Gee, James M.; Garrett, Stephen E.; Morgan, William P.; Worobey, Walter
1999-01-01
Methods for "monolithic module assembly" which translate many of the advantages of monolithic module construction of thin-film PV modules to wafered c-Si PV modules. The methods employ back-contact solar cells positioned atop electrically conductive circuit elements affixed to a planar support so that a circuit capable of generating electric power is created. The modules are encapsulated using encapsulant materials such as EVA which are commonly used in photovoltaic module manufacture. The methods of the invention allow multiple cells to be electrically connected in a single encapsulation step rather than by the sequential soldering which characterizes current commercial practice.
Perovskite solar cell with an efficient TiO₂ compact film.
Ke, Weijun; Fang, Guojia; Wang, Jing; Qin, Pingli; Tao, Hong; Lei, Hongwei; Liu, Qin; Dai, Xin; Zhao, Xingzhong
2014-09-24
A perovskite solar cell with a thin TiO2 compact film prepared by thermal oxidation of a sputtered Ti film achieved a high efficiency of 15.07%. The thin TiO2 film prepared by thermal oxidation is very dense and inhibits the recombination process at the interface. The optimum thickness of the TiO2 compact film prepared by thermal oxidation is thinner than that prepared by the spin-coating method. Also, the TiO2 compact film and the TiO2 porous film can be sintered at the same time. This one-step sintering process leads to a lower dark current density, a lower series resistance, and a higher recombination resistance than those of two-step sintering. Therefore, the perovskite solar cell with the TiO2 compact film prepared by thermal oxidation has a higher short-circuit current density and a higher fill factor.
Frequency optimization in the eddy current test for high purity niobium
NASA Astrophysics Data System (ADS)
Joung, Mijoung; Jung, Yoochul; Kim, Hyungjin
2017-01-01
The eddy current test (ECT) is frequently used as a non-destructive method to check for defects in the high purity niobium (RRR 300, where RRR is the residual resistivity ratio) used in superconducting radio frequency (SRF) cavities. Determining an optimal frequency corresponding to the specific material properties and probe specification is a very important step. ECT experiments on high purity Nb were performed to determine the optimal frequency using a standard sample of high purity Nb with artificial defects. The target depth was chosen in view of the treatment steps that the niobium receives as the SRF cavity material. The results were analysed via the selectivity of the response with respect to the size of the defects. According to the results, the optimal frequency was determined to be 200 kHz, and a few features of the ECT for high purity Nb were observed.
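The frequency/depth trade-off behind such a test can be sketched from the standard penetration-depth formula δ = 1/√(π f μ σ). The room-temperature niobium conductivity below is an assumed figure, not one from the paper:

```python
import math

# Eddy-current standard depth of penetration and the inverse problem
# of picking a test frequency for a target depth. sigma_nb is an
# assumed room-temperature value for Nb; Nb is non-magnetic here.
MU0 = 4e-7 * math.pi
sigma_nb = 6.6e6          # S/m, assumed
mu_r = 1.0

def skin_depth(freq_hz):
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma_nb)

def freq_for_depth(delta_m):
    return 1.0 / (math.pi * mu_r * MU0 * sigma_nb * delta_m**2)

print(f"delta at 200 kHz : {skin_depth(200e3) * 1e3:.2f} mm")
print(f"f for 0.5 mm     : {freq_for_depth(0.5e-3) / 1e3:.0f} kHz")
```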
NASA Astrophysics Data System (ADS)
Singh, Kirmender; Bhattacharyya, A. B.
2017-03-01
The Gummel Symmetry Test (GST) has been a benchmark industry standard for MOSFET models and is considered one of the important tests by the modeling community. The BSIM4 MOSFET model fails to pass the GST because its drain current equation is not symmetrical: drain and source potentials are not referenced to the bulk. The BSIM6 MOSFET model overcomes this limitation by taking all terminal biases with reference to the bulk and using a proper velocity saturation (v-E) model. The drain current equation in BSIM6 is charge based and continuous in all regions of operation. It, however, adopts a complicated method to compute source and drain charges. In this work we propose to use the conventional charge-based method formulated by Enz to obtain a simpler analytical drain current expression that passes the GST. For this purpose we adopt two steps: (i) in the first step we use a modified first-order hyperbolic v-E model with adjustable coefficients which is integrable, simple and accurate, and (ii) in the second we use a multiplying factor in the modified first-order hyperbolic v-E expression to obtain correct monotonic asymptotic behavior around the origin of the lateral electric field. This factor is of empirical form and is a function of the drain voltage (vd) and source voltage (vs). After both of the above steps we obtain a drain current expression whose accuracy is similar to that obtained from a second-order hyperbolic v-E model. If, in the modified first-order hyperbolic v-E expression, vd and vs are replaced by smoothing functions for the effective drain voltage (vdeff) and effective source voltage (vseff), it will also take care of the discontinuity between the linear and saturation regions of operation. The condition of symmetry is shown to be satisfied by the drain current and its higher order derivatives, as both are odd functions and their even-order derivatives pass smoothly through the origin. In the strong inversion region, at the 22 nm technology node, the GST is shown to pass up to the sixth-order derivative, and in weak inversion up to the fifth-order derivative. The expression for the drain current takes into consideration major short-channel phenomena such as vertical field mobility reduction, velocity saturation and velocity overshoot.
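A toy numerical version of the symmetry check is easy to set up: build a body-referenced, symmetric drain-current expression, sweep vd = +vx, vs = -vx, and verify that the even-order derivatives vanish at the origin. The stand-in model below is not the BSIM6 equation set:

```python
import numpy as np

# Source-drain symmetry requires I_D(vd, vs) = -I_D(vs, vd); under the
# GST sweep the current must be odd in vx, so its even-order
# derivatives must vanish at vx = 0. Model and values are illustrative.
def drain_current(vd, vs, beta=1e-3, theta=0.2):
    # symmetric toy: difference of charge-like terms over a
    # denominator that is symmetric in (vd - vs)
    f = lambda v: (1.0 + v) ** 2
    return beta * (f(vd) - f(vs)) / (1.0 + theta * (vd - vs) ** 2)

vx = np.linspace(-0.2, 0.2, 401)
i_d = drain_current(vx, -vx)          # odd in vx by construction

deriv = i_d
for order in range(1, 5):
    deriv = np.gradient(deriv, vx)    # repeated central differences
    mid = deriv[len(deriv) // 2]
    note = "should vanish" if order % 2 == 0 else "may be nonzero"
    print(f"d^{order} I_D/dvx^{order} at vx=0: {mid: .3e}  ({note})")
```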
Optimizing Fungal DNA Extraction Methods from Aerosol Filters
NASA Astrophysics Data System (ADS)
Jimenez, G.; Mescioglu, E.; Paytan, A.
2016-12-01
Fungi and fungal spores can be picked up from terrestrial ecosystems, transported long distances, and deposited into marine ecosystems. It is important to study dust-borne fungal communities, because they can stay viable and affect the ambient microbial populations, which are key players in biogeochemical cycles. One of the challenges of studying dust-borne fungal populations is that aerosol samples contain low biomass, making extracting good quality DNA very difficult. The aim of this project was to increase DNA yield by optimizing DNA extraction methods. We tested aerosol samples collected from Haifa, Israel (polycarbonate filter), Monterey Bay, CA (quartz filter) and Bermuda (quartz filter). Using the Qiagen DNeasy Plant Kit, we tested the effect of altering bead beating times and incubation times, adding three freeze-and-thaw steps, initially washing the filters with buffers for various lengths of time before using the kit, and adding a step with 30 minutes of sonication in 65 °C water. Adding three freeze/thaw steps, adding a sonication step, washing with phosphate buffered saline overnight, and increasing incubation time to two hours, in that order, resulted in the highest increase in DNA for samples from Israel (polycarbonate filter). DNA yield of samples from Monterey Bay (quartz filter) increased about 5 times when washing with buffers overnight (phosphate buffered saline and potassium phosphate buffer), adding a sonication step, and adding three freeze-and-thaw steps. Samples collected in Bermuda (quartz filter) had the highest increase in DNA yield from increasing incubation to 2 hours, increasing bead beating time to 6 minutes, and washing with buffers overnight (phosphate buffered saline and potassium phosphate buffer). Our results show that DNA yield can be increased by altering various steps of the Qiagen DNeasy Plant Kit protocol, but different types of filters collected at different sites respond differently to alterations. These results can serve as preliminary findings for continued development of fungal DNA extraction methods. Developing these methods will be important as dust storms are predicted to increase due to increased droughts and anthropogenic activity, and the fungal communities of these dust storms are currently relatively understudied.
Nondestructive mechanical characterization of developing biological tissues using inflation testing.
Oomen, P J A; van Kelle, M A J; Oomens, C W J; Bouten, C V C; Loerakker, S
2017-10-01
One of the hallmarks of biological soft tissues is their capacity to grow and remodel in response to changes in their environment. Although it is well-accepted that these processes occur at least partly to maintain a mechanical homeostasis, it remains unclear which mechanical constituent(s) determine(s) mechanical homeostasis. In the current study a nondestructive mechanical test and a two-step inverse analysis method were developed and validated to nondestructively estimate the mechanical properties of biological tissue during tissue culture. Nondestructive mechanical testing was achieved by performing an inflation test on tissues that were cultured inside a bioreactor, while the tissue displacement and thickness were nondestructively measured using ultrasound. The material parameters were estimated by an inverse finite element scheme, which was preceded by an analytical estimation step to rapidly obtain an initial estimate that already approximated the final solution. The efficiency and accuracy of the two-step inverse method was demonstrated on virtual experiments of several material types with known parameters. PDMS samples were used to demonstrate the method's feasibility, where it was shown that the proposed method yielded similar results to tensile testing. Finally, the method was applied to estimate the material properties of tissue-engineered constructs. Via this method, the evolution of mechanical properties during tissue growth and remodeling can now be monitored in a well-controlled system. The outcomes can be used to determine various mechanical constituents and to assess their contribution to mechanical homeostasis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544
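The two-step shape of the strategy, screen cheaply and then spend finite differences only on the survivors, can be sketched with a deterministic toy in place of the stochastic estimators and the Fisher-information bound:

```python
import numpy as np

# Illustrative two-step screening in the spirit described above (a toy
# deterministic stand-in, not the paper's stochastic estimators):
# step 1 screens parameters with a cheap sensitivity bound, step 2
# spends finite-difference evaluations only on the survivors.
rng = np.random.default_rng(2)
n_params = 80
theta = rng.uniform(0.5, 1.5, n_params)
weights = np.zeros(n_params)
weights[:8] = rng.uniform(1.0, 3.0, 8)     # only 8 parameters matter

def qoi(p):                                # quantity of interest
    return np.sin(weights @ p)

# Step 1: cheap screening bound (here: known weight magnitudes; in the
# paper this role is played by the Fisher-information-based bound).
bound = np.abs(weights)
candidates = np.flatnonzero(bound > 0.1 * bound.max())

# Step 2: central finite differences only for screened-in parameters.
h, sens = 1e-5, {}
for i in candidates:
    dp = np.zeros(n_params)
    dp[i] = h
    sens[i] = (qoi(theta + dp) - qoi(theta - dp)) / (2 * h)

print(f"evaluations: {2 * len(candidates)} instead of {2 * n_params}")
print({i: round(s, 4) for i, s in sens.items()})
```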
A new efficient method to monitor precocious puberty nationwide in France.
Rigou, Annabel; Le Moal, Joëlle; Léger, Juliane; Le Tertre, Alain; Carel, Jean-Claude
2018-02-01
Clinical precocious puberty (PP) is a disease reputed to be on the increase and suspected to be linked to endocrine disrupting chemical (EDC) exposure. Population-based epidemiological data are lacking in France and scarce elsewhere. We assessed the feasibility of monitoring PP nationwide in France in this context, using an existing nationwide database, the French National Health Insurance Information System. Here, we present the method we used, with a step-by-step approach to build and select the most suitable indicator. We built three indicators reflecting the incidence of idiopathic central precocious puberty (ICPP), the most frequent form of PP, and we compared these indicators according to their strengths and weaknesses with respect to surveillance purposes. Monitoring ICPP in France proved feasible using a drug reimbursement indicator. Our method is cost efficient and highly relevant in public health surveillance. Our step-by-step approach proved helpful in achieving this project and could be proposed for assessing the feasibility of monitoring health outcomes of interest using existing databases. What is known: • Precocious puberty (PP) is suspected to be related to EDC exposure and is believed to be on the increase in France and in other countries. • Very few epidemiologic data on PP are currently available worldwide at the national scale. What is new: • This is the first study describing a method to monitor the most frequent form of PP, idiopathic central PP (ICPP), nationwide in a cost-efficient way, using health insurance databases. • This cost-effective method will allow estimation and monitoring of the incidence of ICPP in France and analysis of spatial variations at a very precise scale, which will be very useful to examine the role of environmental exposures, especially to EDCs.
NASA Astrophysics Data System (ADS)
Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.
2013-05-01
Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam's-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam's-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
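The poll-step mechanics described above (try mesh neighbors, refine the mesh when no neighbor improves) can be sketched compactly. The toy objective below stands in for the BAO dose metrics, and no search step is included:

```python
import numpy as np

# Minimal poll-step pattern search sketch (coordinate directions with
# mesh halving), illustrating the framework named above; not the
# paper's BAO implementation.
def pattern_search(f, x0, mesh=1.0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):  # poll set
            trial = x + mesh * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            mesh /= 2.0              # refine mesh near a stationary point
            if mesh < tol:
                break
    return x, fx

f = lambda x: (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 2.0) ** 2  # toy objective
print(pattern_search(f, [5.0, 5.0]))   # converges near (1, -2)
```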
Dynamic advance reservation with delayed allocation
Vokkarane, Vinod; Somani, Arun
2014-12-02
A method of scheduling data transmissions from a source to a destination, includes the steps of: providing a communication system having a number of channels and a number of paths, each of the channels having a plurality of designated time slots; receiving two or more data transmission requests; provisioning the transmission of the data; receiving data corresponding to at least one of the two or more data transmission requests; waiting until an earliest requested start time T.sub.s; allocating at the current time each of the two or more data transmission requests; transmitting the data; and repeating the steps of waiting, allocating, and transmitting until each of the two or more data transmission requests that have been provisioned for a transmission of data is satisfied. A system to perform the method of scheduling data transmissions is also described.
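A behavioral sketch of the provision-then-delay-allocation idea, with channel binding deferred until the earliest requested start time. The data structures are illustrative, since the patent text does not specify them:

```python
import heapq

# Requests are provisioned on arrival, but channel/time-slot binding
# waits until the earliest requested start time T_s; allocation then
# happens at the current time. Field names are hypothetical.
def schedule(requests, n_channels):
    # requests: list of (start_slot, duration, name), provisioned upfront
    pending = sorted(requests)                  # by requested start time
    heap = [(0, c) for c in range(n_channels)]  # (next free slot, channel)
    heapq.heapify(heap)
    out = []
    for start, dur, name in pending:
        t_free, ch = heapq.heappop(heap)        # allocate at current time,
        begin = max(start, t_free)              # waiting until T_s if needed
        heapq.heappush(heap, (begin + dur, ch))
        out.append((name, ch, begin, begin + dur))
    return out

reqs = [(5, 3, "A"), (5, 4, "B"), (6, 2, "C")]
for name, ch, b, e in schedule(reqs, n_channels=2):
    print(f"{name}: channel {ch}, slots {b}-{e}")
```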
2014-01-01
The morphology and electrical properties of orthorhombic β-WO3 nanoflakes with thickness of ~7 to 9 nm were investigated at the nanoscale with a combination of scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), current sensing force spectroscopy atomic force microscopy (CSFS-AFM, or PeakForce TUNA™), Fourier transform infra-red absorption spectroscopy (FTIR), linear sweep voltammetry (LSV) and Raman spectroscopy techniques. CSFS-AFM analysis established good correlation between the topography of the developed nanostructures and various features of WO3 nanoflakes synthesized via a two-step sol-gel-exfoliation method. It was determined that β-WO3 nanoflakes annealed at 550°C possess distinguished and exceptional thickness-dependent properties in comparison with the bulk, micro and nanostructured WO3 synthesized at alternative temperatures. PMID:25221453
Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven
2017-01-01
Motivation: Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method: We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. Results: The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high throughput use of the technology. PMID:28910313
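The winning pipeline shape, density-based clustering followed by a Random Forest, can be skeletonized with scikit-learn on stand-in data (real MCC-IMS peak lists are not included here):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Skeleton of the reported pipeline shape: cluster peaks with DBSCAN,
# then classify measurements with a Random Forest under cross-
# validation. All data below are synthetic stand-ins.
rng = np.random.default_rng(3)
peaks = rng.normal(size=(300, 2))              # (drift time, retention time)
clusters = DBSCAN(eps=0.3, min_samples=5).fit_predict(peaks)

# One feature vector per "measurement": stand-in intensity per cluster.
n_meas = 40
n_clust = max(int(clusters.max()) + 1, 1)
X = rng.random((n_meas, n_clust))
y = rng.integers(0, 2, n_meas)                 # stand-in disease labels
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```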
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and they can also be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm, the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.
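A matrix-free sketch of using restarted GMRES as the inner solver for one implicit step, with the usual finite-difference Jacobian-vector product. The toy residual below stands in for a Navier-Stokes residual:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# One Newton-GMRES step for R(q) = 0 with a small restart length,
# mirroring the "few orthogonal vectors" idea above. The residual is
# a toy stand-in, not a flow solver's.
n = 100
def residual(q):                       # toy nonlinear residual R(q)
    return q**3 + 2.0 * q - 1.0

q0 = np.zeros(n)
r0 = residual(q0)
eps = 1e-7

def jac_vec(v):                        # J(q0) v ~ (R(q0 + eps v) - R(q0)) / eps
    return (residual(q0 + eps * v) - r0) / eps

J = LinearOperator((n, n), matvec=jac_vec)
dq, info = gmres(J, -r0, restart=20, maxiter=100)   # small Krylov dimension
print("converged" if info == 0 else f"info={info}",
      np.linalg.norm(residual(q0 + dq)))            # residual after one step
```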
Wang, Ping; Liu, Yongling; Chen, Tao; Xu, Wenhua; You, Jinmao; Liu, Yongjun; Li, Yulin
2013-01-01
Lignans and flavonols are the primary constituents of Sinopodophyllum emodi and have been used as cathartic, anthelmintic, chemotherapeutic and anti-hypertensive compounds. Although these compounds have been isolated, there have been no reports on the separation of 4'-demethyl podophyllotoxin, podophyllotoxin, deoxypodophyllotoxin and kaempferol in one step by medium-pressure liquid chromatography (MPLC) and high-speed counter-current chromatography (HSCCC). Development of an efficient method for the preparative separation and purification of three lignans and one flavonol from S. emodi. The precipitate of crude extracts was first separated by MPLC into four parts, numbered GJ-1, GJ-2, GJ-3 and GJ-4. GJ-1 was separated and purified by HSCCC using a solvent system composed of n-hexane:ethyl acetate:methanol:water (1.75:1.5:1:0.75, v/v/v/v). The purities of the target compounds were assessed using high-performance liquid chromatography (HPLC) and chemical structures were identified by (1) H-NMR and (13) C-NMR. The HSCCC and MPLC methods were successfully used for the preparative separation and purification of 4'-demethyl podophyllotoxin (8.5 mg, 92.4%), podophyllotoxin (40.1 mg, 92.1%), deoxypodophyllotoxin (4.6 mg, 98.1%), and kaempferol (1.6 mg, 96.7%) from a 100 mg sample. Three lignans (4'-demethyl podophyllotoxin, podophyllotoxin, deoxypodophyllotoxin) and one flavonol (kaempferol) were successfully isolated by HSCCC and MPLC in one step. Copyright © 2013 John Wiley & Sons, Ltd.
Redefining the lower statistical limit in x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Marschner, M.; Birnbacher, L.; Willner, M.; Chabior, M.; Fehringer, A.; Herzen, J.; Noël, P. B.; Pfeiffer, F.
2015-03-01
Phase-contrast x-ray computed tomography (PCCT) is currently investigated and developed as a potentially very interesting extension of conventional CT, because it promises to provide high soft-tissue contrast for weakly absorbing samples. For data acquisition several images at different grating positions are combined to obtain a phase-contrast projection. For short exposure times, which are necessary for lower radiation dose, the photon counts in a single stepping position are very low. In this case, the currently used phase-retrieval does not provide reliable results for some pixels. This uncertainty results in statistical phase wrapping, which leads to a higher standard deviation in the phase-contrast projections than theoretically expected. For even lower statistics, the phase retrieval breaks down completely and the phase information is lost. New measurement procedures rely on a linear approximation of the sinusoidal phase stepping curve around the zero crossings. In this case only two images are acquired to obtain the phase-contrast projection. The approximation is only valid for small phase values. However, typically nearly all pixels are within this regime due to the differential nature of the signal. We examine the statistical properties of a linear approximation method and illustrate by simulation and experiment that the lower statistical limit can be redefined using this method. That means that the phase signal can be retrieved even with very low photon counts and statistical phase wrapping can be avoided. This is an important step towards enhanced image quality in PCCT with very low photon counts.
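The two-image linear approximation can be illustrated numerically: sample the sinusoidal stepping curve at its reference zero crossings and take phi ~ (I1 - I2)/(2b). Values below are illustrative, and b is assumed known from calibration:

```python
import numpy as np

# Toy two-image phase retrieval around the zero crossings of the
# stepping curve I(psi) = a + b*sin(phi + psi): samples at psi = 0 and
# psi = pi give phi ~ (I1 - I2) / (2b) for small phi. Illustrative only.
rng = np.random.default_rng(4)
a, b = 1000.0, 300.0                   # mean counts and modulation (assumed)
phi_true = 0.05                        # small differential phase [rad]

def stepping(psi):
    lam = a + b * np.sin(phi_true + psi)
    return rng.poisson(lam)            # photon (shot) noise

trials = np.array([
    (stepping(0.0) - stepping(np.pi)) / (2 * b) for _ in range(5000)
])
print(f"true phi {phi_true:.4f}, estimate {trials.mean():.4f} "
      f"+- {trials.std():.4f} (unbiased, no arctan, so no phase wrapping)")
```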
Qian, Cheng; Fan, Jiajie; Fang, Jiayi; Yu, Chaohua; Ren, Yi; Fan, Xuejun; Zhang, Guoqi
2017-10-16
To address the problem of very long test times in reliability qualification of light-emitting diode (LED) products, the accelerated degradation test with thermal overstress in a proper range is regarded as a promising and effective approach. For a comprehensive survey of the application of the step-stress accelerated degradation test (SSADT) in LEDs, the thermal, photometric, and colorimetric properties of two types of LED chip scale packages (CSPs), i.e., 4000 K and 5000 K samples, each of which was driven at two different current levels (120 mA and 350 mA, respectively), were investigated under an increasing temperature from 55 °C to 150 °C, and a systematic study of the driving current effect on the SSADT results is also reported in this paper. During SSADT, the junction temperatures of the test samples have a positive relationship with their driving currents. However, the temperature-voltage curve, which represents the thermal resistance property of the test samples, does not show significant variance as long as the driving current is no more than the sample's rated current. But when a test sample is tested under an overdrive current, its temperature-voltage curve is observed to shift markedly to the left compared to that before SSADT. A similar overdrive-current-driven degradation scenario is also found in the attenuation of the Spectral Power Distributions (SPDs) of the test samples. As used in reliability qualification, SSADT provides explicit scenes of color shift and correlated color temperature (CCT) depreciation of the test samples, but not of lumen maintenance depreciation. It is also proved that the rates of the color shift and CCT depreciation failures can be effectively accelerated with an increase of the driving current, for instance, from 120 mA to 350 mA. For these reasons, SSADT is considered a suitable accelerated test method for qualifying these two failure modes of LED CSPs.
NASA Technical Reports Server (NTRS)
Mumaw, Susan J. (Inventor); Evers, Jeffrey (Inventor); Craig, Calvin L., Jr. (Inventor); Walker, Stuart D. (Inventor)
2001-01-01
The invention is a circuit and method of limiting the charging current and voltage from a power supply network applied to an individual cell of a plurality of cells making up a battery being charged in series. It is particularly designed for use with batteries that can be damaged by overcharging, such as lithium-ion type batteries. In detail, the method includes the following steps: 1) sensing the actual voltage level of the individual cell; 2) comparing the actual voltage level of the individual cell with a reference value and providing an error signal representative thereof; and 3) by-passing the charging current around the individual cell as necessary to keep the individual cell voltage level generally equal to a specific voltage level while continuing to charge the remaining cells. Preferably this is accomplished by by-passing the charging current around the individual cell if the actual voltage level is above the specific voltage level and allowing the charging current to the individual cell if the actual voltage level is equal to or less than the specific voltage level. In the step of by-passing the charging current, the by-passed current is transferred at a proper voltage level to the power supply. In the by-pass circuit, a voltage comparison circuit is used to compare the actual voltage level of the individual cell with a reference value and to provide an error signal representative thereof. A third circuit, designed to be responsive to the error signal, is provided for maintaining the individual cell voltage level generally equal to the specific voltage level. Circuitry is provided in the third circuit for by-passing charging current around the individual cell if the actual voltage level is above the specific voltage level; this circuitry transfers the excess charging current to the power supply network and allows charging of the individual cell if the actual voltage level is equal to or less than the specific voltage level.
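A behavioral model of the per-cell bypass logic described above (not the patent's circuit): compare each cell with a reference and shunt the charging current around any cell at or above the limit, so series charging continues for the rest:

```python
# Toy simulation of series charging with per-cell bypass. All values
# are illustrative; V_REF is a typical Li-ion per-cell limit and
# K_CELL is a made-up charge-to-voltage gain.
V_REF = 4.2          # per-cell limit, volts
K_CELL = 0.001       # toy volts gained per mA of accepted charge per step

cells = [4.10, 4.19, 4.05]           # initial cell voltages
i_charge = 100.0                     # series charging current, mA

for step in range(25):
    for n, v in enumerate(cells):
        error = v - V_REF            # comparator output (error signal)
        if error >= 0.0:
            continue                 # bypass: current routed around cell n
        cells[n] = min(V_REF, v + K_CELL * i_charge)  # cell accepts charge
    if all(v >= V_REF for v in cells):
        break
print(f"done after {step + 1} steps: {[round(v, 3) for v in cells]}")
```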
Method for surface treatment of a cadmium zinc telluride crystal
James, R.; Burger, A.; Chen, K.T.; Chang, H.
1999-08-03
A method for treating the surface of a CdZnTe (CZT) crystal is disclosed that reduces surface roughness (increases surface planarity) and provides an oxide coating to reduce surface leakage currents and thereby improve resolution. A two-step process is disclosed: etching the surface of a CZT crystal with a solution of lactic acid and bromine in ethylene glycol, following the conventional bromine/methanol etch treatment, and, after attachment of electrical contacts, oxidizing the CZT crystal surface.
Cu-Ni-Fe anodes having improved microstructure
Bergsma, S. Craig; Brown, Craig W.
2004-04-20
A method of producing aluminum in a low-temperature electrolytic cell containing alumina dissolved in an electrolyte. The method comprises the steps of providing a molten electrolyte having alumina dissolved therein in an electrolytic cell. A non-consumable anode and a cathode are disposed in the electrolyte, the anode comprised of a Cu-Ni-Fe alloy having a single metallurgical phase. Electric current is passed from the anode through the electrolyte to the cathode, thereby depositing aluminum on the cathode, and molten aluminum is collected from the cathode.
NASA Astrophysics Data System (ADS)
Sheremet, V.; Genç, M.; Gheshlaghi, N.; Elçi, M.; Sheremet, N.; Aydınlı, A.; Altuntaş, I.; Ding, K.; Avrutin, V.; Özgür, Ü.; Morkoç, H.
2018-01-01
Enhancement of the performance of InGaN/GaN-based light-emitting diodes with step-graded electron injectors through a two-step passivation is reported. Perimeter passivation of LED dies with SiO2 immediately following the ICP mesa etch, in addition to the conventional Si3N4 dielectric surface passivation, leads to a decrease in the reverse-bias leakage current by a factor of two as well as a decrease in the shunt current under forward bias by an order of magnitude. Mitigation of the leakage currents owing to the two-step passivation leads to a significant increase in the radiant intensity of the LEDs, by more than a factor of two compared with the conventional single-step surface passivation. Further, the micro-dome-patterned surface of the Si3N4 passivation layer allows enhanced light extraction from the LEDs.
Method of manufacturing carbon nanotubes
NASA Technical Reports Server (NTRS)
Benavides, Jeanette M. (Inventor); Leidecker, Henning W. (Inventor); Frazier, Jeffrey (Inventor)
2004-01-01
A process for manufacturing carbon nanotubes, including a step of inducing electrical current through a carbon anode and a carbon cathode under conditions effective to produce the carbon nanotubes, wherein the carbon cathode is larger than the carbon anode. Preferably, a welder is used to induce the electrical current via an arc welding process. Preferably, an exhaust hood is placed on the anode, and the process does not require a closed or pressurized chamber. The process provides high-quality, single-walled carbon nanotubes, while eliminating the need for a metal catalyst.
Automated brush plating process for solid oxide fuel cells
Long, Jeffrey William
2003-01-01
A method of depositing a metal coating (28) on the interconnect (26) of a tubular, hollow fuel cell (10) comprises the steps of providing the fuel cell (10) having an exposed interconnect surface (26); contacting the inside of the fuel cell (10) with a cathode (45) without the use of any liquid materials; passing electrical current through a contacting applicator (46) which contains a metal electrolyte solution; passing the current from the applicator (46) to the cathode (45); and contacting the interconnect (26) with the applicator (46), thereby coating all of the exposed interconnect surface.
Hill, Jacqueline J; Kuyken, Willem; Richards, David A
2014-11-20
Stepped care is recommended and implemented as a means to organise depression treatment. Compared with alternative systems, it is assumed to achieve equivalent clinical effects and greater efficiency. However, no trials have examined these assumptions. A fully powered trial of stepped care compared with intensive psychological therapy is required but a number of methodological and procedural uncertainties associated with the conduct of a large trial need to be addressed first. STEPS (Developing stepped care treatment for depression) is a mixed methods study to address uncertainties associated with a large-scale evaluation of stepped care compared with high-intensity psychological therapy alone for the treatment of depression. We will conduct a pilot randomised controlled trial with an embedded process study. Quantitative trial data on recruitment, retention and the pathway of patients through treatment will be used to assess feasibility. Outcome data on the effects of stepped care compared with high-intensity therapy alone will inform a sample size calculation for a definitive trial. Qualitative interviews will be undertaken to explore what people think of our trial methods and procedures and the stepped care intervention. A minimum of 60 patients with Major Depressive Disorder will be recruited from an Improving Access to Psychological Therapies service and randomly allocated to receive stepped care or intensive psychological therapy alone. All treatments will be delivered at clinic facilities within the University of Exeter. Quantitative patient-related data on depressive symptoms, worry and anxiety and quality of life will be collected at baseline and 6 months. The pilot trial and interviews will be undertaken concurrently. Quantitative and qualitative data will be analysed separately and then integrated. The outcomes of this study will inform the design of a fully powered randomised controlled trial to evaluate the effectiveness and efficiency of stepped care. Qualitative data on stepped care will be of immediate interest to patients, clinicians, service managers, policy makers and guideline developers. A more informed understanding of the feasibility of a large trial will be obtained than would be possible from a purely quantitative (or qualitative) design. Current Controlled Trials ISRCTN66346646 registered on 2 July 2014.
A Leadership and Managerial Competency Framework for Public Hospital Managers in Vietnam
Van Tuong, Phan; Duc Thanh, Nguyen
2017-01-01
Objective The aim of this paper was to develop a leadership and managerial competency framework for public hospital managers in Vietnam. Methods This mixed-method study used a four-step approach. The first step was a position description content analysis to identify the tasks hospital managers are required to carry out. The resulting data were used to identify the leadership and managerial competency factors and items in the second step. In the third step, a workshop was organized to reach consensus about the validity of these competency factors and items. Finally, a quantitative survey was conducted across a sample of 891 hospital managers working in selected hospitals in seven geographical regions of Vietnam to validate the competency scales using exploratory factor analysis (EFA) and Cronbach's alpha. Results The study identified a number of tasks required of public hospital managers and confirmed the competencies for implementing these tasks effectively. Four dimensions with 14 components and 81 items of leadership and managerial competencies were identified. These components explained 83.8% of the variance, and Cronbach's alpha was at a good level of 0.9. Conclusions These competencies, required of public hospital managers, provide guidance for the further development of competency-based training for the current management taskforce and for preparing future hospital managers. PMID:29546227
NASA Astrophysics Data System (ADS)
Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian
Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-prone amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm accounting for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.
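To illustrate what a correlation-aware base caller can look like, here is a toy Viterbi-style decoder in which each current reading depends on a nucleotide triplet (left neighbor, base, right neighbor). The triplet current table and Gaussian noise model are invented for illustration; the paper's levels come from first-principles transport calculations.

```python
import itertools
import random

BASES = "ACGT"
random.seed(1)
# Hypothetical mean tunneling current (arbitrary units) per triplet context.
LEVEL = {t: random.uniform(0.0, 1.0) for t in itertools.product(BASES, repeat=3)}

def decode(readings, sigma=0.05):
    """Viterbi pass over adjacent-base-pair states; the emission for reading i
    couples states (b[i-1], b[i]) and (b[i], b[i+1]) through the triplet table."""
    pairs = [(a, b) for a in BASES for b in BASES]
    dp = {s: 0.0 for s in pairs}        # best log-score ending in pair s
    back = []
    for r in readings[1:-1]:            # boundary readings skipped for brevity
        ndp, ptr = {}, {}
        for b1, b2 in pairs:
            best, arg = max(
                (dp[(b0, b1)] - (r - LEVEL[(b0, b1, b2)]) ** 2 / (2 * sigma**2), b0)
                for b0 in BASES)
            ndp[(b1, b2)], ptr[(b1, b2)] = best, arg
        dp = ndp
        back.append(ptr)
    b1, b2 = max(dp, key=dp.get)        # trace back the best path
    seq = [b2, b1]
    for ptr in reversed(back):
        b0 = ptr[(b1, b2)]
        seq.append(b0)
        b1, b2 = b0, b1
    return "".join(reversed(seq))

truth = "GATTACAGATTACA"
inner = [LEVEL[(truth[i - 1], truth[i], truth[i + 1])] for i in range(1, len(truth) - 1)]
print(decode([0.0] + inner + [0.0]))    # recovers GATTACAGATTACA (noise-free)
```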
Formation of Cyclic Steps due to the Surge-type Turbidity Currents in a Flume Experiment
NASA Astrophysics Data System (ADS)
Yokokawa, M.
2016-12-01
Supercritical turbidity currents often form crescentic step-like wavy structures, which have been found in submarine canyons and deltaic environments. Field observations of turbidity currents and seabed topography on the Squamish delta in British Columbia, Canada, revealed cyclic steps formed by surge-type turbidity currents (e.g., Hughes Clarke et al., 2012a; 2012b; 2014). The high-density portion of the flow, which affects the sea-floor morphology, lasted only 30-60 seconds. The question arises whether paleo-flow conditions can be reconstructed from the morphologic features of these steps. The answer is not yet known, because there have been no experiments on the formative conditions of cyclic steps produced by surge-type turbidity currents. Here we report preliminary experiments on the formation of cyclic steps by multiple surge-type density currents and compare the morphology of the steps with that of the Squamish delta. First, we measured the wavelength and wave height of each step from elevation profiles of each channel of the Squamish delta and calculated the wave steepness. The wave steepness of active steps ranges from about 0.05 to 0.15, which is relatively large compared with that of other sediment waves, and in general the steepness is larger in the proximal region. The experiments were performed at the Osaka Institute of Technology. A flume, 7.0 m long, 0.3 m deep, and 2 cm wide, was suspended in a larger tank, 7.6 m long, 1.2 m deep, and 0.3 m wide, filled with water. The inner flume was tilted at 7 degrees. A mixture of salt water (1.17 g/cm3) and plastic particles (1.5 g/cm3, 0.1-0.18 mm in diameter) at a weight ratio of 10:1 was poured into the upstream end of the inner flume from a head tank for 5 seconds. The discharge of the mixture was 240 mL/s; thus 1200 mL of mixture was released into the inner flume per surge. We made 130 surges. As a result, four steps ultimately formed and migrated in the upstream direction. The wave steepness of the steps increased with the number of runs and approached the values observed at Squamish. We also ran an experiment with a continuous turbidity current, under the same conditions as the surge-type experiment except for the run duration of 990 seconds, but it did not form cyclic steps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-02-01
To make the coal-to-hydrogen route economically attractive, improvements are being sought in each step of the process: coal gasification, the water-carbon monoxide shift reaction, and hydrogen separation. This report addresses the use of membranes in the hydrogen separation step. The separation of hydrogen from synthesis gas is a major cost element in the manufacture of hydrogen from coal. Separation by membranes is an attractive, new, and still largely unexplored approach to the problem. Membrane processes are inherently simple and efficient and often have lower capital and operating costs than conventional processes. In this report, current and future trends in hydrogen production and use are first summarized. Methods of producing hydrogen from coal are then discussed, with particular emphasis on the Texaco entrained flow gasifier and on current methods of separating hydrogen from this gas stream. The potential for membrane separations in the process is then examined. In particular, the use of membranes for H2/CO2, H2/CO, and H2/N2 separations is discussed.
Comparison of 1-step and 2-step methods of fitting microbiological models.
Jewell, Keith
2012-11-15
Previous conclusions that a 1-step fitting method gives more precise coefficients than the traditional 2-step method are confirmed by application to three different data sets. It is also shown that, in comparison to 2-step fits, the 1-step method gives better fits to the data (often substantially) with directly interpretable regression diagnostics and standard errors. The improvement is greatest at extremes of environmental conditions and it is shown that 1-step fits can indicate inappropriate functional forms when 2-step fits do not. 1-step fits are better at estimating primary parameters (e.g. lag, growth rate) as well as concentrations, and are much more data efficient, allowing the construction of more robust models on smaller data sets. The 1-step method can be straightforwardly applied to any data set for which the 2-step method can be used and additionally to some data sets where the 2-step method fails. A 2-step approach is appropriate for visual assessment in the early stages of model development, and may be a convenient way to generate starting values for a 1-step fit, but the 1-step approach should be used for any quantitative assessment. Copyright © 2012 Elsevier B.V. All rights reserved.
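The 1-step idea can be sketched in a few lines: embed the secondary model (here a Ratkowsky-type square-root model for growth rate versus temperature) inside the primary model (here simple log-linear growth) and fit all raw observations in a single nonlinear regression, which yields directly interpretable standard errors. The models and numbers below are illustrative assumptions, not the data sets analysed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_step_model(X, y0, b, t_min):
    t, temp = X
    mu = (b * (temp - t_min)) ** 2  # secondary model: growth rate vs temperature
    return y0 + mu * t              # primary model: log-linear growth phase

rng = np.random.default_rng(0)
temps = np.repeat([10.0, 15.0, 20.0, 25.0], 6)   # four environmental conditions
times = np.tile(np.linspace(0.0, 10.0, 6), 4)    # six sampling times each
true = one_step_model((times, temps), 2.0, 0.05, 2.0)
obs = true + rng.normal(0.0, 0.1, true.size)     # noisy log10 counts

# Single regression over ALL raw data points (the 1-step fit).
popt, pcov = curve_fit(one_step_model, (times, temps), obs, p0=(1.0, 0.04, 0.0))
print(popt)                     # ~ [2.0, 0.05, 2.0] (b recovered up to sign)
print(np.sqrt(np.diag(pcov)))   # standard errors come directly from one fit
```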
Synthetic Self-Healing Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bello, Mollie
Given enough time, pressure, temperature fluctuation, and stress, any material will fail. Synthesized materials currently make up a large part of our everyday lives and are used in a number of important applications such as space travel, underwater devices, precise instrumentation, transportation, and infrastructure. Structural failure of these materials can lead to expensive and dangerous consequences. In an attempt to prolong the life spans of specific materials and reduce the effort put into repairing them, biologically inspired self-healing systems have been extensively investigated. The current review explores recent advances in three methods of synthetic self-healing: capsule-based, vascular, and intrinsic. Ideally, self-healing materials require no human intervention to promote healing, are capable of surviving all the steps of polymer processing, and heal the same location repeatedly. Only the vascular method holds up to all of these idealities.
Unity PF current-source rectifier based on dynamic trilogic PWM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao Wang; Boon-Teck Ooi
1993-07-01
One remaining step in perfecting the stand-alone, unity-power-factor, regulated current-source PWM rectifier is to reduce cost by moving from the 12-valve converter (consisting of three single-phase full bridges that operate with two-level or bilogic PWM) to the six-valve bridge. However, the six-valve topology requires a three-level or trilogic PWM strategy that can handle feedback signals. This feature was not available until now. The paper describes a general method of translating three-phase bilogic PWM signals into three-phase trilogic PWM signals. The method of translation retains the characteristics of the bilogic PWM, including the frequency bandwidth. Experiments show that the trilogic PWM signals produced by the method can not only handle stabilizing feedback signals but also signals for active filtering.
Apparatus and method for recharging a string of avalanche transistors within a pulse generator
Fulkerson, E. Stephen
2000-01-01
An apparatus and method for recharging a string of avalanche transistors within a pulse generator is disclosed. A plurality of amplification stages are connected in series. Each stage includes an avalanche transistor and a capacitor. A trigger signal causes the apparatus to generate a very high voltage pulse of very brief duration which discharges the capacitors. Charge resistors inject current into the string of avalanche transistors at various points, recharging the capacitors. The method of the present invention includes the steps of supplying current to charge resistors from a power supply; using the charge resistors to charge capacitors connected to a set of serially connected avalanche transistors; triggering the avalanche transistors; generating a high-voltage pulse from the charge stored in the capacitors; and recharging the capacitors through the charge resistors.
Developments in the formulation and delivery of spray dried vaccines.
Kanojia, Gaurav; Have, Rimko Ten; Soema, Peter C; Frijlink, Henderik; Amorij, Jean-Pierre; Kersten, Gideon
2017-10-03
Spray drying is a promising method for the stabilization of vaccines, which are usually formulated as liquids. Typically, vaccine stability is improved by spray drying in the presence of a range of excipients. Unlike freeze drying, there is no freezing step involved, so the damage related to this step is avoided. The advantage of spray drying lies in its ability to engineer particles to desired requirements, which can be exploited in various vaccine delivery methods and routes. Although several spray-dried vaccines have shown encouraging preclinical results, the number of vaccines that have been tested in clinical trials is limited, indicating a relatively new area of vaccine stabilization and delivery. This article reviews the current status of spray-dried vaccine formulations and delivery methods. In particular, it discusses the impact of process stresses on vaccine integrity, the application of excipients in spray drying of vaccines, process and formulation optimization strategies based on Design of Experiments approaches, as well as opportunities for future application of spray-dried vaccine powders for vaccine delivery.
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
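For intuition about parallelism in the temporal dimension, here is a minimal parareal-style two-level time-grid sketch for the scalar model problem u' = λu; parareal is a simple relative of multigrid-in-time, and this sketch is not the solver or the software studied in the paper.

```python
import numpy as np

lam, T, n_coarse, m = -1.0, 2.0, 10, 20   # m fine steps per coarse interval
dT = T / n_coarse
dt = dT / m

def fine(u):
    """Accurate propagator over one coarse interval (RK2 midpoint steps)."""
    for _ in range(m):
        u = u + dt * lam * (u + 0.5 * dt * lam * u)
    return u

def coarse(u):
    """Cheap propagator: a single forward-Euler step over the interval."""
    return u + dT * lam * u

u = np.zeros(n_coarse + 1)
u[0] = 1.0
for n in range(n_coarse):                 # serial coarse prediction
    u[n + 1] = coarse(u[n])

for _ in range(5):                        # parareal correction iterations
    f = [fine(u[n]) for n in range(n_coarse)]  # parallel-in-time in practice
    new = u.copy()
    for n in range(n_coarse):
        new[n + 1] = coarse(new[n]) + f[n] - coarse(u[n])
    u = new

print(u[-1], np.exp(lam * T))             # converges to the exact solution
```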
Lin, Da; Hong, Ping; Zhang, Siheng; Xu, Weize; Jamal, Muhammad; Yan, Keji; Lei, Yingying; Li, Liang; Ruan, Yijun; Fu, Zhen F; Li, Guoliang; Cao, Gang
2018-05-01
Chromosome conformation capture (3C) technologies can be used to investigate 3D genomic structures. However, high background noise, high costs, and a lack of straightforward noise evaluation in current methods impede the advancement of 3D genomic research. Here we developed a simple digestion-ligation-only Hi-C (DLO Hi-C) technology to explore the 3D landscape of the genome. This method requires only two rounds of digestion and ligation, without the need for biotin labeling and pulldown. Non-ligated DNA was efficiently removed in a cost-effective step by purifying specific linker-ligated DNA fragments. Notably, random ligation could be quickly evaluated in an early quality-control step before sequencing. Moreover, an in situ version of DLO Hi-C using a four-cutter restriction enzyme has been developed. We applied DLO Hi-C to delineate the genomic architecture of THP-1 and K562 cells and uncovered chromosomal translocations. This technology may facilitate investigation of genomic organization, gene regulation, and (meta)genome assembly.
Sequence-Based Prediction of RNA-Binding Residues in Proteins.
Walia, Rasna R; El-Manzalawy, Yasser; Honavar, Vasant G; Dobbs, Drena
2017-01-01
Identifying individual residues in the interfaces of protein-RNA complexes is important for understanding the molecular determinants of protein-RNA recognition and has many potential applications. Recent technical advances have led to several high-throughput experimental methods for identifying partners in protein-RNA complexes, but determining RNA-binding residues in proteins is still expensive and time-consuming. This chapter focuses on available computational methods for identifying which amino acids in an RNA-binding protein participate directly in contacting RNA. Step-by-step protocols for using three different web-based servers to predict RNA-binding residues are described. In addition, currently available web servers and software tools for predicting RNA-binding sites, as well as databases that contain valuable information about known protein-RNA complexes, RNA-binding motifs in proteins, and protein-binding recognition sites in RNA are provided. We emphasize sequence-based methods that can reliably identify interfacial residues without the requirement for structural information regarding either the RNA-binding protein or its RNA partner.
NASA Astrophysics Data System (ADS)
Jiang, Jiamin; Younis, Rami M.
2017-10-01
In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes. This may lead to cycles or divergence in the Newton iterations. Recently proposed methods address improved smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme called C1-PPU that, in addition to improving smoothness with respect to saturations and phase potentials, also improves the level of scalar nonlinearity and accuracy. C1-PPU involves a novel use of the flux limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current/counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme, and derive saturation and pressure estimates, which are used to prove the solution existence. Several numerical examples for two- and three-phase flows in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.
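The smoothness issue can be seen with a single-interface toy model: standard PPU selects the mobility from the upwind cell by the sign of the phase-potential difference, which kinks the flux at zero, while blending the two candidate mobilities with a smoothed switch restores differentiability. This is only a cartoon of the idea, not the C1-PPU construction itself.

```python
import numpy as np

def mobility(s):
    """Quadratic relative permeability with unit viscosity (assumed)."""
    return s ** 2

def flux_ppu(s_left, s_right, dphi):
    """Standard PPU: mobility from the upwind cell; kinked at dphi = 0."""
    m = mobility(s_left) if dphi >= 0 else mobility(s_right)
    return m * dphi

def flux_smoothed(s_left, s_right, dphi, eps=0.1):
    """Blend the two candidate mobilities with a smooth upwinding weight."""
    w = 0.5 * (1.0 + np.tanh(dphi / eps))
    return (w * mobility(s_left) + (1.0 - w) * mobility(s_right)) * dphi

dphi = np.linspace(-1.0, 1.0, 9)
print([round(flux_ppu(0.8, 0.2, d), 3) for d in dphi])   # slope jumps at 0
print(np.round(flux_smoothed(0.8, 0.2, dphi), 3))        # smooth transition
```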
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-01-01
The average topological overlap of two graphs at two consecutive time steps measures the amount of change in the edge configuration between the two snapshots. This value should be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, the methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaptation of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaptation uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of illustrative example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
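A small sketch, written from the description above (the exact normalisation is our reading of it): the per-node topological overlap between consecutive snapshots is averaged over the maximal number of active nodes, i.e. nodes with at least one edge, instead of over all nodes.

```python
import numpy as np

def temporal_correlation(a_t, a_t1):
    """Adapted temporal correlation of two adjacency-matrix snapshots."""
    num = (a_t * a_t1).sum(axis=1)                   # edges kept per node
    den = np.sqrt(a_t.sum(axis=1) * a_t1.sum(axis=1))
    overlap = np.divide(num, den, out=np.zeros_like(den, dtype=float),
                        where=den > 0)               # per-node overlap
    active_t = (a_t.sum(axis=1) > 0).sum()
    active_t1 = (a_t1.sum(axis=1) > 0).sum()
    n_active = max(active_t, active_t1)              # maximal active nodes
    return overlap.sum() / n_active if n_active else 0.0

a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])      # one edge removed
print(temporal_correlation(a, a))   # 1.0: identical snapshots
print(temporal_correlation(a, b))   # < 1: configuration changed
```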
Control Circuit For Two Stepping Motors
NASA Technical Reports Server (NTRS)
Ratliff, Roger; Rehmann, Kenneth; Backus, Charles
1990-01-01
Control circuit operates two independent stepping motors, one at a time, and provides the following operating features: after the selected motor is stepped to the chosen position, power is turned off to reduce dissipation. Two up/down counters remember at which of eight steps each motor is set; for the selected motor, the step is indicated by illumination of one of eight light-emitting diodes (LEDs) in a ring. The selected motor can be advanced one step at a time or repeatedly at a controlled rate. Motor current (30 mA at 90-degree positions, 60 mA at 45-degree positions) is indicated by the high or low intensity of the LED that serves as a motor-current monitor. A power-on reset feature provides trouble-free starts. To maintain synchronism between the control circuit and the motors, stepping of the counters is inhibited when motor power is turned off.
The current matrix elements from HAL QCD method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-03-01
The HAL QCD method constructs a potential (the HAL QCD potential) that reproduces the NN scattering phase shift faithfully to QCD. The HAL QCD potential is obtained from QCD by eliminating the degrees of freedom of quarks and gluons and leaving only two particular hadrons. Therefore, in the effective quantum mechanics of two nucleons defined by the HAL QCD potential, the conserved current consists not only of the nucleon current but also of an extra current originating from the potential (a two-body current). Although the form of the two-body current is closely related to the potential, it is not straightforward to extract the former from the latter. In this work, we derive the current matrix element formula in the quantum mechanics defined by the HAL QCD potential. As a first step, we focus on the non-relativistic case. To give an explicit example, we consider a second-quantized non-relativistic two-channel coupling model, which we refer to as the original model. From the original model, the HAL QCD potential for the open channel is constructed by eliminating the closed channel in the elastic two-particle scattering region. The current matrix element formula is derived by demanding that the effective quantum mechanics defined by the HAL QCD potential respond to an external field in the same way as the original two-channel coupling model.
Effects of Imperfect Dynamic Clamp: Computational and Experimental Results
Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.
2008-01-01
In the dynamic clamp technique, a typically nonlinear feedback system delivers electrical current to an excitable cell that represents the actions of "virtual" ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but their performance can be degraded by imperfect real-time behavior. Here we systematically evaluate the performance requirements for artificial dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999
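The flavor of these experiments can be reproduced with a minimal simulated dynamic-clamp loop for a virtual leak conductance, exposing the controller time step and an integer-step output latency. The passive membrane model and all parameter values are illustrative assumptions, not the sodium/potassium knock-in protocol of the study.

```python
import numpy as np

C, g_leak, e_leak = 100e-12, 5e-9, -70e-3   # farads, siemens, volts
g_virt, e_virt = 2e-9, 0.0                  # virtual channel to "knock in"

def run(dt=0.1e-3, latency_steps=0, t_end=0.2):
    n = int(t_end / dt)
    v = np.full(n, -70e-3)
    queue = [0.0] * latency_steps           # pending command currents
    for k in range(1, n):
        # Dynamic clamp: read V, compute virtual current, inject after latency.
        queue.append(-g_virt * (v[k - 1] - e_virt))
        i_cmd = queue.pop(0)
        dv = (-g_leak * (v[k - 1] - e_leak) + i_cmd) / C
        v[k] = v[k - 1] + dt * dv           # forward-Euler membrane update
    return v

for lat in (0, 1, 5):                       # latency in controller steps
    print(lat, round(run(latency_steps=lat)[-1] * 1e3, 2), "mV")
# With these parameters the cell settles near -50 mV; larger latencies and
# time steps distort the transient toward that steady state.
```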
Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method
NASA Astrophysics Data System (ADS)
Taitano, William; Knoll, Dana; Chacon, Luis
2009-11-01
The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we advance the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
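The heart of JFNK is that the Krylov solver needs only Jacobian-vector products, which can be approximated by differencing the nonlinear residual, J(u)v ≈ [F(u + εv) − F(u)]/ε, so the Jacobian is never formed or stored. The generic sketch below (SciPy's newton_krylov on a toy nonlinear diffusion residual) illustrates the mechanics only; it is not the plasma kinetic solver described above.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Toy nonlinear system: discrete 1D diffusion with a cubic source."""
    r = np.empty_like(u)
    r[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:] + 0.1 * u[1:-1] ** 3
    r[0] = u[0] - 1.0   # Dirichlet boundary condition, u = 1
    r[-1] = u[-1]       # Dirichlet boundary condition, u = 0
    return r

u0 = np.linspace(1.0, 0.0, 50)  # initial guess satisfying the boundaries
# Jacobian-free: GMRES sees J(u)v only via finite differences of residual().
sol = newton_krylov(residual, u0, method="gmres", f_tol=1e-8)
print(abs(residual(sol)).max())  # converged nonlinear residual
```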
NASA Astrophysics Data System (ADS)
Kim, Youngsun
2017-05-01
The most common structure used for current transformers (CTs) consists of secondary windings around a ferromagnetic core through which the conductor carrying the primary current to be measured passes. A CT used as a surge protection device (SPD) may experience large current inrushes, such as surges. However, when a large current flows into the primary winding, measuring the magnitude of the current is difficult because the ferromagnetic core becomes magnetically saturated. Several approaches to reduce the saturation effect are described in the literature. The Rogowski coil is representative of the devices that measure large currents: it is an electrical device that measures alternating current (AC) or high-frequency current. However, such devices are very expensive in application. In addition, the volume of a CT must be increased to measure sufficiently large currents, and where the installation space is too small, other methods must be used. To solve this problem, it is necessary to analyze the magnetic field and electromotive force (EMF) characteristics when designing a CT. Thus, we propose an analysis method for the CT under an inrush current using the time-domain finite element method (TDFEM). The input source current of a surge waveform is expanded in a Fourier series to obtain its instantaneous value. An FEM model of the device is derived in a two-dimensional system and coupled with EMF circuits. The time-derivative term in the differential equation is solved at each time step by the finite difference method. It is concluded that the proposed algorithm is useful for analyzing CT characteristics, including the field distribution. Consequently, the proposed algorithm yields a reference for obtaining the effects of design parameters and magnetic materials for special shapes and sizes before the CT is designed and manufactured.
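The waveform treatment mentioned above can be sketched numerically: a standard 8/20 μs double-exponential surge (all parameters assumed for illustration) is expanded in a truncated Fourier series over an analysis window, so that each FEM time step can evaluate an instantaneous source value.

```python
import numpy as np

T = 200e-6                                   # analysis window, s (assumed)
t = np.linspace(0.0, T, 4000, endpoint=False)
# Assumed 8/20 us double-exponential surge shape, unit amplitude.
i_surge = np.exp(-t / 28e-6) - np.exp(-t / 4e-6)

n_harm = 50
a0 = i_surge.mean()                          # DC term of the series
k = np.arange(1, n_harm + 1)[:, None]
ak = 2.0 * (i_surge * np.cos(2 * np.pi * k * t / T)).mean(axis=1)
bk = 2.0 * (i_surge * np.sin(2 * np.pi * k * t / T)).mean(axis=1)

def i_instant(tau):
    """Instantaneous source current at time tau from the truncated series."""
    w = 2 * np.pi * np.arange(1, n_harm + 1) / T
    return a0 + np.sum(ak * np.cos(w * tau) + bk * np.sin(w * tau))

print(i_instant(8e-6), np.interp(8e-6, t, i_surge))  # series vs waveform
```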
Cross-current leaching of indium from end-of-life LCD panels.
Rocchetti, Laura; Amato, Alessia; Fonti, Viviana; Ubaldini, Stefano; De Michelis, Ida; Kopacek, Bernd; Vegliò, Francesco; Beolchini, Francesca
2015-08-01
Indium is a critical element mainly produced as a by-product of zinc mining, and it is largely used in the production process of liquid crystal display (LCD) panels. End-of-life LCDs represent a possible source of indium in the field of urban mining. In the present paper, we apply, for the first time, cross-current leaching to mobilize indium from end-of-life LCD panels. We carried out a series of treatments to leach indium. The best leaching conditions for indium were 2 M sulfuric acid at 80 °C for 10 min, which allowed us to mobilize indium completely. Given the low indium content of end-of-life LCDs, about 100 ppm, a single leaching step is not cost-effective. We tested 6 steps of cross-current leaching: in the first step indium leaching was complete, in the second step it was in the range of 85-90%, and by the sixth step it was about 50-55%. The indium concentration in the leachate was about 35 mg/L after the first leaching step, almost 2-fold after the second step, and about 3-fold after the fifth step. We then hypothesized scaling up the cross-current leaching process to 10 steps, followed by cementation with zinc to recover indium. In this simulation, the indium recovery process was advantageous from both an economic and an environmental point of view. Indeed, cross-current leaching allowed us to concentrate indium, save reagents, and reduce CO2 emissions (with 10 steps we assessed that the emission of about 90 kg CO2-eq. could be avoided) thanks to the recovery of indium. This new strategy represents a useful approach for the secondary production of indium from waste LCD panels. Copyright © 2015 Elsevier Ltd. All rights reserved.
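A back-of-the-envelope sketch of the accumulation effect: the same liquor is reused on fresh batches of panel powder, so indium builds up in solution while the per-step extraction declines. The solid-to-liquid ratio and the linear efficiency decay are crude assumptions chosen to roughly match the figures reported above (35 mg/L and complete leaching at step 1, ~50-55% recovery by step 6); this is not the paper's model.

```python
IN_CONTENT_G_PER_KG = 0.1   # ~100 ppm indium in LCD powder
SOLID_TO_LIQUID = 0.35      # kg of powder per litre of liquor (assumed)

conc_mg_l = 0.0             # indium accumulated in the recycled liquor
for step in range(1, 11):   # hypothesized 10-step scale-up
    efficiency = max(1.0 - 0.095 * (step - 1), 0.0)  # assumed linear decay
    conc_mg_l += IN_CONTENT_G_PER_KG * SOLID_TO_LIQUID * efficiency * 1000.0
    print(f"step {step:2d}: recovery {efficiency:5.1%}, "
          f"In in liquor {conc_mg_l:6.1f} mg/L")
```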
Vázquez-Rowe, Ian; Iribarren, Diego
2015-01-01
Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.
Method of electrode fabrication and an electrode for metal chloride battery
Bloom, I.D.; Nelson, P.A.; Vissers, D.R.
1993-03-16
A method of fabricating an electrode for use in a metal chloride battery and an electrode are provided. The electrode has relatively larger and more uniform pores than those found in typical electrodes. The fabrication method includes the steps of mixing sodium chloride particles selected from a predetermined size range with metal particles selected from a predetermined size range, and then rigidifying the mixture. The electrode exhibits lower resistivity values, of approximately 0.5 Ω cm², than the values of approximately 1.0-1.5 Ω cm² exhibited by currently available electrodes.
Quantum lithography beyond the diffraction limit via Rabi-oscillations
NASA Astrophysics Data System (ADS)
Liao, Zeyang; Al-Amri, Mohammad; Zubairy, M. Suhail
2011-03-01
We propose a quantum optical method to perform sub-wavelength lithography. Our method is similar to traditional lithography but adds a critical step before dissociating the chemical bonds of the photoresist. The subwavelength pattern is achieved by inducing multiple Rabi oscillations between the two atomic levels. The proposed method requires neither multiphoton absorption nor entanglement of photons, and it is expected to be realizable using current technology. This work is supported by a grant from the Qatar National Research Fund (QNRF) under the NPRP project and a grant from the King Abdulaziz City for Science and Technology (KACST).
Ablative Thermal Response Analysis Using the Finite Element Method
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2009-01-01
A review of the classic techniques used to solve ablative thermal response problems is presented. The advantages and disadvantages of both the finite element and finite difference methods are described. As a first step in developing a three-dimensional finite-element-based ablative thermal response capability, a one-dimensional computer tool has been developed. The finite element method is used to discretize the governing differential equations, and Galerkin's method of weighted residuals is used to derive the element equations. A code-to-code comparison between the current 1-D tool and the 1-D Fully Implicit Ablation and Thermal Response Program (FIAT) has been performed.
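A bare-bones illustration of the discretization approach: linear finite elements with Galerkin's method of weighted residuals, here applied to steady heat conduction −k u″ = q with fixed ends. A real ablation solver adds temperature-dependent properties, pyrolysis, and surface recession; this sketch shows only the element-assembly skeleton.

```python
import numpy as np

n_el, L, k, q = 20, 1.0, 1.0, 5.0   # elements, length, conductivity, source
n_nodes = n_el + 1
h = L / n_el
K = np.zeros((n_nodes, n_nodes))
f = np.zeros(n_nodes)

ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix
fe = (q * h / 2.0) * np.ones(2)                      # Galerkin element load

for e in range(n_el):                 # assemble global system element by element
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += ke
    f[dofs] += fe

for node, value in ((0, 0.0), (n_nodes - 1, 0.0)):   # Dirichlet boundaries
    K[node, :] = 0.0
    K[node, node] = 1.0
    f[node] = value

u = np.linalg.solve(K, f)
print(u.max(), q * L**2 / (8 * k))    # midpoint value vs exact q*L^2/(8k)
```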
Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H
1997-01-01
The development and application of membrane solid-phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision, a novel protocol for performing the condition, load, and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than with manual solvent extraction or single-cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid-phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time-stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature, and voltage are analyzed. The accuracy, efficiency, and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed with the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
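For reference, the MUSCL-type reconstruction step cited above for second-order accuracy looks like the following in its simplest scalar form with a minmod limiter; this generic sketch is not the full KFVS semiconductor solver.

```python
import numpy as np

def minmod(a, b):
    """Pick the smaller-magnitude slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def muscl_faces(u):
    """Limited left/right states at the faces of the interior cells."""
    du_l = u[1:-1] - u[:-2]        # backward differences
    du_r = u[2:] - u[1:-1]         # forward differences
    slope = minmod(du_l, du_r)     # limited slope per interior cell
    left = u[1:-1] + 0.5 * slope   # state at the right face of each cell
    right = u[1:-1] - 0.5 * slope  # state at the left face of each cell
    return left, right

u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # step profile
l, r = muscl_faces(u)
print(l, r)  # slopes vanish at the discontinuity: no new extrema are created
```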
NASA Astrophysics Data System (ADS)
Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo
2017-11-01
External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several studies and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical settings. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and the spatio-temporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's disease (n = 12, in stages 1, 2, and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 +/- 0.008) and in PD subjects in stage 2 (R = 0.95 +/- 0.03) and stage 3 (R = 0.89 +/- 0.05). Normalized step length showed a variable response between low and high gait velocities (R between 0.2 and 0.97). The correlation between normalized mean velocity and the stimulus was strong in all PD subjects in stage 2 (R > 0.96) and stage 3 (R > 0.84) and in controls (R > 0.91) for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 +/- 39.2 steps/min, 0.12 +/- 0.06 in step length, and 0.33 +/- 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of the metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and may require other specific external cues. In conclusion, the current protocol (and its selected parameters: kind of sound, training time, step of variation, and range of variation) provides a suitable gait facilitation method, especially for patients with the largest gait disturbance (stages 2 and 3). The method should be adjusted for initial stages and evaluated in a rehabilitation program.
Measurement of intrahepatic pressure during radiofrequency ablation in porcine liver.
Kawamoto, Chiaki; Yamauchi, Atsushi; Baba, Yoko; Kaneko, Keiko; Yakabi, Koji
2010-04-01
To identify the most effective procedures to avoid increased intrahepatic pressure during radiofrequency ablation, we evaluated different ablation methods. Laparotomy was performed in 19 pigs. Intrahepatic pressure was monitored using an invasive blood pressure monitor. Radiofrequency ablation was performed as follows: single-step standard ablation; single-step at 30 W; single-step at 70 W; 4-step at 30 W; 8-step at 30 W; 8-step at 70 W; and cooled-tip. The array was fully deployed in single-step methods. In the multi-step methods, the array was gradually deployed in four or eight steps. With the cooled-tip, ablation was performed by increasing output by 10 W/min, starting at 40 W. Intrahepatic pressure was as follows: single-step standard ablation, 154.5 +/- 30.9 mmHg; single-step at 30 W, 34.2 +/- 20.0 mmHg; single-step at 70 W, 46.7 +/- 24.3 mmHg; 4-step at 30 W, 42.3 +/- 17.9 mmHg; 8-step at 30 W, 24.1 +/- 18.2 mmHg; 8-step at 70 W, 47.5 +/- 31.5 mmHg; and cooled-tip, 114.5 +/- 16.6 mmHg. The radiofrequency ablation-induced area was spherical with single-step standard ablation, 4-step at 30 W, and 8-step at 30 W. Conversely, the ablated area was irregular with single-step at 30 W, single-step at 70 W, and 8-step at 70 W. The ablation time was significantly shorter for the multi-step method than for the single-step method. Increased intrahepatic pressure could be controlled using multi-step methods. From the shapes of the ablation area, 30-W 8-step expansions appear to be most suitable for radiofrequency ablation.
Preoperative Planning in Orthopaedic Surgery. Current Practice and Evolving Applications.
Atesok, Kivanc; Galos, David; Jazrawi, Laith M; Egol, Kenneth A
2015-12-01
Preoperative planning is an essential prerequisite for the success of orthopaedic procedures. Traditionally, the exercise has involved writing down a step-by-step "blueprint" of the surgical procedure. Preoperative planning of the technical aspects of an orthopaedic procedure has been performed on hardcopy radiographs using various methods, such as copying the radiographic image onto tracing paper to practice the planned interventions. This method has become less practical due to variability in radiographic magnification and the increasing implementation of digital imaging systems. Advances in technology, along with recognition of the importance of surgical safety protocols, have resulted in widespread changes in orthopaedic preoperative planning approaches. Nowadays, perioperative "briefings" have gained particular importance, and novel planning methods have started to be integrated into orthopaedic practice. These methods include using software that enables surgeons to perform preoperative planning on digital radiographs and to construct 3D digital models or prototypes of various orthopaedic pathologies from a patient's CT scans for preoperative practice. Evidence to date suggests that preoperative planning and briefings are effective means of favorably influencing the outcomes of orthopaedic procedures.
Chuang, Yen-Jun; Zhou, Xichun; Pan, Zhengwei; Turchi, Craig
2009-01-01
Carbohydrate-functionalized nanoparticles, i.e., glyconanoparticles, have wide applications ranging from studies of carbohydrate-protein interactions to in vivo cell imaging and biolabeling. Currently reported methods for the preparation of glyconanoparticles require multi-step modifications of the carbohydrate moieties for conjugation to the nanoparticle surface, and the required synthetic manipulations are difficult and time-consuming. We report herewith a simple and versatile method for preparing glyconanoparticles, based on the use of clean and convenient microwave irradiation for the one-step, site-specific conjugation of unmodified carbohydrates onto hydrazide-functionalized Au nanoparticles. A colorimetric assay that utilizes the ensemble of gold glyconanoparticles and Concanavalin A (ConA) is also presented. This assay system was developed to analyze multivalent interactions and to determine the dissociation constants (Kd) of five kinds of Au glyconanoparticles with the lectin. Surface plasmon changes of the Au glyconanoparticles as a function of lectin-carbohydrate interactions were measured, and the dissociation constants were determined by non-linear curve fitting. The strength of the interaction of the carbohydrates with ConA was found to decrease in the order: maltose > mannose > glucose > lactose > MAN5. PMID:19698698
NASA Astrophysics Data System (ADS)
Yokokawa, Miwa; Yamano, Junpei; Miyai, Masatomo; Hughes Clarke, John; Izumi, Norihiro
2017-04-01
Field observations of turbidity currents and seabed topography on the Squamish delta in British Columbia, Canada, revealed cyclic steps formed by surge-type turbidity currents (e.g., Hughes Clarke et al., 2014). The high-density portion of the flow, which affects the sea-floor morphology, lasted only 30-60 seconds. We are performing flume experiments to investigate the relationship between the surge conditions and the topography of the resulting steps. In this presentation, we discuss the effect of surge duration on step topography. The experiments were performed at the Osaka Institute of Technology. A flume, 7.0 m long, 0.3 m deep, and 2 cm wide, was suspended in a larger tank, 7.6 m long, 1.2 m deep, and 0.3 m wide, filled with water. The inner flume was tilted at 7 degrees. As a source of the turbidity currents, a mixture of salt water (1.17 g/cm^3) and plastic particles (1.3 g/cm^3, 0.1-0.18 mm in diameter) was prepared, with a sediment concentration of 6.1 weight % (5.5 volume %) in the head tank. This mixture was poured into the upstream end of the inner flume from the head tank for either 3 seconds or 7 seconds, and 140 surges were made for each case. The discharge of the currents fluctuated, ranging from 306 to 870 mL per 3-s surge and from 1134 to 2030 mL per 7-s surge. As a result, five and six steps were formed, respectively. In the 3-s-surge case, steps located in the upstream portion of the flume migrated vigorously upstream, whereas in the 7-s-surge case the steps in the downstream portion of the flume migrated upstream. The wavelengths and wave heights of the steps formed by 3-s surges are larger than those formed by 7-s surges in the upstream portion of the flume, but smaller in the downstream portion. Under these conditions of slope and concentration, a longer surge duration, i.e., a larger discharge per surge, transports the sediment farther and makes the steps larger and more active at locations farther from the source of the currents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring
In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing the limitations of current HRA models, in particular issues related to intra- and inter-method variability in results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have drawn on elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate the current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. This literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers, and the associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize it; the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resulting cognitive framework.
Hierarchical α-MnO2 nanowires@Ni1-xMnxOy nanoflakes core-shell nanostructures for supercapacitors.
Wang, Hsin-Yi; Xiao, Fang-Xing; Yu, Le; Liu, Bin; Lou, Xiong Wen David
2014-08-13
A facile two-step solution-phase method has been developed for the preparation of hierarchical α-MnO2 nanowires@Ni1-xMnxOy nanoflakes core-shell nanostructures. Ultralong α-MnO2 nanowires were synthesized by a hydrothermal method in the first step. Subsequently, Ni1-xMnxOy nanoflakes were grown on the α-MnO2 nanowires to form core-shell nanostructures using chemical bath deposition followed by thermal annealing. Both solution-phase methods can be easily scaled up for mass production. We have evaluated their application in supercapacitors. The ultralong one-dimensional (1D) α-MnO2 nanowires in the hierarchical core-shell nanostructures offer a stable and efficient backbone for charge transport, while the two-dimensional (2D) Ni1-xMnxOy nanoflakes on the α-MnO2 nanowires provide a highly accessible surface for ions in the electrolyte. These beneficial features give the electrode high capacitance and reliable stability. The capacitance of the core-shell α-MnO2@Ni1-xMnxOy nanostructures (x = 0.75) is as high as 657 F g(-1) at a current density of 250 mA g(-1), and stable charge-discharge cycling over 1000 cycles at a current density of 2000 mA g(-1) has been realized. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ion formation mechanisms in UV-MALDI.
Knochenmuss, Richard
2006-09-01
Matrix-Assisted Laser Desorption/Ionization (MALDI) is a very widely used analytical method, but it has been developed in a highly empirical manner. Deeper understanding of ionization mechanisms could help in designing better methods and improve interpretation of mass spectra. This review summarizes current mechanistic thinking, with emphasis on the most common MALDI variant using ultraviolet laser excitation. A two-step framework is gaining acceptance as a useful model for many MALDI experiments. The steps are primary ionization during or shortly after the laser pulse, followed by secondary reactions in the expanding plume of desorbed material. Primary ionization in UV-MALDI remains somewhat controversial; the two main approaches are the cluster and pooling/photoionization models. Secondary events are less contentious; ion-molecule reaction thermodynamics and kinetics are often invoked, but details differ. To the extent that local thermal equilibrium is approached in the plume, the mass spectra may be straightforwardly interpreted in terms of charge-transfer thermodynamics.
Pretreatment methods for bioethanol production.
Xu, Zhaoyang; Huang, Fang
2014-09-01
Lignocellulosic biomass, such as wood, grass, and agricultural and forest residues, is a potential resource for the production of bioethanol. The current biochemical process of converting biomass to bioethanol typically consists of three main steps: pretreatment, enzymatic hydrolysis, and fermentation. Of these, pretreatment is probably the most crucial step, since it has a large impact on the efficiency of the overall bioconversion. The aim of pretreatment is to disrupt recalcitrant structures of cellulosic biomass to make cellulose more accessible to the enzymes that convert carbohydrate polymers into fermentable sugars. This paper reviews several leading acidic, neutral, and alkaline pretreatment technologies. Different pretreatment methods, including dilute acid pretreatment (DAP), steam explosion pretreatment (SEP), organosolv, liquid hot water (LHW), ammonia fiber expansion (AFEX), soaking in aqueous ammonia (SAA), sodium hydroxide/lime pretreatments, and ozonolysis, are introduced and discussed in detail. In this minireview, the key points are the structural changes, primarily in cellulose, hemicellulose, and lignin, during the above leading pretreatment technologies.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
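To make the idea concrete, here is a minimal Python sketch of a Grünwald-Letnikov sum with a simplified adaptive-memory weighting; the function names, the interval-doubling rule, and the `base` parameter are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k C(alpha, k), built recursively."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative_full(f, t, h, alpha):
    """Full-history GL fractional derivative of f at time t with step h."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    hist = np.array([f(t - k * h) for k in range(n + 1)])
    return h ** (-alpha) * w.dot(hist)

def gl_derivative_adaptive(f, t, h, alpha, base=10):
    """Adaptive-memory variant: the most recent `base` points are used at
    full resolution; farther back, the sampling interval doubles repeatedly
    and each sampled point carries the lumped weight of the neighbors it
    stands in for (a simplification of the published scheme)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    total, k, stride = 0.0, 0, 1
    while k <= n:
        if k > base * stride:
            stride *= 2                        # sample older history more sparsely
        lumped = w[k:min(k + stride, n + 1)].sum()
        total += lumped * f(t - k * h)
        k += stride
    return h ** (-alpha) * total
```

For f(t) = t and 0 < α < 1, both routines should approach t^(1-α)/Γ(2-α) as h shrinks, with the adaptive version touching far fewer history points.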
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step-by-step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
NASA Astrophysics Data System (ADS)
Yuan, Shuai; Qiu, Zhiwen; Zhang, Hailiang; Gong, Haibo; Hao, Yufeng; Cao, Bingqiang
2016-01-01
During the growth of CH3NH3PbI3-xClx (MAPbI3-xClx) perovskite films by the two-step inter-diffusion method, the presence of a trace amount of oxygen gas is critical to their physical properties and photovoltaic performance. As the oxygen concentration increases, poor film morphologies and incomplete surface coverage are observed. Moreover, by XRD, Raman scattering, and photoluminescence measurements, we find that MAPbI3-xClx grains become more distorted and the electron-hole recombination rate dramatically increases. Higher oxygen concentration triggers a sharp decrease in the current density and the fill factor of the corresponding solar cells, which degrades the average device efficiency from 14.3% to 4.4%. This work proves the importance of controlling the oxygen atmosphere in the fabrication of high-performance perovskite solar cells.
Method of manufacturing a niobium-aluminum-germanium superconductive material
Wang, John L.; Pickus, Milton R.; Douglas, Kent E.
1980-01-01
A method for manufacturing flexible Nb3(Al,Ge) multifilamentary superconductive material in which a sintered porous niobium compact is infiltrated with an aluminum-germanium alloy and thereafter deformed and heat treated in a series of steps at successively higher temperatures, preferably below 1000 °C, to produce filaments composed of Nb3(Al,Ge) within the compact. By avoiding temperatures in excess of 1000 °C during the heat treatment, cladding material such as copper can be applied to facilitate a deformation step preceding the heat treatment and can remain in place through the heat treatment to also serve as a temperature stabilizer for the superconductive material produced. Further, these lower heat-treatment temperatures favor formation of filaments with reduced grain size and, hence, with more grain boundaries, which in turn increase the current-carrying capacity of the superconductive material.
A comprehensive and efficient process for counseling patients desiring sterilization.
Haws, J M; Butta, P G; Girvin, S
1997-06-01
To optimize the time spent counseling a sterilization patient, this article presents a 10-step process that includes all steps necessary to ensure a comprehensive counseling session: (1) Discuss current contraception use and all available methods; (2) assess the client's interest in/readiness for sterilization; (3) emphasize that the procedure is meant to be permanent, but there is a possibility of failure; (4) explain the surgical procedure using visuals, and include a discussion of benefits and risks; (5) explain privately to the client the need to use condoms if engaging in risky sexual activity; (6) have the client read and sign an informed consent form; (7) schedule an appointment for the procedure and provide the patient with a copy of all necessary paperwork; (8) discuss cost and payment method; (9) provide written preoperative and postoperative instructions; and (10) schedule a postoperation visit, or a postoperation semen analysis.
Forging Unsupported Metal-Boryl Bonds with Icosahedral Carboranes.
Saleh, Liban M A; Dziedzic, Rafal M; Khan, Saeed I; Spokoyny, Alexander M
2016-06-13
In contrast to the plethora of metal-catalyzed cross-coupling methods available for the installation of functional groups on aromatic hydrocarbons, a comparably broad set of methods is not currently available for icosahedral carboranes, which are boron-rich three-dimensional aromatic analogues of aryl groups. This is due in part to the limited understanding of the elementary steps of cross-coupling involving carboranes. Here, we report our efforts in isolating metal-boryl complexes to further our understanding of one of these elementary steps, oxidative addition. Structurally characterized examples of group 10 M-B bonds featuring icosahedral carboranes are completely unknown. Use of mercurocarboranes as reagents to deliver M-B bonds revealed divergent reactivity for platinum and palladium, with a Pt-B bond being isolated for the former and a rare Pd-Hg bond being formed for the latter. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Improvement of the System of Training of Specialists by University for Coal Mining Enterprises
NASA Astrophysics Data System (ADS)
Mikhalchenko, Vadim; Seredkina, Irina
2017-11-01
In this article the Quality Function Deployment technique is considered with reference to the process of training specialists with higher education at a university. The method is based on the step-by-step conversion of customer requirements into specific organizational, content-related and functional transformations of the university's technological process. A fully deployed quality function tracks customer requirements through four stages of product creation: product planning, product design, process design, and production design. Quality Function Deployment can be considered one of the methods for optimizing the technological processes of training specialists with higher education under current economic conditions. Implemented at the initial stages of the life cycle of the technological process, it ensures not only the high quality of the "product" of graduate school, but also the fullest possible satisfaction of consumers' requests and expectations.
Single molecule targeted sequencing for cancer gene mutation detection.
Gao, Yan; Deng, Liwei; Yan, Qin; Gao, Yongqian; Wu, Zengding; Cai, Jinsen; Ji, Daorui; Li, Gailing; Wu, Ping; Jin, Huan; Zhao, Luyang; Liu, Song; Ge, Liangjin; Deem, Michael W; He, Jiankui
2016-05-19
With the rapid decline in the cost of sequencing, it is now affordable to examine multiple genes in a single disease-targeted clinical test using next generation sequencing. Current targeted sequencing methods require a separate targeted-capture enrichment step during sample preparation before sequencing. Although fast sample preparation methods are available on the market, the library preparation process is still relatively complicated for physicians to use routinely. Here, we introduced an amplification-free Single Molecule Targeted Sequencing (SMTS) technology, which combines targeted capture and sequencing in one step. We demonstrated that this technology can detect low-frequency mutations using an artificially synthesized DNA sample. SMTS has several potential advantages, including simple sample preparation, so that no biases and errors are introduced by PCR amplification. SMTS has the potential to be an easy and quick sequencing technology for clinical diagnoses such as cancer gene mutation detection, infectious disease detection, inherited condition screening and noninvasive prenatal diagnosis.
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study
NASA Technical Reports Server (NTRS)
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for the estimation of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected to provide a good balance between accuracy and fairly rapid turnaround times, to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine and navigational equipment.
NASA Astrophysics Data System (ADS)
Krelaus, J.; Heinemann, K.; Ullmann, B.; Freyhardt, H. C.
1995-02-01
Bulk YBa2Cu4O8 (Y-124) is prepared from YBa2Cu3O7-δ (Y-123) and CuO by a powder-metallurgical method. The superconducting features of the Y-124, in particular critical current densities and activation energies, are measured resistively using a four-probe technique and magnetically using a Faraday magnetometer. In a second step the Y-124 is decomposed at high temperatures. The intragranular critical current density is measured at different annealing times, tA, in order to determine and discuss the characteristics of the jc(tA) curves.
A Kalman filter for a two-dimensional shallow-water model
NASA Technical Reports Server (NTRS)
Parrish, D. F.; Cohn, S. E.
1985-01-01
A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
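As an illustration, a minimal NumPy sketch of one such generalized time step for a linear forecast model follows; the banded-covariance optimization described above is omitted, and the matrix names are generic assumptions rather than the paper's notation.

```python
import numpy as np

def kalman_step(x, P, M, Q, H, R, y):
    """One generalized time step: propagate the state x and its error
    covariance P with the linear forecast model M (model-error covariance Q),
    then assimilate observations y (observation operator H, noise covariance R)."""
    # Forecast step: evolve state and error covariance exactly
    x_f = M @ x
    P_f = M @ P @ M.T + Q            # exact covariance evolution, no OI approximation
    # Analysis step: gain matrix and update
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a
```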
Direct coal liquefaction process
Rindt, John R.; Hetland, Melanie D.
1993-01-01
An improved multistep liquefaction process for organic carbonaceous matter which produces a virtually completely solvent-soluble carbonaceous liquid product. The solubilized product may be more amenable to further processing than liquid products produced by current methods. In the initial processing step, the finely divided organic carbonaceous material is treated with a hydrocarbonaceous pasting solvent containing from 10% to 100% by weight process-derived phenolic species at a temperature within the range of 300 °C to 400 °C for typically from 2 minutes to 120 minutes in the presence of a carbon monoxide reductant and an optional hydrogen sulfide reaction promoter in an amount ranging from 0 to 10% by weight of the moisture- and ash-free organic carbonaceous material fed to the system. As a result, hydrogen is generated via the water-gas shift reaction at a rate necessary to prevent condensation reactions. In a second step, the reaction product of the first step is hydrogenated.
Direct coal liquefaction process
Rindt, J.R.; Hetland, M.D.
1993-10-26
An improved multistep liquefaction process for organic carbonaceous matter which produces a virtually completely solvent-soluble carbonaceous liquid product. The solubilized product may be more amenable to further processing than liquid products produced by current methods. In the initial processing step, the finely divided organic carbonaceous material is treated with a hydrocarbonaceous pasting solvent containing from 10% to 100% by weight process-derived phenolic species at a temperature within the range of 300 °C to 400 °C for typically from 2 minutes to 120 minutes in the presence of a carbon monoxide reductant and an optional hydrogen sulfide reaction promoter in an amount ranging from 0 to 10% by weight of the moisture- and ash-free organic carbonaceous material fed to the system. As a result, hydrogen is generated via the water/gas shift reaction at a rate necessary to prevent condensation reactions. In a second step, the reaction product of the first step is hydrogenated.
Describing litho-constrained layout by a high-resolution model filter
NASA Astrophysics Data System (ADS)
Tsai, Min-Chun
2008-05-01
A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts. Litho-constrained layouts are layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly on the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components which are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used in the design stage to detect and fix litho-constrained patterns. This method has successfully captured all the hot-spots with less than 15% overshoots on a realistic 80 mm2 full-chip M1 layout in the 65 nm technology node. A step-by-step derivation of this HRM technique is presented in this paper.
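The following Python fragment is a rough analogue of the idea, assuming a Gaussian blur as a stand-in low-pass model; the actual HRM uses a short-wavelength lithography simulation, so `sigma_px` and `thresh` here are purely illustrative knobs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_flag(layout, sigma_px, thresh):
    """Crude analogue of HRM filtering: subtract a Gaussian-blurred (low
    spatial-frequency) version of the pre-OPC layout bitmap, so that only
    high spatial-frequency, potentially litho-constrained regions remain.
    `layout` is a 2-D 0/1 array; sigma_px and thresh are tuning parameters."""
    low = gaussian_filter(layout.astype(float), sigma=sigma_px)
    high = np.abs(layout - low)        # residual high-frequency content
    return high > thresh               # boolean map of candidate hot-spot pixels
```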
NASA Astrophysics Data System (ADS)
Sharma, N.; Yu, D. H.; Zhu, Y.; Wu, Y.; Peterson, V. K.
2017-02-01
In operando neutron powder diffraction (NPD) data of electrodes in lithium-ion batteries reveal unusual LiFePO4 phase evolution after the application of a thermal step and at high current. At low current under ambient conditions, the LiFePO4 to FePO4 two-phase reaction occurs during the charge process; however, following a thermal step and at higher current this reaction appears at the end of charge and continues into the next electrochemical step. The same behavior is observed for the FePO4 to LiFePO4 transition, occurring at the end of discharge and continuing into the following electrochemical step. This suggests that the bulk (or the majority of the) electrode transformation is dependent on the battery's history, current, or temperature. Such information concerning the non-equilibrium evolution of an electrode allows a direct link to be made between the electrode's functional mechanism that underpins lithium-ion battery behavior and the real-life operating conditions of the battery, such as variable temperature and current.
Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine
2018-01-01
Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify treatment responders and non-responders. In the context of longitudinal cluster analyses, sample size and variability in the times of measurement are the main issues with current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline (hereafter, the extended-baseline method). The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the extended-baseline method with the latent-class mixed model. The extended-baseline method with the two model-based algorithms was the most robust. The extended-baseline method with the non-parametric algorithms failed when the variances of the treatment effect were unequal between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patient slope variability was high. Two real data sets, on a neurodegenerative disease and on obesity, illustrate the extended-baseline method and show how clustering may help to identify the marker(s) of the treatment response. Applying the extended-baseline method in exploratory analysis, as a first stage before setting up stratified designs, can provide a better estimation of the treatment effect in future clinical trials.
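A minimal sketch of the two-step logic follows, with the piecewise linear mixed model of step 1 replaced by per-subject post-baseline slopes for brevity, and k-means standing in for the partitioning option of step 2; the function and variable names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_step_cluster(times, values, ids, t_treat, n_clusters=2):
    """Step 1 (simplified): fit, for each subject, a linear trend over the
    post-treatment period (times >= t_treat) and keep the slope as a crude
    stand-in for the random-effect prediction of the mixed model.
    Step 2: cluster those per-subject predictions, here with k-means."""
    slopes = []
    for sid in np.unique(ids):
        m = (ids == sid) & (times >= t_treat)
        t, v = times[m], values[m]
        slopes.append(np.polyfit(t, v, 1)[0])    # post-baseline slope
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        np.array(slopes).reshape(-1, 1))
    return labels                                # responder / non-responder split
```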
Three-step method for menstrual and oral contraceptive cycle verification.
Schaumberg, Mia A; Jenkins, David G; Janse de Jonge, Xanne A K; Emmerton, Lynne M; Skinner, Tina L
2017-11-01
Fluctuating endogenous and exogenous ovarian hormones may influence exercise parameters; yet control and verification of ovarian hormone status is rarely reported and limits current exercise science and sports medicine research. The purpose of this study was to determine the effectiveness of an individualised three-step method in identifying the mid-luteal or high hormone phase in endogenous and exogenous hormone cycles in recreationally-active women and determine hormone and demographic characteristics associated with unsuccessful classification. Cross-sectional study design. Fifty-four recreationally-active women who were either long-term oral contraceptive users (n=28) or experiencing regular natural menstrual cycles (n=26) completed step-wise menstrual mapping, urinary ovulation prediction testing and venous blood sampling for serum/plasma hormone analysis on two days, 6-12days after positive ovulation prediction to verify ovarian hormone concentrations. Mid-luteal phase was successfully verified in 100% of oral contraceptive users, and 70% of naturally-menstruating women. Thirty percent of participants were classified as luteal phase deficient; when excluded, the success of the method was 89%. Lower age, body fat and longer menstrual cycles were significantly associated with luteal phase deficiency. A step-wise method including menstrual cycle mapping, urinary ovulation prediction and serum/plasma hormone measurement was effective at verifying ovarian hormone status. Additional consideration of age, body fat and cycle length enhanced identification of luteal phase deficiency in physically-active women. These findings enable the development of stricter exclusion criteria for female participants in research studies and minimise the influence of ovarian hormone variations within sports and exercise science and medicine research. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Patrol force allocation for law enforcement: An introductory planning guide
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Kennedy, R. D.
1976-01-01
Previous and current methods for analyzing police patrol forces are reviewed and discussed. The steps in developing an allocation analysis procedure are defined, including the prediction of the rate of calls for service, determination of the number of patrol units needed, designing sectors, and analyzing dispatch strategies. Existing computer programs used for this purpose are briefly described, and some results of their application are given.
Strong coupling in electromechanical computation
NASA Astrophysics Data System (ADS)
Füzi, János
2000-06-01
A method is presented to carry out simultaneously electromagnetic field and force computation, electrical circuit analysis and mechanical computation to simulate the dynamic operation of electromagnetic actuators. The equation system is solved by a predictor-corrector scheme containing a Powell error minimization algorithm which ensures that every differential equation (coil current, field strength rate, flux rate, speed of the keeper) is fulfilled within the same time step.
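A schematic Python version of such a step might look as follows, with `residuals` a hypothetical user-supplied callback returning the implicit residuals of the coupled equations; Powell's method from SciPy stands in for the error-minimization component.

```python
import numpy as np
from scipy.optimize import minimize

def coupled_step(y, t, h, residuals):
    """One strongly coupled time step. `residuals(y_new, y, t, h)` returns
    the vector of implicit residuals of all governing equations (coil
    current, field-strength rate, flux rate, keeper speed) discretized over
    the step; Powell's derivative-free method drives them to zero together."""
    y_pred = y.copy()                               # predictor: previous state
    obj = lambda y_new: np.sum(residuals(y_new, y, t, h) ** 2)
    sol = minimize(obj, y_pred, method="Powell")    # corrector
    return sol.x
```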
A Facile Two-Step Method to Implement N√iSWAP and N√SWAP Gates in a Circuit QED
NASA Astrophysics Data System (ADS)
Said, T.; Chouikh, A.; Bennai, M.
2018-05-01
We propose a way of implementing two-step N√iSWAP and N√SWAP gates based on the qubit-qubit interaction of N superconducting qubits, by coupling them to a resonator driven by a strong microwave field. The operation times do not increase with the number of qubits. Due to the virtual excitations of the resonator, the scheme is insensitive to the decay of the resonator. Numerical analysis shows that the scheme can be implemented with high fidelity. Moreover, we propose a detailed procedure and analyze the experimental feasibility. Our proposal can thus be experimentally realized within the range of current circuit QED techniques.
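For reference, the two-qubit building blocks that the N-qubit gates generalize can be written down directly; the NumPy check below verifies that squaring √iSWAP yields iSWAP and that the gate is unitary (a textbook fact, not specific to this proposal).

```python
import numpy as np

s = 1 / np.sqrt(2)
sqrt_iswap = np.array([[1, 0,      0,      0],
                       [0, s,      1j * s, 0],
                       [0, 1j * s, s,      0],
                       [0, 0,      0,      1]])
iswap = np.array([[1, 0,  0,  0],
                  [0, 0,  1j, 0],
                  [0, 1j, 0,  0],
                  [0, 0,  0,  1]])
# Squaring the two-qubit sqrt(iSWAP) recovers iSWAP, and the gate is unitary.
assert np.allclose(sqrt_iswap @ sqrt_iswap, iswap)
assert np.allclose(sqrt_iswap @ sqrt_iswap.conj().T, np.eye(4))
```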
Reisner, Sari L; Conron, Kerith J; Tardiff, Laura Anatale; Jarvi, Stephanie; Gordon, Allegra R; Austin, S Bryn
2014-11-26
A barrier to monitoring the health of gender minority (transgender) populations is the lack of brief, validated tools with which to identify participants in surveillance systems. We used the Growing Up Today Study (GUTS), a prospective cohort study of U.S. young adults (mean age = 20.7 years in 2005), to assess the validity of self-report measures and implement a two-step method to measure gender minority status (step 1: assigned sex at birth; step 2: current gender identity). A mixed-methods study was conducted in 2013. Construct validity was evaluated in a secondary data analysis of the 2010 wave (n = 7,831). Cognitive testing interviews of closed-ended measures were conducted with a subsample of participants (n = 39). Compared to cisgender (non-transgender) participants, transgender participants had higher levels of recalled childhood gender nonconformity before age 11 and of current socially assigned gender nonconformity, and were more likely to have ever identified as not completely heterosexual (p < 0.001). No problems with item comprehension were found for cisgender or gender minority participants. Assigned sex at birth was interpreted as the sex designated on a birth certificate; transgender was understood to be a difference between a person's natal sex and gender identity. Participants were correctly classified as male, female, or transgender. The survey items performed well in this sample and are recommended for further evaluation in languages other than English and with samples diverse in age, race/ethnicity, and socioeconomic status.
Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Emmert-Buck, Michael R
2005-01-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of pancreatic malignancy and other biological phenomena. This chapter reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification. High-quality tissue microdissection does not necessarily mean high-quality samples to analyze; the quality of biomaterials obtained for analysis is highly dependent on steps upstream and downstream of tissue microdissection. We provide protocols for each of these steps and encourage you to improve upon them. It is worth the effort of every laboratory to optimize and document its technique at each stage of the process, and we provide a starting point for those willing to spend the time to optimize. In our view, poor documentation of tissue and cell type of origin and the use of non-optimized protocols is a source of inefficiency in current life science research. Even incremental improvement in this area will increase productivity significantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chun-Cheng; Department of Mathematic and Physical Sciences, R.O.C. Air Force Academy, Kaohsiung 820, Taiwan; Tang, Jian-Fu
2016-06-28
The multi-step resistive switching (RS) behavior of a unipolar Pt/Li0.06Zn0.94O/Pt resistive random access memory (RRAM) device is investigated. It is found that the RRAM device exhibits normal, 2-, 3-, and 4-step RESET behaviors under different compliance currents. The transport mechanism within the device is investigated by means of current-voltage curves, in-situ transmission electron microscopy, and electrochemical impedance spectroscopy. It is shown that the ion transport mechanism is dominated by Ohmic behavior under low electric fields and by the Poole-Frenkel emission effect (normal RS behavior) or Li+ ion diffusion (2-, 3-, and 4-step RESET behaviors) under high electric fields.
Reflections in computer modeling of rooms: Current approaches and possible extensions
NASA Astrophysics Data System (ADS)
Svensson, U. Peter
2005-09-01
Computer modeling of rooms is most commonly done by some calculation technique that is based on decomposing the sound field into separate reflection components. In a first step, a list of possible reflection paths is found, and in a second step, an impulse response is constructed from the list of reflections. Alternatively, the list of reflections is used for generating a simpler echogram, the energy decay as a function of time. A number of geometrical-acoustics-based methods can handle specular reflections, diffuse reflections, edge diffraction, curved surfaces, and locally/non-locally reacting surfaces to various degrees. This presentation gives an overview of how reflections are handled in the image source method and variants of the ray-tracing methods, which dominate today's commercial software, as well as in the radiosity method and edge diffraction methods. The use of the recently standardized scattering and diffusion coefficients of surfaces is discussed. Possibilities for combining edge diffraction, surface scattering, and impedance boundaries are demonstrated for an example surface. Finally, the number of reflection paths becomes prohibitively high when all such combinations are included, as demonstrated for a simple concert hall model. [Work supported by the Acoustic Research Centre through NFR, Norway.]
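As a small illustration of the image-source idea discussed above, the sketch below mirrors a point source across the six walls of a shoebox room and returns first-order (delay, amplitude) pairs; the uniform reflection coefficient `beta` and the omission of higher orders, diffuse reflections, and diffraction are simplifying assumptions.

```python
import numpy as np

def first_order_images(src, rec, room, beta, c=343.0):
    """First-order image-source sketch for a shoebox room: mirror the source
    across each of the six walls, then return (delay, amplitude) pairs using
    spherical spreading 1/r and one reflection coefficient `beta` per bounce.
    src, rec: NumPy 3-vectors (floats); room: (Lx, Ly, Lz) dimensions."""
    paths = [(np.linalg.norm(rec - src), 1.0)]          # direct sound
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = src.copy()
            img[axis] = 2 * wall - src[axis]            # mirror across the wall
            r = np.linalg.norm(rec - img)
            paths.append((r, beta))
    return [(r / c, amp / r) for r, amp in paths]
```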
Hybrid finite element and Brownian dynamics method for charged particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Gary A., E-mail: ghuber@ucsd.edu; Miao, Yinglong; Zhou, Shenggao
2016-04-28
Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but was limited by the lack of interactions among the particles; the force on each particle had to be from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.
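For orientation, a minimal Euler-Maruyama Brownian-dynamics step with pairwise Coulomb drift might look like the following; this is a generic overdamped-Langevin sketch in Gaussian units, not the paper's hybrid finite element coupling.

```python
import numpy as np

def bd_step(x, q, D, dt, kT=1.0, eps=1.0):
    """One Euler-Maruyama Brownian-dynamics step for N charged particles:
    deterministic drift from pairwise Coulomb forces plus Gaussian noise.
    x: (N, 3) positions, q: (N,) charges, D: diffusion coefficient."""
    N = len(q)
    F = np.zeros_like(x)
    for i in range(N):
        for j in range(N):
            if i != j:
                rij = x[i] - x[j]
                r = np.linalg.norm(rij)
                F[i] += q[i] * q[j] * rij / (eps * r**3)   # Coulomb force
    noise = np.sqrt(2 * D * dt) * np.random.standard_normal(x.shape)
    return x + (D / kT) * F * dt + noise
```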
Nanocrystal synthesis in microfluidic reactors: where next?
Phillips, Thomas W; Lignos, Ioannis G; Maceiczyk, Richard M; deMello, Andrew J; deMello, John C
2014-09-07
The past decade has seen a steady rise in the use of microfluidic reactors for nanocrystal synthesis, with numerous studies reporting improved reaction control relative to conventional batch chemistry. However, flow synthesis procedures continue to lag behind batch methods in terms of chemical sophistication and the range of accessible materials, with most reports having involved simple one- or two-step chemical procedures directly adapted from proven batch protocols. Here we examine the current status of microscale methods for nanocrystal synthesis, and consider what role microreactors might ultimately play in laboratory-scale research and industrial production.
Searching for an Axis-Parallel Shoreline
NASA Astrophysics Data System (ADS)
Langetepe, Elmar
We search for an unknown horizontal or vertical line in the plane under the competitive framework. We design a framework for lower bounds on all cyclic and monotone strategies that result in two-sequence functionals. For optimizing such functionals we apply a method that combines two main paradigms. The given solution shows that the combination method is of general interest. Finally, we obtain the current best strategy and prove that it is the best among all cyclic and monotone strategies, which is a main step toward a lower-bound construction.
Surface Treatment And Protection Method For Cadium Zinc Telluride Crystals
Wright, Gomez W.; James, Ralph B.; Burger, Arnold; Chinn, Douglas A.
2006-02-21
A method for treatment of the surface of a CdZnTe (CZT) crystal that provides a native dielectric coating to reduce surface leakage currents and thereby improve the resolution of instruments incorporating detectors using CZT crystals. A two-step process is disclosed: etching the surface of the CZT crystal with the conventional bromine/methanol etch treatment and, after attachment of electrical contacts, passivating the CZT crystal surface with a solution of 10 w/o NH4F and 10 w/o H2O2 in water.
Biomimicry in Product Design through Materials Selection and Computer Aided Engineering
NASA Astrophysics Data System (ADS)
Alexandridis, G.; Tzetzis, D.; Kyratsis, P.
2016-11-01
The aim of this study is to demonstrate a 7-step methodology that describes how nature can act as a source of inspiration for the design and development of a product. Furthermore, it suggests computerized tools and methods for optimizing the product with regard to its environmental impact, i.e. material selection and production methods. For validation purposes, a garden chaise longue that imitates the form of a scorpion was developed as a case study to present the current methodology.
Measured close lightning leader-step electric-field-derivative waveforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Doug M.; Hill, Dustin; Biagi, Christopher J.
2010-12-01
We characterize the measured electric-field-derivative (dE/dt) waveforms of lightning stepped-leader steps from three negative lightning flashes at distances of tens to hundreds of meters. Electromagnetic signatures of leader steps at such close distances have rarely been documented in previous literature. Individual leader-step three-dimensional locations are determined by a dE/dt TOA system. The leader-step field derivative is typically a bipolar pulse with a sharp initial half-cycle of the same polarity as that of the return stroke, followed by an opposite-polarity overshoot that decays relatively slowly to background level. This overshoot increases in amplitude relative to the initial peak and becomes dominant as range decreases. The initial peak is often preceded by a 'slow front,' similar to the slow front that precedes the fast transition to peak in first return stroke dE/dt and E waveforms. The overall step-field waveform duration is typically less than 1 µs. The mean initial peak of dE/dt, range-normalized to 100 km, is 7.4 V m⁻¹ µs⁻¹ (standard deviation (S.D.), 3.7 V m⁻¹ µs⁻¹, N = 103), the mean half-peak width is 33.5 ns (S.D., 11.9 ns, N = 69), and the mean 10-to-90% risetime is 43.6 ns (S.D., 24.2 ns, N = 69). From modeling, we determine the properties of the leader-step currents which produced two typical measured field derivatives, and we use one of these currents to calculate predicted leader-step E and dE/dt as a function of source range and height, the results being in good agreement with our observations. The two modeled current waveforms had maximum rates of current rise-to-peak near 100 kA µs⁻¹, peak currents in the 5-7 kA range, current half-peak widths of about 300 ns, and charge transfers of ~3 mC. As part of the modeling, those currents were propagated upward at 1.5 × 10⁸ m s⁻¹, with their amplitudes decaying exponentially with a decay height constant of 25 m.
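A quick numerical check of such a current waveform can be made with a bi-exponential stand-in; the rise and decay rates below are assumed values, tuned so that the printed half-peak width and charge transfer land near the modeled 300 ns and ~3 mC.

```python
import numpy as np

# Bi-exponential stand-in for one modeled leader-step current pulse.
t = np.linspace(0.0, 2e-6, 4001)               # 2 microsecond window
a, b = 3.0e6, 3.0e7                            # decay / rise rates in 1/s (assumed)
shape = np.exp(-a * t) - np.exp(-b * t)
i = 6e3 * shape / shape.max()                  # scale peak to 6 kA
above = t[i >= 0.5 * i.max()]                  # samples above half peak
charge = np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t))   # trapezoidal integral
print("half-peak width: %.0f ns" % ((above[-1] - above[0]) * 1e9))
print("charge transfer: %.2f mC" % (charge * 1e3))
```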
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
Automatic segmentation of psoriasis lesions
NASA Astrophysics Data System (ADS)
Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang
2014-10-01
The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI scores for the assessment of lesions. Current algorithms can handle only erythema or only scaling segmentation, while in practice scaling and erythema are often mixed together. To segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect, to eliminate reflections, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features; in this step, an image-roughness feature is defined so that scaling can be easily separated from normal skin. In the final step, random forests are used to ensure the generalization ability of the algorithm (see the sketch below). The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. On the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
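A compressed Python sketch of the classification step follows; the feature set is reduced to per-channel color means plus a crude roughness proxy, so the exact features and the `RandomForestClassifier` settings are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_lesion_classifier(lab_patches, labels):
    """Classify each sliding-window patch as normal skin, erythema, or
    scaling from color + texture features.
    lab_patches: (N, H, W, 3) patches in Lab space; labels: (N,) class ints."""
    feats = []
    for p in lab_patches:
        mean = p.reshape(-1, 3).mean(axis=0)          # color feature per channel
        # simple "roughness" proxy: mean absolute local gradient of L channel
        rough = np.abs(np.diff(p[..., 0], axis=0)).mean()
        feats.append(np.concatenate([mean, [rough]]))
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(np.array(feats), labels)
    return clf
```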
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-04-01
To systematically review the current literature on simulation in healthcare, including the structured steps applied in the emergency healthcare sector, and to propose a framework for simulation in the emergency department. For the purpose of collecting the data, the PubMed and ACM databases were searched for the years 2003 to 2013. The inclusion criteria were English-language articles available in full text with the closest objectives; from a total of 54 articles retrieved, 11 were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resource allocation, creation of crisis and maximum-demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after expert evaluation. The 10-step approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems.
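To show what one such discrete-event model can look like in code, here is a minimal SimPy sketch of an ED queue with a configurable number of doctors; the arrival and treatment rates are invented for illustration and are not drawn from the reviewed studies.

```python
import simpy, random

def patient(env, doctors, waits):
    arrive = env.now
    with doctors.request() as req:          # queue for a free doctor
        yield req
        waits.append(env.now - arrive)      # waiting time, the target metric
        yield env.timeout(random.expovariate(1 / 20.0))  # ~20 min treatment

def source(env, doctors, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 10.0))  # ~1 arrival / 10 min
        env.process(patient(env, doctors, waits))

waits = []
env = simpy.Environment()
doctors = simpy.Resource(env, capacity=2)   # scenario knob: number of doctors
env.process(source(env, doctors, waits))
env.run(until=8 * 60)                       # one 8-hour shift, in minutes
print("mean wait %.1f min over %d patients" % (sum(waits) / len(waits), len(waits)))
```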
Inversion of Acoustic and Electromagnetic Recordings for Mapping Current Flow in Lightning Strikes
NASA Astrophysics Data System (ADS)
Anderson, J.; Johnson, J.; Arechiga, R. O.; Thomas, R. J.
2012-12-01
Acoustic recordings can be used to map current-carrying conduits in lightning strikes. Unlike stepped leaders, whose very high frequency (VHF) radio emissions have short (meter-scale) wavelengths and can be located by lightning-mapping arrays, current pulses emit longer (kilometer-scale) waves and cannot be mapped precisely by electromagnetic observations alone. While current pulses are constrained to conductive channels created by stepped leaders, these leaders often branch as they propagate, and most branches fail to carry current. Here, we present a method to use thunder recordings to map current pulses, and we apply it to acoustic and VHF data recorded in 2009 in the Magdalena mountains in central New Mexico, USA. Thunder is produced by rapid heating and expansion of the atmosphere along conductive channels in response to current flow, and therefore can be used to recover the geometry of the current-carrying channel. Toward this goal, we use VHF pulse maps to identify candidate conductive channels where we treat each channel as a superposition of finely-spaced acoustic point sources. We apply ray tracing in variable atmospheric structures to forward model the thunder that our microphone network would record for each candidate channel. Because multiple channels could potentially carry current, a non-linear inversion is performed to determine the acoustic source strength of each channel. For each combination of acoustic source strengths, synthetic thunder is modeled as a superposition of thunder signals produced by each channel, and a power envelope of this stack is then calculated. The inversion iteratively minimizes the misfit between power envelopes of recorded and modeled thunder. Because the atmospheric sound speed structure through which the waves propagate during these events is unknown, we repeat the procedure on many plausible atmospheres to find an optimal fit. We then determine the candidate channel, or channels, that minimizes residuals between synthetic and acoustic recordings. We demonstrate the usefulness of this method on both intracloud and cloud-to-ground strikes, and discuss factors affecting our ability to replicate recorded thunder.
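A linearized version of the strength inversion can be sketched in a few lines: if the recorded power envelope is approximated as a non-negative combination of the per-channel synthetic envelopes, the fit reduces to non-negative least squares. This is a simplification of the full non-linear inversion described above, and the array names are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def invert_channel_strengths(env_modeled, env_recorded):
    """Solve for non-negative acoustic source strengths of candidate channels.
    env_modeled: (n_channels, n_samples), each row the power envelope of
    synthetic thunder from one VHF-mapped channel at unit strength.
    env_recorded: (n_samples,) measured power envelope.
    Linear superposition of envelopes is assumed here, which linearizes
    the misfit minimization into non-negative least squares."""
    strengths, resid = nnls(env_modeled.T, env_recorded)
    return strengths, resid
```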
Kay-Lambkin, Frances J; Baker, Amanda L; McKetin, Rebecca; Lee, Nicole
2010-09-01
Stepped care has been recommended in the alcohol and other drug field and adopted in a number of service settings, but few research projects have examined this approach. This article describes a pilot trial of stepped-care methods in the treatment of comorbid methamphetamine use and depression. An adaptive treatment strategy was developed based on recommendations for stepped care among methamphetamine users, incorporating cognitive behaviour therapy/motivational intervention for methamphetamine use and depression. The adaptive treatment strategy was compared with a fixed treatment, comprising an extended integrated cognitive behaviour therapy/motivational intervention. Eighteen participants across two study sites were involved in the trial; all were current users of methamphetamines (at least once weekly) exhibiting at least moderate symptoms of depression (score of 17 or greater on the Beck Depression Inventory II). Treatment delivered via the adaptive (stepped-care) model was associated with improvement in depression and methamphetamine use; however, it was not associated with more efficient delivery of psychological treatment to this population relative to the comparison treatment. This pilot trial attests to the potential for adaptive treatment strategies to increase the evidence base for stepped-care approaches within the alcohol and other drug field. However, for stepped-care treatment to be delivered efficiently in this context, specific training in the delivery and philosophy of the model is required.
A novel dynamic mechanical testing technique for reverse shoulder replacements.
Dabirrahmani, Danè; Bokor, Desmond; Appleyard, Richard
2014-04-01
In vitro mechanical testing of orthopedic implants provides information regarding their mechanical performance under simulated biomechanical conditions. Current in vitro component stability testing methods for reverse shoulder implants are based on anatomical shoulder designs, which do not capture the dynamic nature of these loads. With glenoid component loosening as one of the most prevalent modes of failure in reverse shoulder replacements, it is important to establish a testing protocol with a more realistic loading regime. This paper introduces a novel method of mechanically testing reverse shoulder implants, using more realistic load magnitudes and vectors, than is currently practiced. Using a custom made jig setup within an Instron mechanical testing system, it is possible to simulate the change in magnitude and direction of the joint load during arm abduction. This method is a step towards a more realistic testing protocol for measuring reverse shoulder implant stability.
Intelligent control for PMSM based on online PSO considering parameters change
NASA Astrophysics Data System (ADS)
Song, Zhengqiang; Yang, Huiling
2018-03-01
A novel online particle swarm optimization method is proposed to design the speed and current controllers of vector-controlled interior permanent magnet synchronous motor drives, taking stator resistance variation into account. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter is also taken into account, due to the dead time, threshold, and voltage drop of the switching devices, in order to simulate the system under practical conditions. Speed and PI current controller gains are optimized with PSO online, and the fitness function is changed according to the system's dynamic and steady states. The proposed optimization algorithm is compared with a conventional PI control method under step speed change and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics than the conventional PI controller design.
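To illustrate the optimization loop in isolation, the Python sketch below tunes PI gains for a generic first-order plant with a minimal PSO; the plant, the ITAE cost, the swarm parameters, and the bounds are all illustrative assumptions rather than the drive model used in the paper.

```python
import numpy as np

def itae_cost(gains, dt=1e-3, T=1.0, tau=0.05):
    """Integral of time-weighted absolute error for a PI loop around a
    first-order plant dy/dt = (-y + u)/tau tracking a unit step."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u) / tau
        cost += (k * dt) * abs(e) * dt
    return cost

def pso(cost, bounds, n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: inertia + cognitive + social velocity terms."""
    lo, hi = np.array(bounds).T
    x = lo + (hi - lo) * np.random.rand(n, len(lo))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([cost(p) for p in x])
    for _ in range(iters):
        g = pbest[pval.argmin()]                      # global best so far
        r1, r2 = np.random.rand(2, n, x.shape[1])
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
    return pbest[pval.argmin()], pval.min()

gains, best = pso(itae_cost, bounds=[(0, 20), (0, 200)])
print("Kp=%.2f Ki=%.2f ITAE=%.4f" % (gains[0], gains[1], best))
```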
NASA Astrophysics Data System (ADS)
Huang, Binbin; Wang, Yan; Zhan, Shuzhong; Ye, Jianshan
2017-02-01
Schiff base metal complexes have been applied in many fields, especially, a potential homogeneous catalyst for water splitting. However, the high overpotential, time consumed synthesis process and complicated working condition largely limit their application. In the present work, a one-step approach to fabricate Schiff base cobalt complex modified electrode is developed. Microrod clusters (MRC) and rough spherical particles (RSP) can be obtained on the ITO electrode through different electrochemical deposition condition. Both of the MRC and RSP present favorable activity for oxygen evolution reaction (OER) compared to the commercial Co3O4, taking an overpotential of 650 mV and 450 mV to drive appreciable catalytic current respectively. The highly active and stable RSP shows a Tafel plot of 84 mV dec-1 and negligible decrease of the current density for 12 h bulk electrolysis. The synthesis strategy of effective and stable catalyst in this work provide a simple method to fabricate heterogeneous OER catalyst with Schiff base metal complex.
Numerical Simulation of Tethered Underwater Kites for Power Generation
NASA Astrophysics Data System (ADS)
Ghasemi, Amirmahdi; Olinger, David; Tryggvason, Gretar
2015-11-01
An emerging renewable energy technology, tethered undersea kites (TUSK), used to extract hydrokinetic energy from ocean and tidal currents, is studied. TUSK systems consist of a rigid-winged 'kite', or glider, moving in an ocean current and connected by tethers to a floating buoy on the ocean surface. The TUSK kite is a current-speed enhancement device, since the kite can move in high-speed, cross-current motion at 4-6 times the current velocity, thus producing more power than conventional marine turbines. A computational simulation is developed to simulate the dynamic motion of an underwater kite and extendable tether. A two-step projection method within a finite volume formulation, along with an OpenMP acceleration method, is employed to solve the Navier-Stokes equations. An immersed boundary method is incorporated to model the fluid-structure interaction of the rigid kite (with a NACA 0012 airfoil shape in 2D and a NACA 0021 airfoil shape in 3D simulations) and the fluid flow. PID control methods are used to adjust the kite angle of attack during power (tether reel-out) and retraction (reel-in) phases. Two baseline simulations (for kite motions in two and three dimensions) are studied, and system power output, flow-field vorticity, tether tension, and hydrodynamic coefficients (lift and drag) for the kite are determined. The simulated power output shows good agreement with established theoretical results for a kite moving in two dimensions.
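The angle-of-attack control piece can be pictured with a few lines of Python; the gains, time step, and set-points below are placeholders, not values from the simulation.

```python
class PID:
    """Minimal PID controller used to adjust the kite's angle of attack;
    the gains and the set-point schedule are illustrative, not the paper's."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integ, self.prev = 0.0, 0.0

    def update(self, setpoint, measured):
        e = setpoint - measured
        self.integ += e * self.dt
        deriv = (e - self.prev) / self.dt
        self.prev = e
        return self.kp * e + self.ki * self.integ + self.kd * deriv

aoa_ctrl = PID(kp=2.0, ki=0.5, kd=0.1, dt=1e-3)
# During reel-out hold a high-lift angle of attack; during reel-in feather it.
cmd = aoa_ctrl.update(setpoint=8.0, measured=6.5)   # degrees
```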
The aluminum electrode in AlCl3-alkali-halide melts.
NASA Technical Reports Server (NTRS)
Holleck, G. L.; Giner, J.
1972-01-01
Passivation phenomena have been observed upon cathodic and anodic polarization of the Al electrode in AlCl3-KCl-NaCl melts between 100 and 160 °C. They are caused by formation of a solid salt layer at the electrode surface resulting from concentration changes upon current flow. The anodic limiting currents increased with temperature and with decreasing AlCl3 content of the melt. Current-voltage curves obtained on a rotating aluminum disk showed a linear relationship between the anodic limiting current and ω^(-1/2). Upon cathodic polarization, dendrite formation occurs at the Al electrode. The activation overvoltage in AlCl3-KCl-NaCl was determined by galvanostatic current-step methods. An apparent exchange current density of 270 mA/cm² at 130 °C and a double-layer capacity of 40 ± 10 µF/cm² were measured.
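As a worked example of what the measured exchange current implies, the linearized Butler-Volmer relation R_ct = RT/(nF i0) gives the charge-transfer resistance; taking n = 3 for the Al3+/Al couple is an assumption on our part.

```python
# Linearized Butler-Volmer estimate of the charge-transfer resistance from
# the measured apparent exchange current density (n = 3 assumed for Al3+/Al).
R, F = 8.314, 96485.0        # gas constant J/(mol K), Faraday constant C/mol
T = 403.15                   # 130 C in kelvin
i0 = 0.270                   # apparent exchange current density, A/cm^2
R_ct = R * T / (3 * F * i0)  # evaluates to ~0.043 ohm cm^2
print("R_ct = %.3f ohm cm^2" % R_ct)
```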
Mapping Base Modifications in DNA by Transverse-Current Sequencing
NASA Astrophysics Data System (ADS)
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2018-02-01
Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.
NASA Astrophysics Data System (ADS)
Yokokawa, Miwa; Yamamoto, Shinya; Higuchi, Hiroyuki; Hughes Clarke, John E.; Izumi, Norihiro
2015-04-01
Upper-flow-regime bedforms, such as cyclic steps and antidunes, have been reported to be formed by turbidity currents. Their formative conditions are, however, not fully understood because of the difficulty of field surveys in the deep sea. Field observations of turbidity currents and seabed topography on the Squamish delta in Howe Sound, British Columbia, Canada, have found bedwaves actively migrating in the upstream direction in channels formed on the prodelta slope. Their topography and behavior suggest that they are cyclic steps formed by turbidity currents. Because the Squamish delta is as shallow as around 150 m and easy to access compared with typical submarine canyons, it is thought to be one of the best places for studying the characteristics of cyclic steps formed by turbidity currents through field observations. In this study, we analyzed the configurations of cyclic steps using data obtained in the field observations of 2011 and compared them with data from flume experiments. On the prodelta slope, three major active channels are clearly developed. In addition to the sonar survey, a 600 kHz ADCP was installed in 150 m of water just seaward of the termination of the North Channel, and a 1200 kHz ADCP and a 500 kHz M3 sonar were suspended from the research vessel in 60 m of water, 300 m from the delta edge. We selected images showing large daily differences. The steps move vigorously in the upper 600 m of the prodelta slope, so we measured the steps in this area. From profiles perpendicular to the bedwave crest lines through the channel centers, the wavelength and wave height of each step and the mean slope were measured manually using quantitative image-analysis software. The wave steepness of each step was calculated from the wavelength and wave height measured as above. The mean slope ranges from 6.8° to 2.7° (steeper more proximally), and the mean wavelengths and wave heights of the steps range from 24.5 to 87.6 m and from 2.4 to 5.4 m, respectively. We compared the shape of the steps with the upper-flow-regime bedforms, such as antidunes and cyclic steps, obtained from open-channel experiments. The wave steepness of the Squamish steps ranges from 0.035 to 0.157, which is relatively high and close in value to those of cyclic steps and downstream-migrating antidunes (DMA) in the open-channel experiments. The non-dimensional wave number depends on the estimated thickness of the turbidity currents. Based on the optical backscatter profiles, the upper limit of sediment suspension is around 10 m; however, the maximum velocity is always located within the lower 5 m, and the higher-density layer seems to lie within the lowermost 2 m. For a 10 m flow thickness, the wave number is close in value to those of DMA, while for a 0.5 m flow thickness it is close to those of cyclic steps. We will discuss the effect of density currents and/or surges on the morphology of these steps.
Method for sputtering with low frequency alternating current
Timberlake, John R.
1996-01-01
Low frequency alternating current sputtering is provided by connecting a low frequency alternating current source to a high voltage transformer having outer taps and a center tap for stepping up the voltage of the alternating current. The center tap of the transformer is connected to a vacuum vessel containing argon or helium gas. Target electrodes, in close proximity to each other, and containing material with which the substrates will be coated, are connected to the outer taps of the transformer. With an applied potential, the gas will ionize and sputtering from the target electrodes onto the substrate will then result. The target electrodes can be copper or boron, and the substrate can be stainless steel, aluminum, or titanium. Copper coatings produced are used in place of nickel and/or copper striking.
The role of deep-water sedimentary processes in shaping a continental margin: The Northwest Atlantic
Mosher, David C.; Campbell, D.C.; Gardner, J.V.; Piper, D.J.W.; Chaytor, Jason; Rebesco, M.
2017-01-01
The tectonic history of a margin dictates its general shape; however, its geomorphology is generally transformed by deep-sea sedimentary processes. The objective of this study is to show the influences of turbidity currents, contour currents and sediment mass failures on the geomorphology of the deep-water northwestern Atlantic margin (NWAM) between Blake Ridge and Hudson Trough, spanning about 32° of latitude and extending from the shelf edge to the abyssal plain. This assessment is based on new multibeam echosounder data, global bathymetric models and sub-surface geophysical information. The deep-water NWAM is divided into four broad geomorphologic classifications based on bathymetric shape: graded, above-grade, stepped and out-of-grade. These shapes were created as a function of the balance between sediment accumulation and removal, which in turn was related to sedimentary processes and slope accommodation. This descriptive method of classifying continental margins, while non-interpretative, is more informative than the conventional continental shelf, slope and rise classification, and better facilitates interpretation of the dominant sedimentary processes. Areas of the margin dominated by turbidity currents and slope by-pass developed graded slopes. If sediments did not by-pass the slope, due to accommodation, then an above-grade or stepped slope resulted. Geostrophic currents created sedimentary bodies of a variety of forms and positions along the NWAM. Detached drifts form linear, above-grade slopes along their crests from the shelf edge to the deep basin. Plastered drifts formed stepped slope profiles. Sediment mass failure has had a variety of consequences for the margin morphology; large mass failures created out-of-grade profiles, whereas smaller mass failures tended to remain on the slope and formed above-grade profiles at trough-mouth fans, or nearly graded profiles, such as offshore Cape Fear.
Yu, Xiao-Xue; Huang, Jie-Yun; Xu, Dan; Xie, Zhi-Yong; Xie, Zhi-Sheng; Xu, Xin-Jun
2014-01-01
Orientin and vitexin are the two main bioactive compounds in Trollius chinensis Bunge. In this study, a rapid method was established for the one-step isolation and purification of orientin and vitexin from T. chinensis Bunge using high-speed counter-current chromatography, with a solvent system of ethyl acetate-ethanol-water (4:1:5, v/v/v). A total of 9.8 mg orientin and 2.1 mg vitexin were obtained from 100 mg of the ethyl acetate extract, with purities of 99.2% and 96.0%, respectively. Their structures were identified by UV, MS and NMR. The method was efficient and convenient, and can be used for the preparative separation of orientin and vitexin from T. chinensis Bunge.
RobOKoD: microbial strain design for (over)production of target compounds.
Stanford, Natalie J; Millard, Pierre; Swainston, Neil
2015-01-01
Sustainable production of target compounds such as biofuels and high-value chemicals for the pharmaceutical, agrochemical, and chemical industries is becoming an increasing priority, given their current dependency upon diminishing petrochemical resources. Designing these strains is difficult, with current methods focusing primarily on knocking out genes, dismissing other vital steps of strain design including the overexpression and dampening of genes. The design predictions from current methods also do not translate well into successful strains in the laboratory. Here, we introduce RobOKoD (Robust, Overexpression, Knockout and Dampening), a method for predicting strain designs for overproduction of targets. The method uses flux variability analysis to profile each reaction within the system under differing production percentages of target compound and biomass. Using these profiles, reactions are identified as potential knockout, overexpression, or dampening targets. The identified reactions are ranked according to their suitability, providing flexibility in strain design for users. The software was tested by designing a butanol-producing Escherichia coli strain, and was compared against the popular OptKnock and RobustKnock methods. RobOKoD shows favorable design predictions when compared against a successful, experimentally validated butanol-producing strain. Overall, RobOKoD provides users with rankings of predicted beneficial genetic interventions with which to support optimized strain design.
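The flux variability profiling at the heart of this approach can be illustrated on a toy model. The sketch below (a hypothetical three-reaction network and a 50% production constraint, not the RobOKoD implementation or its E. coli model) computes min/max flux ranges with linear programming:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: one internal metabolite M and three reactions,
#   v0: uptake -> M,   v1: M -> target,   v2: M -> biomass
S = np.array([[1.0, -1.0, -1.0]])          # steady state requires S @ v = 0
bounds = [(0.0, 10.0)] * 3

def fva(target_idx, frac):
    """Flux range of every reaction while forcing the target reaction to
    carry at least `frac` of its maximum achievable flux."""
    c = np.zeros(3); c[target_idx] = -1.0
    vmax = -linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs").fun
    constrained = list(bounds)
    constrained[target_idx] = (frac * vmax, bounds[target_idx][1])
    ranges = []
    for j in range(3):
        cj = np.zeros(3); cj[j] = 1.0
        lo = linprog(cj, A_eq=S, b_eq=[0.0], bounds=constrained, method="highs").fun
        hi = -linprog(-cj, A_eq=S, b_eq=[0.0], bounds=constrained, method="highs").fun
        ranges.append((round(lo, 3), round(hi, 3)))
    return ranges

print(fva(target_idx=1, frac=0.5))   # flux ranges at 50 % target production
```

Reactions whose range collapses or shifts under increasing production fractions become candidates for knockout, overexpression, or dampening.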
2012-01-01
Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5,214 genotyped and 9,374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference in scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
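A minimal numerical sketch of the scale adjustment and blending described above (the 3x3 matrices are invented stand-ins for the genotyped subset, not the Nordic Holstein data, and the rescaling rule shown is one common choice, not necessarily the study's exact one):

```python
import numpy as np

# Invented toy matrices: G = marker-based, A = pedigree-based relationships
G = np.array([[1.02, 0.48, 0.03],
              [0.48, 0.99, 0.05],
              [0.03, 0.05, 1.05]])
A = np.array([[1.00, 0.50, 0.00],
              [0.50, 1.00, 0.00],
              [0.00, 0.00, 1.00]])

# Put G on the scale of A by matching mean diagonal and mean off-diagonal
diag = np.eye(3, dtype=bool)
b = (A[diag].mean() - A[~diag].mean()) / (G[diag].mean() - G[~diag].mean())
a = A[~diag].mean() - b * G[~diag].mean()
G_adj = a + b * G

# Combined matrix with relative pedigree weight w (0.05-0.40 in the study)
w = 0.20
G_w = (1 - w) * G_adj + w * A
print(np.round(G_w, 3))
```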
Eddy current analysis of cracks grown from surface defects and non-metallic particles
NASA Astrophysics Data System (ADS)
Cherry, Matthew R.; Hutson, Alisha; Aldrin, John C.; Shank, Jared
2018-04-01
Eddy current methods are sensitive to any discrete change in conductivity. Traditionally this sensitivity has been used to determine the presence of a crack. However, other features that are not cracks, such as non-metallic inclusions, carbide stringers and surface voids, can cause an eddy current indication that could potentially lead to the rejection of an in-service component. These features may not actually be life-limiting, meaning NDE methods could reject components with remaining useful life. In-depth analysis of signals from eddy current sensors could provide a means of sorting rejectable indications from false calls caused by geometric and non-conductive features. In this project, cracks were grown from voids and non-metallic inclusions in a nickel-based superalloy, and eddy current analysis was performed at multiple intermediate steps of fatigue. Data were collected with several different ECT probes at multiple frequencies, and the results were analyzed. The results show how cracks growing from non-metallic features can skew eddy current signals and make characterization a challenge. Modeling and simulation were performed with multiple analysis codes, and the models were found to be in good agreement with the data for cracks growing away from voids and non-metallic inclusions.
NASA Technical Reports Server (NTRS)
Foster, Lucas E.; Britcher, Colin P.
1995-01-01
The Large Angle Magnetic Suspension Test Fixture (LAMSTF) is a laboratory-scale proof-of-concept system. The configuration is unique in that the electromagnets are mounted in a circular planar array. A mathematical model of the system had previously been developed, but was shown to have inaccuracies, which showed up in the step responses. Eddy currents were found to be the major cause of the modeling errors. In the original system, eddy currents existed in the aluminum baseplate, iron cores, and the sensor support frame. An attempt to include the eddy current dynamics in the system model is presented. The dynamics of a dummy sensor ring were added to the system. Adding the eddy current dynamics to the simulation improves its agreement with the actual experiment. Also presented is a new method of determining the yaw angle of the suspended element: from the coil currents, the yaw angle can be determined, and the controller can be updated to maintain suspension at the new angle. This method has been used to demonstrate a 360 degree yaw rotation.
Goldenberg, S D; Cliff, P R; Smith, S; Milner, M; French, G L
2010-01-01
Current diagnosis of Clostridium difficile infection (CDI) relies upon detection of toxins A/B in stool by enzyme immunoassay [EIA(A/B)]. This strategy is unsatisfactory because it has a low sensitivity resulting in significant false negatives. We investigated the performance of a two-step algorithm for diagnosis of CDI using detection of glutamate dehydrogenase (GDH). GDH-positive samples were tested for C. difficile toxin B gene (tcdB) by polymerase chain reaction (PCR). The performance of the two-step protocol was compared with toxin detection by the Meridian Premier EIA kit in 500 consecutive stool samples from patients with suspected CDI. The reference standard among samples that were positive by either EIA(A/B) or GDH testing was culture cytotoxin neutralisation (culture/CTN). Thirty-six (7%) of 500 samples were identified as true positives by culture/CTN. EIA(A/B) identified 14 of the positive specimens with 22 false negatives and two false positives. The two-step protocol identified 34 of the positive samples with two false positives and two false negatives. EIA(A/B) had a sensitivity of 39%, specificity of 99%, positive predictive value of 88% and negative predictive value of 95%. The two-step algorithm performed better, with corresponding values of 94%, 99%, 94% and 99% respectively. Screening for GDH before confirmation of positives by PCR is cheaper than screening all specimens by PCR and is an effective method for routine use. Current EIA(A/B) tests for CDI are of inadequate sensitivity and should be replaced; however, this may result in apparent changes in CDI rates that would need to be explained in national surveillance statistics. Copyright 2009 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.
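The reported performance figures can be reproduced directly from the counts given in the abstract; a quick check in code (with the true negatives inferred as the remaining samples):

```python
# Two-step protocol: 34 of 36 culture/CTN positives found, 2 FP, 2 FN
TP, FP, FN, N = 34, 2, 2, 500
TN = N - TP - FP - FN                      # 462 negatives called negative

sensitivity = TP / (TP + FN)               # 34/36   -> 94.4 %
specificity = TN / (TN + FP)               # 462/464 -> 99.6 %
ppv = TP / (TP + FP)                       # 34/36   -> 94.4 %
npv = TN / (TN + FN)                       # 462/464 -> 99.6 %
print(f"{sensitivity:.1%} {specificity:.1%} {ppv:.1%} {npv:.1%}")
```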
Geometrical control of ionic current rectification in a configurable nanofluidic diode.
Alibakhshi, Mohammad Amin; Liu, Binqi; Xu, Zhiping; Duan, Chuanhua
2016-09-01
Control of ionic current in a nanofluidic system and development of the elements analogous to electrical circuits have been the subject of theoretical and experimental investigations over the past decade. Here, we theoretically and experimentally explore a new technique for rectification of ionic current using asymmetric 2D nanochannels. These nanochannels have a rectangular cross section and a stepped structure consisting of a shallow and a deep side. Control of height and length of each side enables us to obtain optimum rectification at each ionic strength. A 1D model based on the Poisson-Nernst-Planck equation is derived and validated against the full 2D numerical solution, and a nondimensional concentration is presented as a function of nanochannel dimensions, surface charge, and the electrolyte concentration that summarizes the rectification behavior of such geometries. The rectification factor reaches a maximum at certain electrolyte concentration predicted by this nondimensional number and decays away from it. This method of fabrication and control of a nanofluidic diode does not require modification of the surface charge and facilitates the integration with lab-on-a-chip fluidic circuits. Experimental results obtained from the stepped nanochannels are in good agreement with the 1D theoretical model.
Gold nanoparticles with patterned surface monolayers for nanomedicine: current perspectives.
Pengo, Paolo; Şologan, Maria; Pasquato, Lucia; Guida, Filomena; Pacor, Sabrina; Tossi, Alessandro; Stellacci, Francesco; Marson, Domenico; Boccardo, Silvia; Pricl, Sabrina; Posocco, Paola
2017-12-01
Molecular self-assembly is a topic attracting intense scientific interest. Various strategies have been developed for the construction of molecular aggregates with rationally designed properties, geometries, and dimensions that promise to provide solutions to both theoretical and practical problems in areas such as drug delivery, medical diagnostics, and biosensors, to name but a few. In this respect, gold nanoparticles covered with self-assembled monolayers presenting nanoscale surface patterns (typically patched, striped or Janus-like domains) represent an emerging field. These systems are particularly intriguing for use in bio-nanotechnology applications, as the presence of such monolayers with three-dimensional (3D) morphology provides nanoparticles with surface-dependent properties that, in turn, affect their biological behavior. A comprehensive understanding of the physicochemical interactions occurring at the interface between these versatile nanomaterials and biological systems is therefore crucial to fully exploit their potential. This review aims to explore the current state of development of such patterned, self-assembled monolayer-protected gold nanoparticles, through a step-by-step analysis of their conceptual design, synthetic procedures, predicted and determined surface characteristics, interactions with and performance in biological environments, and the experimental and computational methods currently employed for their investigation.
NASA Astrophysics Data System (ADS)
Agudelo-Toro, Andres; Neef, Andreas
2013-04-01
Objective. We present a computational method that implements a reduced set of Maxwell's equations to allow simulation of cells under realistic conditions: sub-micron cell morphology, a conductive non-homogeneous space and various ion channel properties and distributions. Approach. While a reduced set of Maxwell's equations can be used to couple membrane currents to extra- and intracellular potentials, this approach is rarely taken, most likely because adequate computational tools are missing. By using these equations, and introducing an implicit solver, numerical stability is attained even with large time steps. The time steps are limited only by the time development of the membrane potentials. Main results. This method allows simulation times of tens of minutes instead of weeks, even for complex problems. The extracellular fields are accurately represented, including secondary fields, which originate at inhomogeneities of the extracellular space and can reach several millivolts. We present a set of instructive examples that show how this method can be used to obtain reference solutions for problems, which might not be accurately captured by the traditional approaches. This includes the simulation of realistic magnitudes of extracellular action potential signals in restricted extracellular space. Significance. The electric activity of neurons creates extracellular potentials. Recent findings show that these endogenous fields act back onto the neurons, contributing to the synchronization of population activity. The influence of endogenous fields is also relevant for understanding therapeutic approaches such as transcranial direct current, transcranial magnetic and deep brain stimulation. The mutual interaction between fields and membrane currents is not captured by today's concepts of cellular electrophysiology, including the commonly used activation function, as those concepts are based on isolated membranes in an infinite, isopotential extracellular space. The presented tool makes simulations with detailed morphology and implicit interactions of currents and fields available to the electrophysiology community.
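The stability gain from an implicit solver can be illustrated on a toy passive membrane. This cartoon makes only the numerical point about large time steps; it is not the authors' field-coupled solver:

```python
# Passive membrane C dV/dt = -g (V - E): forward Euler diverges once
# dt > 2 C / g, while backward (implicit) Euler is stable for any dt.
C, g, E = 1.0, 1.0, -70.0      # arbitrary units
dt = 5.0                       # deliberately far beyond the explicit limit
V_fwd = V_bwd = 0.0
for _ in range(10):
    V_fwd = V_fwd - dt / C * g * (V_fwd - E)               # explicit step
    V_bwd = (V_bwd + dt * g * E / C) / (1 + dt * g / C)    # implicit step
print(V_fwd, V_bwd)            # explicit blows up; implicit settles near -70
```

The same principle, applied to the coupled membrane-field equations, is what lets the time step be limited only by the membrane-potential dynamics.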
Advanced electric-field scanning probe lithography on molecular resist using active cantilever
NASA Astrophysics Data System (ADS)
Kaestner, Marcus; Aydogan, Cemal; Ivanov, Tzvetan; Ahmad, Ahmad; Angelov, Tihomir; Reum, Alexander; Ishchuk, Valentyn; Krivoshapkina, Yana; Hofer, Manuel; Lenk, Steve; Atanasov, Ivaylo; Holz, Mathias; Rangelow, Ivo W.
2015-07-01
The routine "on demand" fabrication of features smaller than 10 nm opens up new possibilities for the realization of many devices. Driven by the thermally actuated piezoresistive cantilever technology, we have developed a prototype of a scanning probe lithography (SPL) platform which is able to image, inspect, align, and pattern features down to the single digit nanoregime. Here, we present examples of practical applications of the previously published electric-field based current-controlled scanning probe lithography. In particular, individual patterning tests are carried out on calixarene by using our developed table-top SPL system. We have demonstrated the application of a step-and-repeat SPL method including optical as well as atomic force microscopy-based navigation and alignment. The closed-loop lithography scheme was applied to sequentially write positive and negative tone features. Due to the integrated unique combination of read-write cycling, each single feature is aligned separately with the highest precision and inspected after patterning. This routine was applied to create a pattern step by step. Finally, we have demonstrated the patterning over larger areas, over existing topography, and the practical applicability of the SPL processes for lithography down to 13-nm pitch patterns. To enhance the throughput capability variable beam diameter electric field, current-controlled SPL is briefly discussed.
Zhang, Yaohong; Wu, Guohua; Ding, Chao; Liu, Feng; Yao, Yingfang; Zhou, Yong; Wu, Congping; Nakazawa, Naoki; Huang, Qingxun; Toyoda, Taro; Wang, Ruixiang; Hayase, Shuzi; Zou, Zhigang; Shen, Qing
2018-06-18
Lead selenide (PbSe) colloidal quantum dots (CQDs) are considered a strong candidate for high-efficiency colloidal quantum dot solar cells (CQDSCs) due to their efficient multiple exciton generation. However, currently even the best PbSe CQDSCs display an open-circuit voltage (Voc) of only about 0.530 V. Here, we introduce a solution-phase ligand exchange method to prepare PbI2-capped PbSe (PbSe-PbI2) CQD inks, and for the first time the absorber layer of PbSe CQDSCs was deposited in one step using these PbSe-PbI2 CQD inks. The one-step-deposited PbSe CQD absorber layer exhibits a fast charge transfer rate, reduced energy funneling, and low trap-assisted recombination. The champion large-area (active area 0.35 cm²) PbSe CQDSCs fabricated with one-step PbSe CQDs achieve a power conversion efficiency (PCE) of 6.0% and a Voc of 0.616 V, which is the highest Voc among PbSe CQDSCs reported to date.
A Systematic Method for Reviewing and Analyzing Health Information on Consumer-Oriented Websites.
Rew, Lynn; Saenz, Ashley; Walker, Lorraine O
2018-05-29
A discussion of a proposed method for analyzing the quality of consumer-oriented websites that provide health-related information. The quality of health information available to consumers online varies widely. In an effort to improve the quality of online information, experts have undertaken systematic reviews on selected health topics; however, no standardized comprehensive methodology currently exists for such reviews. An eight-step method is recommended, embracing the following steps: (1) select topic; (2) determine the purpose of the analysis; (3) select search terms and engines; (4) develop and apply website inclusion and exclusion criteria; (5) develop processes and tools to manage search results; (6) specify measures of quality; (7) compute readability; (8) evaluate websites. Each of these steps is illustrated in relation to the health topic of gynecomastia, a physical and mental health challenge for many adolescent males and young men. Although most extant analyses of consumer-oriented websites have focused on disease conditions and their treatment, website-analysis methodology would encourage analyses that fall into the nursing care domain. The method outlined in this paper is intended to provide nurses and others who work with specific patient populations with the tools needed for website analytic studies. Such studies provide a foundation for making recommendations about quality websites, as well as identifying gaps in online information for health consumers. This article is protected by copyright. All rights reserved.
Civil & Military Operations: Evolutionary Prep Steps to Pass Smart Power Current Limitations
2011-06-01
and outcomes – identifying the best time, place, and method for action – reduced ambiguity for action application, reduced side effects – find the... improvements to arrive at increased accuracy, precision, and reduction of unintended effects. The examples of these streams will demonstrate the... DIME – Diplomatic, Intelligence, Military, and Economic; EBO – Effects Based Operations. 16th International Command and Control Research and
Cryo-balloon catheter localization in fluoroscopic images
NASA Astrophysics Data System (ADS)
Kurzendorfer, Tanja; Brost, Alexander; Jakob, Carolin; Mewes, Philip W.; Bourier, Felix; Koch, Martin; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert
2013-03-01
Minimally invasive catheter ablation has become the preferred treatment option for atrial fibrillation. Although the standard ablation procedure involves ablation points set by radio-frequency catheters, cryo-balloon catheters have been reported to be even more advantageous in certain cases. As electro-anatomical mapping systems do not support cryo-balloon ablation procedures, X-ray guidance is needed. However, current methods to provide support for cryo-balloon catheters in fluoroscopically guided ablation procedures rely heavily on manual user interaction. To improve this, we propose a first method for automatic cryo-balloon catheter localization in fluoroscopic images based on a blob detection algorithm. Our method is evaluated on 24 clinical images from 17 patients. The method successfully detected the cryo-balloon in 22 out of 24 images, yielding a success rate of 91.6%. Successful localization achieved an accuracy of 1.00 mm +/- 0.44 mm. Even though our method currently fails in 8.4% of the images available, it still offers a significant improvement over manual methods. Furthermore, detecting a landmark point along the cryo-balloon catheter can be a very important step for additional post-processing operations.
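The paper's detector itself is not spelled out in the abstract, but a generic Laplacian-of-Gaussian blob search of the kind it names can be sketched with scikit-image (the synthetic frame and all parameter values below are placeholders):

```python
import numpy as np
from skimage.feature import blob_log

# Synthetic stand-in for a preprocessed fluoroscopic frame containing one
# balloon-like bright blob (a real frame would be loaded from DICOM data).
yy, xx = np.mgrid[:128, :128]
frame = np.exp(-((yy - 60) ** 2 + (xx - 70) ** 2) / (2 * 8.0 ** 2))

# Detect bright blobs across a range of scales
blobs = blob_log(frame, min_sigma=4, max_sigma=16, num_sigma=10, threshold=0.05)
for y, x, s in blobs:
    print(f"candidate at ({x:.0f}, {y:.0f}), radius ~ {s * np.sqrt(2):.1f} px")
```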
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current-source time-course estimates in real time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess the validity of the rtSE technique. The rtSE method allowed accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps: data acquisition, head position estimation, source localization, and real-time source estimation. This work explains the technical details and validates each of these steps.
Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.
Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun
2018-05-08
Direct position determination (DPD) is currently a hot topic in wireless localization research, as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve DPD accuracy, this paper introduces a coprime array into the position model of multiple non-circular sources with a moving array. To maximize the advantages of the coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but also improves the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.
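The degrees-of-freedom argument can be made concrete by building a coprime geometry and counting the lags of its difference coarray (M and N below are arbitrary; this is the textbook construction, not necessarily the paper's exact array):

```python
import numpy as np

M, N = 3, 5                                # a coprime pair
sub1 = np.arange(N) * M                    # N sensors at spacing M (units of d)
sub2 = np.arange(2 * M) * N                # 2M sensors at spacing N
positions = np.union1d(sub1, sub2)         # the two subarrays share sensor 0

lags = np.unique((positions[:, None] - positions[None, :]).ravel())
print(len(positions), "physical sensors ->",
      int((lags >= 0).sum()), "non-negative coarray lags")
```

Vectorizing the covariance matrix gives access to these virtual lags, which is why spatial smoothing over the coarray recovers more sources than the physical sensor count alone would allow.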
A Versatile Microfluidic Device for Automating Synthetic Biology.
Shih, Steve C C; Goyal, Garima; Kim, Peter W; Koutsoubelis, Nicolas; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Singh, Anup K
2015-10-16
New microbes are being engineered that contain the genetic circuitry, metabolic pathways, and other cellular functions required for a wide range of applications such as producing biofuels, biobased chemicals, and pharmaceuticals. Although currently available tools are useful in improving the synthetic biology process, further improvements in physical automation would help to lower the barrier of entry into this field. We present an innovative microfluidic platform for assembling DNA fragments with 10× lower volumes (compared to those of current microfluidic platforms) and with integrated region-specific temperature control and on-chip transformation. Integration of these steps minimizes the loss of reagents and products compared to conventional methods, which require multiple pipetting steps. For assembling DNA fragments, we implemented three commonly used DNA assembly protocols on our microfluidic device: Golden Gate assembly, Gibson assembly, and yeast assembly (i.e., TAR cloning, DNA Assembler). We demonstrate the utility of these methods by assembling two combinatorial libraries of 16 plasmids each. Each DNA plasmid is transformed into Escherichia coli or Saccharomyces cerevisiae using on-chip electroporation and further sequenced to verify the assembly. We anticipate that this platform will enable new research that can integrate this automated microfluidic platform to generate large combinatorial libraries of plasmids and will help to expedite the overall synthetic biology process.
Turner, Andrew D; Boundy, Michael J; Rapkova, Monika Dhanji
2017-09-01
In recent years, evidence has grown for the presence of tetrodotoxin (TTX) in bivalve mollusks, leading to the potential for consumers of contaminated products to be affected by Tetrodotoxin Shellfish Poisoning (TSP). A single-laboratory validation was conducted for the hydrophilic interaction LC (HILIC) tandem MS (MS/MS) analysis of TTX in common mussels and Pacific oysters, the bivalve species that have been found to contain TTXs in the United Kingdom in recent years. The method consists of a single-step dispersive extraction in 1% acetic acid, followed by a carbon SPE cleanup step before dilution and instrumental analysis. The full method was developed as a rapid tool for the quantitation of TTX, as well as the associated analogs 4-epi-TTX; 5,6,11-trideoxy TTX; 11-nor TTX-6-ol; 5-deoxy TTX; and 4,9-anhydro TTX. The method can also be run to acquire TTX together with paralytic shellfish toxins. Results demonstrated acceptable method performance characteristics for specificity, linearity, recovery, ruggedness, repeatability, matrix variability, and within-laboratory reproducibility for the analysis of TTX. The LOD and LOQ were fit for purpose in comparison to the current action limit for TTX enforced in The Netherlands. In addition, aspects of method performance (LOD, LOQ, and within-laboratory reproducibility) were found to be satisfactory for three other TTX analogs (11-nor TTX-6-ol, 5-deoxy TTX, and 4,9-anhydro TTX). The method was found to be practical and suitable for use in regulatory testing, providing rapid turnaround of sample analysis. Plans currently underway for a full collaborative study to validate a HILIC-MS/MS method for paralytic shellfish poisoning toxins will be extended to include TTX, in order to generate international acceptance, ultimately for use as an alternative official control testing method should regulatory controls be adopted.
Strain gage selection in loads equations using a genetic algorithm
NASA Technical Reports Server (NTRS)
1994-01-01
Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented are a comparison of the genetic algorithm performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
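A minimal sketch of the idea, with synthetic calibration data standing in for a real loads test (the gage count, load cases, fitness definition, and GA settings are all illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: 40 point-load cases on 20 gages; the "true"
# loads depend on only three gages plus noise (all numbers illustrative).
n_gages, n_cases = 20, 40
R = rng.normal(size=(n_cases, n_gages))          # gage responses per case
w_true = np.zeros(n_gages); w_true[[2, 7, 11]] = [1.5, -0.8, 2.0]
loads = R @ w_true + 0.01 * rng.normal(size=n_cases)

def rms_residual(gages):
    """Least-squares loads-equation fit error using the chosen gage subset."""
    X = R[:, gages]
    coef, *_ = np.linalg.lstsq(X, loads, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - loads) ** 2)))

# Tiny genetic algorithm over fixed-size gage subsets
k, pop_size = 3, 30
pop = [rng.choice(n_gages, size=k, replace=False) for _ in range(pop_size)]
for _ in range(50):
    pop.sort(key=rms_residual)                   # keep the fittest half
    parents = pop[: pop_size // 2]
    children = []
    for p in parents:                            # mutate one gage per child
        child = p.copy()
        child[rng.integers(k)] = rng.integers(n_gages)
        if len(set(child)) < k:                  # repair duplicate gages
            child = rng.choice(n_gages, size=k, replace=False)
        children.append(child)
    pop = parents + children

best = min(pop, key=rms_residual)
print("selected gages:", sorted(int(g) for g in best), "RMS:", rms_residual(best))
```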
NASA Astrophysics Data System (ADS)
Liu, Lang; Li, Han-Yu; Yu, Yao; Liu, Lin; Wu, Yue
2018-02-01
The fabrication of an in-plane micro-supercapacitor (MSC) with a current collector usually requires patterning the current collector first and then patterning the active material with the assistance of a photoresist and mask. However, this two-step patterning process is complicated, and the photoresist used is harmful to the properties of the nanomaterials. Here, we demonstrate a one-step, mask-free strategy to pattern the current collector and the active material at the same time, for the fabrication of an all-solid-state flexible in-plane MSC. Silver nanowires (AgNWs) are used as the current collector. An atmospheric-pressure pulsed cold micro-plasma-jet is used to realize the one-step, mask-free production of interdigitated multi-walled carbon nanotube (MWCNT)/AgNW electrodes. Remarkably, the fabricated MWCNT/AgNW-based MSC shows good flexibility and excellent rate capability. Moreover, properties including the cyclic stability, equivalent series resistance, relaxation time and energy/power densities of the MWCNT/AgNW-based MSC are significantly enhanced by the presence of the AgNW current collector.
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
...wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and... ABBREVIATIONS: AEM – Atlantic, Europe, and the Mediterranean; AOAs – Analysis of Alternatives; DAG – Defense Acquisition Guidebook; DAU – Defense Acquisition University
NASA Astrophysics Data System (ADS)
Bahtiar, A.; Rahmanita, S.; Inayatie, Y. D.
2017-05-01
The morphology of the perovskite film is of key importance for achieving high-performance perovskite solar cells. Perovskite films are commonly prepared by a two-step spin-coating method. However, pin-holes are frequently formed in perovskite films due to incomplete conversion of lead iodide (PbI2) into perovskite CH3NH3PbI3. Pin-holes in the perovskite film cause large hysteresis in the current-voltage curve of solar cells due to a large series resistance between the perovskite layer and the hole-transport material. Moreover, the crystal structure and grain size of the perovskite crystals are other important parameters for achieving high-performance solar cells, and they are significantly affected by the preparation of the perovskite film. We studied the effect of preparing the perovskite film with controlled spin-coating parameters on the crystal structure and morphological properties of the film. We used the two-step spin-coating method with varied spinning speed, spinning time and temperature of the spin-coating process to control the growth of the perovskite crystals, aiming to produce high-quality perovskite crystals that are pin-hole free and have a large grain size. All experiments were performed in air at high humidity (larger than 80%). The best crystal structure and pin-hole-free films with large crystal grains were obtained from films prepared at room temperature with a spinning speed of 1000 rpm for 20 seconds and annealed at 100°C for 300 seconds.
NASA Astrophysics Data System (ADS)
Kuang, Yubin; Stork, David G.; Kahl, Fredrik
2011-03-01
Underdrawings and pentimenti, typically revealed through x-ray imaging and infrared reflectography, comprise important evidence about the intermediate states of an artwork and thus the working methods of its creator. To this end, Shahram, Stork and Donoho introduced the De-pict algorithm, which recovers layers of brush strokes in paintings with open brush work where several layers are partially visible, such as in van Gogh's Self portrait with a grey felt hat. While that preliminary work served as a proof of concept that computer image-analytic methods could recover some occluded brush strokes, the work needed further refinement before it could be a tool for art scholars. Our current work takes several steps to improve that algorithm. Specifically, we refine the inpainting step through the inclusion of curvature-based constraints, in which a mathematical curvature penalty biases the reconstruction toward matching the artist's smooth hand motion. We refine and test our methods using "ground truth" image data: passages of four layers of brush strokes in which the intermediate layers were recorded photographically. At each successive top layer (currently identified by the user), we used k-means clustering combined with graph cuts to obtain chromatically and spatially coherent segmentation of brush strokes. We then reconstructed strokes at the deeper layer with our new curvature-based inpainting algorithm based on chromatic level lines. Our methods are clearly superior to previous versions of the De-pict algorithm on van Gogh's works, giving smoother, more natural strokes that more closely match the shapes of unoccluded strokes. Our improved method might also be applied to the classic drip paintings of Jackson Pollock, where the drip work is more open and the physics of splashing paint ensures that the curvature is more uniform than in the brush strokes of van Gogh.
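The chromatic clustering step can be sketched as follows: plain k-means on color plus weakly weighted pixel coordinates. The graph-cut refinement and the actual painting data are omitted, and the image here is a random placeholder:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder RGB detail of open brush work; a real run would load a
# high-resolution photograph of the painting instead.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))

h, w, _ = img.shape
yy, xx = np.mgrid[:h, :w]
# chromatic features plus weakly weighted coordinates for spatial coherence
feats = np.column_stack([img.reshape(-1, 3),
                         0.002 * yy.ravel(), 0.002 * xx.ravel()])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
segmentation = labels.reshape(h, w)      # one stroke-cluster label per pixel
print(np.bincount(labels))
```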
Urate Oxidase Purification by Salting-in Crystallization: Towards an Alternative to Chromatography
Giffard, Marion; Ferté, Natalie; Ragot, François; El Hajji, Mohamed; Castro, Bertrand; Bonneté, Françoise
2011-01-01
Background Rasburicase (Fasturtec® or Elitek®, Sanofi-Aventis), the recombinant form of urate oxidase from Aspergillus flavus, is a therapeutic enzyme used to prevent or decrease the high levels of uric acid in blood that can occur as a result of chemotherapy. It is produced by Sanofi-Aventis and currently purified via several standard steps of chromatography. This work explores the feasibility of replacing one or more chromatography steps in the downstream process by a crystallization step. It compares the efficacy of two crystallization techniques that have proven successful on pure urate oxidase, testing them on impure urate oxidase solutions. Methodology/Principal Findings Here we investigate the possibility of purifying urate oxidase directly by crystallization from the fermentation broth. Based on attractive interaction potentials which are known to drive urate oxidase crystallization, two crystallization routes are compared: a) by increased polymer concentration, which induces a depletion attraction and b) by decreased salt concentration, which induces attractive interactions via a salting-in effect. We observe that adding polymer, a very efficient way to crystallize pure urate oxidase through the depletion effect, is not an efficient way to grow crystals from impure solution. On the other hand, we show that dialysis, which decreases salt concentration through its strong salting-in effect, makes purification of urate oxidase from the fermentation broth possible. Conclusions The aim of this study is to compare purification efficacy of two crystallization methods. Our findings show that crystallization of urate oxidase from the fermentation broth provides purity comparable to what can be achieved with one chromatography step. This suggests that, in the case of urate oxidase, crystallization could be implemented not only for polishing or concentration during the last steps of purification, but also as an initial capture step, with minimal changes to the current process. PMID:21589929
Developments in the formulation and delivery of spray dried vaccines
Kanojia, Gaurav; Have, Rimko ten; Soema, Peter C.; Frijlink, Henderik; Amorij, Jean-Pierre; Kersten, Gideon
2017-01-01
Spray drying is a promising method for the stabilization of vaccines, which are usually formulated as liquids. Vaccine stability is typically improved by spray drying in the presence of a range of excipients. Unlike freeze drying, there is no freezing step involved, so the damage related to this step is avoided. The advantage of spray drying lies in the ability to engineer particles to desired requirements, which can be used in various vaccine delivery methods and routes. Although several spray-dried vaccines have shown encouraging preclinical results, the number of vaccines that have been tested in clinical trials is limited, indicating a relatively new area of vaccine stabilization and delivery. This article reviews the current status of spray-dried vaccine formulations and delivery methods. In particular, it discusses the impact of process stresses on vaccine integrity, the application of excipients in spray drying of vaccines, process and formulation optimization strategies based on Design of Experiments approaches, as well as opportunities for future application of spray-dried vaccine powders for vaccine delivery. PMID:28925794
One-step generation of multipartite entanglement among nitrogen-vacancy center ensembles
Song, Wan-lu; Yin, Zhang-qi; Yang, Wan-li; Zhu, Xiao-bo; Zhou, Fei; Feng, Mang
2015-01-01
We describe a one-step, deterministic and scalable scheme for creating macroscopic arbitrary entangled coherent states (ECSs) of separate nitrogen-vacancy center ensembles (NVEs) that couple to a superconducting flux qubit. We discuss how to generate the entangled states between the flux qubit and two NVEs by the resonant driving. Then the ECSs of the NVEs can be obtained by projecting the flux qubit, and the entanglement detection can be realized by transferring the quantum state from the NVEs to the flux qubit. Our numerical simulation shows that even under current experimental parameters the concurrence of the ECSs can approach unity. We emphasize that this method is straightforwardly extendable to the case of many NVEs. PMID:25583623
Hattotuwagama, Channa K; Doytchinova, Irini A; Flower, Darren R
2007-01-01
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC) binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method, a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, and an iterative self-consistent (ISC) PLS-based additive method. The first two can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets; the ISC method is a recently developed extension of the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d), and I-E(k). In this chapter we give a step-by-step guide to the prediction methods; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, are freely available online at http://www.jenner.ac.uk/MHCPred.
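In the spirit of the additive PLS method, here is a toy sketch regressing binding affinity on a position-wise one-hot encoding of 9-mer peptides. The peptides and affinities are random placeholders, not AntiJen data, and the encoding omits the interaction terms of the full additive method:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(pep):
    """One indicator per (position, amino acid): the additive-model features."""
    x = np.zeros((len(pep), len(AA)))
    for i, aa in enumerate(pep):
        x[i, AA.index(aa)] = 1.0
    return x.ravel()

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(60)]
pic50 = rng.normal(6.0, 1.0, size=60)              # placeholder affinities

X = np.array([encode(p) for p in peptides])
model = PLSRegression(n_components=5).fit(X, pic50)
print(model.predict(X[:3]))                        # predicted pIC50 values
```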
NASA Astrophysics Data System (ADS)
Wood, Michael J.; Aristizabal, Felipe; Coady, Matthew; Nielson, Kent; Ragogna, Paul J.; Kietzig, Anne-Marie
2018-02-01
The production of millimetric liquid droplets has importance in a wide range of applications both in the laboratory and industrially. As such, much effort has been put forth to devise methods to generate these droplets on command in a manner which results in high diameter accuracy and precision, well-defined trajectories followed by successive droplets and low oscillations in droplet shape throughout their descents. None of the currently employed methods of millimetric droplet generation described in the literature adequately addresses all of these desired droplet characteristics. The reported methods invariably involve the cohesive separation of the desired volume of liquid from the bulk supply in the same step that separates the single droplet from the solid generator. We have devised a droplet generation device which separates the desired volume of liquid within a tee-apparatus in a step prior to the generation of the droplet which has yielded both high accuracy and precision of the diameters of the final droplets produced. Further, we have engineered a generating tip with extreme antiwetting properties which has resulted in reduced adhesion forces between the liquid droplet and the solid tip. This has yielded the ability to produce droplets of low mass without necessitating different diameter generating tips or the addition of surfactants to the liquid, well-defined droplet trajectories, and low oscillations in droplet volume. The trajectories and oscillations of the droplets produced have been assessed and presented quantitatively in a manner that has been lacking in the current literature.
Automatic document classification of biological literature
Chen, David; Müller, Hans-Michael; Sternberg, Paul W
2006-01-01
Background Document classification is a widespread problem with many applications, from organizing search engine snippets to spam filtering. We previously described Textpresso, a text-mining system for biological literature, which marks up full text according to a shallow ontology that includes terms of biological interest. This project investigates document classification in the context of biological literature, making use of the Textpresso markup of a corpus of Caenorhabditis elegans literature. Results We present a two-step text categorization algorithm to classify a corpus of C. elegans papers. Our classification method first uses a support vector machine-trained classifier, followed by a novel, phrase-based clustering algorithm. This clustering step autonomously creates cluster labels that are descriptive and understandable by humans. This clustering engine performed better on a standard test set (Reuters 21578) than previously published results (F-value of 0.55 vs. 0.49), while producing cluster descriptions that appear more useful. A web interface allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. Conclusion We have demonstrated a simple method to classify biological documents that embodies an improvement over current methods. While the classification results are currently optimized for Caenorhabditis elegans papers by human-created rules, the classification engine can be adapted to different types of documents, as demonstrated by the web interface. PMID:16893465
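A compact sketch of the same two-step shape, with scikit-learn stand-ins: a linear SVM, then k-means in place of the paper's phrase-based clustering. The six-document corpus and labels are toy placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans

docs = ["rnai knockdown of let-7 in c. elegans", "spam offer click here",
        "dauer formation and insulin signalling", "cheap pills online",
        "vulval development egf pathway", "win a free prize now"]
labels = [1, 0, 1, 0, 1, 0]                  # 1 = biological paper

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
svm = LinearSVC().fit(X, labels)             # step 1: classification
keep = svm.predict(X) == 1                   # documents accepted by the SVM
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[keep])
print(clusters)                              # step 2: grouping into topics
```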
Chagas disease diagnostic applications: present knowledge and future steps
Balouz, Virginia; Agüero, Fernán; Buscaglia, Carlos A.
2017-01-01
Chagas disease, caused by the protozoan Trypanosoma cruzi, is a life-long and debilitating illness of major significance throughout Latin America, and an emergent threat to global public health. Being a neglected disease, the vast majority of Chagasic patients have limited access to proper diagnosis and treatment, and there is only marginal investment in R&D for drug and vaccine development. In this context, the identification of novel biomarkers able to transcend the current limits of diagnostic methods surfaces as a main priority in applied Chagas disease research. The expectation is that these novel biomarkers will provide reliable, reproducible and accurate results irrespective of the genetic background, infecting parasite strain, stage of disease, and clinical features of Chagasic populations. In addition, they should be able to address other still unmet diagnostic needs, including early detection of congenital T. cruzi transmission, rapid assessment of treatment efficacy or failure, indication/prediction of disease progression and direct parasite typification in clinical samples. The lack of access of poor and neglected populations to essential diagnostics also stresses the necessity of developing new methods operational in point-of-care (PoC) settings. In summary, emergent diagnostic tests integrating these novel and tailored tools should have a significant impact on the effectiveness of current intervention schemes and on the clinical management of Chagasic patients. In this chapter, we discuss the present knowledge and possible future steps in Chagas disease diagnostic applications, as well as the opportunity provided by recent advances in high-throughput methods for biomarker discovery. PMID:28325368
Blaxter, T J; Carlen, P L; Niesen, C
1989-01-01
1. Rat dentate granule neurones in hippocampal slices were voltage-clamped at 21-23 degrees C using CsCl-filled microelectrodes. The perfusate contained TTX and K+ channel blockers to pharmacologically isolate inward Ca2+ currents. 2. From hyperpolarized holding potentials of -65 to -85 mV, depolarizing test potentials to between -50 and -40 mV elicited a transient (100-200 ms) low-threshold (TLT) current, which was also elicited from more depolarized holding potentials following hyperpolarizing voltage steps of -40 mV or greater. 3. Larger depolarizing steps from a hyperpolarized holding potential triggered a large (2-6 nA), transient high-threshold (THT) inward current, rapidly peaking and decaying over 500 ms, followed by a sustained inward current component. 4. At depolarized holding potentials (-50 to -20 mV), the THT current was apparently inactivated and a sustained high-threshold (SHT) inward current was evident during depolarizing voltage steps of 10 mV or more. 5. From hyperpolarized holding potentials with depolarizing voltage steps of 10-30 mV, most neurones demonstrated a small-amplitude, sustained low-threshold (SLT) inward current with similar characteristics to the SHT current. 6. Zero-Ca2+ perfusate or high concentrations of Ca2+ channel blockers (Cd2+, Mn2+ or Ni2+) diminished or abolished all inward currents. 7. Repetitive voltage-step activation of each current at 0.5 Hz reduced the large THT current to less than 25% of an unconditioned control current, reduced the SHT current by 50%, but had little effect on the TLT current. 8. A low concentration of Cd2+ (50 microM) blocked the THT and SHT currents with little effect on the TLT current. Nimodipine (1 microM) attenuated the SHT current. Ni2+ (100 microM) selectively attenuated the TLT current. 9. In low-Ca2+ perfusate, high concentrations of Ca2+ (10-15 mM), focally applied to different parts of the neurone, increased the THT current when applied to the dendrites, the SHT current when applied to the soma, and the TLT current at all locations. Conversely, in regular perfusate, Cd2+ (1-5 mM) focally applied to the dendrites decreased the THT current, and somatic applications decreased the SHT current. The TLT current was diminished regardless of the site of Cd2+ application. 10. These results suggest the existence of three different Ca2+ currents in dentate granule cells, separable by their activation and inactivation characteristics, pharmacology and site of initiation. PMID:2557433
A sub-target approach to the kinodynamic motion control of a wheeled mobile robot
NASA Astrophysics Data System (ADS)
Motonaka, Kimiko; Watanabe, Keigo; Maeyama, Shoichi
2018-02-01
A mobile robot with two independently driven wheels is popular, but it is difficult to stabilize with a continuous, constant-gain controller because of its nonholonomic property. It is guaranteed that a nonholonomic controlled object can always be converged to an arbitrary point using a switching control method or a quasi-continuous control method based on an invariant manifold in a chained form. Building on this, the authors previously proposed a kinodynamic controller that converges the states of such a two-wheeled mobile robot to an arbitrary target position while avoiding obstacles, by combining the invariant-manifold control with a harmonic potential field (HPF). On the other hand, previous research confirmed that there are cases in which the robot cannot avoid an obstacle because there is not enough space to converge the current state to the target state. In this paper, we propose a method that divides the final target position into several sub-target positions and moves the robot step by step, and we confirm by simulation that the robot can converge to the target position while avoiding obstacles using the proposed method.
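A minimal sketch of the sub-target idea itself, with the invariant-manifold/HPF controller abstracted into a hypothetical step_towards update (the gain, tolerance, and goal coordinates are arbitrary):

```python
import numpy as np

def step_towards(pose, goal, gain=0.2):
    """Placeholder for the real control law: move a fraction toward the goal."""
    return pose + gain * (goal - pose)

start, final_goal = np.array([0.0, 0.0]), np.array([4.0, 3.0])
n_sub = 4
sub_goals = [start + (i + 1) / n_sub * (final_goal - start) for i in range(n_sub)]

pose = start
for goal in sub_goals:
    while np.linalg.norm(goal - pose) > 0.05:   # converge to each sub-target
        pose = step_towards(pose, goal)
print(pose)                                     # ends near the final target
```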
Revising the lower statistical limit of x-ray grating-based phase-contrast computed tomography.
Marschner, Mathias; Birnbacher, Lorenz; Willner, Marian; Chabior, Michael; Herzen, Julia; Noël, Peter B; Pfeiffer, Franz
2017-01-01
Phase-contrast x-ray computed tomography (PCCT) is currently investigated as an interesting extension of conventional CT, providing high soft-tissue contrast even when examining weakly absorbing specimens. Until now, the potential for dose reduction was thought to be limited compared to attenuation CT, since meaningful phase retrieval fails for scans with very low photon counts when using the conventional phase retrieval method via phase stepping. In this work, we examine the statistical behaviour of the reverse projection method, an alternative phase retrieval approach, and compare the results to the conventional phase retrieval technique. We investigate the noise levels in the projections as well as the image quality and quantitative accuracy of the reconstructed tomographic volumes. The results of our study show that this method performs better in a low-dose scenario than the conventional phase retrieval approach, resulting in lower noise levels, enhanced image quality and more accurate quantitative values. Overall, we demonstrate that the lower statistical limit of the phase stepping procedure as proposed by recent literature does not apply to this alternative phase retrieval technique. However, further development is necessary to overcome the experimental challenges posed by this method, which would enable mainstream or even clinical application of PCCT.
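For context, the conventional phase-stepping retrieval that the paper benchmarks against fits in a few lines: per pixel, the intensity over the grating positions traces one period of a sinusoid, and the differential phase is the argument of the first Fourier component. The sketch below is the generic textbook procedure, not the reverse projection method studied in the paper.

```python
import numpy as np

def phase_stepping_retrieval(steps):
    """Retrieve transmission, differential phase and visibility from a
    phase-stepping scan; `steps` has shape (K, H, W), one frame per grating
    position over one period."""
    f = np.fft.fft(steps, axis=0)
    transmission = np.abs(f[0]) / steps.shape[0]
    phase = np.angle(f[1])                       # differential phase contrast
    visibility = 2 * np.abs(f[1]) / np.abs(f[0])
    return transmission, phase, visibility

K, H, W = 8, 4, 4
k = np.arange(K).reshape(K, 1, 1)
frames = np.broadcast_to(100 * (1 + 0.2 * np.cos(2 * np.pi * k / K + 0.7)), (K, H, W))
print(phase_stepping_retrieval(frames)[1][0, 0])   # recovers the phase, ~0.7
```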
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10⁶ sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
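The logic of the two-step strategy can be conveyed with a self-contained toy; `toy_search` below is a deliberately naive stand-in for a real database-search engine, and all sequences and protein names are invented.

```python
def toy_search(peptides, database):
    """Toy stand-in for a database-search engine: a peptide 'matches' a
    protein whenever its sequence occurs in that protein."""
    return {(p, name) for p in peptides
            for name, seq in database.items() if p in seq}

def two_step_search(peptides, large_db, host_db):
    # Step 1: primary search against the large database.
    primary = toy_search(peptides, large_db)
    # Subset database: only the proteins hit in the primary search.
    subset = {name: large_db[name] for _, name in primary}
    # Step 2: merge the subset and host DBs, append decoys (reversed
    # sequences here), and search again; a real pipeline would now filter
    # by an FDR estimated from the decoy hits.
    merged = {**subset, **host_db}
    merged.update({f"DECOY_{n}": s[::-1] for n, s in merged.items()})
    return {h for h in toy_search(peptides, merged)
            if not h[1].startswith("DECOY_")}

db = {"prot1": "MKVLAADER", "prot2": "GGSPQRST", "prot3": "TTTTTT"}
print(two_step_search({"KVLA", "SPQR"}, db, host_db={"host1": "AAKVLAGG"}))
```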
The 5-Step Method: Principles and Practice
ERIC Educational Resources Information Center
Copello, Alex; Templeton, Lorna; Orford, Jim; Velleman, Richard
2010-01-01
This article includes a description of the 5-Step Method. First, the origins and theoretical basis of the method are briefly described. This is followed by a discussion of the general principles that guide the delivery of the method. Each step is then described in more detail, including the content and focus of each of the five steps that include:…
Green Schools Energy Project: A Step-by-Step Manual.
ERIC Educational Resources Information Center
Quigley, Gwen
This publication contains a step-by-step guide for implementing an energy-saving project in local school districts: the installation of newer, more energy-efficient "T-8" fluorescent tube lights in place of "T-12" lights. Eleven steps are explained in detail: (1) find out what kind of lights the school district currently uses;…
Supercritical Fluid Technologies to Fabricate Proliposomes.
Falconer, James R; Svirskis, Darren; Adil, Ali A; Wu, Zimei
2015-01-01
Proliposomes are stable drug carrier systems designed to form liposomes upon addition of an aqueous phase. In this review, current trends in the use of supercritical fluid (SCF) technologies to prepare proliposomes are discussed. SCF methods are used in pharmaceutical research and industry to address limitations associated with conventional methods of pro/liposome fabrication. The SCF solvent methods of proliposome preparation are eco-friendly (known as green technology) and, along with the SCF anti-solvent methods, could be advantageous over conventional methods, enabling better design of particle morphology (size and shape). The major hurdles of SCF methods include poor scalability to industrial manufacturing, which may result in variable particle characteristics. In the case of SCF anti-solvent methods, another hurdle is the reliance on organic solvents. However, the amount of solvent required is typically less than that used by the conventional methods. Another hurdle is that most of the SCF methods used have complicated manufacturing processes, although once the setup has been completed, SCF technologies offer a single-step process in the preparation of proliposomes compared to the multiple steps required by many other methods. Furthermore, there is limited research into how proliposomes will be converted into liposomes for the end-user, and how such a product can be prepared reproducibly in terms of vesicle size and drug loading. These hurdles must be overcome and, with more research, SCF methods, especially where the SCF acts as a solvent, have the potential to offer a strong alternative to the conventional methods to prepare proliposomes.
Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun
2018-01-01
Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
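As described, the method computes, for each pair of samples, the median count ratio over the taxa present in both, and takes the geometric mean of these pairwise ratios as each sample's size factor. A minimal NumPy sketch under that reading (the published implementation is an R package with additional safeguards):

```python
import numpy as np

def gmpr_size_factors(counts):
    """Geometric mean of pairwise ratios; `counts` is a taxa x samples matrix."""
    n = counts.shape[1]
    sf = np.ones(n)
    for i in range(n):
        ratios = []
        for j in range(n):
            shared = (counts[:, i] > 0) & (counts[:, j] > 0)  # taxa in both
            if i != j and shared.any():
                ratios.append(np.median(counts[shared, i] / counts[shared, j]))
        if ratios:                                   # geometric mean of the ratios
            sf[i] = np.exp(np.mean(np.log(ratios)))
    return sf

counts = np.array([[10, 20, 0], [0, 40, 8], [5, 10, 2], [0, 0, 4]])
print(gmpr_size_factors(counts))   # the deepest sample gets the largest factor
```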
An Economic Evaluation of Colorectal Cancer Screening in Primary Care Practice
Meenan, Richard T.; Anderson, Melissa L.; Chubak, Jessica; Vernon, Sally W.; Fuller, Sharon; Wang, Ching-Yun; Green, Beverly B.
2015-01-01
Introduction: Recent colorectal cancer screening studies focus on optimizing adherence. This study evaluated the cost effectiveness of interventions using electronic health records (EHRs), automated mailings, and stepped support increases to improve 2-year colorectal cancer screening adherence. Methods: Analyses were based on a parallel-design, randomized trial in which three stepped interventions (EHR-linked mailings [“automated”], automated plus telephone assistance [“assisted”], or automated and assisted plus nurse navigation to testing completion or refusal [“navigated”]) were compared to usual care. Data were from August 2008–November 2011 with analyses performed during 2012–2013. Implementation resources were micro-costed; research and registry development costs were excluded. Incremental cost-effectiveness ratios (ICERs) were based on number of participants current for screening per guidelines over 2 years. Bootstrapping examined robustness of results. Results: Intervention delivery cost per participant current for screening ranged from $21 (automated) to $27 (navigated). Inclusion of induced testing costs (e.g., screening colonoscopy) lowered expenditures for automated (ICER=−$159) and assisted (ICER=−$36) relative to usual care over 2 years. Savings arose from increased fecal occult blood testing, substituting for more expensive colonoscopies in usual care. Results were broadly consistent across demographic subgroups. More intensive interventions were consistently likely to be cost effective relative to less intensive interventions, with willingness-to-pay values of $600–$1,200 for an additional person current for screening yielding ≥80% probability of cost effectiveness. Conclusions: Two-year cost effectiveness of a stepped approach to colorectal cancer screening promotion based on EHR data is indicated, but longer-term cost effectiveness requires further study. PMID:25998922
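The headline metric is the incremental cost-effectiveness ratio. A toy calculation with invented numbers (not the study's data), where effectiveness is the share of participants current for screening:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per additional participant current for screening."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# e.g. an intervention costing $21/participant that moves screening adherence
# from 50% to 60% relative to usual care at $5/participant:
print(icer(21.0, 5.0, 0.60, 0.50))   # $160 per additional person screened
```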
Validation of a One-Step Method for Extracting Fatty Acids from Salmon, Chicken and Beef Samples.
Zhang, Zhichao; Richardson, Christine E; Hennebelle, Marie; Taha, Ameer Y
2017-10-01
Fatty acid extraction methods are time-consuming and expensive because they involve multiple steps and copious amounts of extraction solvents. In an effort to streamline the fatty acid extraction process, this study compared the standard Folch lipid extraction method to a one-step method involving a column that selectively elutes the lipid phase. The methods were tested on raw beef, salmon, and chicken. Compared to the standard Folch method, the one-step extraction process generally yielded statistically insignificant differences in chicken and salmon fatty acid concentrations, percent composition and weight percent. Initial testing showed that beef stearic, oleic and total fatty acid concentrations were significantly lower by 9-11% with the one-step method as compared to the Folch method, but retesting on a different batch of samples showed a significant 4-8% increase in several omega-3 and omega-6 fatty acid concentrations with the one-step method relative to the Folch. Overall, the findings reflect the utility of a one-step extraction method for routine and rapid monitoring of fatty acids in chicken and salmon. Inconsistencies in beef concentrations, although minor (within 11%), may be due to matrix effects. A one-step fatty acid extraction method has broad applications for rapidly and routinely monitoring fatty acids in the food supply and formulating controlled dietary interventions. © 2017 Institute of Food Technologists®.
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important applications in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, which is integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, the misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2–4.2% depending on the type of wrist activity.
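The step-detection stage can be illustrated with a generic adaptive-threshold peak detector. The sketch below (sampling rate, threshold rule and refractory period all invented; this is not the paper's exact peak and valley correction algorithm) counts peaks in the accelerometer magnitude:

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(accel_mag, fs=50.0):
    """Count walking steps in an accelerometer-magnitude signal (m/s^2)."""
    sig = accel_mag - np.mean(accel_mag)        # remove the gravity/DC offset
    thresh = 0.5 * np.std(sig)                  # adaptive height threshold
    peaks, _ = find_peaks(sig, height=thresh, distance=int(0.3 * fs))
    return len(peaks)                           # >= 0.3 s enforced between steps

t = np.arange(0, 10, 1 / 50.0)                  # synthetic 10 s walk, ~2 steps/s
mag = 9.81 + 2.0 * np.maximum(np.sin(2 * np.pi * 2.0 * t), 0) \
      + 0.2 * np.random.randn(t.size)
print(count_steps(mag))                         # ~20 steps expected
```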
Vertically aligned carbon nanotube emitter on metal foil for medical X-ray imaging.
Ryu, Je Hwang; Kim, Wan Sun; Lee, Seung Ho; Eom, Young Ju; Park, Hun Kuk; Park, Kyu Chang
2013-10-01
A simple method is proposed for growing vertically aligned carbon nanotubes on metal foil using triode direct-current plasma-enhanced chemical vapor deposition (PECVD). The carbon nanotube (CNT) electron emitter was fabricated using fewer process steps with an acid-treated metal substrate. The CNT emitter was used for X-ray generation, and an X-ray image of a mouse's joint was obtained with an anode current of 0.5 mA at an anode bias of 60 kV. The simple fabrication of well-aligned CNTs with a protection layer on metal foil, and their X-ray application, were studied.
A novel clot lysis assay for recombinant plasminogen activator.
Jamialahmadi, Oveis; Fazeli, Ahmad; Hashemi-Najafabadi, Sameereh; Fazeli, Mohammad Reza
2015-03-01
Recombinant plasminogen activator (r-PA, reteplase) is an engineered variant of alteplase. When expressed in E. coli, it appears as inclusion bodies that require refolding to recover its biological activity. An important step following refolding is to determine the activity of refolded protein. Current methods for enzymatic activity of thrombolytic drugs are costly and complex. Here a straightforward and low-cost clot lysis assay was developed. It quantitatively measures the activity of the commercial reteplase and is also capable of screening refolding conditions. As evidence for adequate accuracy and sensitivity of the current assay, r-PA activity measurements are shown to be comparable to those obtained from chromogenic substrate assay.
Applying Knowledge Discovery in Databases in Public Health Data Set: Challenges and Concerns
Volrathongchia, Kanittha
2003-01-01
In attempting to apply Knowledge Discovery in Databases (KDD) to generate a predictive model from a health care dataset that is currently available to the public, the first step is to pre-process the data to overcome the challenges of missing data, redundant observations, and records containing inaccurate data. This study will demonstrate how to use simple pre-processing methods to improve the quality of input data. PMID:14728545
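A minimal pandas sketch of that pre-processing step, on an invented toy dataset (all column names and cut-offs are hypothetical):

```python
import numpy as np
import pandas as pd

# Tiny stand-in for a public-use health dataset.
df = pd.DataFrame({
    "id":  [1, 1, 2, 3, 4],
    "age": [34, 34, 210, 45, 51],           # 210 is an inaccurate record
    "bmi": [22.1, 22.1, 27.4, np.nan, 31.0],
})

df = df.drop_duplicates()                   # redundant observations
df = df[df["age"].between(0, 120)]          # drop records with inaccurate data
df["bmi"] = df["bmi"].fillna(df["bmi"].median())   # impute missing data
print(df)
```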
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
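The incomplete beta function is equally accessible outside MATLAB/IDL, e.g. as scipy.special.betainc. The sketch below shows one common way to phrase a ratio-of-means change test for multilook SAR intensities (window means of gamma-distributed speckle give an F-distributed ratio, whose CDF is a regularized incomplete beta function); it illustrates the mathematical ingredient, not the paper's exact CFAR formula.

```python
import numpy as np
from scipy.special import betainc

def change_pvalue(win1, win2, looks=1):
    """Two-sided p-value for a change in mean intensity between two
    co-registered windows of L-look, gamma-distributed SAR speckle."""
    d1, d2 = 2 * looks * win1.size, 2 * looks * win2.size   # F-test dof
    f = win1.mean() / win2.mean()
    x = d1 * f / (d1 * f + d2)
    cdf = betainc(d1 / 2.0, d2 / 2.0, x)     # P(F <= f), incomplete beta form
    return 2.0 * min(cdf, 1.0 - cdf)

rng = np.random.default_rng(0)
a = rng.gamma(1.0, 1.0, (9, 9))              # unchanged speckle
b = rng.gamma(1.0, 3.0, (9, 9))              # 3x brighter: changed area
print(change_pvalue(a, b))                   # small p-value flags the change
```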
Immunodiagnosis of childhood malignancies.
Parham, D M; Holt, H
1999-09-01
Immunodiagnosis utilizing immunohistochemical techniques is currently the most commonly used and readily available method of ancillary diagnosis in pediatric oncopathology. The methodology comprises relatively simple steps, based on straightforward biologic concepts, and the reagents used are generally well characterized and widely used. The principle of cancer immunodiagnosis is based on the determination of neoplastic lineage using detection of proteins typical of cell differentiation pathways. The sensitivity of the methodology varies and has become greater with each new generation of tests, but technical drawbacks should be considered to avoid excessive background or nonspecific results. Automated instrumentation offers a degree of accuracy and reproducibility not easily attainable by manual methods.
Surface treatment and protection method for cadmium zinc telluride crystals
Wright, Gomez W.; James, Ralph B.; Burger, Arnold; Chinn, Douglas A.
2003-01-01
A method for treatment of the surface of a CdZnTe (CZT) crystal that provides a native dielectric coating to reduce surface leakage currents and thereby improve the resolution of instruments incorporating detectors using CZT crystals. A two-step process is disclosed: etching the surface of a CZT crystal with a solution of the conventional bromine/methanol etch treatment and, after attachment of electrical contacts, passivating the CZT crystal surface with a solution of 10 wt% NH4F and 10 wt% H2O2 in water.
A novel design solution to the fraenal notch of maxillary dentures.
White, J A P; Bond, I P; Jagger, D C
2013-09-01
This study investigates a novel design feature for the fraenal notch of maxillary dentures, using computational and experimental methods, and shows that its use could significantly increase the longevity of the prosthesis. A two-step process can be used to create the design feature with current denture base materials, but would be highly dependent on the individual skill of the dental technician. Therefore, an alternative form of manufacture, multi-material additive layer manufacture (or '3D printing'), has been proposed as a future method for the direct production of complete dentures with multi-material design features.
Electrochemical Dissolution of Tungsten Carbide in NaCl-KCl-Na2WO4 Molten Salt
NASA Astrophysics Data System (ADS)
Zhang, Liwen; Nie, Zuoren; Xi, Xiaoli; Ma, Liwen; Xiao, Xiangjun; Li, Ming
2018-02-01
Tungsten carbide was used as the anode to extract tungsten in a NaCl-KCl-Na2WO4 molten salt, and its electrochemical dissolution was investigated. Although the molten salt electrochemical method extracts tungsten from tungsten carbide in a single short process step, the dissolution efficiency and current efficiency are quite low. In order to improve the dissolution rate and current efficiency, sodium tungstate was added as the active substance. The dissolution rate, the anode current efficiency, and the cathode current efficiency were calculated for different contents of sodium tungstate addition. The anodes prior to and following the reaction, as well as the product, were analyzed through X-ray diffraction, scanning electron microscopy, and energy dispersive spectrometry. The results demonstrated that sodium tungstate could improve the dissolution rate and the current efficiency because its addition decreases the charge transfer resistance in the electrolysis system. Because the addition of sodium tungstate also removed carbon during electrolysis, pure tungsten powders of 100 nm diameter were obtained when the content of sodium tungstate was 1.0 pct.
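Current efficiency here is the ratio of the mass actually converted to the mass predicted by Faraday's law. A small sketch with purely illustrative numbers (the six-electron W(VI)/W couple is an assumption made for the example):

```python
F = 96485.0            # Faraday constant, C/mol
M_W = 183.84           # molar mass of tungsten, g/mol

def current_efficiency(mass_g, current_A, time_s, z=6, molar_mass=M_W):
    """Measured mass over the Faraday-law mass M*I*t/(z*F)."""
    theoretical_g = molar_mass * current_A * time_s / (z * F)
    return mass_g / theoretical_g

print(current_efficiency(0.9, 2.0, 3600))   # e.g. 0.9 g in 1 h at 2 A -> ~39%
```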
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Fan, Liang-Shih
2014-07-01
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method and the direct accounting of the inertia of the fluid contained within the particles. The present IB-LBM is further improved by implementing high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge in implementing high-order Runge-Kutta schemes in the LBM is that flow information such as density and velocity cannot be directly obtained at a fractional time step, since the LBM only provides the flow information at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and -0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotating spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the magnitude of the drag force when practical rotating speeds of the spheres are encountered. This finding may lead to more comprehensive studies of the effect of particle rotation on fluid-solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge-Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including the previous IB-LBMs and also methods combining the IBM with a traditional incompressible Navier-Stokes solver.
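The extrapolation trick for fractional stage times reduces, in its simplest form, to one line: with the two most recent integer-step fields known, the field at t_n + c*dt is estimated linearly. The sketch below shows only that bare idea; the paper applies it locally around particles and folds the freshly computed immersed-boundary forcing into the estimate.

```python
import numpy as np

def field_at_fraction(u_prev, u_curr, c):
    """Linear extrapolation of a flow field to the fractional time t_n + c*dt
    from the fields at the two latest integer steps (illustrative only)."""
    return u_curr + c * (u_curr - u_prev)

u_nm1, u_n = np.zeros((4, 4)), np.ones((4, 4))
print(field_at_fraction(u_nm1, u_n, 0.5))   # estimate at the RK stage t_n + dt/2
```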
Dynamic pressure sensitivity determination with Mach number method
NASA Astrophysics Data System (ADS)
Sarraf, Christophe; Damion, Jean-Pierre
2018-05-01
Measurements of pressure in fast transient conditions are often performed even if the dynamic characteristics of the transducer are not traceable to international standards. Moreover, the question of a primary standard in dynamic pressure is still open, especially for gaseous applications. The aim is to improve dynamic standards in order to respond to expressed industrial needs. In this paper, the method proposed in the EMRP IND09 ‘Dynamic’ project, which can be called the ‘ideal shock tube method’, is compared with the ‘collective standard method’ currently used in the Laboratoire de Métrologie Dynamique (LNE/ENSAM). The input is a step of pressure generated by a shock tube. The transducer is a piezoelectric pressure sensor. With the ‘ideal shock tube method’ the sensitivity of a pressure sensor is first determined dynamically. This method requires a shock tube implemented with piezoelectric shock wave detectors. The measurement of the Mach number in the tube allows an evaluation of the incident pressure amplitude of a step using a theoretical 1D model of the shock tube. Heat transfer, other real effects and shock tube imperfections are not taken into account. The amplitude of the pressure step is then used to determine the sensitivity in dynamic conditions. The second method uses a frequency bandwidth comparison to determine pressure at frequencies from quasi-static conditions, traceable to static pressure standards, to higher frequencies (up to 10 kHz). The measurand is also a step of pressure generated by a supposedly ideal shock tube or a fast-opening device. The results are provided as a transfer function with an uncertainty budget assigned to a frequency range, also deliverable frequency by frequency. The largest uncertainty in the bandwidth of comparison is used to trace the final pressure step level measured in dynamic conditions, given that this pressure is not measurable in a steady state in a shock tube. A reference sensor thereby calibrated can be used in a comparison measurement process. At high frequencies the most important component of the uncertainty in this method is due to complex real shock tube effects that are not yet modelled, or cannot be modelled, in this kind of direct method. After a brief review of both methods, of the determination of the transfer function of pressure transducers, and of the associated uncertainty budget for the dynamic calibration of a pressure transducer in gas, this paper presents a comparison of the results obtained with the ‘ideal shock tube’ and the ‘collective standard’ methods.
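The ideal 1D model mentioned above reduces, for the incident shock, to the normal-shock relation between the measured Mach number and the static pressure jump. A sketch under ideal-gas assumptions (gamma = 1.4 for air; heat transfer and tube imperfections ignored, as in the abstract):

```python
def pressure_step(mach, p1=101325.0, gamma=1.4):
    """Static pressure jump p2 - p1 (Pa) across a normal shock at `mach`."""
    p2_over_p1 = 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach**2 - 1.0)
    return p1 * (p2_over_p1 - 1.0)

print(pressure_step(1.5))   # ~1.5e5 Pa step for M = 1.5 into air at 1 atm
```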
Environmental assessment of packaging: Sense and sensibility
NASA Astrophysics Data System (ADS)
Kooijman, Jan M.
1993-09-01
The functions of packaging are derived from product requirements; thus, for insight into the environmental effects of packaging, the actual combination of product and package has to be evaluated along the production and distribution system. This extension to all related environmental aspects adds realism to the environmental analysis and provides guidance for design while preventing an overly detailed investigation of parts of the production system. This approach is contrary to current environmental studies, where packaging is always treated as an independent object, neglecting the more important environmental effects of the product that are influenced by packaging. The general analysis and quantification stages for this approach are described, and the currently available methods for the assessment of environmental effects are reviewed. To limit the workload involved in an environmental assessment, a step-by-step analysis and the use of feedback are recommended. First the dominant environmental effects of a particular product and its production and distribution are estimated. Then, on the basis of these preliminary results, the appropriate system boundaries are chosen and the need for further or more detailed environmental analysis is determined. For typical food and drink applications, the effect of different system boundaries on the outcome of environmental assessments and the advantage of the step-by-step analysis of the food supply system are shown. It appears that, depending on the consumer group, different advice for reduction of environmental effects has to be given. Furthermore, because of interrelated environmental effects of the food supply system, the continuing quest for more detailed and accurate analysis of the package components is not necessary for improved management of the environmental effects of packaging.
NASA Astrophysics Data System (ADS)
Smid, Marek; Costa, Ana; Pebesma, Edzer; Granell, Carlos; Bhattacharya, Devanjan
2016-04-01
Humankind is now predominantly urban-based, and the majority of continuing population growth will take place in urban agglomerations. Urban systems are not only major drivers of climate change, but also impact hot spots. Furthermore, climate change impacts are commonly managed at city scale. Therefore, assessing climate change impacts on urban systems is a very relevant subject of research. Climate and its impacts on all levels (local, meso and global scale), as well as the inter-scale dependencies of those processes, should be subject to detailed analysis. While global and regional projections of future climate are currently available, local-scale information is lacking. Hence, statistical downscaling methodologies represent a potentially efficient way to help close this gap. In general, methodological reviews of downscaling procedures cover the various methods according to their application (e.g. downscaling for hydrological modelling). Some of the most recent and comprehensive studies, such as the ESSEM COST Action ES1102 (VALUE), use the concepts of Perfect Prog and MOS. Other classification schemes of downscaling techniques consider three main categories: linear methods, weather classifications and weather generators. Downscaling and climate modelling represent a multidisciplinary field, where researchers from various backgrounds intersect their efforts, resulting in specific terminology that may be somewhat confusing. For instance, Polynomial Regression (also called Surface Trend Analysis) is a statistical technique; in the context of spatial interpolation procedures it is commonly classified as deterministic, while kriging approaches are classified as stochastic. Furthermore, the terms "statistical" and "stochastic" (frequently used as names of sub-classes in downscaling methodological reviews) are not always considered synonymous, even though both could be seen as identical since they refer to methods handling input modelling factors as variables with certain probability distributions. In addition, recent development is moving towards multi-step methodologies containing deterministic and stochastic components. This evolution has introduced new terms like hybrid or semi-stochastic approaches, which makes the effort of systematically classifying downscaling methods into the previously defined categories even more challenging. This work presents a review of statistical downscaling procedures that classifies the methods in two steps. In the first step, we describe several techniques that produce a single climatic surface based on observations. The methods are classified into two categories using an approximation to the broadest consensual statistical terms: linear and non-linear methods. The second step covers techniques that use simulations to generate alternative surfaces corresponding to different realizations of the same processes. Those simulations are essential because the amount of real observational data is limited, and such procedures are crucial for modelling extremes. This work emphasises the link between statistical downscaling methods and research on climate change impacts at city scale.
A step-by-step protocol for assaying protein carbonylation in biological samples.
Colombo, Graziano; Clerici, Marco; Garavaglia, Maria Elisa; Giustarini, Daniela; Rossi, Ranieri; Milzani, Aldo; Dalle-Donne, Isabella
2016-04-15
Protein carbonylation represents the most frequent and usually irreversible oxidative modification affecting proteins. This modification is chemically stable, a feature that is particularly important for the storage and detection of carbonylated proteins. Many biochemical and analytical methods have been developed during the last thirty years to assay protein carbonylation. The most successful method consists of protein carbonyl (PCO) derivatization with 2,4-dinitrophenylhydrazine (DNPH) and subsequent spectrophotometric assay. This assay allows a global quantification of PCO content due to the ability of DNPH to react with carbonyls, giving rise to an adduct that absorbs at 366 nm. Similar approaches were also developed employing chromatographic separation, in particular HPLC, with parallel detection of the absorbing adducts. Subsequently, immunological techniques, such as Western immunoblot or ELISA, have been developed, leading to an increase in the sensitivity of protein carbonylation detection. Currently, they are widely employed to evaluate changes in total protein carbonylation and, eventually, to highlight the specific proteins undergoing selective oxidation. In the last decade, many mass spectrometry (MS) approaches have been developed for the identification of carbonylated proteins and of the specific amino acid residues modified to carbonyl derivatives. Although these MS methods are much more focused and detailed, owing to their ability to identify the amino acid residues undergoing carbonylation, they still require expensive equipment and, therefore, are limited in distribution. In this protocol paper, we summarise and comment on the most widespread protocols that a standard laboratory can employ to assess protein carbonylation; in particular, we describe the different protocols step by step, adding suggestions from our bench experience. Copyright © 2015 Elsevier B.V. All rights reserved.
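The spectrophotometric readout reduces to a Beer-Lambert calculation. A sketch of the usual arithmetic, assuming the molar absorptivity of about 22,000 M⁻¹ cm⁻¹ commonly quoted for DNP-hydrazones (verify the value specified in your own protocol):

```python
def carbonyl_nmol_per_mg(a366, protein_mg_per_ml, epsilon=22000.0, path_cm=1.0):
    """Carbonyl content from the DNPH assay: c = A/(epsilon*l), then
    mol/L -> nmol/mL and division by the protein concentration."""
    pco_molar = a366 / (epsilon * path_cm)     # Beer-Lambert law
    return pco_molar * 1e6 / protein_mg_per_ml

print(carbonyl_nmol_per_mg(0.11, 1.0))        # ~5 nmol carbonyl per mg protein
```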
PREDICTIVE MEASURES OF A RESIDENT'S PERFORMANCE ON WRITTEN ORTHOPAEDIC BOARD SCORES
Dyrstad, Bradley W; Pope, David; Milbrandt, Joseph C; Beck, Ryan T; Weinhoeft, Anita L.; Idusuyi, Osaretin B
2011-01-01
Objective: Residency programs are continually attempting to predict the performance of both current and potential residents. Previous studies have supported the use of USMLE Steps 1 and 2 as predictors of Orthopaedic In-Training Examination (OITE) and eventual American Board of Orthopaedic Surgery success, while others show no significant correlation. A strong performance on OITE examinations does correlate with strong residency performance, and some believe OITE scores are good predictors of future written board success. The current study was designed to examine potential differences in resident assessment measures and their predictive value for written boards. Design/Methods: A retrospective review of resident performance data was performed for the past 10 years. Personalized information was removed by the residency coordinator. USMLE Step 1, USMLE Step 2, Orthopaedic In-Training Examination (from first to fifth years of training), and written orthopaedic specialty board scores were collected. Subsequently, the residents were separated into two groups: those scoring above the 35th percentile on written boards and those scoring below. Data were analyzed using correlation and regression analyses to compare and contrast the scores across all tests. Results: A significant difference was seen between the groups in regard to USMLE scores for both Steps 1 and 2. Also, a significant difference was found between OITE scores for both the second and fifth years. Positive correlations were found for USMLE Step 1, Step 2, OITE 2 and OITE 5 when compared to performance on written boards. One resident initially failed written boards but passed on the second attempt; this resident consistently scored in the 20th and 30th percentiles on the in-training examinations. Conclusions: USMLE Step 1 and 2 scores along with OITE scores are helpful in gauging an orthopaedic resident's performance on written boards. Lower USMLE scores along with consistently low OITE scores likely identify residents at risk of failing their written boards. Close monitoring of the annual OITE scores is recommended and may be useful to identify struggling residents. Future work involving multiple institutions is warranted and would ensure applicability of our findings to other orthopaedic residency programs. PMID:22096449
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
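The flavor of a threshold-free w-estimator can be conveyed by a short iteratively re-weighted averaging sketch. The Cauchy-type weight below is an assumption chosen for illustration, not the specific weight function derived in the paper, and CTF handling is omitted:

```python
import numpy as np

def robust_class_mean(images, iters=10, eps=1e-6):
    """Iteratively re-weighted average of aligned images: frames far from the
    current mean are down-weighted smoothly, so no outlier threshold is needed."""
    mean = images.mean(axis=0)
    for _ in range(iters):
        resid = ((images - mean) ** 2).sum(axis=(1, 2))     # per-image residual
        w = 1.0 / (1.0 + resid / (np.median(resid) + eps))  # Cauchy-type weights
        mean = (w[:, None, None] * images).sum(axis=0) / w.sum()
    return mean

stack = np.random.rand(20, 64, 64)
stack[0] += 10.0                               # one gross outlier image
print(abs(robust_class_mean(stack).mean() - 0.5) < 0.1)   # outlier suppressed
```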
Iraola, Gregorio; Hernández, Martín; Calleros, Lucía; Paolicchi, Fernando; Silveyra, Silvia; Velilla, Alejandra; Carretto, Luis; Rodríguez, Eliana; Pérez, Ruben
2012-12-01
Campylobacter (C.) fetus (epsilonproteobacteria) is an important veterinary pathogen. This species is currently divided into C. fetus subspecies (subsp.) fetus (Cff) and C. fetus subsp. venerealis (Cfv). Cfv is the causative agent of bovine genital Campylobacteriosis, an infectious disease that leads to severe reproductive problems in cattle worldwide. Cff is a more general pathogen that causes reproductive problems mainly in sheep although cattle can also be affected. Here we describe a multiplex PCR method to detect C. fetus and differentiate between subspecies in a single step. The assay was standardized using cultured strains and successfully used to analyze the abomasal liquid of aborted bovine fetuses without any pre-enrichment step. Results of our assay were completely consistent with those of traditional bacteriological diagnostic methods. Furthermore, the multiplex PCR technique we developed may be easily adopted by any molecular diagnostic laboratory as a complementary tool for detecting C. fetus subspecies and obtaining epidemiological information about abortion events in cattle.
Progress developing the JAXA next generation satellite data repository (G-Portal).
NASA Astrophysics Data System (ADS)
Ikehata, Y.
2016-12-01
JAXA has been operating the "G-Portal" as a repository for searching and accessing Earth observation satellite data related to JAXA since February 2013. The G-Portal handles data from ten satellites: GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1 and JERS-1. G-Portal plans to add the future satellites GCOM-C and EarthCARE. Except for ALOS and ALOS-2, all of these data are open and free. The G-Portal supports web search, catalogue search (CSW and OpenSearch) and direct download by SFTP for data access. However, the G-Portal has some problems with performance and usability. Regarding performance, for example, the G-Portal is based on a 10 Gbps network and uses a scale-out architecture. (The conceptual design was reported at the AGU Fall Meeting 2015 (IN23D-1748).) In order to address those problems, JAXA has been developing the next-generation repository since February 2016. This paper describes usability improvements and challenges for the next-generation system, which include the following points. The current web interface uses a "step by step" design, and URLs are generated randomly; as a result, users must navigate the web pages and click many times to reach the desired satellite data. The web design will therefore be changed completely from "step by step" to a single page, and URLs will be based on REST (REpresentational State Transfer). Regarding direct download, the current method (SFTP) is hard to use because of its non-standard port assignment and key authentication, so FTP will also be supported. Additionally, the next G-Portal will improve the catalogue service. Currently, catalogue search is available only to limited users, including NASA, ESA and CEOS, due to performance and reliability issues, but this limitation will be removed. Furthermore, a catalogue search client function will be implemented to ingest other agencies' satellite catalogues, so that users will be able to search satellite data across agencies.
Characteristics of camel-gate structures with active doping channel profiles
NASA Astrophysics Data System (ADS)
Tsai, Jung-Hui; Lour, Wen-Shiung; Laih, Lih-Wen; Liu, Rong-Chau; Liu, Wen-Chau
1996-03-01
In this paper, we demonstrate the influence of the channel doping profile on the performance of camel-gate field effect transistors (CAMFETs). For comparison, single and tri-step doping channel structures with identical doping-thickness products are employed, while other parameters are kept unchanged. A theoretical analysis shows that the single doping channel FET with a lightly doped active layer has a higher barrier height and drain-source saturation current; however, the transconductance is decreased. For a tri-step doping channel structure, it is found that the output drain-source saturation current and the barrier height are enhanced. Furthermore, the relatively voltage-independent performance is improved. Two CAMFETs with single and tri-step doping channel structures have been fabricated and discussed. The devices exhibit nearly voltage-independent transconductances of 144 mS mm⁻¹ and 222 mS mm⁻¹ for the single and tri-step doping channel CAMFETs, respectively. The operating gate voltage may extend to ±1.5 V for a tri-step doping channel CAMFET. In addition, drain current densities of >750 and 405 mA mm⁻¹ are obtained for the tri-step and single doping CAMFETs, respectively. These experimental results are consistent with the theoretical analysis.
Parallel Cartesian grid refinement for 3D complex flow simulations
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2013-11-01
A second-order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized using a novel second-order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equation is also discretized with second-order accuracy, and the high-performance Newton-Krylov method is used for integrating it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement leads to a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution real-life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
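The fractional step (projection) idea itself fits in a few lines: compute the divergence of the provisional velocity, solve a pressure-Poisson equation, and correct the velocity. Below is a textbook sketch on a uniform periodic 2D grid with a Jacobi solver, purely to illustrate the concept rather than the paper's implicit unstructured scheme:

```python
import numpy as np

def fractional_step(u, v, dt, h, iters=200):
    """One pressure-projection step with periodic BCs: solve lap(p) = div/dt
    by Jacobi iteration, then subtract dt*grad(p) from the velocity."""
    div = (np.roll(u, -1, 1) - np.roll(u, 1, 1)
           + np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * h)
    p = np.zeros_like(u)
    for _ in range(iters):              # Jacobi: p = (sum of neighbours - h^2*rhs)/4
        p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                    + np.roll(p, 1, 1) + np.roll(p, -1, 1) - h * h * div / dt)
    u = u - dt * (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2 * h)
    v = v - dt * (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2 * h)
    return u, v, p

u, v = np.random.rand(32, 32), np.random.rand(32, 32)
u2, v2, p = fractional_step(u, v, dt=0.1, h=1.0 / 32)   # u2, v2 near divergence-free
```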
Methods for slow axis beam quality improvement of high power broad area diode lasers
NASA Astrophysics Data System (ADS)
An, Haiyan; Xiong, Yihan; Jiang, Ching-Long J.; Schmidt, Berthold; Treusch, Georg
2014-03-01
For high-brightness direct diode laser systems, it is of fundamental importance to improve the slow axis beam quality of the incorporated laser diodes regardless of which beam combining technology is applied. To further advance our products in terms of increased brightness at high power levels, we must optimize the slow axis beam quality despite the far-field blooming at high current levels. The latter is caused predominantly by the built-in index step in combination with the thermal lens effect. Most of the methods for beam quality improvement reported in publications sacrifice device efficiency and reliable output power. In order to improve the beam quality as well as maintain the efficiency and reliable output power, we investigated methods of influencing local heat generation to reduce the thermal gradient across the slow axis direction, optimizing the built-in index step and discriminating against high-order modes. Based on our findings, we have combined different methods in our new device design. Subsequently, the beam parameter product (BPP) of a 10% fill factor bar improved by approximately 30% at 7 W/emitter without an efficiency penalty. This technology has enabled fiber-coupled high-brightness multi-kilowatt direct diode laser systems. In this paper, we elaborate on the methods used as well as the results achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocchetti, Laura; Amato, Alessia; Fonti, Viviana
Highlights:
• End-of-life LCD panels represent a source of indium.
• Several experimental conditions for indium leaching have been assessed.
• Indium is completely extracted with 2 M sulfuric acid at 80 °C for 10 min.
• Cross-current leaching improves indium extraction and operating costs are lowered.
• Benefits to the environment come from reduction of CO2 emissions and reagent use.
Abstract: Indium is a critical element mainly produced as a by-product of zinc mining, and it is largely used in the production process of liquid crystal display (LCD) panels. End-of-life LCDs represent a possible source of indium in the field of urban mining. In the present paper, we apply, for the first time, cross-current leaching to mobilize indium from end-of-life LCD panels. We carried out a series of treatments to leach indium. The best leaching conditions for indium were 2 M sulfuric acid at 80 °C for 10 min, which allowed us to completely mobilize indium. Taking into account the low content of indium in end-of-life LCDs, of about 100 ppm, a single step of leaching is not cost-effective. We tested 6 steps of cross-current leaching: in the first step indium leaching was complete, whereas in the second step it was in the range of 85-90%, and with 6 steps it was about 50-55%. Indium concentration in the leachate was about 35 mg/L after the first step of leaching, almost 2-fold at the second step and about 3-fold at the fifth step. Then, we hypothesized to scale up the process of cross-current leaching to 10 steps, followed by cementation with zinc to recover indium. In this simulation, the process of indium recovery was advantageous from an economic and environmental point of view. Indeed, cross-current leaching allowed us to concentrate indium, save reagents, and reduce the emission of CO2 (with 10 steps we assessed that the emission of about 90 kg CO2-eq. could be avoided) thanks to the recovery of indium. This new strategy represents a useful approach for secondary production of indium from waste LCD panels.
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
NASA Astrophysics Data System (ADS)
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commissioning and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of the calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commissioning and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as optimized parameters in the workflow. Specifically, an evaluation phantom and a corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. Using particle swarm optimization, the CBCT geometry and the BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of images reconstructed with dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commissioning and geometry calibration.
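Particle swarm optimization itself is compact enough to sketch. Below is a generic minimizer (all hyperparameters invented) of the kind that could drive the joint BB-coordinate/geometry search, demonstrated on a toy quadratic objective rather than a CBCT artifact index:

```python
import numpy as np

def pso(objective, dim, n=30, iters=100, lo=-1.0, hi=1.0, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for a vector objective."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))                # positions
    v = np.zeros((n, dim))                           # velocities
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])       # personal best values
    g = pbest[pval.argmin()]                         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

print(pso(lambda p: np.sum((p - 0.3) ** 2), dim=3))  # converges near [0.3, 0.3, 0.3]
```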
Automatic cloud coverage assessment of Formosat-2 image
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2011-11-01
The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images for the NSPO Image Processing System generally consists of two major steps. Firstly, an un-supervised K-means method is used to automatically estimate the cloud statistic of a Formosat-2 image. Secondly, the cloud coverage estimate is refined by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would clearly increase the efficiency of step 2 by providing a good prediction of the cloud statistic. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. For the pre-processing analysis, the cloud statistic is determined using un-supervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel re-examination, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and increase the efficiency of the manual examination.
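Of the listed ingredients, Otsu's method is the easiest to show end to end: it picks the gray-level threshold that maximizes between-class variance, which is how a bright cloud class can be separated from darker ground. A pure-NumPy sketch (scikit-image's threshold_otsu is an equivalent ready-made option):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 image: maximize between-class variance."""
    prob = np.bincount(gray.ravel(), minlength=256) / gray.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2             # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

img = np.concatenate([np.full(500, 60), np.full(500, 200)]).astype(np.uint8)
print(otsu_threshold(img.reshape(25, 40)))           # splits the two gray levels
```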
NASA Technical Reports Server (NTRS)
Funaro, Gregory V.; Alexander, Reginald A.
2015-01-01
The Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center is expanding its current technology assessment methodologies. ACO is developing a framework called TAPP that uses a variety of methods, such as association mining and rule learning from data mining, structure development using a Technological Innovation System (TIS), and social network modeling to measure structural relationships. The role of ACO is to 1) produce a broad spectrum of ideas and alternatives for a variety of NASA's missions, 2) determine mission architecture feasibility and appropriateness to NASA's strategic plans, and 3) define a project in enough detail to establish an initial baseline capable of meeting mission objectives. ACO's role supports the decision-making process associated with the maturation of concepts for traveling through, living in, and understanding space. ACO performs concept studies and technology assessments to determine the degree of alignment between mission objectives and new technologies. The first step in a technology assessment is to identify the current technology maturity in terms of a technology readiness level (TRL). The second step is to determine the difficulty associated with advancing a technology from one state to the next. NASA has used TRLs since 1970 and formalized them in 1995. The DoD, ESA, the oil and gas industry, and the DoE have adopted TRLs as a means to assess technology maturity. However, "with the emergence of more complex systems and system of systems, it has been increasingly recognized that TRL assessments have limitations, especially when considering [the] integration of complex systems." When performing the second step in a technology assessment, NASA requires that an Advancement Degree of Difficulty (AD2) method be utilized. NASA has developed or used a variety of methods to perform this step: Expert Opinion or Delphi Approach, Value Engineering or Value Stream, Analytical Hierarchy Process (AHP), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and other multi-criteria decision-making methods. These methods can be labor-intensive, often contain cognitive or parochial bias, and do not consider the competing prioritization between mission architectures. Strategic Decision-Making (SDM) processes cannot be properly understood unless the context of the technology is understood. This makes assessing technological change particularly challenging due to the relationships "between incumbent technology and the incumbent (innovation) system in relation to the emerging technology and the emerging innovation system." The central idea in technology dynamics is to consider all activities that contribute to the development, diffusion, and use of innovations as system functions. Bergek defines system functions within a TIS to address what is actually happening and has a direct influence on the ultimate performance of the system and technology development. ACO uses similar metrics and is expanding them to account for the structure and context of the technology. At NASA, technology and strategy are strongly interrelated. NASA's Strategic Space Technology Investment Plan (SSTIP) prioritizes those technologies essential to the pursuit of NASA's missions and national interests. The SSTIP is strongly coupled with NASA's Technology Roadmaps to provide investment guidance during the next four years, within a twenty-year horizon.
This paper discusses the methods ACO is currently developing to better perform technology assessments while taking into consideration Strategic Alignment, Technology Forecasting, and Long Term Planning.
Investigation of direct solar-to-microwave energy conversion techniques
NASA Technical Reports Server (NTRS)
Chatterton, N. E.; Mookherji, T. K.; Wunsch, P. K.
1978-01-01
Identification of alternative methods of producing microwave energy from solar radiation, for the purpose of directing power to the Earth from space, is investigated. Specifically, methods of converting optical radiation into microwave radiation by the most direct means are investigated. Approaches based on demonstrated device functioning and basic phenomenologies are developed. No system concept was developed that is competitive with current baseline concepts. The most direct methods of conversion appear to require an initial step of producing coherent laser radiation. Other methods generally require the production of electron streams for use in solid-state or cavity-oscillator systems. Further development is suggested as worthwhile for the proposed devices and for concepts utilizing a free-electron stream for the intra-space-station power transport mechanism.
Preparation of alpha-emitting nuclides by electrodeposition
NASA Astrophysics Data System (ADS)
Lee, M. H.; Lee, C. W.
2000-06-01
A method is described for electrodepositing alpha-emitting nuclides. To determine the optimum conditions for plating plutonium, the effects of electrolyte concentration, chelating reagent, current, electrolyte pH and plating time on the electrodeposition were investigated, based on an ammonium oxalate-ammonium sulfate electrolyte containing diethylenetriaminepentaacetic acid. An optimized electrodeposition procedure for the determination of plutonium was validated by application to environmental samples. The chemical yield of the optimized electrodeposition step for environmental samples was slightly higher than that of Talvitie's method. The electrodeposition procedure developed in this study was also applied to radionuclides such as thorium, uranium and americium, for which the electrodeposition yields were slightly higher than those of the conventional method.
Unciti-Broceta, Juan D; Cano-Cortés, Victoria; Altea-Manzano, Patricia; Pernagallo, Salvatore; Díaz-Mochón, Juan J; Sánchez-Martín, Rosario M
2015-05-15
Engineered nanoparticles (eNPs) for biological and biomedical applications are produced from functionalised nanoparticles (NPs) after multiple handling steps, giving rise to an inevitable loss of NPs. Herein we present a practical method to quantify the number of NPs per volume in an aqueous suspension using standard spectrophotometers and minute amounts of the suspensions (up to 1 μL). This method allows, for the first time, cellular uptake to be analysed by reporting the number of NPs added per cell, as opposed to current methods, which report the solid content (w/V) of NPs. By analogy with the parameter used in viral infection assays (multiplicity of infection), we propose to name this novel parameter multiplicity of nanofection.
Helicase Stepping Investigated with One-Nucleotide Resolution Fluorescence Resonance Energy Transfer
NASA Astrophysics Data System (ADS)
Lin, Wenxia; Ma, Jianbing; Nong, Daguan; Xu, Chunhua; Zhang, Bo; Li, Jinghua; Jia, Qi; Dou, Shuoxing; Ye, Fangfu; Xi, Xuguang; Lu, Ying; Li, Ming
2017-09-01
Single-molecule Förster resonance energy transfer is widely applied to study helicases by detecting distance changes between a pair of dyes anchored to the overhangs of a forked DNA. However, it has lacked the single-base-pair (1-bp) resolution required to reveal the stepping kinetics of helicases. We designed a nanotensioner in which a short DNA is bent to exert force on the overhangs, just as in optical or magnetic tweezers. The strategy improved the resolution of Förster resonance energy transfer to 0.5 bp, high enough to uncover differences in DNA unwinding by yeast Pif1 and E. coli RecQ, whose unwinding behaviors cannot be differentiated by currently practiced methods. We found that Pif1 exhibits 1-bp stepping kinetics, while RecQ breaks 1 bp at a time but sequesters the nascent nucleotides and releases them randomly. The high-resolution data allowed us to propose a three-parameter model to quantitatively interpret the apparently different unwinding behaviors of the two helicases, which belong to two superfamilies.
Stetzer, Dave; Leavitt, Adam M; Goeke, Charles L; Havas, Magda
2016-01-01
Ground current, commonly referred to as "stray voltage," has been an issue on dairy farms since electricity was first brought to rural America. Equipment that generates high-frequency voltage transients on electrical wires, combined with a multigrounded electrical distribution system and inadequate neutral returns, all contribute to ground current. Despite decades of problems, we are no closer to resolving this issue, in part due to three misconceptions that are addressed in this study. Misconception 1: the current standard of 1 V at cow contact is adequate to protect dairy cows. Misconception 2: frequencies higher than 60 Hz do not need to be considered. Misconception 3: all sources of ground current originate on the farm that has a ground current problem. This case study of a Wisconsin dairy farm documents 1) how to establish permanent monitoring of ground current (step potential) on a dairy farm; 2) how to determine and remediate both on-farm and off-farm sources contributing to step potential; 3) which step-potential metrics relate to cow comfort and milk production; and 4) how these metrics relate to established standards. On-farm sources include lighting, variable-speed frequency drives on motors, and a radio frequency identification system; off-farm sources are due to a poor primary neutral return on the utility side of the distribution system. A step-potential threshold of 1 V root mean square (RMS) at 60 Hz is inadequate to protect dairy cows, as decreases of a few mV peak-peak at higher frequencies increase milk production, reduce milking time, and improve cow comfort.
"Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes
Overlap junctions for high coherence superconducting qubits
NASA Astrophysics Data System (ADS)
Wu, X.; Long, J. L.; Ku, H. S.; Lake, R. E.; Bal, M.; Pappas, D. P.
2017-07-01
Fabrication of sub-micron Josephson junctions is demonstrated using standard processing techniques for high-coherence, superconducting qubits. These junctions are made in two separate lithography steps with normal-angle evaporation. Most significantly, this work demonstrates that it is possible to achieve high coherence with junctions formed on aluminum surfaces cleaned in situ by Ar plasma before junction oxidation. This method eliminates the angle-dependent shadow masks typically used for small junctions. Therefore, this is conducive to the implementation of typical methods for improving margins and yield using conventional CMOS processing. The current method uses electron-beam lithography and an additive process to define the top and bottom electrodes. Extension of this work to optical lithography and subtractive processes is discussed.
Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing
NASA Astrophysics Data System (ADS)
Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng
2017-05-01
Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as a laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing based on Fermat's principle have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many slices, just as in the existing approaches; but instead of the paraxial approximation and the split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, which solves the problem of unknown material parameters caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, has lower time complexity, and can numerically simulate the self-focusing process in systems that include both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.
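As a rough sketch of the slice-by-slice iteration described in this abstract, the following traces a one-dimensional transverse bundle of rays through successive slices, assuming a Kerr-type index n = n0 + n2*I and a paraxial bending rule. The binning, smoothing window, and midpoint fixed-point scheme are choices of this sketch, not details taken from the paper.

```python
import numpy as np

def trace_nonlinear(x, theta, power, n0, n2, dz, n_slices, n_iter=5, bins=200):
    """Minimal 1-D sketch of iterative nonlinear ray tracing.

    The index n = n0 + n2*I at a slice depends on the intensity I there,
    but I depends on where the rays travel, so each slice is solved by
    fixed-point iteration on an implicit midpoint step.
    """
    for _ in range(n_slices):
        x_new, theta_new = x.copy(), theta.copy()
        for _ in range(n_iter):
            xm = 0.5 * (x + x_new)                        # midpoint positions
            edges = np.linspace(xm.min(), xm.max() + 1e-9, bins + 1)
            idx = np.clip(np.digitize(xm, edges) - 1, 0, bins - 1)
            I = np.bincount(idx, weights=power, minlength=bins)
            I = np.convolve(I, np.ones(7) / 7.0, mode="same")  # smooth density
            n = n0 + n2 * I                               # Kerr-type index
            centers = 0.5 * (edges[1:] + edges[:-1])
            dndx = np.gradient(n, centers)
            # paraxial ray equation: d(theta)/dz ~ (1/n) dn/dx
            theta_new = theta + (dndx[idx] / n[idx]) * dz
            x_new = x + 0.5 * (theta + theta_new) * dz
        x, theta = x_new, theta_new
    return x, theta

# collimated Gaussian bundle entering a self-focusing medium (all values invented)
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 0.5, size=5000)
xf, _ = trace_nonlinear(x0, np.zeros_like(x0), np.full_like(x0, 1.0),
                        n0=1.5, n2=1e-4, dz=0.1, n_slices=50)
print(x0.std(), xf.std())  # the bundle narrows as it self-focuses
```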
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
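A minimal sketch of the signal-processing steps recited in the claims: represent a magnetic gradient over a time window, convert it to the frequency domain, locate the peak gradient, and express the time of the peak as a ratio of the window. The sample rate and pulse shape below are invented for illustration.

```python
import numpy as np

fs = 200.0                                   # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)              # 2 s observation window
gradient = np.exp(-((t - 0.7) ** 2) / 0.01)  # synthetic field-gradient pulse

# magnetic data represented as a function of frequency
spectrum = np.abs(np.fft.rfft(gradient))
freqs = np.fft.rfftfreq(gradient.size, 1.0 / fs)

peak = gradient.max()                        # peak gradient value
t_peak = t[np.argmax(gradient)]              # when the peak occurs
ratio = t_peak / t[-1]                       # peak time as ratio of the period
print(f"peak={peak:.3f}, t_peak={t_peak:.3f}s, ratio={ratio:.2f}")
```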
Melendez, Johan H.; Santaus, Tonya M.; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A.; Geddes, Chris D.
2016-01-01
Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often with PCR-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection technique. In the present study, we show that highly focused microwaves at 2.45 GHz, using 12.3 mm gold film equilateral triangles, are able to rapidly lyse bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 minutes total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step towards the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503
Kennedy, Zachary C.; Barrett, Christopher A.; Warner, Marvin G.
2017-03-01
Azides on the periphery of nanodiamond materials (ND) are of great utility because they have been shown to undergo Cu-catalyzed and Cu-free cycloaddition reactions with structurally diverse alkynes, affording particles tailored for applications in biology and materials science. However, current methods employed to access ND featuring azide groups typically require either harsh pretreatment procedures or multiple synthesis steps, and use surface linking groups that may be susceptible to undesirable cleavage. Here we demonstrate an alternative single-step approach to producing linker-free, azide-functionalized ND. Our method was applied to low-cost, detonation-derived ND powders in which surface carbonyl groups undergo silver-mediated decarboxylation and radical substitution with azide. ND with directly grafted azide groups were then treated with a variety of aliphatic, aromatic, and fluorescent alkynes to afford 1-(ND)-4-substituted-1,2,3-triazole materials under standard copper-catalyzed cycloaddition conditions. Surface modification steps were verified by characteristic infrared absorptions and elemental analyses. High loadings of triazole surface groups (up to 0.85 mmol g⁻¹) were obtained, as determined by thermogravimetric analysis. The azidation procedure disclosed here is envisioned to become a valuable initial transformation in numerous future applications of ND.
Two-step carbon coating of lithium vanadium phosphate as high-rate cathode for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Kuang, Quan; Zhao, Yanming
2012-10-01
Carbon-coated Li3V2(PO4)3 was first prepared at 850 °C via a two-step reaction method combining sol-gel and conventional solid-state synthesis, using VPO4/carbon as an intermediate. Two different carbon sources, citric acid and glucose, used as carbon additives in sequence, ultimately yielded double carbon-coated Li3V2(PO4)3 as a high-rate cathode material. The Li3V2(PO4)3/carbon with 4.39% residual carbon has an excellent electronic conductivity of 4.76×10⁻² S cm⁻¹. Even in the voltage window of 2.5-4.8 V, the Li3V2(PO4)3/carbon cathode retains outstanding rate capability (170.4 mAh g⁻¹ at 1.2 C, 101.9 mAh g⁻¹ at 17 C), and no degradation is found after cycling at a 120 C current rate. These results show that the two-step carbon-coated Li3V2(PO4)3 can act as a fast charge-discharge cathode material for high-power Li-ion batteries. Furthermore, it is believed that this synthesis method can easily be transferred to the preparation of other lithiated vanadium-based phosphates.
Clark, Stephen J; Smallwood, Sébastien A; Lee, Heather J; Krueger, Felix; Reik, Wolf; Kelsey, Gavin
2017-03-01
DNA methylation (DNAme) is an important epigenetic mark in diverse species. Our current understanding of DNAme is based on measurements from bulk cell samples, which obscures intercellular differences and prevents analyses of rare cell types. Thus, the ability to measure DNAme in single cells has the potential to make important contributions to the understanding of several key biological processes, such as embryonic development, disease progression and aging. We have recently reported a method for generating genome-wide DNAme maps from single cells, using single-cell bisulfite sequencing (scBS-seq), allowing the quantitative measurement of DNAme at up to 50% of CpG dinucleotides throughout the mouse genome. Here we present a detailed protocol for scBS-seq that includes our most recent developments to optimize recovery of CpGs, mapping efficiency and success rate; reduce hands-on time; and increase sample throughput with the option of using an automated liquid handler. We provide step-by-step instructions for each stage of the method, comprising cell lysis and bisulfite (BS) conversion, preamplification and adaptor tagging, library amplification, sequencing and, lastly, alignment and methylation calling. An individual with relevant molecular biology expertise can complete library preparation within 3 d. Subsequent computational steps require 1-3 d for someone with bioinformatics expertise.
An iterative network partition algorithm for accurate identification of dense network modules
Sun, Siqi; Dong, Xinran; Fu, Yao; Tian, Weidong
2012-01-01
A key step in network analysis is to partition a complex network into dense modules. Currently, modularity is one of the most popular benefit functions used to partition network modules. However, recent studies suggested that it has an inherent limitation in detecting dense network modules. In this study, we observed that despite the limitation, modularity has the advantage of preserving the primary network structure of the undetected modules. Thus, we have developed a simple iterative Network Partition (iNP) algorithm to partition a network. The iNP algorithm provides a general framework in which any modularity-based algorithm can be implemented in the network partition step. Here, we tested iNP with three modularity-based algorithms: multi-step greedy (MSG), spectral clustering and Qcut. Compared with the original three methods, iNP achieved a significant improvement in the quality of network partition in a benchmark study with simulated networks, identified more modules with significantly better enrichment of functionally related genes in both yeast protein complex network and breast cancer gene co-expression network, and discovered more cancer-specific modules in the cancer gene co-expression network. As such, iNP should have a broad application as a general method to assist in the analysis of biological networks. PMID:22121225
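The abstract does not spell out the iteration itself, so the following is only one plausible reading of an iterative, modularity-based partition: a modularity algorithm (networkx's greedy routine here, standing in for MSG, spectral clustering, or Qcut) is re-applied to each detected module until modules stop splitting.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def iterative_partition(G, min_size=5, depth=0, max_depth=5):
    """Hedged sketch of an iterative, modularity-based partition: partition
    the graph, then re-apply the same step to each module. The published iNP
    differs in detail; this only illustrates plugging a modularity-based
    algorithm into an iterative framework, as the abstract describes."""
    comms = [set(c) for c in greedy_modularity_communities(G)]
    if len(comms) <= 1 or depth >= max_depth:
        return [set(G.nodes)]
    modules = []
    for c in comms:
        if len(c) <= min_size:
            modules.append(c)                      # small module: keep as-is
        else:                                      # large module: re-partition
            modules.extend(iterative_partition(G.subgraph(c), min_size,
                                               depth + 1, max_depth))
    return modules

G = nx.karate_club_graph()
for i, m in enumerate(iterative_partition(G)):
    print(i, sorted(m))
```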
March, Melissa I; Modest, Anna M; Ralston, Steven J; Hacker, Michele R; Gupta, Munish; Brown, Florence M
2016-01-01
To compare characteristics and outcomes of women diagnosed with gestational diabetes mellitus (GDM) by the newer one-step glucose tolerance test and those diagnosed with the traditional two-step method. This was a retrospective cohort study of women with GDM who delivered in 2010-2011. Data are reported as proportion or median (interquartile range) and were compared using a Chi-square, Fisher's exact or Wilcoxon rank-sum test based on data type. Of 235 women with GDM, 55.7% were diagnosed using the two-step method and 44.3% with the one-step method. The groups had similar demographics and GDM risk factors. The two-step method group was diagnosed with GDM one week later [27.0 (24.0-29.0) weeks versus 26.0 (24.0-28.0) weeks; p = 0.13]. The groups had similar median weight gain per week before diagnosis. After diagnosis, women in the one-step method group had significantly higher median weight gain per week [0.67 pounds/week (0.31-1.0) versus 0.56 pounds/week (0.15-0.89); p = 0.047]. In the one-step method group more women had suspected macrosomia (11.7% versus 5.3%, p = 0.07) and more neonates had a birth weight >4000 g (13.6% versus 7.5%, p = 0.13); however, these differences were not statistically significant. Other pregnancy and neonatal complications were similar. Women diagnosed with the one-step method gained more weight per week after GDM diagnosis and had a non-statistically significant increased risk for suspected macrosomia. Our data suggest the one-step method identifies women with at least as high a risk as the two-step method.
Development of Mobile Platform Integrated with Existing Electronic Medical Records
Kim, YoungAh; Kang, Simon; Kim, Kyungduk; Kim, Jun
2014-01-01
Objectives This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. Methods We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Results Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. Conclusions The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions. PMID:25152837
DOE Office of Scientific and Technical Information (OSTI.GOV)
Posseme, N., E-mail: nicolas.posseme@cea.fr; Pollet, O.; Barnola, S.
2014-08-04
Silicon nitride spacer etching is considered today one of the most challenging etch processes in the realization of new devices. For this step, atomic etch precision is required to stop on silicon or silicon germanium with perfect anisotropy (no foot formation). Currently, none of the available plasma technologies can meet all these requirements. To overcome these issues and meet the highly complex requirements imposed by device fabrication processes, we recently proposed an alternative to the current plasma etch chemistries. This process is based on thin-film modification by light-ion implantation followed by selective removal of the modified layer with respect to the non-modified material. In this Letter, we demonstrate the benefit of this alternative etch method in terms of film damage control (the silicon germanium recess obtained is less than 6 Å), anisotropy (no foot formation), and compatibility with other integration steps such as epitaxy. The etch mechanisms of this approach are also addressed.
[Sample preparation and bioanalysis in mass spectrometry].
Bourgogne, Emmanuel; Wagner, Michel
2015-01-01
The quantitative analysis of compounds of clinical interest of low molecular weight (<1000 Da) in biological fluids is currently in most cases performed by liquid chromatography-mass spectrometry (LC-MS). Analysis of these compounds in biological fluids (plasma, urine, saliva, hair...) is a difficult task requiring sample preparation. Sample preparation is a crucial part of chemical/biological analysis and in a sense is considered the bottleneck of the whole analytical process. The main objectives of sample preparation are the removal of potential interferences, analyte preconcentration, and converting (if needed) the analyte into a form more suitable for detection or separation. Without chromatographic separation, endogenous compounds and co-eluting products may affect the performance of a quantitative mass spectrometry method. This work focuses on three distinct parts. First, quantitative bioanalysis is defined, along with the different matrices and sample preparation techniques currently used in mass spectrometry-based bioanalysis of small molecules of clinical interest in biological fluids. In a second step, the goals of sample preparation are described. Finally, in a third step, sample preparation strategies are discussed, whether applied directly ("dilute and shoot") or after a precipitation step.
Wang, Huilin; Wang, Mingjun; Tan, Hao; Li, Yuan; Zhang, Ziding; Song, Jiangning
2014-01-01
X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of the multi-step experimental pipeline (sequence cloning, protein material production, purification, crystallization and, ultimately, structural determination) to yield diffraction-quality crystals. Accordingly, predicting the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge of the important determinants of the propensity of a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed 'PredPPCrys' based on the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of the five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, the prediction outputs of PredPPCrys I were used as inputs to build the second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization targets of currently non-crystallizable proteins are provided as compendium data, which are anticipated to facilitate target selection and design for the worldwide structural genomics consortium. PredPPCrys is freely available at http://www.structbioinfor.org/PredPPCrys.
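A hedged sketch of the two-level scheme as described: first-level SVMs predict the propensity of each experimental step, and their out-of-fold probabilities become the inputs of a second-level SVM. The data below are synthetic placeholders for the sequence-derived features, not the PredPPCrys feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))              # sequence-derived features (synthetic)
y_steps = rng.integers(0, 2, size=(300, 5)) # outcomes of 5 experimental steps
y_final = rng.integers(0, 2, size=300)      # diffraction-quality crystal yes/no

# Level 1: one propensity per experimental step, out-of-fold to avoid leakage
level1 = np.column_stack([
    cross_val_predict(SVC(probability=True), X, y_steps[:, k],
                      cv=5, method="predict_proba")[:, 1]
    for k in range(5)
])

# Level 2: stack the five step-propensities as inputs to the final classifier
clf2 = SVC(probability=True).fit(level1, y_final)
print(clf2.predict_proba(level1[:5]))
```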
Yang, Jian-Yi; Peng, Zhen-Ling; Yu, Zu-Guo; Zhang, Rui-Jie; Anh, Vo; Wang, Desheng
2009-04-21
In this paper, we predict protein structural classes (alpha, beta, alpha+beta, or alpha/beta) for low-homology data sets. Two widely used data sets were studied, 1189 (containing 1092 proteins) and 25PDB (containing 1673 proteins), with sequence homology of 40% and 25%, respectively. We propose to decompose the chaos game representation of proteins into two kinds of time series. Then, a novel and powerful nonlinear analysis technique, recurrence quantification analysis (RQA), is applied to analyze these time series. For a given protein sequence, a total of 16 characteristic parameters can be calculated with RQA, which are treated as a feature representation of the protein sequence. Based on this feature representation, the structural class of each protein is predicted with Fisher's linear discriminant algorithm. The jackknife test is used to evaluate our method and compare it with other existing methods. The overall accuracies with the step-by-step procedure are 65.8% and 64.2% for the 1189 and 25PDB data sets, respectively. With the widely used one-against-others procedure, we compare our method with five other existing methods. In particular, the overall accuracies of our method are 6.3% and 4.1% higher for the two data sets, respectively. Furthermore, only 16 parameters are used in our method, fewer than in other methods. This suggests that the current method may play a complementary role to the existing methods and is promising for the prediction of protein structural classes.
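For illustration, the sketch below computes two of the simpler RQA-style measures (recurrence rate and a crude determinism proxy) from a numeric series and feeds them to a linear discriminant. The series, threshold, and labels are synthetic; the paper itself uses 16 RQA parameters derived from chaos game representations.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rqa_features(series, eps=0.5):
    """Two basic recurrence-quantification measures for a numeric series:
    recurrence rate (density of recurrent pairs) and a crude determinism
    proxy (fraction of recurrent points that continue diagonally)."""
    d = np.abs(series[:, None] - series[None, :])
    R = (d < eps).astype(int)                    # recurrence matrix
    rr = R.mean()
    det = (R[:-1, :-1] & R[1:, 1:]).sum() / max(R.sum(), 1)
    return np.array([rr, det])

rng = np.random.default_rng(1)
scales = rng.choice([0.5, 2.0], size=60)         # two synthetic "classes"
X = np.array([rqa_features(np.cumsum(rng.normal(size=100)) * s)
              for s in scales])
y = (scales == 2.0).astype(int)                  # stand-in for structural class
print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))
```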
Design and analysis of group-randomized trials in cancer: A review of current practices.
Murray, David M; Pals, Sherri L; George, Stephanie M; Kuzmichev, Andrey; Lai, Gabriel Y; Lee, Jocelyn A; Myles, Ranell L; Nelson, Shakira M
2018-06-01
The purpose of this paper is to summarize current practices for the design and analysis of group-randomized trials involving cancer-related risk factors or outcomes and to offer recommendations to improve future trials. We searched for group-randomized trials involving cancer-related risk factors or outcomes that were published or online in peer-reviewed journals in 2011-15. During 2016-17, in Bethesda MD, we reviewed 123 articles from 76 journals to characterize their design and their methods for sample size estimation and data analysis. Only 66 (53.7%) of the articles reported appropriate methods for sample size estimation. Only 63 (51.2%) reported exclusively appropriate methods for analysis. These findings suggest that many investigators do not adequately attend to the methodological challenges inherent in group-randomized trials. These practices can lead to underpowered studies, to an inflated type 1 error rate, and to inferences that mislead readers. Investigators should work with biostatisticians or other methodologists familiar with these issues. Funders and editors should ensure careful methodological review of applications and manuscripts. Reviewers should ensure that studies are properly planned and analyzed. These steps are needed to improve the rigor and reproducibility of group-randomized trials. The Office of Disease Prevention (ODP) at the National Institutes of Health (NIH) has taken several steps to address these issues. ODP offers an online course on the design and analysis of group-randomized trials. ODP is working to increase the number of methodologists who serve on grant review panels. ODP has developed standard language for the Application Guide and the Review Criteria to draw investigators' attention to these issues. Finally, ODP has created a new Research Methods Resources website to help investigators, reviewers, and NIH staff better understand these issues. Published by Elsevier Inc.
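One methodological point at stake can be stated concretely: randomizing intact groups inflates the required sample size by the design effect 1 + (m - 1) * ICC, which underpowers any trial that ignores it. The numbers below are illustrative, not drawn from the review.

```python
# Sample-size inflation in a group-randomized trial (illustrative values)
m = 50        # members measured per group
icc = 0.02    # intraclass correlation among members of the same group
deff = 1 + (m - 1) * icc            # design effect (variance inflation)
n_individual = 400                  # n required if individuals were randomized
n_grt = n_individual * deff         # inflated total n for the group design
groups_per_arm = (n_grt / 2) / m    # groups needed per arm (2-arm trial)
print(deff, n_grt, groups_per_arm)  # 1.98, 792.0, 7.92 -> ~8 groups per arm
```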
Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows
Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio
2012-01-01
Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting life cycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
Rowe, Sylvia; Alexander, Nick; Kretser, Alison; Steele, Robert; Kretsch, Molly; Applebaum, Rhona; Clydesdale, Fergus; Cummins, Deborah; Hentges, Eric; Navia, Juan; Jarvis, Ashley; Falci, Ken
2013-01-01
The present article articulates principles for effective public-private partnerships (PPPs) in scientific research. Recognizing that PPPs represent one approach for creating research collaborations and that there are other methods outside the scope of this article, PPPs can be useful in leveraging diverse expertise among government, academic, and industry researchers to address public health needs and questions concerned with nutrition, health, food science, and food and ingredient safety. A three-step process was used to identify the principles proposed herein: step 1) review of existing PPP guidelines, both in the peer-reviewed literature and at 16 disparate non-industry organizations; step 2) analysis of relevant successful or promising PPPs; and step 3) formal background interviews of 27 experienced, senior-level individuals from academia, government, industry, foundations, and non-governmental organizations. This process resulted in the articulation of 12 potential principles for establishing and managing successful research PPPs. The review of existing guidelines showed that guidelines for research partnerships currently reside largely within institutions rather than in the peer-reviewed literature. This article aims to introduce these principles into the literature to serve as a framework for dialogue and for future PPPs. PMID:24117791
Using economic analyses for local priority setting: the population cost-impact approach.
Heller, Richard F; Gemmell, Islay; Wilson, Edward C F; Fordham, Richard; Smith, Richard D
2006-01-01
Standard methods of economic analysis may not be suitable for local decision making that is specific to a particular population. We describe a new three-step methodology, termed 'population cost-impact analysis', which provides a population perspective to the costs and benefits of alternative interventions. The first two steps involve calculating the population impact and the costs of the proposed interventions relevant to local conditions. This involves the calculation of population impact measures (which have been previously described but are not currently used extensively) - measures of absolute risk and risk reduction, applied to a population denominator. In step three, preferences of policy-makers are obtained. This is in contrast to the QALY approach in which quality weights are obtained as a part of the measurement of benefit. We applied the population cost-impact analysis method to a comparison of two interventions - increasing the use of beta-adrenoceptor antagonists (beta-blockers) and smoking cessation - after myocardial infarction in a scaled-back notional local population of 100,000 people in England. Twenty-two public health professionals were asked via a questionnaire to rank the order in which they would implement four interventions. They were given information on both population cost impact and QALYs for each intervention. In a population of 100,000 people, moving from current to best practice for beta-adrenoceptor antagonists and smoking cessation will prevent 11 and 4 deaths (or gain of 127 or 42 life-years), respectively. The cost per event prevented in the next year, or life-year gained, is less for beta-adrenoceptor antagonists than for smoking cessation. Public health professionals were found to be more inclined to rank alternative interventions according to the population cost impact than the QALY approach. The use of the population cost-impact approach allows information on the benefits of moving from current to best practice to be presented in terms of the benefits and costs to a particular population. The process for deciding between alternative interventions in a prioritisation exercise may differ according to the local context. We suggest that the valuation of the benefit is performed after the benefits have been quantified and that it takes into account local issues relevant to prioritisation. It would be an appropriate next step to experiment with, and formalise, this part of the population cost-impact analysis to provide a standardised approach for determining willingness to pay and provide a ranking of priorities. Our method adds a new dimension to economic analysis, the ability to identify costs and benefits of potential interventions to a defined population, which may be of considerable use for policy makers working at the local level.
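The first two steps, the population impact and the cost of moving from current to best practice, reduce to simple arithmetic once local inputs are fixed. The inputs below are invented for illustration and are not the paper's beta-blocker or smoking-cessation figures.

```python
# Hedged sketch of steps 1-2 of population cost-impact analysis:
# events prevented in a local population by moving from current to best
# practice, and the cost per event prevented. All inputs are illustrative.
population = 100_000
eligible_fraction = 0.005     # e.g. post-MI patients in the population
baseline_risk = 0.10          # 1-year event risk under current practice
rrr = 0.23                    # relative risk reduction of the intervention
uptake_gap = 0.40             # fraction not yet treated (current -> best)
cost_per_patient = 150.0      # annual cost of extending treatment, per patient

treatable = population * eligible_fraction * uptake_gap
events_prevented = treatable * baseline_risk * rrr
total_cost = treatable * cost_per_patient
print(events_prevented, total_cost / events_prevented)  # cost per event prevented
```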
Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.
2015-01-01
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
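To make class (a) concrete, here is a minimal thresholding-based segmentation of one axial CT slice in Hounsfield units. The threshold and the two-largest-components rule are common simplifications, and the sketch fails on effusions and consolidations for exactly the reasons the review describes.

```python
import numpy as np
from scipy import ndimage

def lung_mask_2d(ct_hu, air_thresh=-400):
    """Sketch of class (a), thresholding-based segmentation, on one axial
    CT slice. Air-like pixels are kept, background air touching the image
    border is removed, and the largest remaining components are taken as
    lung. Dense pathology is not air-like and is therefore missed."""
    dark = ct_hu < air_thresh
    labels, n = ndimage.label(dark)
    # connected components touching the border are outside-body air
    border_labels = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    mask = dark & ~np.isin(labels, border_labels)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = 1 + np.argsort(sizes)[-2:]     # up to two largest = the lungs
    return np.isin(labels, keep)
```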
Sun, Jin; Kelbert, Anna; Egbert, G.D.
2015-01-01
Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.
Magnetotomography—a new method for analysing fuel cell performance and quality
NASA Astrophysics Data System (ADS)
Hauer, Karl-Heinz; Potthast, Roland; Wüster, Thorsten; Stolten, Detlef
Magnetotomography is a new method for the measurement and analysis of the current density distribution of fuel cells. The method is based on the measurement of the magnetic flux surrounding the fuel cell stack caused by the current inside the stack. As it is non-invasive, magnetotomography overcomes the shortcomings of traditional methods for the determination of current density in fuel cells [J. Stumper, S.A. Campell, D.P. Wilkinson, M.C. Johnson, M. Davis, In situ methods for the determination of current distributions in PEM fuel cells, Electrochem. Acta 43 (1998) 3773; S.J.C. Cleghorn, C.R. Derouin, M.S. Wilson, S. Gottesfeld, A printed circuit board approach to measuring current distribution in a fuel cell, J. Appl. Electrochem. 28 (1998) 663; Ch. Wieser, A. Helmbold, E. Gülzow, A new technique for two-dimensional current distribution measurements in electro-chemical cells, J. Appl. Electrochem. 30 (2000) 803; Grinzinger, Methoden zur Ortsaufgelösten Strommessung in Polymer Elektrolyt Brennstoffzellen, Diploma thesis, TU-München, 2003; Y.-G. Yoon, W.-Y. Lee, T.-H. Yang, G.-G. Park, C.-S. Kim, Current distribution in a single cell of PEMFC, J. Power Sources 118 (2003) 193-199; M.M. Mench, C.Y. Wang, An in situ method for determination of current distribution in PEM fuel cells applied to a direct methanol fuel cell, J. Electrochem. Soc. 150 (2003) A79-A85; S. Schönbauer, T. Kaz, H. Sander, E. Gülzow, Segmented bipolar plate for the determination of current distribution in polymer electrolyte fuel cells, in: Proceedings of the Second European PEMFC Forum, vol. 1, Lucerne/Switzerland, 2003, pp. 231-237; G. Bender, S.W. Mahlon, T.A. Zawodzinski, Further refinements in the segmented cell approach to diagnosing performance in polymer electrolyte fuel cells, J. Power Sources 123 (2003) 163-171]. After several years of research a complete prototype system is now available for research on single cells and stacks. This paper describes the basic system (fundamentals, hardware and software) as well as the state of development until December 2003. Initial findings on a full-size single cell will be presented together with an outlook on the planned next steps.
Transfer path analysis: Current practice, trade-offs and consideration of damping
NASA Astrophysics Data System (ADS)
Oktav, Akın; Yılmaz, Çetin; Anlaş, Günay
2017-02-01
The current practice of experimental transfer path analysis is discussed in the context of trade-offs between accuracy and time cost. An overview of methods that propose solutions for structure-borne noise is given, and the assumptions, drawbacks, and advantages of each method are stated theoretically. The applicability of the methods is also investigated, with the engine-induced structure-borne noise of an automobile taken as a reference problem. For this particular problem, sources of measurement error, processing operations that affect results, and physical obstacles faced in the application are analysed. While an operational measurement is common to all stated methods, they differ in whether the source must be removed or an external excitation is needed. Depending on the chosen method, promised outcomes such as independent characterisation of the source, or obtaining information about the mounts, also differ. Although many aspects of the problem are reported in the literature, damping and its effects are not considered. The damping effect is embedded in the measured complex frequency response functions and needs to be analysed in the post-processing step. The effects of damping, the reasons for them, and methods to analyse them are discussed in detail. In this regard, a new procedure, which increases the accuracy of results, is also proposed.
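One of the methods such overviews cover, classical matrix-inversion TPA, can be sketched at a single frequency: measured FRFs map path forces to indicator responses, a pseudo-inverse recovers the operational forces, and receiver-side FRFs convert them into per-path contributions. The data below are synthetic, and the sketch does not reproduce the damping analysis this paper adds.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_indicators = 4, 10     # over-determination helps conditioning
# measured FRF matrix: path forces -> indicator accelerations (one frequency)
H = rng.normal(size=(n_indicators, n_paths)) \
    + 1j * rng.normal(size=(n_indicators, n_paths))
f_true = rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)
a = H @ f_true + 0.01 * rng.normal(size=n_indicators)  # operational measurement

f_est = np.linalg.pinv(H) @ a     # least-squares operational mount forces
# receiver-side FRFs turn forces into per-path contributions at the receiver
H_receiver = rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)
contributions = H_receiver * f_est
print(np.abs(contributions), np.abs(contributions.sum()))
```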
A Spatial Method to Calculate Small-Scale Fisheries Extent
NASA Astrophysics Data System (ADS)
Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.
2016-02-01
Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and number of fishing vessels in the oceans. The methods that currently exist are built around electronic tracking and logbook systems and generally focus on industrial fisheries. Spatial extent therefore remains elusive for many small-scale fishing fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimates are made across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data-poor fisheries: the number of boats and the local coastal human population. We demonstrate that this method is independent of fishery type and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet globally.
Automatic aortic root segmentation in CTA whole-body dataset
NASA Astrophysics Data System (ADS)
Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.
2016-03-01
Transcatheter aortic valve replacement (TAVR) is an evolving technique for patients with severe aortic stenosis. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and to analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes four major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach; the most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
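The reported agreement is measured with the Dice similarity index, which is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity index between two binary segmentations, the overlap
    metric the study reports (0.965 +/- 0.024 against the ground truth)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy check on two overlapping square masks
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
print(dice(a, b))  # 2*25 / (36+36) = 0.694...
```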
Identification of microRNA-mRNA modules using microarray data.
Jayaswal, Vivek; Lutherborrow, Mark; Ma, David D F; Yang, Yee H
2011-03-06
MicroRNAs (miRNAs) are post-transcriptional regulators of mRNA expression and are involved in numerous cellular processes. Consequently, miRNAs are an important component of gene regulatory networks and an improved understanding of miRNAs will further our knowledge of these networks. There is a many-to-many relationship between miRNAs and mRNAs because a single miRNA targets multiple mRNAs and a single mRNA is targeted by multiple miRNAs. However, most of the current methods for the identification of regulatory miRNAs and their target mRNAs ignore this biological observation and focus on miRNA-mRNA pairs. We propose a two-step method for the identification of many-to-many relationships between miRNAs and mRNAs. In the first step, we obtain miRNA and mRNA clusters using a combination of miRNA-target mRNA prediction algorithms and microarray expression data. In the second step, we determine the associations between miRNA clusters and mRNA clusters based on changes in miRNA and mRNA expression profiles. We consider the miRNA-mRNA clusters with statistically significant associations to be potentially regulatory and, therefore, of biological interest. Our method reduces the interactions between several hundred miRNAs and several thousand mRNAs to a few miRNA-mRNA groups, thereby facilitating a more meaningful biological analysis and a more targeted experimental validation.
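A hedged sketch of the two-step idea: cluster miRNAs and mRNAs from expression profiles, then score the association between cluster pairs. The association here is a hypergeometric test on predicted-target overlap, a simplification of the paper's expression-change-based scoring; all data are synthetic.

```python
import numpy as np
from scipy.stats import hypergeom
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
mirna_expr = rng.normal(size=(30, 8))      # 30 miRNAs x 8 conditions
mrna_expr = rng.normal(size=(400, 8))      # 400 mRNAs x 8 conditions
targets = rng.random((30, 400)) < 0.05     # predicted miRNA -> mRNA targets

# step 1: expression-based clusters on each side
mi_lab = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(mirna_expr)
m_lab = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(mrna_expr)

# step 2: is mRNA cluster cj enriched in targets of miRNA cluster ci?
def cluster_pvalue(ci, cj):
    in_mi, in_m = mi_lab == ci, m_lab == cj
    hits = targets[np.ix_(in_mi, in_m)].any(axis=0).sum()
    pool = targets[in_mi].any(axis=0).sum()   # mRNAs hit by this miRNA cluster
    return hypergeom.sf(hits - 1, 400, pool, in_m.sum())

print(cluster_pvalue(0, 0))
```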
Explant culture: An advantageous method for isolation of mesenchymal stem cells from human tissues.
Hendijani, Fatemeh
2017-04-01
Mesenchymal stem cell (MSC) research is progressively moving towards clinical phases. Accordingly, a wide range of different procedures has been presented in the literature for MSC isolation from human tissues; however, there has not yet been a close focus on the details that would offer precise information for best method selection. Choosing a proper isolation method is a critical step in obtaining cells of optimal quality and yield, alongside clinical and economic considerations. In this regard, the current review discusses at length the advantages of omitting the proteolysis step in the isolation process and of retaining tissue pieces in the primary culture of MSCs, including removal of lytic stress on cells, reduction of the in vivo to in vitro transition stress for migrated/isolated cells, reduction of cost, processing time and labour, removal of the risk of viral contamination, and the added supporting functions of the extracellular matrix and of growth factors released from the tissue explant. In the following sections, it provides an overall report of the technical highlights and molecular events of the explant culture method for isolation of MSCs from human tissues, including adipose tissue, bone marrow, dental pulp, hair follicle, cornea, umbilical cord and placenta. A focused and informative collection of molecular and methodological data about explant methods can make it easy for researchers to choose an optimal method for their experiments/clinical studies and also stimulate them to investigate and optimize more efficient procedures in light of clinical and economic benefits. © 2017 John Wiley & Sons Ltd.
Prevalence of dry methods in granite countertop fabrication in Oklahoma.
Phillips, Margaret L; Johnson, Andrew C
2012-01-01
Granite countertop fabricators are at risk of exposure to respirable crystalline silica, which may cause silicosis and other lung conditions. The purpose of this study was to estimate the prevalence of exposure control methods, especially wet methods, in granite countertop fabrication in Oklahoma to assess how many workers might be at risk of overexposure to crystalline silica in this industry. Granite fabrication shops in the three largest metropolitan areas in Oklahoma were enumerated, and 47 of the 52 shops participated in a survey on fabrication methods. Countertop shops were small businesses with average work forces of fewer than 10 employees. Ten shops (21%) reported using exclusively wet methods during all fabrication steps. Thirty-five shops (74%) employing a total of about 200 workers reported using dry methods all or most of the time in at least one fabrication step. The tasks most often performed dry were edge profiling (17% of shops), cutting of grooves for reinforcing rods (62% of shops), and cutting of sink openings (45% of shops). All shops reported providing either half-face or full-face respirators for use during fabrication, but none reported doing respirator fit testing. Few shops reported using any kind of dust collection system. These findings suggest that current consumer demand for granite countertops is giving rise to a new wave of workers at risk of silicosis due to potential overexposure to granite dust.
Rapid determination of actinides in seawater samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.
2014-03-09
A new rapid method for the determination of actinides in seawater samples has been developed at the Savannah River National Laboratory. The actinides can be measured by alpha spectrometry or inductively-coupled plasma mass spectrometry. The new method employs novel pre-concentration steps to collect the actinide isotopes quickly from 80 L or more of seawater. Actinides are co-precipitated using an iron hydroxide co-precipitation step enhanced with Ti³⁺ reductant, followed by lanthanum fluoride co-precipitation. Stacked TEVA Resin and TRU Resin cartridges are used to rapidly separate Pu, U, and Np isotopes from seawater samples. TEVA Resin and DGA Resin were used to separate and measure Pu, Am and Cm isotopes in seawater volumes up to 80 L. This robust method is ideal for emergency seawater samples following a radiological incident. It can also be used, however, for the routine analysis of seawater samples for oceanographic studies to enhance efficiency and productivity. In contrast, many current methods to determine actinides in seawater can take 1-2 weeks and provide chemical yields of ~30-60%. This new sample preparation method can be performed in 4-8 h with tracer yields of ~85-95%. By employing a rapid, robust sample preparation method with high chemical yields, less seawater is needed to achieve lower or comparable detection limits for actinide isotopes with less time and effort.
Simplified jet fuel reaction mechanism for lean burn combustion application
NASA Technical Reports Server (NTRS)
Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman
1993-01-01
Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. Detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. A five-step Jet-A fuel mechanism, which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds, is presented. This mechanism is verified by comparison with Jet-A fuel ignition delay time experimental data and with species concentrations obtained from flametube experiments. The five-step mechanism appears to be better than the current one- and two-step mechanisms.
NASA Astrophysics Data System (ADS)
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.
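For reference, the open-loop quantity being estimated is just energy counting: SOE falls by the drawn energy, the integral of v*i dt, over the rated energy. The paper's contribution, correcting this integration online with the FOM-VSSD model and an AFEKF, is not reproduced in this sketch.

```python
import numpy as np

def soe_energy_counting(v, i, dt, e_rated_wh, soe0=1.0):
    """Open-loop reference for state of energy: SOE decreases by the energy
    drawn, integral(v*i)dt, over the rated energy. The paper corrects this
    integration online with an adaptive fractional-order extended Kalman
    filter; that filter is not reproduced here."""
    e_drawn_wh = np.cumsum(v * i) * dt / 3600.0   # Wh drawn from the cell
    return soe0 - e_drawn_wh / e_rated_wh

# illustrative discharge: constant current with a slowly sagging voltage
t = np.arange(0, 1800, 1.0)            # 30 min at 1 s steps
v = 3.3 - 0.0001 * t                   # terminal voltage, V (synthetic)
i = np.full_like(t, 2.0)               # discharge current, A
print(soe_energy_counting(v, i, 1.0, e_rated_wh=6.6)[-1])  # ~0.51
```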
An optimized high quality male DNA extraction from spermatophores in open thelycum shrimp species.
Planella, Laia; Heras, Sandra; Vera, Manuel; García-Marín, José-Luis; Roldán, María Inés
2017-09-01
A crucial step in most current genetic studies is the extraction of DNA of sufficient quantity and quality. Several genomic DNA isolation methods have been described that successfully obtain male DNA from shrimp species. However, all current protocols require invasive handling of males for DNA isolation. Using Aristeus antennatus as a model, we tested a reliable, non-invasive differential DNA extraction method for isolating male DNA from spermatophores attached to the female thelycum. The present protocol provides DNA of high quality and quantity for polymerase chain reaction amplification and male genotyping. This new approach could be useful in experimental shrimp culture for selecting sires with relevant genetic patterns for selective breeding programs. More importantly, it can be applied to identify mating pairs and male structure in wild populations of species such as A. antennatus, where males are often difficult to capture. Our method could also be valuable for biological studies of other spermatophore-using species, such as myriapods, arachnids and insects. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
NASA Technical Reports Server (NTRS)
Liu, A. F.
1974-01-01
A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.
Jaroenlak, Pattana; Sanguanrut, Piyachat; Williams, Bryony A. P.; Stentiford, Grant D.; Flegel, Timothy W.; Sritunyalucksana, Kallaya
2016-01-01
Hepatopancreatic microsporidiosis (HPM) caused by Enterocytozoon hepatopenaei (EHP) is an important disease of cultivated shrimp. Heavy infections may lead to retarded growth and unprofitable harvests. Existing PCR detection methods target the EHP small subunit ribosomal RNA (SSU rRNA) gene (SSU-PCR). However, we discovered that they can give false positive test results due to cross reactivity of the SSU-PCR primers with DNA from closely related microsporidia that infect other aquatic organisms. This is problematic for investigating and monitoring EHP infection pathways. To overcome this problem, a sensitive and specific nested PCR method was developed for detection of the spore wall protein (SWP) gene of EHP (SWP-PCR). The new SWP-PCR method did not produce false positive results from closely related microsporidia. The first PCR step of the SWP-PCR method was 100 times (10⁴ plasmid copies per reaction vial) more sensitive than that of the existing SSU-PCR method (10⁶ copies) but sensitivity was equal for both in the nested step (10 copies). Since the hepatopancreas of cultivated shrimp is not currently known to be infected with microsporidia other than EHP, the SSU-PCR methods are still valid for analyzing hepatopancreatic samples despite the lower sensitivity than the SWP-PCR method. However, due to its greater specificity and sensitivity, we recommend that the SWP-PCR method be used to screen for EHP in feces, feed and environmental samples for potential EHP carriers. PMID:27832178
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Qiang; Fan, Liang-Shih, E-mail: fan.1@osu.edu
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method, and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, further improved by implementing high-order Runge–Kutta schemes in the coupled fluid–particle interaction. The major challenge in implementing high-order Runge–Kutta schemes in the LBM is that flow information such as density and velocity cannot be obtained directly at a fractional time step, since the LBM provides flow information only at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid–particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge–Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and −0.47 times the lattice spacing, respectively. Simulations of Stokes flow through a simple cubic lattice of rotating spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the drag force when practical rotating speeds of the spheres are encountered. This finding may lead to more comprehensive studies of the effect of particle rotation on fluid–solid drag laws. It is also demonstrated that, when the third-order or fourth-order Runge–Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including previous IB-LBMs and methods combining the IBM with traditional incompressible Navier–Stokes solvers. Highlights: • The IBM is embedded in the LBM using Runge–Kutta time schemes. • The effectiveness of the present IB-LBM is validated by benchmark applications. • For the first time, the IB-LBM achieves second-order accuracy. • The numerical stability of the present IB-LBM is better than that of previous methods.
NASA Astrophysics Data System (ADS)
Cox, Christopher
Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large scale three-dimensional problems with a high-order polynomial basis remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the shear stress is both oscillatory and multidirectional. Also, the combined effect of curvature and pulsatility in cardiovascular flows produces unsteady vortices. The aim of this research as it relates to cardiovascular fluid dynamics is to predict the spatial and temporal evolution of vortical structures generated by secondary flows, as well as to assess the correlation between multiple vortex pairs and wall shear stress. We use a physiologically relevant (pulsatile) flow rate and generate results using both fully developed and uniform entrance conditions, the latter being motivated by the fact that flow upstream of a curved artery may not have sufficient straight entrance length to become fully developed. Under the two pulsatile inflow conditions, we characterize the morphology and evolution of various vortex pairs and their subsequent effect on relevant haemodynamic wall shear stress metrics.
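To make the dual time stepping idea concrete, here is a minimal sketch, not the solver described above: one backward-Euler physical step of a hypothetical stiff model problem is converged by explicit pseudo-time marching on the unsteady residual. The actual solver uses a second-order physical discretization and an implicit LU-SGS pseudo-time scheme instead.

```python
import numpy as np

def f(u):
    # Hypothetical stiff model term standing in for the spatial residual
    return -50.0 * (u - np.cos(u))

def dual_time_step(u_n, dt, dtau=1e-3, tol=1e-10, max_iters=20000):
    """Advance one backward-Euler physical step by driving the unsteady
    residual R(u) = (u - u_n)/dt - f(u) to zero in pseudo time."""
    u = u_n
    for _ in range(max_iters):
        R = (u - u_n) / dt - f(u)
        if abs(R) < tol:
            break
        u = u - dtau * R          # explicit pseudo-time march du/dtau = -R
    return u

u, dt = 1.0, 0.05
for _ in range(10):               # physical time loop
    u = dual_time_step(u, dt)
print(u)
```

In the artificial compressibility setting, the converged pseudo-time iteration is what drives the velocity divergence to zero at each physical step.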
Lü, Hai-tao; Liu, Jing; Deng, Rui; Song, Ji-ying
2012-01-01
Indigo and indirubin are the main active ingredients found in the traditional Chinese herbal medicine Folium isatidis. An effective method for the isolation and purification of indigo and indirubin from Folium isatidis is needed. Compared with conventional column chromatographic techniques, high-speed counter-current chromatography (HSCCC) is a suitable alternative for the enrichment and purification of these target compounds, and eliminates the complications resulting from a solid support matrix. To develop a reliable HSCCC method for isolation and identification of indigo and indirubin in a one-step separation from Folium isatidis. The optimum conditions for extracting indigo and indirubin from Folium isatidis were investigated by an L16(4⁵) orthogonal test. The target compounds were isolated and purified with a solvent system of n-hexane:ethyl acetate:ethanol:water (1:1:1:1, v/v), with the lower phase used as the mobile phase in the head-to-tail elution mode. The purities of the target compounds were tested by HPLC and their structures were identified by UV, IR, electrospray ionization (ESI)-MS, ¹H-NMR and ¹³C-NMR analyses. From 165 mg of the crude extract, 5.65 mg of indigo and 1.00 mg of indirubin were obtained, with purities of 98.4% and 99.0% by HPLC analysis and mean recoveries of 91.0% and 90.7%, respectively. The HSCCC method is effective for the preparative separation and purification of indigo and indirubin in a one-step separation from Folium isatidis. Copyright © 2012 John Wiley & Sons, Ltd.
March, Melissa I.; Modest, Anna M.; Ralston, Steven J.; Hacker, Michele R.; Gupta, Munish; Brown, Florence M.
2016-01-01
Objective: To compare characteristics and outcomes of women diagnosed with gestational diabetes mellitus (GDM) by the newer one-step glucose tolerance test and those diagnosed with the traditional two-step method. Research design and methods: This was a retrospective cohort study of women with GDM who delivered in 2010–2011. Data are reported as proportion or median (interquartile range) and were compared using a Chi-square, Fisher's exact or Wilcoxon rank sum test based on data type. Results: Of 235 women with GDM, 55.7% were diagnosed using the two-step method and 44.3% with the one-step method. The groups had similar demographics and GDM risk factors. The two-step method group was diagnosed with GDM one week later [27.0 (24.0–29.0) weeks versus 26.0 (24.0–28.0) weeks; p = 0.13]. The groups had similar median weight gain per week before diagnosis. After diagnosis, women in the one-step method group had significantly higher median weight gain per week [0.67 pounds/week (0.31–1.0) versus 0.56 pounds/week (0.15–0.89); p = 0.047]. In the one-step method group more women had suspected macrosomia (11.7% versus 5.3%, p = 0.07) and more neonates had a birth weight >4000 g (13.6% versus 7.5%, p = 0.13); however, these differences were not statistically significant. Other pregnancy and neonatal complications were similar. Conclusions: Women diagnosed with the one-step method gained more weight per week after GDM diagnosis and had a non-statistically significant increased risk for suspected macrosomia. Our data suggest the one-step method identifies women with at least equally high risk as the two-step method. PMID:25958989
Islam and Muslim Life in Current Bavarian Geography Textbooks
ERIC Educational Resources Information Center
Zecha, Stefanie; Popp, Stephan; Yasar, Aysun
2016-01-01
This paper investigates the Islam and Muslim life in German textbooks. The study is based on the analysis of current Geography textbooks in Bavarian secondary schools. As a first step, the authors developed a system for objective analysis of the textbooks that structures the content in categories. In a second step, the authors used the qualitative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. J. Galyean; A. M. Whaley; D. L. Kelly
This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence; and Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
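The worksheet arithmetic lends itself to a short sketch. The following is a minimal, unofficial rendering of Steps 1-5 as understood from NUREG/CR-6883; the nominal HEPs, the negative-PSF adjustment, the THERP-style dependence equations, and the 1.0E-5 floor are assumptions that should be checked against the guide before any real use.

```python
def spar_h_hep(task_type, psf_multipliers, dependence="zero"):
    """Sketch of the SPAR-H worksheet arithmetic (values assumed
    from NUREG/CR-6883; verify before use)."""
    nhep = {"diagnosis": 1.0e-2, "action": 1.0e-3}[task_type]    # Step-1
    psf_composite = 1.0
    for m in psf_multipliers:                                    # Step-2
        psf_composite *= m
    hep = nhep * psf_composite                                   # Step-3
    # Adjustment when 3 or more PSFs are negative (multiplier > 1)
    if sum(m > 1.0 for m in psf_multipliers) >= 3:
        hep = (nhep * psf_composite) / (nhep * (psf_composite - 1.0) + 1.0)
    dep = {"zero":     lambda p: p,                              # Step-4
           "low":      lambda p: (1 + 19 * p) / 20,
           "moderate": lambda p: (1 + 6 * p) / 7,
           "high":     lambda p: (1 + p) / 2,
           "complete": lambda p: 1.0}
    hep = dep[dependence](hep)
    return max(hep, 1.0e-5)     # Step-5: minimum value cutoff (assumed floor)

print(spar_h_hep("diagnosis", [2, 1, 10, 1, 1, 1, 5, 1], dependence="low"))
```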
Soualmia, L F; Charlet, J
2016-11-10
To summarize excellent current research in the field of Knowledge Representation and Management (KRM) within the health and medical care domain. We provide a synopsis of the 2016 IMIA selected articles as well as a related synthetic overview of current and future field activities. A first step of the selection was performed through MEDLINE querying with a list of MeSH descriptors completed by a list of terms adapted to the KRM section. The second step of the selection was completed by the two section editors, who separately evaluated the set of 1,432 articles. The third step of the selection consisted of a collective work that merged the evaluation results to retain 15 articles for peer review. The selection and evaluation process of this Yearbook's section on Knowledge Representation and Management yielded four excellent and interesting articles regarding semantic interoperability for health care, achieved by gathering heterogeneous sources (knowledge and data) and by auditing ontologies. In the first article, the authors present a solution based on standards and Semantic Web technologies to access distributed and heterogeneous datasets in the domain of breast cancer clinical trials. The second article describes a knowledge-based recommendation system that relies on ontologies and Semantic Web rules in the context of dietary management of chronic diseases. The third article is related to concept recognition and text mining used to derive a model of common human diseases and a phenotypic network of common diseases. In the fourth article, the authors highlight the need for auditing SNOMED CT and propose a crowd-based method for ontology engineering. The current research activities further illustrate the continuous convergence of Knowledge Representation and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care by proposing solutions to the problem of semantic interoperability. Indeed, there is a need for powerful tools able to manage and interpret complex, large-scale and distributed datasets and knowledge bases, but also a need for user-friendly tools developed for clinicians in their daily practice.
Crowdsourced Curriculum Development for Online Medical Education.
Shappell, Eric; Chan, Teresa M; Thoma, Brent; Trueger, N Seth; Stuntz, Bob; Cooney, Robert; Ahn, James
2017-12-08
In recent years online educational content, efforts at quality appraisal, and integration of online material into institutional teaching initiatives have increased. However, medical education has yet to develop large-scale online learning centers. Crowd-sourced curriculum development may expedite the realization of this potential while providing opportunities for innovation and scholarship. This article describes the current landscape, best practices, and future directions for crowdsourced curriculum development using Kern's framework for curriculum development and the example topic of core content in emergency medicine. A scoping review of online educational content was performed by a panel of subject area experts for each step in Kern's framework. Best practices and recommendations for future development for each step were established by the same panel using a modified nominal group consensus process. The most prevalent curriculum design steps were (1) educational content and (2) needs assessments. Identified areas of potential innovation within these steps included targeting gaps in specific content areas and developing underrepresented instructional methods. Steps in curriculum development without significant representation included (1) articulation of goals and objectives and (2) tools for curricular evaluation. By leveraging the power of the community, crowd-sourced curriculum development offers a mechanism to diffuse the burden associated with creating comprehensive online learning centers. There is fertile ground for innovation and scholarship in each step along the continuum of curriculum development. Realization of this paradigm's full potential will require individual developers to strongly consider how their contributions will align with the work of others.
NASA Astrophysics Data System (ADS)
Hefferman, Gerald; Chen, Zhen; Wei, Tao
2017-07-01
This article details the generation of an extended-bandwidth frequency sweep using a single, communication grade distributed feedback (DFB) laser. The frequency sweep is generated using a two-step technique. In the first step, injection current modulation is employed as a means of varying the output frequency of a DFB laser over a bandwidth of 99.26 GHz. A digital optical phase lock loop is used to lock the frequency sweep speed during current modulation, resulting in a linear frequency chirp. In the second step, the temperature of the DFB laser is modulated, resulting in a shifted starting laser output frequency. A laser frequency chirp is again generated beginning at this shifted starting frequency, resulting in a frequency-shifted spectrum relative to the first recorded data. This process is then repeated across a range of starting temperatures, resulting in a series of partially overlapping, frequency-shifted spectra. These spectra are then aligned using cross-correlation and combined using averaging to form a single, broadband spectrum with a total bandwidth of 510.9 GHz. In order to investigate the utility of this technique, experimental testing was performed in which the approach was used as the swept-frequency source of a coherent optical frequency domain reflectometry system. This system was used to interrogate an optical fiber containing a 20 point, 1-mm pitch length fiber Bragg grating, corresponding to a period of 100 GHz. Using this technique, both the periodicity of the grating in the frequency domain and the individual reflector elements of the structure in the time domain were resolved, demonstrating the technique's potential as a method of extending the sweeping bandwidth of semiconductor lasers for frequency-based sensing applications.
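The alignment-and-averaging step lends itself to a compact numerical illustration. The sketch below, with entirely synthetic data rather than the authors' measurements, recovers the frequency-bin offset between two partially overlapping sweeps by cross-correlation and merges them into a single trace by averaging the overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" broadband spectrum and two overlapping sweeps, the
# second starting at an unknown frequency-bin shift of the first.
true_spec = rng.standard_normal(4000).cumsum()
sweep_a = true_spec[:2500] + 0.05 * rng.standard_normal(2500)
shift_true = 1500
sweep_b = true_spec[shift_true:shift_true + 2500] \
    + 0.05 * rng.standard_normal(2500)

# Align: cross-correlate the mean-removed sweeps, take the peak lag.
a, b = sweep_a - sweep_a.mean(), sweep_b - sweep_b.mean()
lag = np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1)
print("estimated shift:", lag)        # recovers ~1500 bins

# Combine: average the overlapping bins, concatenate the remainder.
overlap = len(sweep_a) - lag
combined = np.concatenate([
    sweep_a[:lag],
    0.5 * (sweep_a[lag:] + sweep_b[:overlap]),
    sweep_b[overlap:],
])
```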
Introduction to Space Resource Mining
NASA Technical Reports Server (NTRS)
Mueller, Robert P.
2013-01-01
There are vast amounts of resources in the solar system that will be useful to humans in space and possibly on Earth. None of these resources can be exploited without the first necessary step of extra-terrestrial mining. The necessary technologies for tele-robotic and autonomous mining have not yet matured sufficiently. The current state of technology was assessed for terrestrial and extraterrestrial mining, and a taxonomy of robotic space mining mechanisms was presented based on existing prototypes. Terrestrial and extra-terrestrial mining methods and technologies are on the cusp of massive changes towards automation and autonomy for economic and safety reasons. It is highly likely that these industries will benefit from mutual cooperation and technology transfer.
Design and fabrication of highly sensitive and stable biochip for glucose biosensing
NASA Astrophysics Data System (ADS)
Lu, Shi-Yu; Lu, Yao; Jin, Meng; Bao, Shu-Juan; Li, Wan-Yun; Yu, Ling
2017-11-01
Conventional production steps for test strips are complex and laborious. In this work, we propose a feasible binder-free test strip fabrication method in which enzyme/manganese phosphate nanosheet hybrids are grown directly on screen-printed electrodes (SPE). Combined with microfluidic packaging technology, the ready-made portable electrochemical biochip shows a wider linear range (1-40 mM, R² = 0.9998) and excellent stability (maintaining 98% of the response current after 20 days of storage and 75% after 30 days of continuous determination) for the detection of glucose. Compared with commercial test strips, the biochip exhibits excellent sensitivity, stability and accuracy, which is indicative of its potential application in real samples.
A Fourier Method for Sidelobe Reduction in Equally Spaced Linear Arrays
NASA Astrophysics Data System (ADS)
Safaai-Jazi, Ahmad; Stutzman, Warren L.
2018-04-01
Uniformly excited, equally spaced linear arrays have a sidelobe level larger than -13.3 dB, which is too high for many applications. This limitation can be remedied by nonuniform excitation of the array elements. We present an efficient method for sidelobe reduction in equally spaced linear arrays with a low penalty on directivity. The method involves the following steps: constructing a periodic function containing only the sidelobes of the uniformly excited array, calculating the Fourier series of this periodic function, subtracting the truncated series from the array factor of the original uniformly excited array, and finally mitigating the truncation effects, which yields a significant increase in sidelobe level reduction. A sidelobe reduction factor is incorporated into the element currents that makes much larger sidelobe reductions possible and also allows the sidelobe level to be varied incrementally. It is shown that such newly formed arrays can provide sidelobe levels at least 22.7 dB below those of uniformly excited arrays of the same size and number of elements. Analytical expressions for the element currents are presented. Radiation characteristics of the sidelobe-reduced arrays introduced here are examined, and numerical results for directivity, sidelobe level, and half-power beamwidth are presented for example cases. Performance improvements over popular conventional array synthesis methods, such as Chebyshev and linear current tapered arrays, are obtained with the new method.
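For context, the uniform-excitation baseline that the method improves upon is easy to reproduce numerically. The sketch below, purely illustrative with an assumed 16-element, half-wavelength-spaced array, computes the array factor and its peak sidelobe level, which lands near the -13.3 dB figure quoted above.

```python
import numpy as np

N, d = 16, 0.5                         # elements, spacing in wavelengths
currents = np.ones(N)                  # uniform excitation (the baseline)
theta = np.linspace(0.0, np.pi, 20001)
psi = 2 * np.pi * d * np.cos(theta)    # inter-element phase shift
af = np.abs(np.exp(1j * np.outer(psi, np.arange(N))) @ currents)
af_db = 20 * np.log10(np.maximum(af, 1e-12) / af.max())

# Peak sidelobe: largest local maximum after the 0 dB main lobe.
interior = (af_db[1:-1] > af_db[:-2]) & (af_db[1:-1] > af_db[2:])
peaks = np.sort(af_db[1:-1][interior])
print("peak sidelobe level: %.1f dB" % peaks[-2])   # close to -13.3 dB
```

Nonuniform current tapers, such as the sidelobe reduction factor described in the abstract, trade a modest directivity loss for much lower values of this figure.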
Cho, Il-Hoon; Ku, Seockmo
2017-09-30
The development of novel and high-tech solutions for rapid, accurate, and non-laborious microbial detection methods is imperative to improve the global food supply. Such solutions have begun to address the need for microbial detection that is faster and more sensitive than existing methodologies (e.g., classic culture enrichment methods). Multiple reviews report the technical functions and structures of conventional microbial detection tools. These tools, used to detect pathogens in food and food homogenates, were designed via qualitative analysis methods. The inherent disadvantage of these analytical methods is the necessity for specimen preparation, which is a time-consuming process. While some literature describes the challenges and opportunities for overcoming the technical issues related to food industry legal guidelines, reviews of current efforts to overcome the technological limitations of sample preparation and microbial detection via nano- and microtechnologies are lacking. In this review, we primarily explore current analytical technologies, including metallic and magnetic nanomaterials, optics, electrochemistry, and spectroscopy. These techniques rely on the early detection of pathogens via enhanced analytical sensitivity and specificity. In order to introduce the potential combination and comparative analysis of various advanced methods, we also reference a novel sample preparation protocol that uses microbial concentration and recovery technologies. This technology has the potential to expedite the pre-enrichment step that precedes the detection process.
Rodríguez, Roberto A; Love, David C; Stewart, Jill R; Tajuba, Julianne; Knee, Jacqueline; Dickerson, Jerold W; Webster, Laura F; Sobsey, Mark D
2012-04-01
Methods for detection of two fecal indicator viruses, F+ and somatic coliphages, were evaluated for application to recreational marine water. Marine water samples were collected during the summer of 2007 in Southern California, United States, from transects along Avalon Beach (n=186 samples) and Doheny Beach (n=101 samples). Coliphage detection methods included EPA method 1601 - two-step enrichment (ENR), EPA method 1602 - single agar layer (SAL), and variations of ENR. Variations included comparison of two incubation times (overnight and 5-h incubation) and two final detection steps (lysis zone assay and a rapid latex agglutination assay). A greater number of samples were positive for somatic and F+ coliphages by ENR than by SAL (p<0.01). The standard ENR with overnight incubation and detection by lysis zone assay was the most sensitive method for the detection of F+ and somatic coliphages from marine water, although the method takes up to three days to obtain results. A rapid 5-h enrichment version of ENR also performed well, with more positive samples than SAL, and could be performed in roughly 24 h. Latex agglutination-based detection methods require the least amount of time to perform, although their sensitivity was lower than that of lysis zone-based detection methods. Rapid culture-based enrichment of coliphages in marine water may be possible by further optimizing culture-based methods for saline water conditions to generate higher viral titers than currently available, as well as by increasing the sensitivity of latex agglutination detection methods. Copyright © 2012 Elsevier B.V. All rights reserved.
Rebolledo-Leiva, Ricardo; Angulo-Meza, Lidia; Iriarte, Alfredo; González-Araya, Marcela C
2017-09-01
Operations management tools are critical in the process of evaluating and implementing action towards low-carbon production. Currently, sustainable production implies both efficient resource use and the obligation to meet targets for reducing greenhouse gas (GHG) emissions. The carbon footprint (CF) tool allows estimating the overall amount of GHG emissions associated with a product or activity throughout its life cycle. In this paper, we propose a four-step method for the joint use of CF assessment and Data Envelopment Analysis (DEA). Following the eco-efficiency definition, which is the delivery of goods using fewer resources and with decreasing environmental impact, we use an output-oriented DEA model to maximize production and reduce CF, taking into account the economic and ecological perspectives simultaneously. In a further step, we establish targets for the contributing CF factors in order to achieve CF reduction. The proposed method was applied to assess the eco-efficiency of five organic blueberry orchards throughout three growing seasons. The results show that this method is a practical tool for determining eco-efficiency and reducing GHG emissions. Copyright © 2017 Elsevier B.V. All rights reserved.
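As a rough illustration of the DEA step, and not the authors' exact model specification, the sketch below solves an output-oriented CCR envelopment problem for five hypothetical orchards, treating resource use and carbon footprint as inputs and production as the single output; all figures are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for 5 orchards: inputs = (water use [m3],
# carbon footprint [t CO2e]); output = blueberry production [t].
X = np.array([[60, 12], [55, 9], [70, 15], [40, 8], [65, 14]], float)
Y = np.array([[20], [22], [21], [15], [18]], float)
n = len(X)

def output_oriented_ccr(o):
    """Maximize phi for unit o; decision vector is [phi, lam_1..lam_n]."""
    c = np.zeros(n + 1)
    c[0] = -1.0                                 # linprog minimizes, so -phi
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                 # sum_j lam_j x_ij <= x_io
        A_ub.append(np.r_[0.0, X[:, i]]); b_ub.append(X[o, i])
    for r in range(Y.shape[1]):                 # phi*y_ro <= sum_j lam_j y_rj
        A_ub.append(np.r_[Y[o, r], -Y[:, r]]); b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]                             # phi = 1 means eco-efficient

for o in range(n):
    print(f"orchard {o}: phi = {output_oriented_ccr(o):.3f}")
```

A unit with phi > 1 could, on the observed frontier, expand production by that factor without increasing its inputs, a natural starting point for setting reduction targets.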
Bastani, Peivand; Mehralian, Gholamhossein; Dinarvand, Rasoul
2015-01-01
The aim of this study was to review the current methods of pharmaceutical purchasing by Iranian insurance organizations within the World Bank conceptual framework model so as to provide applicable pharmaceutical resource allocation and purchasing (RAP) arrangements in Iran. This qualitative study was conducted through a qualitative document analysis (QDA), applying the four-step Scott method for document selection and conducting 20 semi-structured interviews using a triangulation method. Furthermore, the data were analyzed applying a five-step framework analysis using Atlas-ti software. The QDA showed that the purchasers face many structural, financing, payment, delivery, and service procurement and purchasing challenges. Moreover, the findings of the interviews are provided in three sections: demand side, supply side, and the price and incentive regime. Localizing RAP arrangements as a World Bank framework in a developing country like Iran suggests the following as prerequisites for implementing strategic purchasing in the pharmaceutical sector: improvement of accessibility, subsidiary mechanisms, reimbursement of new drugs, rational use, a uniform pharmacopeia, best-supplier selection, reduction of induced demand and moral hazard, and payment reform. It is obvious that for Iran, these customized aspects are more varied and detailed than those proposed in the World Bank model for developing countries.
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge–Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
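A serial version of the core computation is compact; the sketch below uses a hypothetical mean separation, covariance, and hard-body radius to estimate the collision probability at closest approach, together with its Monte Carlo standard error. The GPU implementation described above parallelizes exactly this sampling loop and repeats it along the encounter window.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 2_000_000

# Hypothetical relative state at nominal closest approach: mean
# separation [km] and combined position covariance [km^2] of the two
# RSOs, plus their combined hard-body radius [km].
mean_sep = np.array([0.5, 0.2, 0.1])
cov = np.diag([0.09, 0.04, 0.01])
r_combined = 0.02                      # 20 m combined collision radius

samples = rng.multivariate_normal(mean_sep, cov, size=n_samples)
miss = np.linalg.norm(samples, axis=1)
pc = np.mean(miss < r_combined)
stderr = np.sqrt(pc * (1 - pc) / n_samples)
print(f"Pc = {pc:.2e} +/- {stderr:.1e}")
```

Because every sample's miss distance is retained, the probability can be re-evaluated for any set of collision radii without resampling, matching the behaviour noted in the abstract.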
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method of modern computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size, each uniquely positioned in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of the small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
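The scale-bridging step can be illustrated briefly. Assuming hypothetical fine-scale strengths from the representative-sample simulations, the sketch below fits the Weibull parameters and then draws effective strengths for automata of the next scale level.

```python
import numpy as np
from scipy import stats

# Hypothetical strengths [MPa] of representative fine-scale samples,
# each simulated with an explicit small-pore structure.
strengths = np.array([212., 198., 244., 187., 230., 205., 221., 194.])

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m = {shape:.1f}, scale = {scale:.0f} MPa")

# At the next level, automata that contain the small pores only
# implicitly draw their strength from the fitted distribution.
coarse_strengths = stats.weibull_min.rvs(shape, loc=0.0, scale=scale,
                                         size=1000, random_state=1)
```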
Incorporating current research into formal higher education settings using Astrobites
NASA Astrophysics Data System (ADS)
Sanders, Nathan E.; Kohler, Susanna; Faesi, Chris; Villar, Ashley; Zevin, Michael
2017-10-01
A primary goal of many undergraduate- and graduate-level courses in the physical sciences is to prepare students to engage in scientific research or to prepare students for careers that leverage skillsets similar to those used by research scientists. Even for students who may not intend to pursue a career with these characteristics, exposure to the context of applications in modern research can be a valuable tool for teaching and learning. However, a persistent barrier to student participation in research is familiarity with the technical language, format, and context that academic researchers use to communicate research methods and findings with each other: the literature of the field. Astrobites, an online web resource authored by graduate students, has published brief and accessible summaries of more than 1300 articles from the astrophysical literature since its founding in 2010. This article presents three methods for introducing students at all levels within the formal higher education setting to approaches and results from modern research. For each method, we provide a sample lesson plan that integrates content and principles from Astrobites, including step-by-step instructions for instructors, suggestions for adapting the lesson to different class levels across the undergraduate and graduate spectrum, sample student handouts, and a grading rubric.
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominating error could be distinguished from these degraded reconstructions without prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources, which cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and in experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
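Of the ingredients listed, the simulated annealing loop with an adaptive step size is the easiest to sketch generically. The code below is a toy calibration of two hypothetical misalignment parameters against a stand-in error metric; it is not the SC-FPM algorithm itself, which couples the annealing to the FPM reconstruction, LED intensity correction, and nonlinear regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_metric(params):
    """Stand-in for the reconstruction error; hypothetical optimum
    at (0.3, -0.2)."""
    dx, dy = params
    return (dx - 0.3) ** 2 + (dy + 0.2) ** 2

params = np.array([0.0, 0.0])
cost = error_metric(params)
step, T = 0.5, 1.0                        # initial step size, temperature
for _ in range(2000):
    trial = params + step * rng.standard_normal(2)
    trial_cost = error_metric(trial)
    # Accept downhill moves always; uphill with Boltzmann probability.
    if trial_cost < cost or rng.random() < np.exp(-(trial_cost - cost) / T):
        params, cost = trial, trial_cost
    T *= 0.995                            # cooling schedule
    step = max(step * 0.999, 1e-3)        # adaptive step-size shrink
print(params, cost)
```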
An efficient mode-splitting method for a curvilinear nearshore circulation model
Shi, Fengyan; Kirby, James T.; Hanes, Daniel M.
2007-01-01
A mode-splitting method is applied to the quasi-3D nearshore circulation equations in generalized curvilinear coordinates. The gravity wave mode and the vorticity wave mode of the equations are derived using the two-step projection method. Using an implicit algorithm for the gravity mode and an explicit algorithm for the vorticity mode, we combine the two modes to derive a mixed difference–differential equation with respect to surface elevation. McKee et al.'s [McKee, S., Wall, D.P., and Wilson, S.K., 1996. An alternating direction implicit scheme for parabolic equations with mixed derivative and convective terms. J. Comput. Phys., 126, 64–76.] ADI scheme is then used to solve the parabolic-type equation, dealing with the mixed derivative and convective terms arising from the curvilinear coordinate transformation. Good convergence rates are found in two typical cases representing, respectively, motions dominated by the gravity mode and by the vorticity mode. Time step limitations imposed by the vorticity convective Courant number in vorticity-mode-dominant cases are discussed. Model efficiency and accuracy are verified in a model application to tidal current simulations in San Francisco Bight.
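The ADI idea at the heart of the scheme is classical. As a minimal sketch, using the Peaceman-Rachford splitting for a plain 2D heat equation rather than McKee et al.'s mixed-derivative variant, each time step is split into two tridiagonal sweeps, one implicit in each coordinate direction:

```python
import numpy as np
from scipy.linalg import solve_banded

N, alpha, dt, steps = 64, 1.0, 1e-3, 100
h = 1.0 / (N + 1)
r = alpha * dt / h ** 2

# Banded form of the tridiagonal operator (I - (r/2) * delta^2).
ab = np.zeros((3, N))
ab[0, 1:] = -r / 2           # super-diagonal
ab[1, :] = 1 + r             # main diagonal
ab[2, :-1] = -r / 2          # sub-diagonal

def d2(u, axis):
    """Second difference along an axis, zero Dirichlet boundaries."""
    p = np.zeros((N + 2, N + 2))
    p[1:-1, 1:-1] = u
    return (p[2:, 1:-1] - 2 * u + p[:-2, 1:-1] if axis == 0
            else p[1:-1, 2:] - 2 * u + p[1:-1, :-2])

u = np.zeros((N, N))
u[24:40, 24:40] = 1.0        # hypothetical initial hot patch
for _ in range(steps):
    rhs = u + (r / 2) * d2(u, axis=1)      # sweep 1: implicit in x
    u = np.column_stack([solve_banded((1, 1), ab, rhs[:, j])
                         for j in range(N)])
    rhs = u + (r / 2) * d2(u, axis=0)      # sweep 2: implicit in y
    u = np.array([solve_banded((1, 1), ab, rhs[i, :]) for i in range(N)])
print(u.max())
```

The parabolic surface-elevation equation in the abstract is handled in the same alternating spirit, with extra terms for the mixed derivatives and convection.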
[Effect of two-step sintering method on properties of zirconia ceramic].
Huang, Hui; Wei, Bin; Zhang, Fu-Qiang; Sun, Jing; Gao, Lian
2008-04-01
To study the influence of the two-step sintering method on the sintering behavior, mechanical properties, and microstructure of zirconia ceramic. Nano-sized zirconia powder was compacted and divided into two groups, one for the one-step sintering method and the other for the two-step sintering method. All samples were sintered at different temperatures. The relative density, three-point bending strength, HV hardness, fracture toughness, and microstructure of the sintered blocks were investigated. The two-step sintering method influenced the sintering behavior and mechanical properties of the zirconia ceramic. The maximal relative density was 98.49% at a 900 °C/1,450 °C sintering temperature. There were significant differences in mechanical properties between one-step and two-step sintering: with two-step sintering the three-point bending strength and fracture toughness declined while the hardness increased. The three-point bending strength, HV hardness, and fracture toughness reached maximum values of 1,059.08 MPa ± 75.24 MPa, 1,377.00 MPa ± 16.37 MPa, and 5.92 MPa·m^1/2 ± 0.37 MPa·m^1/2, respectively, at the 900 °C/1,450 °C sintering temperature. Microscopy revealed that the porosity and the shapes of the grains were correlated with the strength of the zirconia ceramics. Although the two-step sintering method influences the properties of zirconia, the material remains a promising esthetic all-ceramic dental material.
Mertz, Marcel; Strech, Daniel
2014-12-04
Clinical practice guidelines (CPGs), a core tool to foster medical professionalism, differ widely in whether and how they address disease-specific ethical issues (DSEIs), and current manuals for CPG development are silent on this issue. The implementation of an explicit method faces two core challenges: first, it adds further complexity to CPG development and requires human and financial resources. Second, in contrast to the in-depth treatment of ethical issues that is standard in bioethics, the inclusion of DSEIs in CPGs need to be more pragmatic, reductive, and simplistic, but without rendering the resulting recommendations useless or insufficiently justified. This paper outlines a six-step approach, EthicsGuide, for the systematic and transparent inclusion of ethical issues and recommendations in CPGs. The development of EthicsGuide is based on (a) methodological standards in evidence-based CPG development, (b) principles of bioethics, (c) research findings on how DSEIs are currently addressed in CPGs, and (d) findings from two proof-of-concept analyses of the EthicsGuide approach. The six steps are 1) determine the DSEI spectrum and the need for ethical recommendations; 2) develop statements on which to base ethical recommendations; 3) categorize, classify, condense, and paraphrase the statements; 4) write recommendations in a standard form; 5) validate and justify recommendations, making any necessary modifications; and 6) address consent. All six steps necessarily come into play when including DSEIs in CPGs. If DSEIs are not explicitly addressed, they are unavoidably dealt with implicitly. We believe that as ethicists gain greater involvement in decision-making about health, personal rights, or economic issues, they should make their methods transparent and replicable by other researchers; and as ethical issues become more widely reflected in CPGs, CPG developers have to learn how to address them in a methodologically adequate way. The approach proposed should serve as a basis for further discussion on how to reach these goals. It breaks open the black box of what ethicists implicitly do when they develop recommendations. Further, interdisciplinary discussion and pilot tests are needed to explore the minimal requirements that guarantee a simplified procedure which is still acceptable and does not become mere window dressing.
NASA Astrophysics Data System (ADS)
Dalrymple, Odesma Onika
Undergraduate engineering institutions are currently seeking to improve recruiting practices and to retain engineering majors particularly by addressing what many studies document as a major challenge of poor instruction. There is an undisputed need for instructional practices that motivate students in addition to facilitating the transfer of learning beyond the classroom. Reverse engineering and product dissection, more broadly termed Disassemble/Analyze/Assemble (DAA) activities, have shown potential to address these concerns, based on the reviews of students and professors alike. DAA activities involve the systematic deconstruction of an artifact, the subsequent analysis and possible reconstruction of its components for the purpose of understanding the embodied fundamental concepts, design principles and developmental processes. These activities have been part of regular industry practice for some time; however, the systematic analysis of their benefits for learning and instruction is a relatively recent phenomenon. A number of studies have provided highly descriptive accounts of curricula and possible outcomes of DAA activities; but, relatively few have compared participants doing DAA activities to a control group doing more traditional activities. In this respect, two quasi-experiments were conducted as part of a first-year engineering laboratory, and it was hypothesized that students who engaged in the DAA activity would be more motivated and would demonstrate higher frequencies of transfer than the control. A DAA activity that required students to disassemble a single-use camera and analyze its components to discover how it works was compared to a step-by-step laboratory activity in the first experiment and a lecture method of instruction in the second experiment. In both experiments, over forty percent of the students that engaged in the DAA activity demonstrated the ability to transfer the knowledge gained about the functions of the camera's components and their interconnectedness and describe an approach for modifying the camera that involved the adaptation of a current mechanism to add new functionality. This exhibition of transfer was significantly greater than the frequency of transfer yielded by the comparative traditional activities. In addition, the post laboratory surveys indicated that the DAA activities elicited significantly higher levels of motivation than the step-by-step laboratory and the direct instructional method.
Demitri, Nevine; Zoubir, Abdelhak M
2017-01-01
Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared to state-of-the-art methods.
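Step 1's iterative mode-seeking can be illustrated with a flat-kernel mean shift on pixel intensities. The sketch below uses hypothetical intensity clusters, not the authors' variants; seeding from the darkest pixels converges to the region-of-interest mode without assuming any number of clusters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pixel intensities: a bright background cluster and a
# darker reaction-zone (region of interest) cluster.
pixels = np.r_[rng.normal(0.80, 0.03, 5000), rng.normal(0.35, 0.04, 800)]

def mean_shift_mode(x, start, bandwidth=0.05, tol=1e-6, max_iter=200):
    """Repeatedly move to the mean of samples within one bandwidth;
    converges to a local mode of the intensity distribution."""
    m = start
    for _ in range(max_iter):
        window = x[np.abs(x - m) < bandwidth]
        if window.size == 0:
            break
        new_m = window.mean()
        if abs(new_m - m) < tol:
            break
        m = new_m
    return m

roi_intensity = mean_shift_mode(pixels, start=pixels.min())
print(f"estimated ROI intensity: {roi_intensity:.3f}")  # near 0.35
```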
Numerical study on flow over stepped spillway using Lagrangian method
NASA Astrophysics Data System (ADS)
Wang, Junmin; Fu, Lei; Xu, Haibo; Jin, Yeechung
2018-02-01
Flow over stepped spillways has been studied for centuries; owing to its unstable nature and cavity characteristics, the simulation of this type of spillway flow remains difficult. Most early studies of flow over stepped spillways were based on experiments, while in recent decades numerical studies have drawn most researchers' attention due to their simplicity and efficiency. In this study, a new Lagrangian-based particle method is introduced to reproduce the phenomenon of flow over a stepped spillway. The inherent advantages of this particle-based method provide a convincing free surface and velocity profiles compared with previous experimental data. The capability of this new method is demonstrated, and it is anticipated to be an alternative to traditional mesh-based methods in environmental engineering applications such as the simulation of flow over stepped spillways.
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyong; Quan, Li; Chen, Yunyun; Liu, Guohai; Shen, Yue; Liu, Hui
2012-04-01
The concept of the memory motor is based on the fact that the magnetization level of the AlNiCo permanent magnet in the motor can be regulated by a temporary current pulse and memorized automatically. In this paper, a new type of memory motor is proposed, namely a flux mnemonic double salient motor drive, which is particularly attractive for electric vehicles. To accurately analyze the motor, an improved hysteresis model is employed in the time-stepping finite element method. Both simulation and experimental results are given to verify the validity of the new method.
NASA Astrophysics Data System (ADS)
Wang, Zicheng; Wei, Renbo; Liu, Xiaobo
2017-01-01
Reduced graphene oxide/copper phthalocyanine nanocomposites are successfully prepared through a simple and effective two-step method involving preferential reduction of graphene oxide followed by self-assembly with copper phthalocyanine. Photographs and the results of ultraviolet-visible spectroscopy, X-ray diffraction, X-ray photoelectron spectroscopy, and scanning electron microscopy show that the in situ blending method can effectively facilitate the homogeneous dispersion of graphene sheets in the copper phthalocyanine matrix through π–π interactions. As a result, the reduction of graphene oxide and the restoration of the sp² carbon sites in graphene effectively enhance the dielectric properties and alternating current conductivity of copper phthalocyanine.
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
Jarmusch, Alan K; Pirro, Valentina; Kerian, Kevin S; Cooks, R Graham
2014-10-07
Streptococcus pyogenes, the cause of strep throat, was detected in vitro and in simulated clinical samples by performing touch spray ionization-mass spectrometry. MS analysis took only seconds to reveal characteristic bacterial and human lipids. Medical swabs were used as the substrate for ambient ionization. This work constitutes the initial step in developing a non-invasive MS-based test for clinical diagnosis of strep throat. It is limited to the single species, S. pyogenes, which is responsible for the vast majority of cases. The method is complementary to and, with further testing, a potential alternative to current methods of point-of-care detection of S. pyogenes.
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
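The first step of the pipeline, random-forest patch classification, can be sketched as follows; the image, labels, and patch size are synthetic stand-ins for the EM volume, and scikit-learn is assumed rather than the authors' toolchain.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_patches(image, labels, size=11, n=2000):
    """Sample flattened 2D patches, labelled by their centre pixel."""
    half = size // 2
    xs = rng.integers(half, image.shape[0] - half, n)
    ys = rng.integers(half, image.shape[1] - half, n)
    X = np.array([image[x - half:x + half + 1,
                        y - half:y + half + 1].ravel()
                  for x, y in zip(xs, ys)])
    return X, labels[xs, ys]

image = rng.random((512, 512))                        # stand-in EM slice
labels = (rng.random((512, 512)) < 0.2).astype(int)   # stand-in ground truth

X, y = extract_patches(image, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# The per-pixel probability map would feed the later contour-pair
# classification and level-set steps of the pipeline.
probs = clf.predict_proba(X)[:, 1]
```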
Predictors of posttraumatic stress symptoms following childbirth
2014-01-01
Background Posttraumatic stress disorder (PTSD) following childbirth has gained growing attention in recent years. Although a number of predictors for PTSD following childbirth have been identified (e.g., history of sexual trauma, emergency caesarean section, low social support), only very few studies have tested predictors derived from current theoretical models of the disorder. This study first aimed to replicate the association of PTSD symptoms after childbirth with predictors identified in earlier research. Second, cognitive predictors derived from Ehlers and Clark’s (2000) model of PTSD were examined. Methods N = 224 women who had recently given birth completed an online survey. In addition to computing single correlations between PTSD symptom severity and variables of interest, posttraumatic stress symptoms were predicted in a hierarchical multiple regression analysis by (1) prenatal variables, (2) birth-related variables, (3) postnatal social support, and (4) cognitive variables. Results Wellbeing during pregnancy and age were the only prenatal variables contributing significantly to the explanation of PTSD symptoms in the first step of the regression analysis. In the second step, the birth-related variables peritraumatic emotions and wellbeing during childbed (the early postpartum period) significantly increased the explained variance. Despite showing significant bivariate correlations, social support entered in the third step did not predict PTSD symptom severity over and above the variables included in the first two steps. However, with the exception of peritraumatic dissociation, all cognitive variables emerged as powerful predictors and increased the amount of variance explained from 43% to a total of 68%. Conclusions The findings suggest that the prediction of PTSD following childbirth can be improved by focusing on variables derived from a current theoretical model of the disorder. PMID:25026966
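For readers unfamiliar with hierarchical (blockwise) regression, the sketch below shows the four-block entry scheme described above, using statsmodels; the column names are hypothetical and the cognitive-variable examples are placeholders, not the study's actual measures.

```python
# Hierarchical multiple regression: predictor blocks are entered in
# sequence and the gain in explained variance (delta R^2) is tracked.
import statsmodels.api as sm

blocks = [
    ["wellbeing_pregnancy", "age"],                   # (1) prenatal
    ["peritraumatic_emotions", "wellbeing_childbed"], # (2) birth-related
    ["social_support"],                               # (3) postnatal support
    ["negative_appraisals", "dysfunctional_coping"],  # (4) cognitive (placeholders)
]

def hierarchical_regression(df, outcome, blocks):
    included, prev_r2, fit = [], 0.0, None
    for i, block in enumerate(blocks, 1):
        included += block
        X = sm.add_constant(df[included])
        fit = sm.OLS(df[outcome], X).fit()
        print(f"Step {i}: R^2 = {fit.rsquared:.3f} "
              f"(delta R^2 = {fit.rsquared - prev_r2:.3f})")
        prev_r2 = fit.rsquared
    return fit  # model with all blocks entered
```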
Vail, III, William B.
1993-01-01
Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. For stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention discloses methods of operation comprising a measurement step followed by first and second compensation steps, which together improve measurement accuracy. First- and second-order errors of measurement are identified, and the measurement step and two compensation steps substantially eliminate their influence on the results. A multiple-frequency apparatus adapted for movement within the well, which provides the measurement and compensation steps simultaneously, is also described.
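The abstract does not specify the compensation scheme, so the following is only an illustrative sketch under an assumed additive error model, in which two auxiliary measurements estimate the first- and second-order error terms so they can be subtracted from the raw measurement; it is not the patented method.

```python
# Illustrative only: assumes the measured voltage is the true signal plus
# additive first- and second-order error terms, and that each compensation
# step isolates one of those terms.
def compensated_signal(v_meas, v_comp1, v_comp2):
    e1 = v_comp1             # first compensation step: 1st-order error estimate
    e2 = v_comp2 - v_comp1   # second step: isolates the 2nd-order residual
    return v_meas - e1 - e2  # corrected voltage used to derive resistivity
```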
Animal Disease Import Risk Analysis--a Review of Current Methods and Practice.
Peeler, E J; Reese, R A; Thrush, M A
2015-10-01
The application of risk analysis to the spread of disease with international trade in animals and their products, that is, import risk analysis (IRA), has been largely driven by the Sanitary and Phytosanitary (SPS) agreement of the World Trade Organization (WTO). The degree to which the IRA standard established by the World Organization for Animal Health (OIE), and associated guidance, meets the needs of the SPS agreement is discussed. Scenario trees are the core modelling approach used to represent the steps necessary for the hazard to occur. There is scope to elaborate scenario trees for commodity IRA so that the quantity of hazard at each step is assessed, which is crucial to the likelihood of establishment. The dependence between exposure and establishment suggests that they should fall within the same subcomponent. IRA undertaken for trade reasons must include an assessment of consequences to meet SPS criteria, but guidance is sparse. The integration of epidemiological and economic modelling may open a path to better methods. Matrices have been used in qualitative IRA to combine estimates of entry and exposure, and consequences with likelihood, but this approach has flaws and better methods are needed. OIE IRA standards and guidance indicate that the volume of trade should be taken into account, but offer no detail. Some published qualitative IRAs have assumed current levels and patterns of trade without specifying the volume of trade, which constrains the use of IRA to determine mitigation measures (to reduce risk to an acceptable level) and whether the principle of equivalence, fundamental to the SPS agreement, has been observed. It is questionable whether qualitative IRA can meet all the criteria set out in the SPS agreement. Nevertheless, scope exists to elaborate the current standards and guidance so they better serve the principle of science-based decision-making. © 2013 Crown copyright. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
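As a minimal illustration of the scenario-tree idea, and of why trade volume matters, the sketch below multiplies conditional step probabilities along a single pathway and scales the result by the number of imported units; the step names and all numbers are invented for illustration.

```python
# A quantitative scenario tree in miniature: the probability of the full
# pathway is the product of the conditional probabilities of its steps.
steps = {
    "agent present in exporting country":      0.10,
    "commodity contaminated":                  0.05,
    "agent survives processing and transport": 0.50,
    "susceptible host exposed":                0.20,
    "infection establishes":                   0.30,
}

p_pathway = 1.0
for name, p in steps.items():
    p_pathway *= p
print(f"P(entry, exposure and establishment per unit) = {p_pathway:.2e}")

# Risk scales with trade volume: for n independently imported units,
# the expected number of introductions is roughly n * p_pathway.
n_units = 10_000
print(f"Expected introductions at {n_units} units/year: {n_units * p_pathway:.2f}")
```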
Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Su, Gloria H; Emmert-Buck, Michael R
2005-02-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This article reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies, and give detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing, and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high-quality, appropriately anatomically tagged scientific results. A lack of optimized protocols is a source of inefficiency in current life science research; improvement in this area will significantly increase life science quality and productivity. The article is divided into introduction, materials, protocols, and notes sections. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous. To get the greatest benefit from this article, readers are advised to read through the entire article first, identify protocols appropriate to their laboratory for each step in their workflow, and then reread the entries in each section pertaining to each of these protocols.
Metabolomics as a tool in the identification of dietary biomarkers.
Gibbons, Helena; Brennan, Lorraine
2017-02-01
Current dietary assessment methods, including food frequency questionnaires (FFQ), 24-h recalls and weighed food diaries, are associated with many measurement errors. In an attempt to overcome some of these errors, dietary biomarkers have emerged as a complementary approach to these traditional methods. Metabolomics has developed as a key technology for the identification of new dietary biomarkers, and to date metabolomics-based approaches have led to the identification of a number of putative biomarkers. The three approaches generally employed when using metabolomics in dietary biomarker discovery are: (i) acute interventions where participants consume specific amounts of a test food, (ii) cohort studies where metabolic profiles are compared between consumers and non-consumers of a specific food and (iii) the analysis of dietary patterns and metabolic profiles to identify nutritypes and biomarkers. The present review critiques the current literature in terms of the approaches used for dietary biomarker discovery and gives a detailed overview of the currently proposed biomarkers, highlighting the steps needed for their full validation. Furthermore, the present review also evaluates areas such as current databases and software tools, which are needed to advance the interpretation of results and therefore enhance the utility of dietary biomarkers in nutrition research.
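A minimal sketch of approach (ii), comparing metabolite levels between consumers and non-consumers with a multiple-testing correction, is shown below; the data layout and column names are hypothetical.

```python
# Candidate dietary biomarkers from a cohort comparison: test each
# metabolite between consumers and non-consumers, then control the FDR.
from scipy import stats
from statsmodels.stats.multitest import multipletests

def candidate_biomarkers(df, metabolite_cols, group_col="consumer", alpha=0.05):
    """df: one row per participant; group_col is 1 for consumers, 0 otherwise."""
    pvals = [stats.mannwhitneyu(df.loc[df[group_col] == 1, m],
                                df.loc[df[group_col] == 0, m]).pvalue
             for m in metabolite_cols]
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [m for m, r in zip(metabolite_cols, reject) if r]
```

Metabolites surviving this screen would still require the validation steps (dose response, reproducibility, specificity) that the review emphasises.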
Numerical simulation of conservation laws
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; To, Wai-Ming
1992-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
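The abstract's space-time scheme is not reproduced here, but the leapfrog/DuFort-Frankel scheme it is compared against is simple to state; the sketch below implements that reference scheme for the 1D constant-coefficient convection-diffusion model equation u_t + a u_x = nu u_xx on a periodic grid.

```python
# Leapfrog/DuFort-Frankel scheme for u_t + a u_x = nu u_xx:
# (u[n+1,j] - u[n-1,j])/(2 dt) + a (u[n,j+1] - u[n,j-1])/(2 dx)
#   = nu (u[n,j+1] - u[n+1,j] - u[n-1,j] + u[n,j-1]) / dx^2
import numpy as np

def dufort_frankel(u0, a, nu, dx, dt, n_steps):
    c, r = a * dt / dx, nu * dt / dx**2
    u_prev = u0.copy()
    u = u0.copy()  # crude startup: first step degenerates; FTCS would be better
    for _ in range(n_steps):
        up = np.roll(u, -1)   # u[n, j+1] (periodic boundary)
        um = np.roll(u, +1)   # u[n, j-1]
        u_next = ((1 - 2 * r) * u_prev
                  - c * (up - um)
                  + 2 * r * (up + um)) / (1 + 2 * r)
        u_prev, u = u, u_next
    return u
```

As the abstract notes, the scheme's unconditional stability in the pure-diffusion limit comes from the implicit treatment of u[n+1,j] in the diffusion term, visible in the (1 + 2r) divisor.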
Meirovitch, Hagai
2010-01-01
The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P_i^B, while the value of P_i^B is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ≈ -ln P_i^B, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method, which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact, and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently, HSMD is being extended for calculating the absolute and relative free energies of ligand-enzyme binding. We describe the whole approach and discuss future directions. © 2009 John Wiley & Sons, Ltd.
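As a toy illustration of the step-by-step reconstruction idea (not the HSMC(D) method itself), the sketch below rebuilds a given 2D self-avoiding walk one step at a time, accumulates the product of transition probabilities, and estimates the entropy as S ≈ -ln P; uniform TPs over free neighbours are a crude stand-in for the interaction-weighted TPs of HSMC(D).

```python
# Reconstruct a 2D self-avoiding walk step by step and compute its
# construction probability P = product of per-step transition probabilities.
import math

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def construction_log_prob(walk):
    occupied = {walk[0]}
    log_p = 0.0
    for prev, nxt in zip(walk, walk[1:]):
        free = [(prev[0] + dx, prev[1] + dy) for dx, dy in MOVES
                if (prev[0] + dx, prev[1] + dy) not in occupied]
        log_p += math.log(1.0 / len(free))  # uniform TP over free sites
        occupied.add(nxt)
    return log_p

walk = [(0, 0), (1, 0), (1, 1), (0, 1)]          # a short SAW
print("entropy estimate S ~", -construction_log_prob(walk))
```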
Shorofsky, Stephen R; Peters, Robert W; Rashba, Eric J; Gold, Michael R
2004-02-01
Determination of the defibrillation threshold (DFT) is an integral part of implantable cardioverter defibrillator (ICD) implantation. Two commonly used methods of DFT determination, the step-down method and the binary search method, were compared in 44 patients undergoing ICD testing for standard clinical indications. The step-down protocol used an initial shock of 18 J. The binary search method began with a shock energy of 9 J, and successive shock energies were increased or decreased depending on the success of the previous shock. The DFT was defined as the lowest energy that successfully terminated ventricular fibrillation. The binary search method has the advantage of requiring a predetermined number of shocks, but some have questioned its accuracy. The study found that the mean DFT obtained by the step-down method was 8.2 +/- 5.0 J, whereas by the binary search method it was 8.1 +/- 0.7 J (P = NS). The DFT differed by no more than one step between methods in 32 patients (71%). The number of shocks required to determine the DFT by the step-down method was 4.6 +/- 1.4, whereas by definition the binary search method always required three shocks. In conclusion, the binary search method is preferable because it is of comparable efficacy and requires fewer shocks.
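The contrast between the two protocols can be sketched as follows; the step sizes, floor, and stopping rules are illustrative assumptions rather than the study's exact protocol, and shock_succeeds() stands in for an actual fibrillation-induction-and-termination test.

```python
# Two DFT search strategies: open-ended step-down from 18 J versus a
# fixed three-shock binary search starting at 9 J.
def step_down_dft(shock_succeeds, start=18.0, step=3.0, floor=3.0):
    if not shock_succeeds(start):
        return None                      # protocol failed at initial energy
    energy = start
    while energy - step >= floor and shock_succeeds(energy - step):
        energy -= step                   # keep lowering until a shock fails
    return energy                        # lowest energy that terminated VF

def binary_search_dft(shock_succeeds, start=9.0, n_shocks=3):
    energy, delta, lowest_success = start, start / 2, None
    for _ in range(n_shocks):            # predetermined number of shocks
        if shock_succeeds(energy):
            lowest_success = energy      # successes occur at decreasing energies
            energy -= delta
        else:
            energy += delta
        delta /= 2
    return lowest_success
```

The fixed iteration count in the binary search is exactly why it always required three shocks in the study, while the step-down search length depends on where the first failure occurs.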