Science.gov

Sample records for dc method applied

  1. Triple voltage dc-to-dc converter and method

    SciTech Connect

    Su, Gui-Jia

    2008-08-05

    A circuit and method of providing three dc voltage buses and transforming power between a low voltage dc converter and a high voltage dc converter, by coupling a primary dc power circuit and a secondary dc power circuit through an isolation transformer; providing the gating signals to power semiconductor switches in the primary and secondary circuits to control power flow between the primary and secondary circuits and by controlling a phase shift between the primary voltage and the secondary voltage. The primary dc power circuit and the secondary dc power circuit each further comprising at least two tank capacitances arranged in series as a tank leg, at least two resonant switching devices arranged in series with each other and arranged in parallel with the tank leg, and at least one voltage source arranged in parallel with the tank leg and the resonant switching devices, said resonant switching devices including power semiconductor switches that are operated by gating signals. Additional embodiments having a center-tapped battery on the low voltage side and a plurality of modules on both the low voltage side and the high voltage side are also disclosed for the purpose of reducing ripple current and for reducing the size of the components.
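    The phase-shift power-flow principle described above can be sketched with the textbook dual-active-bridge relation, in which power transferred across the isolation transformer is set by the phase shift between the primary and secondary bridge voltages. This is an illustrative model of phase-shift control, not the patent's exact triple-bus resonant circuit, and all parameter names are assumptions.

```python
import math

def dab_power(v1, v2, phi, f, lk, n=1.0):
    """Average power transferred across the isolation transformer of a
    dual-active-bridge stage as a function of phase shift phi (radians,
    usefully -pi/2..pi/2). v1, v2: bridge dc voltages; f: switching
    frequency; lk: series (leakage) inductance; n: turns ratio.
    Positive phi moves power from primary to secondary, negative phi
    reverses the flow."""
    return (n * v1 * v2 * phi * (1.0 - abs(phi) / math.pi)) / (2.0 * math.pi * f * lk)
```

Controlling phi therefore controls both the magnitude and direction of power flow, which is the mechanism the abstract describes for transforming power between the low and high voltage dc buses.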

  2. Center for Applied Linguistics, Washington DC, USA

    ERIC Educational Resources Information Center

    Sugarman, Julie; Fee, Molly; Donovan, Anne

    2015-01-01

    The Center for Applied Linguistics (CAL) is a private, nonprofit organization with over 50 years' experience in the application of research on language and culture to educational and societal concerns. CAL carries out its mission to improve communication through better understanding of language and culture by engaging in a variety of projects in…

  3. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc to dc power converter configurations: (1) the series switching buck-regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.

  4. Dynamic Resistance of YBCO-Coated Conductors in Applied AC Fields with DC Transport Currents and DC Background Fields

    SciTech Connect

    Duckworth, Robert C; Zhang, Yifei; Ha, Tam T; Gouge, Michael J

    2011-01-01

    In order to predict heat loads in future saturable core fault-current-limiting devices due to ac fringing fields, dynamic resistance in YBCO-coated conductors was measured at 77 K in peak ac fields up to 25 mT at 60 Hz and in dc fields up to 1 T. With the sample orientation set such that the conductor face was either parallel or perpendicular to the ac and dc applied fields, the dynamic resistance was measured at different fractions of the critical current to determine the relationship between the dc transport current and the applied fields. With respect to field orientation, the dynamic resistance for ac fields that were perpendicular to the conductor face was significantly higher than when the ac fields were parallel to the conductor face. It was also observed that the dynamic resistance: (1) increased with increasing fraction of the dc transport current to the critical current, (2) was proportional to the inverse of the critical current, and (3) demonstrated a linear dependence on the applied ac field once a threshold field was exceeded. This functional behavior was consistent with a critical state model for the dynamic resistance, but discrepancies in the absolute value of the dynamic resistance suggested that further theoretical development is needed.
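    The three critical-state trends reported above (onset at a threshold field, linear growth with the AC field, inverse scaling with critical current, and growth with transport-current fraction) can be sketched as follows. The geometry factor and the threshold-field model are illustrative assumptions for a thin strip in perpendicular field, not values from the paper.

```python
def dynamic_resistance(f, b_ac, i_frac, i_c, width=4e-3, b_th0=2e-3):
    """Critical-state-style estimate of dynamic resistance per unit length
    (ohm/m) of a coated conductor in a perpendicular AC field.
    f: AC field frequency (Hz); b_ac: peak AC field (T); i_frac: It/Ic;
    i_c: critical current (A). width and b_th0 (zero-current threshold
    field) are assumed illustrative values."""
    b_th = b_th0 * (1.0 - i_frac)   # threshold falls as transport current rises
    if b_ac <= b_th:
        return 0.0                  # below threshold, no dynamic resistance
    return (2.0 * width * f / i_c) * (b_ac - b_th)
```

The function reproduces the qualitative behavior in the abstract: zero below threshold, linear in `b_ac` above it, proportional to `1/i_c`, and increasing with `i_frac`.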

  5. Refining Diagnoses: Applying the DC-LD to an Irish Population with Intellectual Disability

    ERIC Educational Resources Information Center

    Felstrom, A.; Mulryan, N.; Reidy, J.; Staines, M.; Hillery, J.

    2005-01-01

    Background: The diagnostic criteria for psychiatric disorders for use with adults with learning disabilities/mental retardation (DC-LD) is a diagnostic tool developed in 2001 to improve upon existing classification systems for adults with learning disability. The aim of this study was to apply the classification system described by the DC-LD to a…

  6. Methods of applied dynamics

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.; Wilson, H. B.

    1991-01-01

    The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.

  7. Siemens programmable variable speed DC drives applied to wet and dry expansion engines

    SciTech Connect

    Markley, Daniel J.

    1997-07-01

    This document describes the technical details of the Siemens SIMOREG line of DC variable speed drives as applied to Fermilab wet and dry mechanical expander engines. The expander engines are used throughout the lab in Helium refrigerator installations.

  8. DC to DC power converters and methods of controlling the same

    DOEpatents

    Steigerwald, Robert Louis; Elasser, Ahmed; Sabate, Juan Antonio; Todorovic, Maja Harfman; Agamy, Mohammed

    2012-12-11

    A power generation system configured to provide direct current (DC) power to a DC link is described. The system includes a first power generation unit configured to output DC power. The system also includes a first DC to DC converter comprising an input section and an output section. The output section of the first DC to DC converter is coupled in series with the first power generation unit. The first DC to DC converter is configured to process a first portion of the DC power output by the first power generation unit and to provide an unprocessed second portion of the DC power output of the first power generation unit to the output section.

  9. Design of piezoelectric transformer for DC/DC converter with stochastic optimization method

    NASA Astrophysics Data System (ADS)

    Vasic, Dejan; Vido, Lionel

    2016-04-01

    Piezoelectric transformers have been adopted in recent years due to their many inherent advantages such as safety, absence of EMI problems, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistor. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, non-linear electronic circuits are connected before and after the transformer. Consequently, the output load is variable, and due to the output capacitance of the transformer the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full-wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different transformer sizes and their characteristics. In other words, the method seeks the transformer size that yields optimal efficiency while remaining suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum required size of the PT, illustrated for a given specification. The PT derived from the proposed design procedure can guarantee both good efficiency and a sufficient range of load variation.
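    The size-versus-efficiency trade-off the paper explores is exactly what a multi-objective optimizer like MOPSO resolves by keeping the nondominated (Pareto) set of candidate designs. A minimal sketch of that core filtering step, on hypothetical (size, efficiency) pairs rather than the paper's actual transformer model:

```python
def pareto_front(designs):
    """Return the nondominated designs from a list of (size, efficiency)
    pairs, where smaller size and higher efficiency are both preferred.
    A design is dominated if some other design is no larger AND no less
    efficient, and strictly better in at least one objective."""
    front = []
    for i, (s_i, e_i) in enumerate(designs):
        dominated = any(
            (s_j <= s_i and e_j >= e_i) and (s_j < s_i or e_j > e_i)
            for j, (s_j, e_j) in enumerate(designs) if j != i
        )
        if not dominated:
            front.append((s_i, e_i))
    return front
```

A full MOPSO run repeatedly moves particles toward members of this front; the filter above is the step that encodes "size and efficiency are a trade-off".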

  10. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
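    The replication idea described above, fitting a truncated Fourier series to a measured V-Phi curve (which is periodic in one flux quantum) and then evaluating it at arbitrary flux, can be sketched as follows. The sampling grid and harmonic count are illustrative assumptions.

```python
import math

def fourier_fit(phis, volts, n_harm, phi0=1.0):
    """Fit a truncated Fourier series to a sampled V-Phi curve, assumed
    periodic in one flux quantum phi0 and uniformly sampled over one
    period. Returns cosine and sine coefficient lists (a, b)."""
    n = len(phis)
    a = [sum(v * math.cos(2*math.pi*k*p/phi0) for p, v in zip(phis, volts)) * 2.0/n
         for k in range(n_harm + 1)]
    b = [sum(v * math.sin(2*math.pi*k*p/phi0) for p, v in zip(phis, volts)) * 2.0/n
         for k in range(n_harm + 1)]
    a[0] /= 2.0   # mean term
    return a, b

def v_of_phi(phi, a, b, phi0=1.0):
    """Replicate the fitted V-Phi response at an arbitrary applied flux."""
    return a[0] + sum(a[k]*math.cos(2*math.pi*k*phi/phi0) +
                      b[k]*math.sin(2*math.pi*k*phi/phi0)
                      for k in range(1, len(a)))
```

Because the fit replicates previously acquired data rather than solving the SQUID equations, it is cheap enough to embed in a circuit simulator, which is the point the abstract makes about SPICE portability.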

  11. METHOD OF APPLYING METALLIC COATINGS

    DOEpatents

    Robinson, J.W.; Eubank, L.D.

    1961-08-01

    A method for applying a protective coating to a uranium rod is described. The steps include preheating the uranium rod to the coating temperature, placing the rod between two rotating rollers, pouring a coating metal such as aluminum-silicon in molten form between one of the rotating rollers and the uranium rod, and rotating the rollers continually until the coating is built up to the desired thickness. (AEC)

  12. Self-pumped and double phase conjugation in GaAs with applied dc electric field

    NASA Technical Reports Server (NTRS)

    Chua, P. L.; Liu, D. T. H.; Cheng, L. J.

    1990-01-01

    Self-pumped and double phase conjugation are demonstrated for the first time in undoped GaAs with an applied dc electric field at 1.06 micron wavelength. Phase-conjugate reflectivities of up to 3 percent and 0.5 percent, respectively, are obtained, and other dependences are reported. Reported values of the self-pumped phase-conjugate reflectivity are compared with those of InP.

  13. A broadband reflective filter for applying dc biases to high-Q superconducting microwave cavities

    NASA Astrophysics Data System (ADS)

    Hao, Yu; Rouxinol, Francisco; Lahaye, Matt

    2015-03-01

    The integration of dc-bias circuitry into low-loss microwave cavities is an important technical issue for topics in many fields, including research with qubit- and cavity-coupled mechanical systems, circuit QED, and the quantum dynamics of nonlinear systems. The applied potentials or currents serve a variety of functions, such as maintaining the operating state of a device or establishing tunable electrostatic interactions between devices (for example, coupling a nanomechanical resonator to a superconducting qubit to generate and detect quantum states of the mechanical resonator). Here we report a bias-circuit design that utilizes a broadband reflective filter to connect to a high-Q superconducting coplanar waveguide (CPW) cavity. Our design allows us to apply dc voltages to the center trace of the CPW with negligible changes in the loaded quality factor of the fundamental mode. Simulations and measurements of the filter demonstrate insertion loss greater than 20 dB in the range of 3 to 10 GHz. Transmission measurements of the voltage-biased CPW show that loaded quality factors exceeding 10^5 can be achieved for dc voltages as high as V = ±20 V with the cavity operated in the single-photon regime. Supported by the National Science Foundation under Grant No. DMR-1056423 and Grant No. DMR-1312421.

  14. Automated decision algorithm applied to a field experiment with multiple research objectives: The DC3 campaign

    NASA Astrophysics Data System (ADS)

    Hanlon, C. J.; Small, A.; Bose, S.; Young, G. S.; Verlinde, J.

    2013-12-01

    …undertaken by DC3 investigators, depending on the scoring method used. (Figure caption: Reliability diagram for the algorithmic system used to forecast isolated convective thunderstorms for the DC3 field campaign. The clustering of points around the 45-degree line indicates that the forecasting system is well calibrated, a critical requirement for an algorithmic flight decision recommendation system.)

  15. Ripple current loss measurement with DC bias condition for high temperature superconducting power cable using calorimetry method

    NASA Astrophysics Data System (ADS)

    Kim, D. W.; Kim, J. G.; Kim, A. R.; Park, M.; Yu, I. K.; Sim, K. D.; Kim, S. H.; Lee, S. J.; Cho, J. W.; Won, Y. J.

    2010-11-01

    The authors calculated the loss of a High Temperature Superconducting (HTS) model cable using the Norris ellipse formula and measured the loss of the model cable experimentally. Two measuring methods are used: the electrical method and the calorimetric method. The electrical method can be used only under AC conditions, but the calorimetric method can be used under both AC and DC bias conditions. In order to propose an effective measuring approach for Ripple Dependent Loss (RDL) under DC bias conditions using the calorimetric method, Bismuth Strontium Calcium Copper Oxide (BSCCO) wires were used for the HTS model cable, and SUS tapes were used as heating tapes to reproduce the same temperature profiles as in the electrical method without the transport current. The temperature-loss relations were obtained by the electrical method and then applied to the calorimetric method, by which the RDL under DC bias conditions was well estimated.
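    The calibration transfer described above, obtaining a temperature-loss relation electrically and then inverting it calorimetrically, can be sketched as a simple least-squares fit followed by evaluation. A linear relation is an assumption for illustration; the paper's actual calibration curve is not given here.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + c; returns (m, c)."""
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    m = sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / sum((x-mx)**2 for x in xs)
    return m, my - m*mx

def loss_from_temperature(dT, calib):
    """Estimate ripple-dependent loss from a measured temperature rise dT
    using the electrically obtained calibration (slope, intercept)."""
    m, c = calib
    return m*dT + c
```

The electrical AC measurements supply the `(dT, loss)` calibration points; under DC bias, where only the temperature rise is measurable, the fitted relation converts it back to a loss estimate.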

  16. Effects of applied dc radial electric fields on particle transport in a bumpy torus plasma

    NASA Technical Reports Server (NTRS)

    Roth, J. R.

    1978-01-01

    The influence of applied dc radial electric fields on particle transport in a bumpy torus plasma is studied. The plasma, magnetic field, and ion heating mechanism are operated in steady state. The ion kinetic temperature is more than a factor of ten higher than the electron temperature. The applied electric fields, which may point radially inward or outward, raise the ions to energies on the order of kilovolts. Plasma number density profiles are flat or triangular across the plasma diameter. The absence of a second derivative in the density profiles and the flat electron temperature profiles suggest that the radial transport processes are nondiffusional and dominated by the strong radial electric fields. If the electric field acting on the minor radius of the toroidal plasma points inward, plasma number density and confinement time are increased.

  17. Body edge delineation in 2D DC resistivity imaging using differential method

    NASA Astrophysics Data System (ADS)

    Susanto, Kusnahadi; Fitrah Bahari, Mohammad

    2016-01-01

    DC resistivity is widely used to identify rock types and lithology contacts. However, the image resulting from resistivity processing is shown as a contour image, and it is difficult to interpret where the edge of a body is located. This study uses a differential method to delineate the edge of a body in a DC resistivity contour, applied here to the boundary between gravel and an underlying clay layer. The first- and second-order differential methods are applied to the delineation of the lithology contact. The profiling curve has to be sliced and extracted from the resistivity contour before the differential method can be used. Spectral analysis gives the frequency and wavenumber of the profiling curve used for gridding. The slicing was conducted horizontally and vertically in order to obtain the mesh size used in the differential method. The second-order differential, the Laplace operator, shows the edge of the body more clearly than the first-order differential and reveals the contact between gravel and clay.
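    The second-order differential step can be sketched as a discrete Laplacian over the gridded resistivity values: the operator is near zero inside a uniform layer and peaks at the lithology contact. The grid values below are illustrative, not the study's data.

```python
def laplacian(grid):
    """Discrete 5-point Laplacian of a 2D resistivity grid (list of lists).
    Border cells are returned as 0. Peaks in the magnitude of the
    Laplacian mark edges such as a gravel/clay contact."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0]*cols for _ in range(rows)]
    for i in range(1, rows-1):
        for j in range(1, cols-1):
            out[i][j] = (grid[i-1][j] + grid[i+1][j] +
                         grid[i][j-1] + grid[i][j+1] - 4.0*grid[i][j])
    return out
```

On a grid that steps from high-resistivity gravel to low-resistivity clay, the Laplacian vanishes inside each unit and changes sign across the contact, which is why it delineates the edge more sharply than a first derivative.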

  18. Circuit and Method for Communication Over DC Power Line

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2007-01-01

    A circuit and method for transmitting and receiving on-off-keyed (OOK) signals with fractional signal-to-noise ratios uses available high-temperature silicon-on-insulator (SOI) components to move computational, sensing, and actuation abilities closer to high-temperature or high-ionizing-radiation environments such as vehicle engine compartments, deep-hole drilling environments, industrial control and monitoring of processes like smelting, and operations near nuclear reactors and in space. This device allows for the networking of multiple like nodes to each other and to a central processor, using nothing more than the already in-situ power wiring of the system. The device's microprocessor allows it to make intelligent decisions within the vehicle operational loop and to effect control outputs to its associated actuators. The figure illustrates how each node converts digital serial data to OOK at 18 kHz in transmit mode and vice versa in receive mode, though operations at lower frequencies, or up to a megahertz, are within reason using this method and these parts. This innovation's technique modulates a DC power bus with millivolt-level signals through a MOSFET (metal oxide semiconductor field effect transistor) and resistor by OOK. It receives and demodulates this signal from the DC power bus through capacitive coupling at high temperature and in high-ionizing-radiation environments. The demodulation of the OOK signal is accomplished by using an asynchronous quadrature detection technique realized by a quasi-discrete Fourier transform through use of the quadrature components (0- and 90-degree phases) of the carrier frequency, as generated by the microcontroller as a function of the selected crystal frequency driving its oscillator. The detected signal is rectified using an absolute-value circuit containing no diodes (diodes being non-operational at high temperatures), only operational amplifiers. The absolute values of the two phases of the received signal …
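    The asynchronous quadrature detection described above can be sketched in software: correlate each bit window with the 0- and 90-degree phases of the carrier and take the magnitude, so no carrier phase lock is needed. Sample rate, carrier frequency, and symbol length below are illustrative assumptions, not the hardware's exact parameters.

```python
import math

def demod_ook(samples, fs, fc, sym_len):
    """Asynchronous quadrature OOK demodulation. For each symbol window,
    correlate with cos and sin of the carrier (a one-bin quasi-DFT) and
    take the magnitude; phase offset between transmitter and receiver
    then cancels. fs: sample rate (Hz), fc: carrier (Hz),
    sym_len: samples per bit. Returns the recovered bit list."""
    mags = []
    for start in range(0, len(samples) - sym_len + 1, sym_len):
        i_sum = q_sum = 0.0
        for k in range(sym_len):
            t = (start + k) / fs
            i_sum += samples[start + k] * math.cos(2*math.pi*fc*t)
            q_sum += samples[start + k] * math.sin(2*math.pi*fc*t)
        mags.append(math.hypot(i_sum, q_sum) / sym_len)
    thresh = (max(mags) + min(mags)) / 2.0   # midpoint slicer
    return [1 if m > thresh else 0 for m in mags]
```

Because only the magnitude of the two quadrature correlations is used, the recovered bits are insensitive to the unknown carrier phase, which is the point of the asynchronous technique.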

  19. Modelling of stress fields during LFEM DC casting of aluminium billets by a meshless method

    NASA Astrophysics Data System (ADS)

    Mavrič, B.; Šarler, B.

    2015-06-01

    Direct Chill (DC) casting of aluminium alloys is a widely established technology for efficient production of aluminium billets and slabs. The procedure is being further improved by the application of Low Frequency Electromagnetic Field (LFEM) in the area of the mold. Novel LFEM DC processing technique affects many different phenomena which occur during solidification, one of them being the stresses and deformations present in the billet. These quantities can have a significant effect on the quality of the cast piece, since they impact porosity, hot-tearing and cold cracking. In this contribution a novel local radial basis function collocation method (LRBFCM) is successfully applied to the problem of stress field calculation during the stationary state of DC casting of aluminium alloys. The formulation of the method is presented in detail, followed by the presentation of the tackled physical problem. The model describes the deformations of linearly elastic, inhomogeneous isotropic solid with a given temperature field. The temperature profile is calculated using the in-house developed heat and mass transfer model. The effects of low frequency EM casting process parameters on the vertical, circumferential and radial stress and on the deformation of billet surface are presented. The application of the LFEM appears to decrease the amplitudes of the tensile stress occurring in the billet.

  20. Synthesis of silicon nanotubes by DC arc plasma method

    SciTech Connect

    Tank, C. M.; Bhoraskar, S. V.; Mathe, V. L.

    2012-06-05

    Plasma synthesis is a novel technique for synthesizing nanomaterials, as it provides high production rates and promotes metastable reactions. Very thin-walled silicon nanotubes were synthesized in a DC direct-arc thermal plasma reactor. The effect of the synthesis parameters, i.e., arc current and the presence of hydrogen, on the morphology of Si nanoparticles is reported. The silicon nanotubes were characterized by Transmission Electron Microscopy (TEM), local Energy Dispersive X-ray analysis (EDAX), and Scanning Tunneling Microscopy (STM).

  1. Automated decision algorithm applied to a field experiment with multiple research objectives: The DC3 campaign

    NASA Astrophysics Data System (ADS)

    Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes

    2014-10-01

    Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension to each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that a strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10 and 57%, depending on the metrics used to score success, while improving portfolio balance.
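    The utility function described above rewards total data collected while penalizing imbalance across the three study regions. The paper does not give its exact functional form, so the geometric mean below is one illustrative choice with both properties, not the DC3 investigators' actual function.

```python
def portfolio_utility(colorado, alabama, oklahoma):
    """Illustrative utility for a data portfolio (e.g., flight hours of
    in-situ data per study region). The geometric mean increases with
    total data, yet for a fixed total it is maximized when the three
    regions are balanced, so it rank-orders portfolios the way the
    abstract describes. This form is an assumption."""
    return (colorado * alabama * oklahoma) ** (1.0 / 3.0)
```

A Monte Carlo flight-decision system would evaluate such a utility over simulated future campaign states and recommend the flight decision with the highest expected value.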

  2. Multilevel DC link inverter

    DOEpatents

    Su, Gui-Jia

    2003-06-10

    A multilevel DC link inverter and method for improving torque response and current regulation in permanent magnet motors and switched reluctance motors having a low inductance includes a plurality of voltage-controlled cells connected in series for applying a resulting dc voltage comprised of one or more incremental dc voltages. The cells are provided with switches for increasing the resulting applied dc voltage as speed and back EMF increase, while limiting the voltage that is applied to the commutation switches to perform PWM or dc voltage stepping functions, so as to limit current ripple in the stator windings below an acceptable level, typically 5%. Several embodiments are disclosed, including inverters using IGBTs and inverters using thyristors. All of the inverters are operable in both motoring and regenerating modes.

  3. Test case set generation method on MC/DC based on binary tree

    NASA Astrophysics Data System (ADS)

    Wang, Jun-jie; Zhang, Bo; Chen, Yuan

    2013-03-01

    Exploring efficient, reliable test case design methods has long been a goal of software testers. As the logical complexity and scale of aerospace software grow, this requirement becomes more pressing. Test case design techniques suited for MC/DC improve test case design efficiency and increase test coverage, and are well suited to testing software with comparatively complicated logical relationships. Some software test tools provide a function to calculate test coverage and can assess whether a test case set achieves MC/DC. But the software tester needs the reverse: generating the cases that achieve it. This paper proposes designing test cases by the unique-cause and masking approaches, and presents an automatic generation method for MC/DC test case sets based on a binary tree. The method improves the efficiency and correctness of generating MC/DC test case sets.
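    The unique-cause criterion mentioned above requires, for each condition, a pair of test cases that differ only in that condition and produce different decision outcomes. A minimal brute-force sketch of finding such pairs (the paper's binary-tree method is more efficient; this only illustrates the criterion):

```python
from itertools import product

def mcdc_pairs(decision, n_cond):
    """Find unique-cause MC/DC pairs for a boolean decision.
    decision: function taking a tuple of n_cond booleans.
    Returns, per condition index, the pairs of inputs that differ only
    in that condition and flip the decision outcome."""
    pairs = {i: [] for i in range(n_cond)}
    for case in product([False, True], repeat=n_cond):
        for i in range(n_cond):
            if case[i]:
                continue                       # count each pair once
            flipped = case[:i] + (True,) + case[i+1:]
            if decision(case) != decision(flipped):
                pairs[i].append((case, flipped))
    return pairs
```

MC/DC is achieved by selecting one pair per condition; if some condition has no pair, it cannot independently affect the outcome and the decision should be restructured.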

  4. Three dimensional finite element methods: Their role in the design of DC accelerator systems

    SciTech Connect

    Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.

    2013-04-19

    High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.

  6. Method to eliminate flux linkage DC component in load transformer for static transfer switch.

    PubMed

    He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing

    2014-01-01

    Many industrial and commercial sensitive loads are subject to voltage sags and interruptions. The static transfer switch (STS), based on thyristors, is applied to improve power quality and reliability. However, the transfer can result in severe inrush current in the load transformer because of the DC component in the magnetic flux generated in the transfer process. The inrush current, which can reach 2-30 p.u., can cause maloperation of relay protective devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes in the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, the method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiment results are presented to show the effectiveness of the transfer method. PMID:25133255
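    The closing-instant calculation described above can be sketched for a single phase: with a source voltage Vm·sin(ωt + θ), the prospective flux linkage is -(Vm/ω)·cos(ωt + θ), and the phase is closed at the first instant where this equals the residual flux linkage. This is an illustrative single-phase model, not the paper's full three-winding analysis.

```python
import math

def closing_delay(lambda_res, v_m, freq, theta=0.0):
    """First time t >= 0 at which closing the alternate source gives a
    prospective flux linkage equal to the residual value lambda_res,
    for a phase voltage v_m*sin(w*t + theta) whose prospective flux is
    -(v_m/w)*cos(w*t + theta). Returns None if |lambda_res| exceeds the
    prospective flux amplitude v_m/w (no match exists)."""
    w = 2.0 * math.pi * freq
    x = -lambda_res * w / v_m       # required value of cos(w*t + theta)
    if abs(x) > 1.0:
        return None
    t = (math.acos(x) - theta) / w
    while t < 0:
        t += 1.0 / freq             # shift into the first positive period
    return t
```

Closing at this instant leaves no DC flux offset in the winding, which is what suppresses the inrush current.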

  8. [Montessori method applied to dementia - literature review].

    PubMed

    Brandão, Daniela Filipa Soares; Martín, José Ignacio

    2012-06-01

    The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method, using Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may in fact have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design. PMID:23155599

  9. System and Method for Determining Rate of Rotation Using Brushless DC Motor

    NASA Technical Reports Server (NTRS)

    Howard, David E. (Inventor); Smith, Dennis A. (Inventor)

    2000-01-01

    A system and method are provided for measuring rate of rotation. A brushless DC motor is rotated and produces a back electromagnetic force (emf) on each winding thereof. Each winding's back-emf is squared. The squared outputs associated with each winding are combined, with the square root being taken of such combination, to produce a DC output proportional only to the rate of rotation of the motor's shaft.
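    The square-combine-root scheme described above works because, for a symmetric multi-phase machine with sinusoidal back-EMF, the sum of squared winding voltages is independent of rotor angle: with e_k = k_e·ω·sin(θ - 2πk/3) for three phases, the sum is (3/2)·(k_e·ω)², so its square root is proportional to |ω| alone. A sketch for the three-phase case (the constant k_e and the sinusoidal back-EMF shape are modeling assumptions):

```python
import math

def rate_from_backemf(e_a, e_b, e_c, k_e):
    """Estimate |omega| (rad/s) from the instantaneous back-EMFs of a
    3-phase brushless DC motor with sinusoidal back-EMF constant k_e
    (V*s/rad). Uses sin^2(x) + sin^2(x - 2pi/3) + sin^2(x + 2pi/3) = 3/2,
    so the result does not depend on rotor angle."""
    return math.sqrt((e_a**2 + e_b**2 + e_c**2) / 1.5) / k_e
```

Because the angle dependence cancels exactly, the output is a smooth DC quantity proportional only to the rotation rate, as the abstract states.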

  10. The averaging method in applied problems

    NASA Astrophysics Data System (ADS)

    Grebenikov, E. A.

    1986-04-01

    The book presents the family of methods for investigating complicated nonlinear oscillating systems known in the literature as the "averaging method." The author describes the constructive part of this method, that is, its concrete forms and corresponding algorithms, on mathematical models that are sufficiently general but built from concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. It is intended for specialists in applied mathematics and mechanics.

  11. A new method for speed control of a DC motor using magnetorheological clutch

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc Hung; Choi, Seung-Bok

    2014-03-01

    In this research, a new method to control the speed of a DC motor using a magnetorheological (MR) clutch is proposed and realized. Firstly, the strategy of DC motor speed control using an MR clutch is proposed. The MR clutch configuration is then proposed and analyzed based on the Bingham-plastic rheological model of MR fluid. An optimal design of the MR clutch is then studied to find the geometric dimensions that can transmit a required torque with minimum mass. A prototype of the optimized MR clutch is then manufactured and its performance characteristics are experimentally investigated. A DC motor speed control system featuring the optimized MR clutch is designed and manufactured. A PID controller is then designed to control the output speed of the system. In order to evaluate the effectiveness of the proposed DC motor speed control system, experimental results of the system, such as speed tracking performance, are obtained and presented with discussions.
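The PID speed loop mentioned above can be sketched as follows. This is a generic discrete PID acting on a toy first-order motor/load model with invented parameters (inertia `J`, friction `B`, gains), not the authors' MR-clutch hardware:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy first-order load: J * dw/dt = torque_cmd - B * w  (invented numbers)
J, B, DT = 0.01, 0.1, 1e-3
pid = PID(kp=2.0, ki=20.0, kd=0.0, dt=DT)
w, target = 0.0, 50.0          # rad/s
for _ in range(5000):          # 5 s of simulated time
    torque = pid.update(target, w)
    w += DT * (torque - B * w) / J
```

With these gains the closed loop is well damped (poles near −10 and −200 s⁻¹), and the integral term drives the steady-state speed error to zero.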

  12. Investigation of an innovative method for DC flow suppression of double-inlet pulse tube coolers

    NASA Astrophysics Data System (ADS)

    Hu, J. Y.; Luo, E. C.; Wu, Z. H.; Dai, W.; Zhu, S. L.

    2007-05-01

    The use of double-inlet mode in the pulse tube cooler opens up the possibility of a DC flow circulating around the regenerator and the pulse tube. This DC flow sometimes deteriorates the performance of the cryocooler because such a steady flow adds an unwanted thermal load to the cold heat exchanger. Despite considerable effort, this problem is still not well solved. Here we introduce a membrane-barrier method for DC flow suppression in double-inlet pulse tube coolers. An elastic membrane is installed between the pulse tube cooler inlet and the double-inlet valve to break the closed-loop flow path of the DC flow. The membrane is acoustically transparent but blocks the DC flow completely. Thus the DC flow is thoroughly suppressed and the merit of the double-inlet mode is retained. With this method, a temperature reduction of tens of kelvin was obtained in our single-stage pulse tube cooler and the lowest temperature reached 29.8 K.

  13. Entropy viscosity method applied to Euler equations

    SciTech Connect

    Delchini, M. O.; Ragusa, J. C.; Berry, R. A.

    2013-07-01

    The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it can efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
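The core idea, a dissipation term whose coefficient is driven by the local entropy residual and capped by a first-order viscosity, can be sketched on the 1-D Burgers equation. This is a simple explicit finite-difference illustration with assumed tuning constants `c_e` and `c_max`, not the authors' continuous finite element MOOSE implementation:

```python
import math

def entropy_viscosity_burgers(n=200, t_end=0.3, c_e=1.0, c_max=0.5):
    """1-D periodic Burgers equation u_t + (u^2/2)_x = 0, central flux
    plus entropy-based artificial viscosity (sketch of the idea only)."""
    h = 1.0 / n
    u = [0.5 + math.sin(2 * math.pi * i * h) for i in range(n)]
    eta_old, dt_old, t = None, None, 0.0
    while t < t_end:
        dt = 0.2 * h / max(abs(v) for v in u)     # CFL time step
        eta = [0.5 * v * v for v in u]            # entropy  u^2/2
        q = [v ** 3 / 3.0 for v in u]             # entropy flux u^3/3
        nu_max = [c_max * h * abs(v) for v in u]  # first-order cap
        if eta_old is None:
            nu = nu_max                           # first step: fall back
        else:
            mean_eta = sum(eta) / n
            norm = max(abs(e - mean_eta) for e in eta) or 1.0
            nu = []
            for i in range(n):
                # discrete entropy residual D = eta_t + q_x
                res = (eta[i] - eta_old[i]) / dt_old + \
                      (q[(i + 1) % n] - q[i - 1]) / (2 * h)
                nu.append(min(nu_max[i], c_e * h * h * abs(res) / norm))
        f = [0.5 * v * v for v in u]
        unew = []
        for i in range(n):
            ip, im = (i + 1) % n, i - 1
            conv = (f[ip] - f[im]) / (2 * h)
            visc = (0.5 * (nu[i] + nu[ip]) * (u[ip] - u[i])
                    - 0.5 * (nu[im] + nu[i]) * (u[i] - u[im])) / h ** 2
            unew.append(u[i] + dt * (visc - conv))
        eta_old, dt_old, u, t = eta, dt, unew, t + dt
    return u

u_final = entropy_viscosity_burgers()
```

Away from the shock the entropy residual is small, so almost no dissipation is added; at the shock the residual spikes and the viscosity saturates at the first-order cap. The scheme is conservative, so the mean of u stays at its initial value of 0.5.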

  14. Novel ac Heating-dc Detection Method for Active Thermoelectric Scanning Thermal Microscopy

    NASA Astrophysics Data System (ADS)

    Miao, Tingting; Ma, Weigang; Zhang, Xing

    2015-11-01

    A novel and reliable ac heating-dc detection method is developed for active thermoelectric scanning thermal microscopy, which can map local thermal properties by point-heating and point-sensing with nanoscale spatial resolution. The thermoelectric probe is electrically heated by an ac current, and the corresponding dc thermoelectric voltage is detected. Using the measured dc voltage, the temperature information can be extracted with the known Seebeck coefficient of the thermoelectric probe. The validity and accuracy of the method have been verified with a 25.4 μm thick K-type thermocouple, by both experiment and numerical simulation, in high vacuum and in air. The experimental results show that the proposed method is reliable and convenient for monitoring the temperature of the junction.
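The measurement principle, ac Joule heating producing a dc temperature rise that the thermocouple reports as a dc voltage through its Seebeck coefficient, can be illustrated with a toy first-order thermal model. All parameter values below (Seebeck coefficient, probe resistance, thermal constants) are invented for illustration:

```python
import math

S = 40e-6      # Seebeck coefficient, V/K (assumption)
R = 10.0       # probe electrical resistance, ohm (assumption)
TAU = 5e-3     # thermal time constant, s (assumption)
GTH = 1e-3     # thermal conductance to ambient, W/K (assumption)
I0, W = 1e-3, 2 * math.pi * 1000.0   # ac drive: 1 mA at 1 kHz

# Joule power R*i^2 has a dc component R*I0^2/2 plus a 2*omega ripple;
# the junction temperature low-passes it, so the thermovoltage is mostly dc.
dt, T = 1e-6, 0.0
volts = []
for n in range(100_000):                 # 0.1 s of simulated time
    i = I0 * math.sin(W * n * dt)
    power = R * i * i
    T += dt * (power / GTH - T) / TAU    # first-order thermal response
    volts.append(S * T)

v_dc = sum(volts[50_000:]) / 50_000        # average after settling
v_dc_pred = S * R * I0 ** 2 / (2 * GTH)    # analytic dc component
```

The time-averaged thermoelectric voltage matches S·ΔT_dc with ΔT_dc = R·I0²/(2·G_th), which is how the dc reading encodes the junction temperature rise.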

  15. An electrochemical study of corrosion protection by primer-topcoat systems on 4130 steel with ac impedance and dc methods

    NASA Technical Reports Server (NTRS)

    Mendrek, M. J.; Higgins, R. H.; Danford, M. D.

    1988-01-01

    To investigate metal surface corrosion and the breakdown of metal protective coatings, the ac impedance method is applied to six systems of primer-coated and primer-topcoated 4130 steel. Two primers were used: a zinc-rich epoxy primer and a red lead oxide epoxy primer. The epoxy-polyamine topcoat was used in four of the systems. The EG&G PARC Model 368 ac impedance measurement system, along with dc measurements with the same system using the polarization resistance method, was used to monitor the changing properties of coated 4130 steel disks immersed in 3.5 percent NaCl solutions buffered at pH 5.4 over periods of 40 to 60 days. The corrosion system can be represented by an electronic analog, called an equivalent circuit, consisting of resistors and capacitors in specific arrangements. This equivalent circuit parallels the impedance behavior of the corrosion system during a frequency scan. Values for the resistors and capacitors, which can be assigned in the equivalent circuit following a least-squares analysis of the data, describe changes that occur on the corroding metal surface and in the protective coatings. Two equivalent circuits have been determined that predict the correct Bode phase and magnitude of the experimental sample at different immersion times. The dc corrosion current density data are related to the equivalent circuit element parameters. Methods for determining corrosion rate with ac impedance parameters are verified by the dc method.

  16. A novel method for simulation of brushless DC motor servo-control system based on MATLAB

    NASA Astrophysics Data System (ADS)

    Tao, Keyan; Yan, Yingmin

    2006-11-01

    This paper presents research on the simulation of a brushless DC motor (BLDCM) servo control system. Based on the mathematical model of the BLDCM, the system simulation model is built with MATLAB. In building the system model, isolated functional blocks, such as the BLDCM block, the rotor-position detection block, and the phase-change logic block, are modeled first. By organically combining these blocks, the model of the BLDCM can be established easily. The simulation results testify to the reasonableness and validity of the approach, and this novel method offers a new way of thinking for designing and debugging actual motors.

  17. New Current Control Method of DC Power Supply for Magnetic Perturbation Coils on J-TEXT

    NASA Astrophysics Data System (ADS)

    Zeng, Wubing; Ding, Yonghua; Yi, Bin; Xu, Hangyu; Rao, Bo; Zhang, Ming; Liu, Minghai

    2014-11-01

    In order to advance the research on suppressing tearing modes and driving plasma rotation, a DC power supply (PS) system has been developed for dynamic resonant magnetic perturbation (DRMP) coils and applied in the J-TEXT experiment. To enrich experimental phenomena in the J-TEXT tokamak, applying the circulating current four-quadrant operation mode in the DRMP DC PS system is proposed. By using the circulating current four-quadrant operation, DRMP coils can be smoothly controlled without the dead-time when the current polarity reverses. Essential circuit analysis, control optimization and simulation of desired scenarios have been performed for normal current. Relevant simulation and test results are also presented.

  18. Quasilinearization method applied to multidimensional quantum tunneling

    NASA Astrophysics Data System (ADS)

    Razavy, M.; Cote, Vincent J.

    1994-04-01

    We apply the quasilinearization method of Bellman and Kalaba [Quasilinearization and Nonlinear Boundary-Value Problems (Elsevier, New York, 1965)] to find approximate solutions for the multidimensional quantum tunneling for separable as well as nonseparable wave equations. By introducing the idea of the complex ``semiclassical trajectory'' which is valid for the motion over and under the barrier, and which, in the proper limit, reduces to the real classical trajectory in the allowed region, we obtain an eigenvalue equation for the characteristic wave numbers. This eigenvalue equation is similar to the corresponding equation obtained from the WKB approximation and yields complex eigenvalues with negative imaginary parts. When the barrier changes very rapidly as a function of the radial distance, we can replace the concept of the semiclassical trajectory, which may not be applicable in this case, by the concept of a complex ``quantum trajectory.'' The trajectory defined either way depends on a constant of integration, and by minimizing the action with respect to this constant we can obtain the minimum escape path. The case of two-dimensional tunneling is discussed as an example of this method.

  19. Adaptable DC offset correction

    NASA Technical Reports Server (NTRS)

    Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)

    2009-01-01

    Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
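As an illustration of one common DC offset removal scheme, here is a one-pole dc-blocking high-pass filter; the patent's adaptive scheme-selection logic is not reproduced, and the signal and pole location are invented for the example:

```python
import math

def remove_dc(samples, alpha=0.995):
    """One-pole dc-blocking high-pass filter:
    y[n] = x[n] - x[n-1] + alpha * y[n-1]."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for x in samples:
        y_prev = x - x_prev + alpha * y_prev
        x_prev = x
        y.append(y_prev)
    return y

# 50 Hz tone sampled at 8 kHz, carrying a 0.25 V dc offset
sig = [0.25 + math.sin(2 * math.pi * 50 * n / 8000) for n in range(8000)]
out = remove_dc(sig)
tail = out[4000:]                       # after the filter has settled
dc_residual = sum(tail) / len(tail)     # offset remaining at the output
```

The filter has zero gain at dc while passing the 50 Hz component nearly unattenuated, so the offset is stripped and the baseband content is preserved.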

  20. Forward modeling of marine DC resistivity method for a layered anisotropic earth

    NASA Astrophysics Data System (ADS)

    Yin, Chang-Chun; Zhang, Ping; Cai, Jing

    2016-06-01

    Since the ocean bottom is a sedimentary environment wherein stratification is well developed, the use of an anisotropic model is best for studying its geology. Beginning with Maxwell's equations for an anisotropic model, we introduce scalar potentials based on the divergence-free characteristic of the electric and magnetic (EM) fields. We then continue the EM fields down into the deep earth and upward into the seawater and couple them at the ocean bottom to the transmitting source. By studying both the DC apparent resistivity curves and their polar plots, we can resolve the anisotropy of the ocean bottom. Forward modeling of a high-resistivity thin layer in an anisotropic half-space demonstrates that the marine DC resistivity method in shallow water is very sensitive to the resistive reservoir but is not influenced by airwaves. As such, it is very suitable for oil and gas exploration in shallow-water areas but, to date, most modeling algorithms for studying marine DC resistivity are based on isotropic models. In this paper, we investigate one-dimensional anisotropic forward modeling for the marine DC resistivity method, prove the algorithm to have high accuracy, and thus provide a theoretical basis for 2D and 3D forward modeling.
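A small illustration of why anisotropy matters for DC resistivity: over a uniform VTI (vertically anisotropic) half-space the surface potential involves only the geometric mean sqrt(rho_h * rho_v), the classical "paradox of anisotropy". The sketch below uses an assumed Wenner array geometry, not the paper's marine layered-model algorithm:

```python
import math

def surface_potential(rho_h, rho_v, current, r):
    """Potential at distance r from a point current electrode on the
    surface of a uniform VTI half-space: the surface response depends
    only on sqrt(rho_h * rho_v) (paradox of anisotropy)."""
    return current * math.sqrt(rho_h * rho_v) / (2 * math.pi * r)

def wenner_apparent_resistivity(rho_h, rho_v, a=10.0, current=1.0):
    """Wenner array C1-P1-P2-C2 with spacing a: rho_a = 2*pi*a*dV/I."""
    # P1 sits at distance a from C1 and 2a from C2 (which carries -I)
    v_p1 = surface_potential(rho_h, rho_v, current, a) - \
           surface_potential(rho_h, rho_v, current, 2 * a)
    v_p2 = surface_potential(rho_h, rho_v, current, 2 * a) - \
           surface_potential(rho_h, rho_v, current, a)
    return 2 * math.pi * a * (v_p1 - v_p2) / current
```

For rho_h = 100 and rho_v = 400 ohm-m the array reads 200 ohm-m, exactly the geometric mean: a homogeneous anisotropic half-space is indistinguishable from an isotropic one from surface DC data alone, which is why layered responses and polar plots are needed to resolve anisotropy.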

  1. Method and apparatus for generating radiation utilizing DC to AC conversion with a conductive front

    DOEpatents

    Dawson, John M.; Mori, Warren B.; Lai, Chih-Hsiang; Katsouleas, Thomas C.

    1998-01-01

    Method and apparatus for generating radiation of high power, variable duration, and broad tunability over several orders of magnitude from a laser-ionized gas-filled capacitor array. The method and apparatus convert a DC electric field pattern into a coherent electromagnetic wave train when a relativistic ionization front passes between the capacitor plates. The frequency and duration of the radiation are controlled by the gas pressure and capacitor spacing.

  2. Method and apparatus for generating radiation utilizing DC to AC conversion with a conductive front

    DOEpatents

    Dawson, J.M.; Mori, W.B.; Lai, C.H.; Katsouleas, T.C.

    1998-07-14

    Method and apparatus are disclosed for generating radiation of high power, variable duration, and broad tunability over several orders of magnitude from a laser-ionized gas-filled capacitor array. The method and apparatus convert a DC electric field pattern into a coherent electromagnetic wave train when a relativistic ionization front passes between the capacitor plates. The frequency and duration of the radiation are controlled by the gas pressure and capacitor spacing. 4 figs.

  3. Study of New Start Method for Position Sensorless Brushless DC Motor

    NASA Astrophysics Data System (ADS)

    Kawabata, Yukio; Endo, Tsunehiro; Takakura, Yuhachi; Ishii, Makoto

    The position sensorless drive technique based on the back electromotive force (EMF) has been widely used for brushless DC motor drives. However, this technique cannot detect the rotor position at low speed, so the motor must be accelerated by open-loop synchronous drive up to medium speed. The open-loop synchronous drive strongly affects motor performance: torque pulsation and overcurrent can occur. This paper proposes a new start method for brushless DC motors in which the rotor position can be detected from the moment the motor is driven. As a result, the open-loop synchronous drive can be eliminated, and rapid acceleration and high drive performance are achieved. The effectiveness of the proposed method is shown by experimental results.

  4. Applied Mathematical Methods in Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Masujima, Michio

    2005-04-01

    All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: The first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.

  5. An IPOT meshless method using DC PSE approximation for fluid flow equations in 2D and 3D geometries

    NASA Astrophysics Data System (ADS)

    Bourantas, G. C.; Loukopoulos, V. C.; Skouras, E. D.; Burganos, V. N.; Nikiforidis, G. C.

    2016-06-01

    Navier-Stokes (N-S) equations, in their primitive variable (u-v-p) formulation, are numerically solved using the Implicit Potential (IPOT) numerical scheme in the context of the strong form Meshless Point Collocation (MPC) method. The unknown field functions are computed using the Discretization Correction Particle Strength Exchange (DC PSE) approximation method. The latter makes use of discrete moment conditions to derive the operator kernels, which leads to a low condition number for the moment matrix compared to other meshless interpolation methods and increased stability for the numerical solution. The proposed meshless scheme is applied on 2D and 3D spatial domains, using uniform or irregular sets of nodes to represent the domain. The numerical results obtained are compared against those obtained using well-established methods.

  6. Method of measuring the dc electric field and other tokamak parameters

    DOEpatents

    Fisch, Nathaniel J.; Kirtz, Arnold H.

    1992-01-01

    A method including externally imposing an impulsive momentum-space flux to perturb hot tokamak electrons thereby producing a transient synchrotron radiation signal, in frequency-time space, and the inference, using very fast algorithms, of plasma parameters including the effective ion charge state Z.sub.eff, the direction of the magnetic field, and the position and width in velocity space of the impulsive momentum-space flux, and, in particular, the dc toroidal electric field.

  7. A new theoretical formulation of coupling thermo-electric breakdown in LDPE film under dc high applied fields

    NASA Astrophysics Data System (ADS)

    Boughariou, F.; Chouikhi, S.; Kallel, A.; Belgaroui, E.

    2015-12-01

    In this paper, we present a new theoretical and numerical formulation of the electrical and thermal breakdown phenomena induced by charge packet dynamics in low-density polyethylene (LDPE) insulating film under a dc high applied field. The physical formulation comprises the bipolar charge transport equations together with the coupled thermo-electric equation, associated for the first time in modeling with the bipolar transport problem. This coupled equation is solved by a finite-element numerical model. For the first time, all bipolar transport results are obtained under non-uniform temperature distributions in the sample bulk. The principal original results show a very sudden increase in local temperature associated with a very sharp increase in the external and conduction current densities during the steady state. The coupling between these electrical and thermal instabilities physically reflects the local coupling between electrical conduction and the thermal Joule effect. Results for the non-uniform temperature distributions induced by the non-uniform electrical conduction current are also presented at several times. According to our formulation, the strong injection current is the principal factor in the electrical and thermal breakdown of the polymer insulating material, as shown in this work. Our formulation is also validated experimentally.

  8. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…

  9. Perturbation approach applied to modal diffraction methods.

    PubMed

    Bischoff, Joerg; Hehl, Karl

    2011-05-01

    Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry, and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on a perturbation method. PMID:21532698
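The idea of reusing known eigensolution information can be illustrated with first-order eigenvalue perturbation theory for a symmetric matrix: given an eigenpair (λ, v) of A, the corresponding eigenvalue of A + dA is estimated as λ + vᵀ·dA·v without a new eigensolve. A minimal 2x2 sketch, not the paper's modal-diffraction implementation:

```python
import math

def eig_sym_2x2(a, b, c):
    """Largest eigenvalue and unit eigenvector of [[a, b], [b, c]],
    in closed form."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    lam1 = tr / 2 + disc
    if abs(b) > 1e-15:
        v = (b, lam1 - a)      # (A - lam1*I) v = 0
    else:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(*v)
    return lam1, (v[0] / n, v[1] / n)

A = (2.0, 0.3, 1.0)            # symmetric [[2.0, 0.3], [0.3, 1.0]]
dA = (0.01, -0.005, 0.02)      # small symmetric perturbation
lam1, v = eig_sym_2x2(*A)

# first-order update: lam1' ~ lam1 + v^T dA v  (no new eigensolve)
vTdAv = dA[0] * v[0] ** 2 + 2 * dA[1] * v[0] * v[1] + dA[2] * v[1] ** 2
lam1_est = lam1 + vTdAv

# exact eigenvalue of the perturbed matrix, for comparison
lam1_exact, _ = eig_sym_2x2(A[0] + dA[0], A[1] + dA[1], A[2] + dA[2])
```

The estimate is accurate to second order in the perturbation size, which is why reusing prior eigensolutions across adjacent slices or scan points can save so much run time.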

  10. Applying Human Computation Methods to Information Science

    ERIC Educational Resources Information Center

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  11. Surface Analytical Methods Applied to Magnesium Corrosion.

    PubMed

    Dauphin-Ducharme, Philippe; Mauzeroll, Janine

    2015-08-01

    Understanding magnesium alloy corrosion is of primary concern, and scanning probe techniques are becoming key analytical characterization methods for that purpose. This Feature presents recent trends in this field as the progressive substitution of steel and aluminum car components by magnesium alloys to reduce the overall weight of vehicles is an irreversible trend. PMID:25826577

  12. METHOD OF APPLYING COPPER COATINGS TO URANIUM

    DOEpatents

    Gray, A.G.

    1959-07-14

    A method is presented for protecting metallic uranium, which comprises anodic etching of the uranium in an aqueous phosphoric acid solution containing chloride ions, cleaning the etched uranium in aqueous nitric acid solution, promptly electro-plating the cleaned uranium in a copper electro-plating bath, and then electro-plating thereupon lead, tin, zinc, cadmium, chromium or nickel from an aqueous electro-plating bath.

  13. Metal alloy coatings and methods for applying

    DOEpatents

    Merz, Martin D.; Knoll, Robert W.

    1991-01-01

    A method of coating a substrate comprises plasma spraying a prealloyed feed powder onto a substrate, where the prealloyed feed powder comprises a significant amount of an alloy of stainless steel and at least one refractory element selected from the group consisting of titanium, zirconium, hafnium, niobium, tantalum, molybdenum, and tungsten. The plasma spraying of such a feed powder is conducted in an oxygen containing atmosphere and forms an adherent, corrosion resistant, and substantially homogeneous metallic refractory alloy coating on the substrate.

  14. Applying New Methods to Diagnose Coral Diseases

    USGS Publications Warehouse

    Kellogg, Christina A.; Zawada, David G.

    2009-01-01

    Coral disease, one of the major causes of reef degradation and coral death, has been increasing worldwide since the 1970s, particularly in the Caribbean. Despite increased scientific study, simple questions about the extent of disease outbreaks and the causative agents remain unanswered. A component of the U.S. Geological Survey Coral Reef Ecosystem STudies (USGS CREST) project is focused on developing and using new methods to approach the complex problem of coral disease.

  15. ALLOY COATINGS AND METHOD OF APPLYING

    DOEpatents

    Eubank, L.D.; Boller, E.R.

    1958-08-26

    A method for providing uranium articles with a protective coating by a single dip coating process is presented. The uranium article is dipped into a molten zinc bath containing a small percentage of aluminum. The resultant product is a uranium article covered with a thin undercoat consisting of a uranium-aluminum alloy with a small amount of zinc, and an outer layer consisting of zinc and aluminum. The article may be used as is, or aluminum sheathing may then be bonded to the aluminum-zinc outer layer.

  16. METHOD OF APPLYING NICKEL COATINGS ON URANIUM

    DOEpatents

    Gray, A.G.

    1959-07-14

    A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.

  17. Scanning methods applied to bitemark analysis

    NASA Astrophysics Data System (ADS)

    Bush, Peter J.; Bush, Mary A.

    2010-06-01

    The 2009 National Academy of Sciences report on forensics focused criticism on pattern evidence subdisciplines in which statements of unique identity are utilized. One principle of bitemark analysis is that the human dentition is unique to the extent that a perpetrator may be identified based on dental traits in a bitemark. Optical and electron scanning methods were used to measure dental minutia and to investigate replication of detail in human skin. Results indicated that being a visco-elastic substrate, skin effectively reduces the resolution of measurement of dental detail. Conclusions indicate caution in individualization statements.

  18. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
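The parameter-selection idea can be sketched as a toy exhaustive search over the three design variables, with invented cost and constraint models (the paper's vehicle simulation and cost expressions are far more detailed):

```python
# Toy illustration (all models and numbers invented): choose battery
# weight, engine rating, and power split to minimize a life-cycle-cost
# surrogate subject to a peak-power constraint.
P_DEMAND = 60.0               # peak power the vehicle must deliver, kW
BATTERY_SPECIFIC_POWER = 0.2  # kW per kg of battery (assumption)

def life_cycle_cost(w_batt, p_engine, split):
    """Invented cost surrogate: acquisition plus energy costs; 'split'
    is the fraction of demand served by the battery."""
    acquisition = 15 * w_batt + 120 * p_engine
    petroleum = 4000 * (1 - split) * (1 + 0.002 * w_batt)
    electricity = 1200 * split
    return acquisition + petroleum + electricity

best = None
for w_batt in range(50, 401, 10):              # kg
    for p_engine in range(10, 81, 2):          # kW
        for split in [i / 10 for i in range(11)]:
            # both sources together must meet peak demand
            if p_engine + BATTERY_SPECIFIC_POWER * w_batt < P_DEMAND:
                continue
            # the battery must be able to supply its share of the peak
            if BATTERY_SPECIFIC_POWER * w_batt < split * P_DEMAND:
                continue
            cost = life_cycle_cost(w_batt, p_engine, split)
            if best is None or cost < best[0]:
                best = (cost, w_batt, p_engine, split)
```

The grid search makes the constraint handling explicit; a production study would use a gradient-based optimizer and, as the paper notes, smooth cost and constraint expressions.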

  19. Point of collapse and continuation methods for large ac/dc systems

    SciTech Connect

    Canizares, C.A. ); Alvarado, F.L. )

    1993-02-01

    This paper describes the implementation of both Point of Collapse (PoC) methods and continuation methods for the computation of voltage collapse points (saddle-node bifurcations) in large ac/dc systems. A comparison of the performance of these methods is presented for real systems of up to 2,158 buses. The paper discusses computational details of the implementation of the PoC and continuation methods, and the unique challenges encountered due to the presence of high voltage direct current (HVDC) transmission, area interchange power control regulating transformers, and voltage and reactive power limits. The characteristics of a robust PoC power flow program are presented, and its application to detection and solution of voltage stability problems is demonstrated.
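The continuation idea can be illustrated on the textbook two-bus example: the load power P is stepped up and the power flow re-solved until the saddle-node (nose) point, where the Jacobian becomes singular. For a 1 pu source behind a 0.1 pu reactance feeding a unity-power-factor load, the analytic maximum is P = E²/(2X) = 5 pu. A minimal sketch, not the paper's large-scale PoC implementation:

```python
import math

E, X = 1.0, 0.1   # source voltage (pu) and line reactance (pu)

def mismatch(v, p):
    """Power balance at the load bus (unity power factor):
    f(V) = (V/X) * sqrt(E^2 - V^2) - P."""
    return (v / X) * math.sqrt(max(E * E - v * v, 0.0)) - p

def solve_v(p, v0):
    """Newton's method on the high-voltage branch; returns None when
    no solution is found (Jacobian singular or divergence)."""
    v = v0
    for _ in range(50):
        f = mismatch(v, p)
        h = 1e-7
        df = (mismatch(v + h, p) - mismatch(v - h, p)) / (2 * h)
        if abs(df) < 1e-6:
            return None               # singular Jacobian: at the nose
        v -= f / df
    return v if abs(mismatch(v, p)) < 1e-8 else None

# continuation in the load parameter P: step until the power flow fails
p, dp, v = 0.0, 0.05, E
nose_p = None
while p < 10.0:
    sol = solve_v(p + dp, v)
    if sol is None or sol != sol:     # failed or NaN: passed the nose
        nose_p = p
        break
    p, v = p + dp, sol
```

Each converged solution seeds the next Newton solve, which is the essence of a continuation method; a practical implementation parameterizes around the nose instead of stopping at it.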

  20. Reflections on Mixing Methods in Applied Linguistics Research

    ERIC Educational Resources Information Center

    Hashemi, Mohammad R.

    2012-01-01

    This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…

  1. Calculation of the ac to dc resistance ratio of conductive nonmagnetic straight conductors by applying FEM simulations

    NASA Astrophysics Data System (ADS)

    Riba, Jordi-Roger

    2015-09-01

    This paper analyzes the skin and proximity effects in different conductive nonmagnetic straight conductor configurations subjected to applied alternating currents and voltages. These effects have important consequences, including a rise of the ac resistance, which in turn increases power loss, thus limiting the rating for the conductor. Alternating current (ac) resistance is important in power conductors and bus bars for line frequency applications, as well as in smaller conductors for high frequency applications. Despite the importance of this topic, it is not usually analyzed in detail in undergraduate and even in graduate studies. To address this, this paper compares the results provided by available exact formulas for simple geometries with those obtained by means of two-dimensional finite element method (FEM) simulations and experimental results. The paper also shows that FEM results are very accurate and more general than those provided by the formulas, since FEM models can be applied in a wide range of electrical frequencies and configurations.
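The exact formula for the simplest geometry, an isolated round conductor (skin effect only, no proximity effect), expresses Rac/Rdc through Kelvin functions. A minimal sketch with copper resistivity assumed and the Kelvin functions evaluated by their power series, which is adequate for moderate arguments:

```python
import math

def _kelvin(x, terms=40):
    """ber, bei and their derivatives via their power series
    (fine for moderate x; use asymptotic forms for large x)."""
    ber = bei = berp = beip = 0.0
    for k in range(terms):
        f2k = math.factorial(2 * k)
        f2k1 = math.factorial(2 * k + 1)
        ber += (-1) ** k * (x / 2) ** (4 * k) / f2k ** 2
        bei += (-1) ** k * (x / 2) ** (4 * k + 2) / f2k1 ** 2
        if k > 0:
            berp += (-1) ** k * 2 * k * (x / 2) ** (4 * k - 1) / f2k ** 2
        beip += (-1) ** k * (2 * k + 1) * (x / 2) ** (4 * k + 1) / f2k1 ** 2
    return ber, bei, berp, beip

def ac_dc_ratio(radius, freq, rho=1.72e-8, mu=4e-7 * math.pi):
    """Classical skin-effect Rac/Rdc for an isolated round conductor."""
    delta = math.sqrt(2 * rho / (2 * math.pi * freq * mu))  # skin depth
    q = math.sqrt(2) * radius / delta
    ber, bei, berp, beip = _kelvin(q)
    return (q / 2) * (ber * beip - bei * berp) / (berp ** 2 + beip ** 2)
```

For a 10 mm radius copper conductor the ratio is only a few percent above 1 at 50 Hz and grows with frequency; FEM is needed once proximity effects or non-circular cross sections enter, which is the paper's point.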

  2. dc3dm: Software to efficiently form and apply a 3D DDM operator for a nonuniformly discretized rectangular planar fault

    NASA Astrophysics Data System (ADS)

    Bradley, A. M.

    2013-12-01

    My poster will describe dc3dm, a free open source software (FOSS) package that efficiently forms and applies the linear operator relating slip and traction components on a nonuniformly discretized rectangular planar fault in a homogeneous elastic (HE) half space. This linear operator implements what is called the displacement discontinuity method (DDM). The key properties of dc3dm are: 1. The mesh can be nonuniform. 2. Work and memory scale roughly linearly in the number of elements (rather than quadratically). 3. The order of accuracy of my method on a nonuniform mesh is the same as that of the standard method on a uniform mesh. Property 2 is achieved using my FOSS package hmmvp [AGU 2012]. A nonuniform mesh (property 1) is natural for some problems. For example, in a rate-state friction simulation, nucleation length, and so required element size, scales reciprocally with effective normal stress. Property 3 assures that if a nonuniform mesh is more efficient than a uniform mesh (in the sense of accuracy per element) at one level of mesh refinement, it will remain so at all further mesh refinements. I use the routine DC3D of Y. Okada, which calculates the stress tensor at a receiver resulting from a rectangular uniform dislocation source in an HE half space. On a uniform mesh, straightforward application of this Green's function (GF) yields a DDM I refer to as DDMu. On a nonuniform mesh, this same procedure leads to artifacts that degrade the order of accuracy of the DDM. I have developed a method I call IGA that implements the DDM using this GF for a nonuniformly discretized mesh having certain properties. Importantly, IGA's order of accuracy on a nonuniform mesh is the same as DDMu's on a uniform one. Boundary conditions can be periodic in the surface-parallel direction (in both directions if the GF is for a whole space), velocity on any side, and free surface. The mesh must have the following main property: each uniquely sized element must tile each element

  3. PLURAL METALLIC COATINGS ON URANIUM AND METHOD OF APPLYING SAME

    DOEpatents

    Gray, A.G.

    1958-09-16

A method is described of applying protective coatings to uranium articles. It consists in applying chromium plating to such uranium articles by electrolysis in a chromic acid bath and subsequently applying, to this chromium plate, an aluminum-containing alloy. This aluminum-containing alloy (for example one of aluminum and silicon) may then be used as a bonding alloy between the chromized surface and an aluminum can.

  4. Control method for peak power delivery with limited DC-bus voltage

    SciTech Connect

    Edwards, John; Xu, Longya; Bhargava, Brij B.

    2006-09-05

A method for driving a neutral point-clamped multi-level voltage source inverter supplying a synchronous motor is provided. A DC current is received at a neutral point-clamped multi-level voltage source inverter. The inverter has first, second, and third output nodes. The inverter also has a plurality of switches. A desired speed of a synchronous motor connected to the inverter by the first, second, and third output nodes is received by the inverter. The synchronous motor has a rotor, and the speed of the motor is defined by the rotational rate of the rotor. A position of the rotor is sensed, current flowing to the motor out of at least two of the first, second, and third output nodes is sensed, and predetermined switches are automatically activated by the inverter responsive to the sensed rotor position, the sensed current, and the desired speed.

  5. Comparison between the NWF and DC methods for implementing HR Schemes within a Fully Coupled Finite Volume Solver

    SciTech Connect

    Moukalled, F.; Aziz, A. Abdel; Darwish, M.

    2009-09-09

This paper reports on the performance of high resolution (HR) convective schemes implemented as part of an implicit fully coupled velocity-pressure algorithm for the solution of laminar incompressible flow problems. The numerical implementation of the HR schemes follows two techniques: (i) the Deferred Correction (DC) approach, and (ii) the Normalized Weighting Factor (NWF) method. The superiority of the NWF method over the DC approach is demonstrated by solving the sudden expansion in a square cavity problem. Results indicate that the number of iterations needed by the NWF solver is grid independent. Moreover, recorded CPU time values reveal that the NWF method substantially reduces the computational cost.
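The deferred-correction idea in (i) can be sketched abstractly: the implicit matrix is built from a stable low-order scheme, and the difference between the high-resolution and low-order operators is lagged on the right-hand side, so the converged solution satisfies the high-order discretization. A toy linear sketch of one such iteration (matrices are hypothetical, not the paper's solver):

```python
import numpy as np

# Solve A_hr x = b, but only invert the "low-order" operator A_lo each
# iteration; the difference (A_lo - A_hr) x is deferred to the RHS.
rng = np.random.default_rng(1)
n = 20
A_lo = 4.0 * np.eye(n) + np.diag(-np.ones(n - 1), -1)  # stable, diagonally dominant
A_hr = A_lo + 0.3 * np.diag(np.ones(n - 1), 1)         # adds high-order terms
b = rng.standard_normal(n)

x = np.zeros(n)
for it in range(100):
    x_new = np.linalg.solve(A_lo, b + (A_lo - A_hr) @ x)  # deferred correction
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

# At the fixed point, A_lo x = b + (A_lo - A_hr) x, i.e. A_hr x = b.
print(np.linalg.norm(A_hr @ x - b))
```

The NWF alternative instead folds the high-resolution weights directly into the coefficient matrix, which is consistent with the grid-independent iteration counts reported above.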

  6. DC voltage-voltage method to measure the interface traps in sub-micron MOSTs

    NASA Astrophysics Data System (ADS)

    Jie, B. B.; Li, M. F.; Chim, W. K.; Chan, D. S. H.; Lo, K. F.

    1999-07-01

    A dc voltage-voltage technique for the measurement of stress-generated interface traps in submicron MOSTs is demonstrated. This method uses the source-bulk-drain of a submicron MOST as an effective lateral bipolar transistor when the channel region is out of inversion under the control of the gate voltage Vgb. The emitter injects the minority carriers to the base region and the collector is open. The Vcb versus Vgb spectrum can be explained quantitatively in the spirit of the extended Ebers-Moll equations and interface trap SRH recombination. The spectrum shows clear information on stress-generated interface traps located at the collector-junction region. The new method has the advantages of simplicity, high sensitivity and wide application range to different device structures. A single effective interface trap at the source or drain side could be detected, and interface traps at the source side can be separated from those at the drain side by the new method. Moreover, we propose an improved gated-diode method to separate interface traps at the source side from those at the drain side.

  7. DC/DC Converter Stability Testing Study

    NASA Technical Reports Server (NTRS)

    Wang, Bright L.

    2008-01-01

This report presents study results on hybrid DC/DC converter stability testing methods. An input impedance measurement method and a gain/phase margin measurement method were evaluated and found effective for detecting front-end oscillation and feedback loop oscillation. In particular, certain channel power levels of converter input noise were found to correlate strongly with the gain/phase margins. Spectral analysis of converter input noise is therefore a potential new method for evaluating the stability of all types of DC/DC converters.

  8. Building "Applied Linguistic Historiography": Rationale, Scope, and Methods

    ERIC Educational Resources Information Center

    Smith, Richard

    2016-01-01

    In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…

  9. Applying Mixed Methods Research at the Synthesis Level: An Overview

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Maes, Bea; Onghena, Patrick

    2011-01-01

    Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…

  10. Method for manufacturing compound semiconductor field-effect transistors with improved DC and high frequency performance

    DOEpatents

    Zolper, John C.; Sherwin, Marc E.; Baca, Albert G.

    2000-01-01

    A method for making compound semiconductor devices including the use of a p-type dopant is disclosed wherein the dopant is co-implanted with an n-type donor species at the time the n-channel is formed and a single anneal at moderate temperature is then performed. Also disclosed are devices manufactured using the method. In the preferred embodiment n-MESFETs and other similar field effect transistor devices are manufactured using C ions co-implanted with Si atoms in GaAs to form an n-channel. C exhibits a unique characteristic in the context of the invention in that it exhibits a low activation efficiency (typically, 50% or less) as a p-type dopant, and consequently, it acts to sharpen the Si n-channel by compensating Si donors in the region of the Si-channel tail, but does not contribute substantially to the acceptor concentration in the buried p region. As a result, the invention provides for improved field effect semiconductor and related devices with enhancement of both DC and high-frequency performance.

  11. The Effectiveness of the Learning-Cycle Method on Teaching DC Circuits to Prospective Female and Male Science Teachers

    ERIC Educational Resources Information Center

    Ates, Salih

    2005-01-01

    This study was undertaken to explore the effectiveness of the learning-cycle method when teaching direct current (DC) circuits to university students. Four Physics II classes participated in the study, which lasted approximately two and a half weeks in the middle of the spring semester of 2003. Participants were 120 freshmen (55 females and 65…

  12. Applied AC and DC magnetic fields cause alterations in the mitotic cycle of early sea urchin embryos

    SciTech Connect

    Levin, M.; Ernst, S.G.

    1995-09-01

This study demonstrates that exposure to 60 Hz magnetic fields (3.4-8.8 mT) and magnetic fields over the range DC-600 kHz (2.5-6.5 mT) can alter the early embryonic development of sea urchin embryos by inducing alterations in the timing of the cell cycle. Batches of fertilized eggs were exposed to the fields produced by a coil system. Samples of the continuous cultures were taken and scored for cell division. The times of both the first and second cell divisions were advanced by ELF AC fields and by static fields. The magnitude of the 60 Hz effect appears proportional to the field strength over the range tested. The relationship to field frequency was nonlinear and complex. For certain frequencies above the ELF range, the exposure resulted in a delay of the onset of mitosis. The advance of mitosis was also dependent on the duration of exposure and on the timing of exposure relative to fertilization.

  13. Fokker-Planck equation with arbitrary dc and ac fields: continued fraction method.

    PubMed

    Lee, Chee Kong; Gong, Jiangbin

    2011-07-01

    The continued fraction method (CFM) is used to solve the Fokker-Planck equation with arbitrary dc and ac fields. With an appropriate choice of basis functions, the Fokker-Planck equation is converted into a set of linear algebraic equations with short-ranged coupling and then CFM is implemented to obtain numerical solutions with high efficiency. Both a proposed perturbative CFM and the numerically exact matrix CFM are used to study the nonlinear response of driven systems, with their results compared to assess the validity regime of the perturbative approach. The proposed perturbative CFM approach needs scalar quantities only and hence is more efficient within its validity regime. Two nonlinear systems of different nature are used as examples: molecular dipole (rotational Brownian motion) and particle in a periodic potential (translational Brownian motion). The associated full dynamics is presented in the compact form of hysteresis loops. It is observed that as the strength of an AC driving field increases, pronounced nonlinear effects are manifested in the deformation of the hysteresis loops. PMID:21867110
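The continued-fraction machinery can be illustrated on a scalar three-term recurrence of the kind that arises from such basis expansions (this is an illustrative toy, not the paper's matrix CFM). The modified-Bessel recurrence I_{n-1}(x) = (2n/x) I_n(x) + I_{n+1}(x), familiar from rotational Brownian motion, gives for the ratio s_n = I_n(x)/I_{n-1}(x) the continued fraction s_n = 1/(2n/x + s_{n+1}), evaluated downward from a truncated tail:

```python
import math

def bessel_ratio_cf(n, x, depth=60):
    """Ratio I_n(x)/I_{n-1}(x) by downward continued-fraction evaluation."""
    s = 0.0  # truncate the tail: s_{n+depth+1} ~ 0 for fixed x
    for k in range(n + depth, n - 1, -1):
        s = 1.0 / (2.0 * k / x + s)
    return s

def bessel_i(n, x, terms=30):
    """Power series for I_n(x), used only to check the continued fraction."""
    return sum((x / 2.0) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

x = 1.0
cf = bessel_ratio_cf(1, x)
series = bessel_i(1, x) / bessel_i(0, x)
print(cf, series)  # the two evaluations agree closely
```

The matrix CFM of the paper replaces the scalar divisions by matrix inversions, while the perturbative variant keeps everything scalar, which is where its efficiency advantage comes from.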

  14. Limitations of the Conventional Phase Advance Method for Constant Power Operation of the Brushless DC Motor

    SciTech Connect

    Lawler, J.S.

    2001-10-29

The brushless dc motor (BDCM) has high-power density and efficiency relative to other motor types. These properties make the BDCM well suited for applications in electric vehicles provided a method can be developed for driving the motor over the 4 to 6:1 constant power speed range (CPSR) required by such applications. The present state of the art for constant power operation of the BDCM is conventional phase advance (CPA) [1]. In this paper, we identify key limitations of CPA. It is shown that the CPA has effective control over the developed power but that the current magnitude is relatively insensitive to power output and is inversely proportional to motor inductance. If the motor inductance is low, then the rms current at rated power and high speed may be several times larger than the current rating. The inductance required to maintain rms current within rating is derived analytically and is found to be large relative to that of BDCM designs using high-strength rare earth magnets. Thus, the CPA requires a BDCM with a large equivalent inductance.

  15. Design and development of DC high current sensor using Hall-Effect method

    NASA Astrophysics Data System (ADS)

    Dewi, Sasti Dwi Tungga; Panatarani, C.; Joni, I. Made

    2016-02-01

This paper reports a newly developed high-current DC sensor based on the Hall-effect method, together with its measurement system. The Hall-effect sensor senses the magnetic field generated by a current-carrying conductor wire. The SS49E (Honeywell) linear Hall-effect sensor was employed to sense the magnetic field from the field concentrator. The voltage from the SS49E was then digitized by a 10-bit analog-to-digital converter (ADC). The digital data were then processed in the microcontroller and displayed as a current value on an LCD. In addition, the measurements were interfaced to a personal computer (PC) over the RS232 communication protocol and displayed in real-time graphical form on the PC. Performance tests over the range ±40 A showed a maximum relative error of 5.26%. It is concluded that the sensor and the measurement system worked properly according to the design, with acceptable accuracy.
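The signal chain described (Hall voltage, 10-bit ADC, current readout) amounts to a linear conversion around the sensor's zero-current offset. A minimal sketch of that conversion; the reference voltage, offset, and combined sensor-plus-concentrator sensitivity below are illustrative placeholders, not the paper's calibration:

```python
# Convert a 10-bit ADC reading from a linear Hall sensor into current.
# All calibration constants are illustrative placeholders.
VREF = 5.0           # ADC reference voltage (V)
ADC_BITS = 10
V_OFFSET = 2.5       # sensor output at zero current (V), mid-rail
SENS_V_PER_A = 0.05  # combined sensitivity: sensor x concentrator (V/A)

def adc_to_current(adc_count: int) -> float:
    v = adc_count * VREF / (2 ** ADC_BITS - 1)  # counts -> volts
    return (v - V_OFFSET) / SENS_V_PER_A        # volts -> amperes

print(adc_to_current(511))   # mid-scale reading: ~0 A
print(adc_to_current(1023))  # full-scale reading: +50 A with these constants
```

In practice the offset and sensitivity are found by calibration against a reference shunt, which is also how a quoted relative-error figure like 5.26% would be established.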

  16. Optical methods of stress analysis applied to cracked components

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1991-01-01

    After briefly describing the principles of frozen stress photoelastic and moire interferometric analyses, and the corresponding algorithms for converting optical data from each method into stress intensity factors (SIF), the methods are applied to the determination of crack shapes, SIF determination, crack closure displacement fields, and pre-crack damage mechanisms in typical aircraft component configurations.

  17. Aircraft operability methods applied to space launch vehicles

    SciTech Connect

    Young, D.

    1997-01-01

The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program. © 1997 American Institute of Physics.

  18. Aircraft operability methods applied to space launch vehicles

    NASA Astrophysics Data System (ADS)

    Young, Douglas

    1997-01-01

    The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The ``building in'' of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.

  19. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
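Of the three approaches, Monte Carlo simulation is the simplest to sketch: sample the uncertain inputs, run the analysis for each sample, and read statistics off the output distribution. A toy illustration with a hypothetical weight function of two design variables (the function and all numbers are stand-ins, not the aircraft design code):

```python
import numpy as np

def aircraft_weight(span, area):
    """Hypothetical stand-in for the conceptual-design weight analysis."""
    return 1000.0 + 25.0 * span ** 1.5 + 8.0 * area

rng = np.random.default_rng(42)
n = 100_000
span = rng.normal(30.0, 0.5, n)   # design variable 1 with input uncertainty
area = rng.normal(200.0, 4.0, n)  # design variable 2 with input uncertainty

w = aircraft_weight(span, area)   # propagate samples through the analysis
print(w.mean(), w.std())          # propagated uncertainty in weight
print(np.quantile(w, 0.99))       # a high quantile, e.g. for design margins
```

Quantiles like the last line are what connect propagated uncertainty to "required levels of constraint satisfaction": a design sized at the 99th percentile weight is heavier than the deterministic design, as the abstract notes.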

  20. Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.

    1995-01-01

    Report describes experimental study in which Taguchi Methods applied with view toward improving brazing of coolant tubes in nozzle of main engine of space shuttle. Dr. Taguchi's parameter design technique used to define proposed modifications of brazing process reducing manufacturing time and cost by reducing number of furnace brazing cycles and number of tube-gap inspections needed to achieve desired small gaps between tubes.

  1. Alternating method applied to edge and surface crack problems.

    NASA Technical Reports Server (NTRS)

    Hartranft, R. J.; Sih, G. C.

    1973-01-01

    The alternating method, which intimately combines analytical results with numerical calculations, as applied to edge crack problems in two dimensions and surface crack problems in three dimensions, is treated. The case of a crack perpendicular to the edge of a semiinfinite material is considered. One of the crack geometries that has received continual interest in fracture mechanics is that of a semielliptical crack whose major axis lies on a stress free surface. In order to demonstrate the sensitivity of the solution to the influence of the free surface the semicircular crack problem is again treated by the alternating method.

  2. Self-stabilization techniques for intermediate power level in stacked-Vdd integrated circuits using DC-balanced coding methods

    NASA Astrophysics Data System (ADS)

    Kohara, Yusuke; Kubo, Naoya; Nishiyama, Tomofumi; Koizuka, Taiki; Alimudin, Mohammad; Rahmat, Amirul; Okamura, Hitoshi; Yamanokuchi, Tomoyuki; Nakamura, Kazuyuki

    2016-04-01

    Two new parallel bus coding methods for generating a DC-balanced code with additional bits are proposed to achieve the self-stabilization of the intermediate power level in Stacked-Vdd integrated circuits. They contribute to producing a uniform switching current in parallel inputs and outputs (I/Os). Type I coding minimizes the difference in the number of switchings between the upper and lower CMOS I/Os by 8B/10B coding followed by toggle conversion. Type II coding, in which the multi-value running disparity control feature is integrated into the bus-invert coding, requires only one redundant bit for any wider bus. Their DC-balanced feature and the stability effect of the intermediate power level in the Stacked-Vdd structure were experimentally confirmed from the measurement results obtained from the developed test chips.

  3. Newton-Krylov methods applied to nonequilibrium radiation diffusion

    SciTech Connect

    Knoll, D.A.; Rider, W.J.; Olsen, G.L.

    1998-03-10

The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator split approach where nonlinearities are not converged within a time step.

  4. Transparent conducting Al-doped ZnO thin films prepared by magnetron sputtering with dc and rf powers applied in combination

    SciTech Connect

    Minami, Tadatsugu; Ohtani, Yuusuke; Miyata, Toshihiro; Kuboi, Takeshi

    2007-07-15

A newly developed Al-doped ZnO (AZO) thin-film magnetron-sputtering deposition technique that decreases resistivity, improves resistivity distribution, and produces high-rate depositions has been demonstrated by dc magnetron-sputtering depositions that incorporate rf power (dc+rf-MS), either with or without the introduction of H₂ gas into the deposition chamber. The dc+rf-MS preparations were carried out in a pure Ar or an Ar+H₂ (0%-2%) gas atmosphere at a pressure of 0.4 Pa by adding a rf component (13.56 MHz) to a constant dc power of 80 W. The deposition rate in a dc+rf-MS deposition incorporating a rf power of 150 W was approximately 62 nm/min, an increase from the approximately 35 nm/min observed in dc magnetron sputtering with a dc power of 80 W. A resistivity as low as 3×10⁻⁴ Ω cm and an improved resistivity distribution could be obtained in AZO thin films deposited on substrates at a low temperature of 150 °C by dc+rf-MS with the introduction of hydrogen gas with a content of 1.5%. This article describes the effects of adding a rf power component (i.e., dc+rf-MS deposition) as well as introducing H₂ gas into dc magnetron-sputtering preparations of transparent conducting AZO thin films.

  5. Applying Quantitative Genetic Methods to Primate Social Behavior

    PubMed Central

    Brent, Lauren J. N.

    2013-01-01

Increasingly, behavioral ecologists have applied quantitative genetic methods to investigate the evolution of behaviors in wild animal populations. The promise of quantitative genetics in unmanaged populations opens the door for simultaneous analysis of inheritance, phenotypic plasticity, and patterns of selection on behavioral phenotypes all within the same study. In this article, we describe how quantitative genetic techniques provide studies of the evolution of behavior with information that is unique and valuable. We outline technical obstacles for applying quantitative genetic techniques that are of particular relevance to studies of behavior in primates, especially those living in noncaptive populations (e.g., the need for pedigree information and the prevalence of non-Gaussian phenotypes), and demonstrate that many of these barriers are now surmountable. We illustrate this by applying recent quantitative genetic methods to spatial proximity data, a simple and widely collected primate social behavior, from adult rhesus macaques on Cayo Santiago. Our analysis shows that proximity measures are consistent across repeated measurements on individuals (repeatable) and that kin have similar mean measurements (heritable). Quantitative genetics may hold lessons of considerable importance for studies of primate behavior, even those without a specific genetic focus. PMID:24659839

  6. Force acting on a dielectric particle in a concentration gradient by ionic concentration polarization under an externally applied DC electric field.

    PubMed

    Kang, Kwan Hyoung; Li, Dongqing

    2005-06-15

    There is a concentration-polarization (CP) force acting on a particle submerged in an electrolyte solution with a concentration (conductivity) gradient under an externally applied DC electric field. This force originates from the two mechanisms: (i) gradient of electrohydrodynamic pressure around the particle developed by the Coulombic force acting on induced free charges by the concentration polarization, and (ii) dielectric force due to nonuniform electric field induced by the conductivity gradient. A perturbation analysis is performed for the electric field, the concentration field, and the hydrodynamic field, under the assumptions of creeping flow and small concentration gradient. The leading order component of this force acting on a dielectric spherical particle is obtained by integrating the Maxwell and the hydrodynamic stress tensors. The analytical results are validated by comparing the surface pressure and the skin friction to those of a numerical analysis. The CP force is proportional to square of the applied electric field, effective for electrically neutral particles, and always directs towards the region of higher ionic concentration. The magnitude of the CP force is compared to that of the electrophoretic and the conventional dielectrophoretic forces. PMID:15897097

  7. Modeling of DC spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1995-01-01

    Future spacecraft power systems must be capable of supplying power to various loads. This delivery of power may necessitate the use of high-voltage, high-power dc distribution systems to transmit power from the source to the loads. Using state-of-the-art power conditioning electronics such as dc-dc converters, complex series and parallel configurations may be required at the interface between the source and the distribution system and between the loads and the distribution system. This research will use state-variables to model and simulate a dc spacecraft power system. Each component of the dc power system will be treated as a multiport network, and a state model will be written with the port voltages as the inputs. The state model of a component will be solved independently from the other components using its state transition matrix. A state-space averaging method is developed first in general for any dc-dc switching converter, and then demonstrated in detail for the particular case of the boost power stage. General equations for both steady-state (dc) and dynamic effects (ac) are obtained, from which important transfer functions are derived and applied to a special case of the boost power stage.
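The state-space averaging step mentioned for the boost stage can be made concrete: average the on- and off-interval state matrices with duty-cycle weights, then solve for the dc operating point. A minimal sketch (component values are hypothetical; an ideal lossless boost stage is assumed):

```python
import numpy as np

# Boost converter states: x = [inductor current iL, capacitor voltage vC].
L, C, R = 100e-6, 470e-6, 10.0   # inductance (H), capacitance (F), load (ohm)
Vin, D = 12.0, 0.5               # input voltage (V) and duty cycle

# Switch on: inductor across Vin; the load is fed by the capacitor alone.
A_on = np.array([[0.0, 0.0],
                 [0.0, -1.0 / (R * C)]])
# Switch off: inductor current feeds the capacitor and the load.
A_off = np.array([[0.0, -1.0 / L],
                  [1.0 / C, -1.0 / (R * C)]])
B = np.array([1.0 / L, 0.0])

A_avg = D * A_on + (1.0 - D) * A_off     # state-space averaging
x_dc = np.linalg.solve(A_avg, -B * Vin)  # steady state: A x + B Vin = 0

iL, vC = x_dc
print(vC)  # 24 V: matches the ideal boost relation Vin / (1 - D)
print(iL)  # 4.8 A: matches vC / (R * (1 - D))
```

Perturbing the averaged model about this operating point is what yields the small-signal (ac) transfer functions the abstract refers to.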

  8. Methods for model selection in applied science and engineering.

    SciTech Connect

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  9. The Lattice Boltzmann Method applied to neutron transport

    SciTech Connect

    Erasmus, B.; Van Heerden, F. A.

    2013-07-01

In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport, and the results are compared to a reference solution calculated using MCNP. (authors)

  10. Applied methods of testing and evaluation for IR imaging system

    NASA Astrophysics Data System (ADS)

    Liao, Xiao-yue; Lu, Jin

    2009-07-01

Different methods of testing and evaluation for IR imaging systems are used with the application of the 2nd and 3rd generation infrared detectors. The performance of an IR imaging system can be reflected by many specifications, such as Noise Equivalent Temperature Difference (NETD), nonuniformity, system Modulation Transfer Function (MTF), Minimum Resolvable Temperature Difference (MRTD), and Minimum Detectable Temperature Difference (MDTD). The sensitivity of IR sensors is estimated by NETD. The sensitivity and spatial resolution of thermal imaging sensors are evaluated by MRTD, which is the chief specification of the system. In this paper, the theoretical analysis of different testing methods is introduced; their characteristics are analyzed and compared. Based on a discussion of the factors that affect measurement results, an applied method of testing NETD and MRTD for IR systems is proposed.

  11. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]. For these problems the DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present our progress in applying this method to parallelize MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras, Iván. "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press).

  12. About the method of investigation of applied unstable process

    NASA Astrophysics Data System (ADS)

    Romanova, O. V.; Sapega, V. F.

    2003-04-01

Samples of Late Proterozoic (Riphean) rocks from the Arkhangelsk, Yaroslavl and Leningrad regions were prepared by the developed sample-preparation method and examined by X-ray analysis. The presence of a mantle fluid process had previously been established in some of the samples (injected tuffisites) (Kazak, Jakobsson, 1999). It appears that unaltered Riphean rocks contain a set of low-temperature minerals (illite, chlorite, vermiculite, goethite) indicating conditions of diagenesis at temperatures below 300°C. The presence of corrensite, rectorite and illite-montmorillonite indicates that a post-diagenetic low-temperature process acted on the original sedimentary rock. At the same time, the rocks involved in the fluid process contain minerals such as olivine, pyrope and graphite, indicating a high-temperature process of no less than 650-800°C. These samples also contain a set of low-temperature minerals, which demonstrates the short duration and disequilibrium of the high-temperature process. Therefore, the X-ray method provides an unambiguous criterion for establishing the fluid process, which as a rule is coupled with the development of kimberlite rock fields.

  13. "Influence Method" applied to measure a moderated neutron flux

    NASA Astrophysics Data System (ADS)

    Rios, I. J.; Mayer, R. E.

    2016-01-01

The "Influence Method" is conceived for the absolute determination of a nuclear particle flux in the absence of known detector efficiency. This method exploits the influence of the presence of one detector on the count rate of another detector when they are placed one behind the other, and defines statistical estimators for the absolute number of incident particles and for the efficiency. The method and its detailed mathematical description were recently published (Rios and Mayer, 2015 [1]). In this article we apply it to the measurement of the moderated neutron flux produced by an ²⁴¹AmBe neutron source surrounded by a light-water sphere, employing a pair of ³He detectors. For this purpose, the method is extended for application where particles arriving at the detector obey a Poisson distribution, and also for the case when efficiency is not constant over the energy spectrum of interest. Experimental distributions and derived parameters are compared with theoretical predictions of the method, and implications concerning the potential application to the absolute calibration of neutron sources are considered.

  14. Extrapolation techniques applied to matrix methods in neutron diffusion problems

    NASA Technical Reports Server (NTRS)

    Mccready, Robert R

    1956-01-01

A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer, and extrapolation techniques, based upon previous behavior of iterants, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
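The iterate-then-extrapolate idea (accelerate convergence from the behavior of previous iterants) can be sketched in a hedged way for the simplest characteristic-value setting, the dominant eigenvalue of a small matrix. The Rayleigh-quotient iteration and Aitken's delta-squared step below are generic stand-ins for the report's reactor-specific scheme, and the test matrix is purely illustrative:

```python
import numpy as np

def power_iteration_aitken(A, tol=1e-10, max_iter=500):
    """Dominant eigenvalue by iteration, with Aitken's delta-squared
    extrapolation applied to the sequence of Rayleigh-quotient
    estimates to speed convergence."""
    x = np.ones(A.shape[0])
    estimates = []
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = (x @ y) / (x @ x)      # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        estimates.append(lam_new)
        if len(estimates) >= 3:
            e0, e1, e2 = estimates[-3:]
            denom = e2 - 2.0 * e1 + e0
            # extrapolate from the last three iterants, falling back to
            # the raw estimate when the extrapolation step degenerates
            lam_acc = e2 - (e2 - e1) ** 2 / denom if abs(denom) > 1e-14 else e2
        else:
            lam_acc = lam_new
        if abs(lam_acc - lam) < tol:
            return lam_acc
        lam = lam_acc
    return lam
```

For the symmetric test matrix [[2, 1], [1, 3]] the iteration converges to the dominant eigenvalue (5 + √5)/2 ≈ 3.618; the extrapolated sequence settles in noticeably fewer iterations than the raw Rayleigh quotients.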

  15. Adapted G-mode Clustering Method applied to Asteroid Taxonomy

    NASA Astrophysics Data System (ADS)

    Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.

    2013-11-01

The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for the geochemical classification of rocks, but it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, rewritten from the original FORTRAN 77 in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
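As a hedged illustration of the two library ingredients named above (numpy.histogramdd to seed a cluster in the densest bin, scipy.spatial.distance.mahalanobis to grow it), here is a minimal sketch on synthetic two-dimensional data. The bin count, preselection radius and distance threshold are illustrative choices, not the adapted G-mode's actual parameters:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
# synthetic sample: two well-separated normal clusters in a 2-D variable space
data = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(200, 2)),
                  rng.normal([3.0, 3.0], 0.3, size=(200, 2))])

# seed: centre of the densest bin of a coarse multidimensional histogram
hist, edges = np.histogramdd(data, bins=8)
idx = np.unravel_index(np.argmax(hist), hist.shape)
seed = np.array([(edges[d][i] + edges[d][i + 1]) / 2.0 for d, i in enumerate(idx)])

# local covariance from points near the seed (Euclidean preselection),
# then cluster membership by Mahalanobis distance to the seed
local = data[np.linalg.norm(data - seed, axis=1) < 1.0]
vi = np.linalg.inv(np.cov(local.T))
dists = np.array([mahalanobis(p, seed, vi) for p in data])
members = data[dists < 3.0]
```

In this toy setup the densest bin falls inside one of the two clusters, and the Mahalanobis cut recovers most of that cluster while excluding the other; the full algorithm iterates this seeding and growth over the remaining sample.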

  16. DC-pulsed voltage electrochemical method based on duty cycle self-control for producing TERS gold tips

    NASA Astrophysics Data System (ADS)

    Vasilchenko, V. E.; Kharintsev, S. S.; Salakhov, M. Kh

    2013-12-01

This paper presents a modified DC-pulsed low-voltage electrochemical method in which the duty cycle is self-tuned during etching. A higher yield of gold tips suitable for performing tip-enhanced Raman scattering (TERS) measurements is demonstrated. The improvement is caused by the self-control of the etching rate along the full surface of the tip. The capability of the gold tips to enhance a Raman signal is exemplified by TERS spectroscopy of a single-walled carbon nanotube bundle, sulfur and vanadium oxide.

  17. [Comparison of two types of double-lined simulated landfill leakage detection based on high voltage DC method].

    PubMed

    Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen

    2006-01-01

Two types of double high-density polyethylene (HDPE) liner landfills were simulated, in which clay or geogrid was added between the two HDPE liners. The general resistance of the second mode is 15% larger than that of the first mode in primary HDPE liner detection, and 20% larger than that of the first mode in secondary HDPE liner detection. The high-voltage DC method can accomplish leakage detection and location for these two types of landfill, and the error of leakage location is less than 10 cm when the electrode spacing is 1 m. PMID:16599145

  18. An Analytical Design Method for a Regenerative Braking Control System for DC-electrified Railway Systems under Light Load Conditions

    NASA Astrophysics Data System (ADS)

    Saito, Tatsuhito; Kondo, Keiichiro; Koseki, Takafumi

A DC-electrified railway system that is fed by diode rectifiers at a substation is unable to return electric power to the AC grid. Accordingly, the braking cars have to restrict regenerative braking power when the power consumption of the powering cars is not sufficient. However, the characteristics of a DC-electrified railway system, including the powering cars, are not known, and a mathematical model for designing a controller has not been established yet. Hence, the objective of this study is to obtain a mathematical model for an analytical design method for the regenerative braking control system. In the first part of this paper, the static characteristics of the system are presented to show the position of the equilibrium point. The system is then linearized at the equilibrium point to describe its dynamic characteristics. An analytical design method is then proposed on the basis of these characteristics. The proposed design method is verified by experimental tests with a 1 kW-class miniature model and by numerical simulations.

  19. Six Sigma methods applied to cryogenic coolers assembly line

    NASA Astrophysics Data System (ADS)

    Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René

    2009-05-01

Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project was named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project was based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and results after the R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process-mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability were identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt was reached and sustained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvements in process capability have enabled the introduction of a sample-testing procedure before delivery.

  20. Teaching organization theory for healthcare management: three applied learning methods.

    PubMed

    Olden, Peter C

    2006-01-01

Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix organization structure, design, and process problems. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management. PMID:16566496

  1. New method of applying conformal group to quantum fields

    NASA Astrophysics Data System (ADS)

    Han, Lei; Wang, Hai-Jun

    2015-09-01

Most previous work on applying the conformal group to quantum fields has emphasized its invariant aspects, whereas in this paper we find that the conformal group can give us running quantum fields, with some constants, vertices and Green functions running, compatible with the scaling properties of the renormalization group method (RGM). We start with the renormalization group equation (RGE), in which the differential operator happens to be a generator of the conformal group, namely the dilatation operator. In addition, we link the operator/spatial representation and unitary/spinor representation of the conformal group by requiring a conformal-invariant interaction vertex, mimicking the similar process of the Lorentz transformation applied to the Dirac equation. By this kind of application, we find that quite a few interaction vertices are separately invariant under certain transformations (generators) of the conformal group. The significance of these transformations and vertices is explained. Using a particular generator of the conformal group, we suggest a new equation analogous to the RGE which may lead a system to evolve from the asymptotic regime to the nonperturbative regime, in contrast to the effect of the conventional RGE, which runs from the nonperturbative regime to the asymptotic regime. Supported by NSFC (91227114)

  2. The Exoplanet Census: A General Method Applied to Kepler

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew N.

    2011-11-01

We develop a general method to fit the underlying planetary distribution function (PLDF) to exoplanet survey data. This maximum likelihood method accommodates more than one planet per star and any number of planet or target star properties. We apply the method to announced Kepler planet candidates that transit solar-type stars. The Kepler team's estimates of the detection efficiency are used and are shown to agree with theoretical predictions for an ideal transit survey. The PLDF is fit to a joint power law in planet radius, down to 0.5 R⊕, and orbital period, up to 50 days. The estimated number of planets per star in this sample is ~0.7-1.4, where the range covers systematic uncertainties in the detection efficiency. To analyze trends in the PLDF we consider four planet samples, divided between shorter and longer periods at 7 days and between large and small radii at 3 R⊕. The size distribution changes appreciably between these four samples, revealing a relative deficit of ~3 R⊕ planets at the shortest periods. This deficit is suggestive of preferential evaporation and sublimation of Neptune- and Saturn-like planets. If the trend and explanation hold, it would be spectacular observational support of the core accretion and migration hypotheses, and would allow refinement of these theories.
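The core of such a maximum-likelihood fit can be sketched for the simplest one-dimensional case, a bounded power law in a single property. The sample, bounds and index below are synthetic illustrations, not the Kepler analysis itself, which extends this likelihood to a joint law in radius and period weighted by the survey's detection efficiency:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x_min, x_max, alpha_true = 0.5, 20.0, 1.7

# draw a synthetic sample from p(x) ∝ x^(-alpha) on [x_min, x_max]
# via the inverse CDF
u = rng.uniform(size=20000)
a = 1.0 - alpha_true
x = (x_min**a + u * (x_max**a - x_min**a)) ** (1.0 / a)

sum_log_x, n = np.log(x).sum(), len(x)

def neg_log_like(alpha):
    # normalisation of the bounded power law (valid for alpha != 1)
    c = (1.0 - alpha) / (x_max**(1.0 - alpha) - x_min**(1.0 - alpha))
    return -(n * np.log(c) - alpha * sum_log_x)

alpha_hat = minimize_scalar(neg_log_like, bounds=(1.05, 3.5),
                            method="bounded").x
```

With 20,000 draws the fitted index lands close to the true value of 1.7; the maximized likelihood also yields the overall normalization, i.e. the number of planets per star.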

  3. Analytical methods applied to diverse types of Brazilian propolis

    PubMed Central

    2011-01-01

    Propolis is a bee product, composed mainly of plant resins and beeswax, therefore its chemical composition varies due to the geographic and plant origins of these resins, as well as the species of bee. Brazil is an important supplier of propolis on the world market and, although green colored propolis from the southeast is the most known and studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and extractive procedures employed further affect its composition. Methods used for the extraction; analysis the percentage of resins, wax and insoluble material in crude propolis; determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components and their relative strengths are discussed; as well as direct insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. But, more recently, anti-parasitic, anti-viral/immune stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940

  4. THE EXOPLANET CENSUS: A GENERAL METHOD APPLIED TO KEPLER

    SciTech Connect

    Youdin, Andrew N.

    2011-11-20

We develop a general method to fit the underlying planetary distribution function (PLDF) to exoplanet survey data. This maximum likelihood method accommodates more than one planet per star and any number of planet or target star properties. We apply the method to announced Kepler planet candidates that transit solar-type stars. The Kepler team's estimates of the detection efficiency are used and are shown to agree with theoretical predictions for an ideal transit survey. The PLDF is fit to a joint power law in planet radius, down to 0.5 R⊕, and orbital period, up to 50 days. The estimated number of planets per star in this sample is ~0.7-1.4, where the range covers systematic uncertainties in the detection efficiency. To analyze trends in the PLDF we consider four planet samples, divided between shorter and longer periods at 7 days and between large and small radii at 3 R⊕. The size distribution changes appreciably between these four samples, revealing a relative deficit of ~3 R⊕ planets at the shortest periods. This deficit is suggestive of preferential evaporation and sublimation of Neptune- and Saturn-like planets. If the trend and explanation hold, it would be spectacular observational support of the core accretion and migration hypotheses, and would allow refinement of these theories.

  5. Understanding the impulse response method applied to concrete bridge decks

    NASA Astrophysics Data System (ADS)

    Clem, D. J.; Popovics, J. S.; Schumacher, T.; Oh, T.; Ham, S.; Wu, D.

    2013-01-01

The Impulse Response (IR) method is a well-established form of non-destructive testing (NDT) in which the dynamic response of an element resulting from an impact event (hammer blow) is measured with a geophone to draw conclusions about the element's integrity, stiffness, and/or support conditions. The existing ASTM Standard C1740-10 prescribes a set of parameters that can be used to evaluate the conditions above. These parameters are computed from the so-called `mobility' spectrum, which is obtained by dividing the measured bridge deck response by the measured impact force in the frequency domain. While applying the test method in the laboratory as well as on an actual in-service concrete bridge deck, the authors observed several limitations. In order to better understand the underlying physics of the IR method, a Finite Element (FE) model was created. Parameters prescribed in the Standard were then computed from the FE data and are discussed. One main limitation appears to be the use of a fixed upper frequency of 800 Hz: test data from the real bridge deck as well as the FE model both show that most energy is found above that limit. This paper presents and discusses the limitations of the ASTM Standard found by the authors and suggests ways of improving it.

  6. Method for applying photographic resists to otherwise incompatible substrates

    NASA Technical Reports Server (NTRS)

    Fuhr, W. (Inventor)

    1981-01-01

A method for applying photographic resists to otherwise incompatible substrates, such as a baking-enamel paint surface, is described, wherein the uncured enamel paint surface is coated with a non-curing lacquer which is, in turn, coated with a partially cured lacquer. The non-curing lacquer adheres to the enamel, and a photo resist material satisfactorily adheres to the partially cured lacquer. Once normal photo-etching techniques are employed, the lacquer coats can be easily removed from the enamel, leaving the photo-etched image. In the case of edge-lighted instrument panels, a coat of uncured enamel is placed over the cured enamel, followed by the lacquer coats and the photo resist, which is exposed and developed. Once the etched uncured enamel is cured, the lacquer coats are removed, leaving an etched panel.

  7. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  8. Applying to the DC Opportunity Scholarship Program: How Do Parents Rate Their Children's Current Schools at Time of Application and What Do They Want in New Schools? NCEE Evaluation Brief. NCEE 2016-4003

    ERIC Educational Resources Information Center

    Dynarski, Mark; Betts, Julian; Feldman, Jill

    2016-01-01

    The DC Opportunity Scholarship Program (OSP), established in 2004, is the only federally-funded private school voucher program for low-income parents in the United States. This evaluation brief describes findings using data from more than 2,000 applicants' parents, who applied to the program from spring 2011 to spring 2013 following…

  9. Study on low temperature DC electrical conductivity of SnO{sub 2} nanomaterial synthesized by simple gel combustion method

    SciTech Connect

    P, Rajeeva M.; S, Naveen C.; Lamani, Ashok R.; Jayanna, H. S.; Bothla, V Prasad

    2015-06-24

Nanocrystalline tin oxide (SnO₂) material of different particle sizes was synthesized using the gel combustion method by varying the oxidizer (HNO₃) and keeping the fuel constant. The prepared samples were characterized by X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Analysis (EDAX). The effect of the oxidizer in the gel combustion method was investigated by inspecting the particle size of the nano SnO₂ powder. The particle size was found to increase with the increase of oxidizer from 8 to 12 moles. The X-ray diffraction patterns of the calcined product showed the formation of high-purity tetragonal tin (IV) oxide with particle sizes in the range of 17 to 31 nm, calculated by Scherrer's formula. The particle-size and temperature dependence of the direct-current (DC) electrical conductivity of the SnO₂ nanomaterial was studied using a Keithley source meter. The DC electrical conductivity of the SnO₂ nanomaterial increases with temperature from 80 to 300 K and decreases with particle size at constant temperature.
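The Scherrer estimate used above can be sketched in a few lines. The peak position, width and shape factor K = 0.9 below are illustrative values for a hypothetical SnO₂ reflection with Cu Kα radiation, not data from the paper:

```python
import numpy as np

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with the
    peak width beta (FWHM) converted to radians and theta = 2theta / 2."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# hypothetical peak: 2-theta ≈ 26.6 deg with 0.45 deg FWHM → ≈ 18 nm
size = scherrer_size_nm(0.45, 26.6)
```

Broader peaks give smaller crystallites, which is why the 17-31 nm range above tracks the observed peak widths.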

  10. Random-breakage mapping method applied to human DNA sequences

    NASA Technical Reports Server (NTRS)

    Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)

    1996-01-01

The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single-copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site to each end of the restriction fragment. By analyzing the positions of those discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping method.

  11. RAMSES: Applied research on separation methods using space electrophoresis

    NASA Astrophysics Data System (ADS)

    Jamin Changeart, F.; Faure, F.; Sanchez, V.; Schoot, B.; Simonis, M.; Renard, A.; Collete, J. P.; Perez, D.; Val, J. M.; de Olano, A. l.

Eight European industrial companies, the CNRS, University Paul Sabatier and CNES (Centre National d'Etudes Spatiales) collaborate on the SBS (Space Bio Separation) project, which aims at demonstrating the possibility of preparing high-purity biomaterials under microgravity conditions. As a first step of SBS, the proposal of a cooperative flight of the RAMSES facility on board Spacelab during the IML-2 mission, scheduled for January 1993, has been selected by NASA. RAMSES allows basic and applied research on free-flow zone electrophoresis, in order to assess the influence of a low-gravity environment on the purification of biological products. Experiments will be performed by European and American scientists. The facility will be integrated in a Spacelab single rack. Using in situ diagnostics with a U.V. photometer and a cross illuminator, RAMSES investigates a wide variety of transport phenomena to better understand the basic mechanisms which govern the electrophoresis method. RAMSES should be a basis for a more complete facility dedicated to the purification of biomaterials, associating various separation methods. This paper provides an overview of the RAMSES space facility, with emphasis on the continuous-flow zone electrophoresis technique, the scientific background, the RAMSES experimental program, the RAMSES main functions and an overall description of the RAMSES main units.

  12. Turbulence profiling methods applied to ESO's adaptive optics facility

    NASA Astrophysics Data System (ADS)

    Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.

    2014-07-01

Two algorithms were recently studied for C2n profiling from wide-field Adaptive Optics (AO) measurements on GeMS (the Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the C2n profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to turbulent layers at various heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO's Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The important influence of the outer scale on the results led to the development of a new outer-scale fitting step included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.
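The first algorithm's inversion step can be sketched generically: given a response matrix R (modelled slope covariances per unit-strength layer at each trial altitude) and a measured covariance vector c, a truncated singular-value pseudo-inverse regularizes the profile estimate. The matrix sizes and truncation rank below are arbitrary illustrations, not the AOF geometry:

```python
import numpy as np

def truncated_lstsq(R, c, k):
    """Least-squares solution of R @ p ≈ c keeping only the k largest
    singular values, which damps the noise-amplifying modes of R."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # singular values are sorted descending
    return Vt.T @ (s_inv * (U.T @ c))

# sanity check on synthetic data: exact recovery when R is well conditioned
rng = np.random.default_rng(2)
R = rng.normal(size=(12, 5))         # 12 covariance samples, 5 trial layers
p_true = rng.uniform(0.1, 1.0, 5)    # synthetic layer strengths
p_est = truncated_lstsq(R, R @ p_true, k=5)
```

In a noisy, ill-conditioned setting the rank k would be chosen below full rank, trading bias for noise suppression.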

  13. Urban drainage control applying rational method and geographic information technologies

    NASA Astrophysics Data System (ADS)

    Aldalur, Beatriz; Campo, Alicia; Fernández, Sandra

    2013-09-01

The objective of this study is to develop a method for controlling urban drainage in the town of Ingeniero White, motivated by the problems arising as a result of floods, waterlogging and the combination of southeasterly winds and high tides. The Rational Method was applied to control urban watersheds, using tools of Geographic Information Technology (GIT). A Geographic Information System was developed on the basis of 28 panchromatic aerial photographs from 2005, georeferenced with control points measured with Global Positioning Systems (basin: 6 km²). Flow rates of basins and sub-basins were calculated, and it was verified that the existing open channels have a low slope, hold permanent standing water, and generate stagnation favored by the presence of trash. For the outlet of the storm drains, the use of an existing channel to evacuate the flow is proposed. The solution proposed in this work is complemented by the placement of three pumping stations: one on a channel that will drain excess rainwater from the lower area where the town of Ingeniero White is located, and two others that will drain excess water from the port area.
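The flow-rate step of the Rational Method reduces to Q = C·i·A. The sketch below uses the usual metric conversion (intensity in mm/h, area in km²); the runoff coefficient and design intensity are hypothetical values, since the abstract does not report the ones used:

```python
def rational_peak_flow_m3s(C, intensity_mm_h, area_km2):
    """Rational Method peak flow Q = C * i * A, converted to m^3/s:
    (mm/h) * (km^2) = 1e-3 * 1e6 / 3600 m^3/s, i.e. divide by 3.6."""
    return C * intensity_mm_h * area_km2 / 3.6

# hypothetical design storm over the 6 km^2 basin mentioned in the abstract
q = rational_peak_flow_m3s(C=0.7, intensity_mm_h=60.0, area_km2=6.0)  # 70 m^3/s
```

Sub-basin flows follow the same formula with each sub-basin's own coefficient, intensity (from its time of concentration) and area.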

  14. DC attenuation meter

    DOEpatents

    Hargrove, Douglas L.

    2004-09-14

A portable, hand-held meter used to measure direct current (DC) attenuation in low-impedance electrical signal cables and signal attenuators. A DC voltage is applied to the signal input of the cable and fed back to the control circuit through the signal cable and attenuators. The control circuit adjusts the applied voltage to the cable until the feedback voltage equals the reference voltage. The "units" of applied voltage required at the cable input is the system attenuation value of the cable and attenuators, which makes this meter unique. The meter may be used to calibrate data signal cables, attenuators, and cable-attenuator assemblies.

  15. Advanced Signal Processing Methods Applied to Digital Mammography

    NASA Technical Reports Server (NTRS)

    Stauduhar, Richard P.

    1997-01-01

without further support. Task 5: Better modeling does indeed make an improvement in the detection output. After the proposal ended, we came up with some new theoretical explanations that help in understanding when the D4 filter should be better. This work is currently in the review process. Task 6: N/A. This no longer applies in view of Tasks 4-5. Task 7: Comprehensive plans for further work have been completed. These plans are the subject of two proposals, one to NASA and one to HHS. These proposals represent plans for a complete evaluation of the methods for identifying normal mammograms, augmented with significant further theoretical work.

  16. Applying sociodramatic methods in teaching transition to palliative care.

    PubMed

    Baile, Walter F; Walters, Rebecca

    2013-03-01

We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using the sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. PMID:22889858

  17. Milliwatt dc/dc Inverter

    NASA Technical Reports Server (NTRS)

    Mclyman, C. W.

    1983-01-01

    A compact dc/dc inverter uses a single integrated-circuit package containing six inverter gates that generate and amplify a 100-kHz square-wave switching signal. The square-wave switching inverts 10-volt local power to an isolated voltage at another desired level. The relatively high operating frequency reduces the size of the filter capacitors required, resulting in a small packaged unit.

  18. A GIS modeling method applied to predicting forest songbird habitat

    USGS Publications Warehouse

    Dettmers, Randy; Bart, Jonathan

    1999-01-01

    We have developed an approach for using "presence" data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be applied over large areas. Our method consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators "and" and "or." We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this method by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models. The performance of these
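    The first step described above, choosing an optimal value range for a habitat variable by maximizing the gap between two cumulative distribution functions, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy slope data and all variable names are hypothetical.

```python
import numpy as np

def cdf_gap_cutoff(presence, background):
    """Cutoff where the gap between the two empirical CDFs is largest.
    A positive gap (background CDF above presence CDF) means presence
    points concentrate at values >= the cutoff."""
    grid = np.unique(np.concatenate([presence, background]))
    f_pres = np.searchsorted(np.sort(presence), grid, side="right") / presence.size
    f_back = np.searchsorted(np.sort(background), grid, side="right") / background.size
    return grid[np.argmax(f_back - f_pres)]

# toy data: a species observed mostly on steeper slopes
rng = np.random.default_rng(0)
background = rng.uniform(0.0, 40.0, 2000)   # slope (degrees) over the whole study area
presence = rng.uniform(15.0, 35.0, 200)     # slope at observed presence locations
cutoff = cdf_gap_cutoff(presence, background)  # lower bound of the "optimal range"
```

    In the paper's second step, several such single-variable ranges would then be combined with the Boolean operators "and" and "or" to form the multivariate habitat model.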

  19. Early Oscillation Detection for DC/DC Converter Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Wang, Bright L.

    2011-01-01

    The electrical power system of a spacecraft plays a very critical role in space mission success. Such a modern power system may contain numerous hybrid DC/DC converters, both inside the power system electronics (PSE) units and onboard most of the flight electronics modules. One faulty condition of DC/DC converters that poses a serious threat to mission safety is the random occurrence of oscillation, related to the inherent instability characteristics of the converters and to design deficiencies of the power systems. To ensure the highest reliability of the power system, oscillations in any form shall be promptly detected during part-level testing, system integration tests, flight health monitoring, and on-board fault diagnosis. The popular gain/phase margin analysis method is capable of predicting stability levels of DC/DC converters, but it is limited to design verification and to part-level testing on some models. It requires injecting noise signals into the control-loop circuitry, thus interrupting the converter's normal operation and increasing the risk of degrading or damaging the flight unit. A novel technique to detect oscillations at an early stage for flight hybrid DC/DC converters was developed.
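    The paper's detection technique itself is not described here; as a generic illustration of what early oscillation detection can look like, the sketch below flags a narrowband spectral peak rising above the broadband switching-noise floor of a converter's output ripple. All signal parameters are invented for the example.

```python
import numpy as np

fs = 1_000_000                     # sample rate, Hz (hypothetical)
t = np.arange(0, 20_000) / fs      # 20 ms acquisition window
rng = np.random.default_rng(11)

def output_ripple(oscillating):
    """Synthetic converter output ripple: broadband noise, optionally
    with a narrowband instability tone superimposed."""
    noise = 0.002 * rng.normal(size=t.size)
    if oscillating:
        noise = noise + 0.01 * np.sin(2 * np.pi * 12_000 * t)  # 12 kHz tone
    return noise

def has_oscillation(signal, ratio=10.0):
    """Flag a narrowband tone whose spectral peak dominates the median floor."""
    spec = np.abs(np.fft.rfft(signal))
    return bool(spec.max() > ratio * np.median(spec))

healthy = has_oscillation(output_ripple(False))
faulty = has_oscillation(output_ripple(True))
```

    Unlike gain/phase margin analysis, a passive spectral check of this kind needs no injected noise signal, which is the practical advantage the abstract highlights.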

  20. Temperature dependent DC electrical conductivity studies of ZnO nanoparticle thick films prepared by simple solution combustion method

    SciTech Connect

    Naveen, C. S. Jayanna, H. S. Lamani, Ashok R. Rajeeva, M. P.

    2014-04-24

    ZnO nanoparticles of different size were prepared by varying the molar ratio of glycine and zinc nitrate hexahydrate as fuel and oxidizer (F/O = 0.8, 1.11, 1.7) in a simple solution combustion method. Powder samples were characterized by UV-visible spectrophotometry, X-ray diffraction, and scanning electron microscopy (SEM). DC electrical conductivity measurements were carried out on the prepared thick films at room temperature and over the range 313-673 K; the conductivity was found to increase with temperature, confirming the semiconducting nature of the samples. Activation energies were calculated; the F/O = 1.7 sample has a lower E{sub AL} (low-temperature activation energy) and a higher E{sub AH} (high-temperature activation energy) than the other samples.
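    Activation energies of this kind are conventionally extracted from an Arrhenius fit, sigma = sigma0 * exp(-Ea / (kB*T)), so the slope of ln(sigma) versus 1/T gives -Ea/kB. A minimal sketch with synthetic, noiseless data (the sigma0 and Ea values are assumed for illustration, not taken from the paper):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(T, sigma):
    """Slope of ln(sigma) vs 1/T gives -Ea/kB; returns Ea in eV."""
    slope, _intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
    return -slope * K_B

T = np.linspace(313.0, 673.0, 20)            # K, the measurement range in the paper
Ea_true = 0.30                                # eV, assumed value for the example
sigma = 5e-3 * np.exp(-Ea_true / (K_B * T))   # synthetic conductivity data
Ea_fit = activation_energy(T, sigma)
```

    The paper's separate low- and high-temperature activation energies would come from fitting the two temperature regimes of the Arrhenius plot independently.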

  1. Nano-Crystalline Diamond Films with Pineapple-Like Morphology Grown by the DC Arcjet vapor Deposition Method

    NASA Astrophysics Data System (ADS)

    Li, Bin; Zhang, Qin-Jian; Shi, Yan-Chao; Li, Jia-Jun; Li, Hong; Lu, Fan-Xiu; Chen, Guang-Chao

    2014-08-01

    A nano-crystalline diamond film is grown by the dc arcjet chemical vapor deposition method. The film is characterized by scanning electron microscopy, high-resolution transmission electron microscopy (HRTEM), x-ray diffraction (XRD) and Raman spectroscopy. The nanocrystalline grains average 80 nm in size as measured by XRD, further confirmed by Raman and HRTEM. The novel pineapple-like morphology observed on the growth surface is constructed from cubo-octahedral growth zones with smooth faceted top surfaces and coarse side surfaces. The as-grown film possesses a dominant (100) surface containing a small amorphous sp2 component, far different from the usual cauliflower-like morphology of nano-crystalline films.

  2. Comprehensive Method for Analysis of Vitamin D in Foods (Experimental Biology Annual Meeting, April 2007, Washington, D.C.)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A comprehensive method for Vitamin D analysis has been developed by using the best aspects of currently available published methods. The comprehensive method can be applied to a wide range of food samples including dry breakfast cereal, diet supplement drinks, powdered infant formula, cheese and ot...

  3. Simplified dc to dc converter

    NASA Technical Reports Server (NTRS)

    Gruber, R. P. (Inventor)

    1984-01-01

    A dc to dc converter which can start with a shorted output and which regulates output voltage and current is described. Voltage-controlled switches direct current through the primary of a transformer, the secondary of which includes a virtual reactance. The switching frequency of the switches is appropriately varied to increase the voltage drop across the virtual reactance in the secondary winding, to which a low impedance load is connected. A starting circuit suitable for voltage switching devices is provided.

  4. The method of characteristics applied to Stirling engines

    SciTech Connect

    Taylor, D.R.

    1984-08-01

    Since Finkelstein first proposed a method of solving the equations of continuity, momentum and energy in a rigorous fashion, most analysts have concentrated on the nodal method for simulating Stirling engines. Organ has proposed a set of isothermal equations which may be solved by the method of characteristics. A solution method, by Benson, of the full set of equations has been in use for several years for the analysis of diesel engines. This paper discusses the application of the method of characteristics to the simulation of Stirling cycle machines.

  5. RP-HPLC method for the quantitative analysis of naturally occurring flavonoids in leaves of Blumea balsamifera DC.

    PubMed

    Nessa, Fazilatun; Ismail, Zhari; Karupiah, Sundram; Mohamed, Nornisah

    2005-09-01

    A selective and sensitive reversed-phase (RP) high-performance liquid chromatographic method is developed for the quantitative analysis of five naturally occurring flavonoids of Blumea balsamifera DC, namely dihydroquercetin-7,4'-dimethyl ether (DQDE), blumeatin (BL), quercetin (QN), 5,7,3',5'-tetrahydroxyflavanone (THFE), and dihydroquercetin-4'-methyl ether (DQME). These compounds have been isolated using various chromatographic methods. The five compounds are completely separated within 35 min using an RP C18, Nucleosil column and with an isocratic methanol-0.5% phosphoric acid (50:50, v/v) mobile phase at the flow rate of 0.9 mL/min. The separation of the compounds is monitored at 285 nm using UV detection. Identifications of specific flavonoids are made by comparing their retention times with those of the standards. Reproducibility of the method is good, with coefficients of variation of 1.48% for DQME, 2.25% for THFE, 2.31% for QN, 2.23% for DQDE, and 1.51% for BL. The average recoveries of pure flavonoids upon addition to lyophilized powder and subsequent extraction are 99.8% for DQME, 99.9% for THFE, 100.0% for BL, 100.6% for DQDE, and 97.4% for QN. PMID:16212782
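    The reported figures of merit, coefficient of variation and spike recovery, follow from standard formulas; a sketch with hypothetical replicate values (not the paper's data):

```python
import statistics

def coefficient_of_variation(replicates):
    """Percent CV: 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def percent_recovery(measured_spiked, measured_unspiked, amount_added):
    """Recovery of a known spike added to the sample matrix."""
    return 100.0 * (measured_spiked - measured_unspiked) / amount_added

# hypothetical replicate results for one flavonoid
cv = coefficient_of_variation([10.2, 10.1, 10.4, 10.0, 10.3])
recovery = percent_recovery(measured_spiked=19.95, measured_unspiked=10.0,
                            amount_added=10.0)
```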

  6. ELECTROCHEMICAL METHODS APPLIED TO THE ANALYSIS OF ENVIRONMENTAL SAMPLES

    EPA Science Inventory

    The fundamental principles of electroanalytical methods based on potentiometry, coulometry, conductance, and voltammetry are reviewed, and examples are given of applications to environmental analyses.

  7. The flow curvature method applied to canard explosion

    NASA Astrophysics Data System (ADS)

    Ginoux, Jean-Marc; Llibre, Jaume

    2011-11-01

    The aim of this work is to establish that the bifurcation parameter value leading to a canard explosion in dimension 2 obtained by the so-called geometric singular perturbation method can be found according to the flow curvature method. This result will be then exemplified with the classical Van der Pol oscillator.

  8. Fabrication of LiCoO{sub 2} thin film cathodes by DC magnetron sputtering method

    SciTech Connect

    Noh, Jung-pil; Cho, Gyu-bong; Jung, Ki-taek; Kang, Won-gyeong; Ha, Chung-wan; Ahn, Hyo-jun; Ahn, Jou-Hyeon; Nam, Tae-hyun; Kim, Ki-won

    2012-10-15

    LiCoO{sub 2} thin films were fabricated on Al substrates by the direct current magnetron sputtering method. The effects of Ar/O{sub 2} gas ratios and annealing temperatures were investigated. Crystal structures and surface morphologies of the deposited films were examined by X-ray diffraction, Raman scattering spectroscopy and field emission scanning electron microscopy. The as-deposited LiCoO{sub 2} thin films exhibited an amorphous structure. Crystallization starts at annealing temperatures above 400 °C. However, the annealed films retain a partially disordered structure, without complete crystalline ordering, even after annealing at 600 °C. The electrochemical properties of the LiCoO{sub 2} films were investigated by charge–discharge and cycle measurements. The film annealed at 500 °C has the highest capacity retention rate, 78.2% at the 100th cycle.

  9. Applying Statistical Methods To The Proton Radius Puzzle

    NASA Astrophysics Data System (ADS)

    Higinbotham, Douglas

    2016-03-01

    In recent nuclear physics publications, one can find many examples where chi2 and reduced chi2 are the only tools used for the selection of models, even though a chi2 difference test is only meaningful for nested models. With this in mind, we reanalyze electron scattering data, being careful to clearly define our selection criteria as well as using a covariance matrix and confidence levels as per the statistics section of the particle data book. We will show that when applying such techniques to hydrogen elastic scattering data, the nested models often require fewer parameters than typically used and that non-nested models are often rejected inappropriately.
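    The point about nested models can be made concrete with a chi-squared difference test: a quadratic fit is nested in a cubic fit (the cubic with one coefficient fixed to zero), so under the null hypothesis the drop in chi2 follows a chi2 distribution with 1 degree of freedom. A sketch on synthetic data, unrelated to the scattering data in the paper (3.84 is the 95% critical value of chi2 with 1 dof):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.05
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, sigma, x.size)  # truth: quadratic

def chi2_of_polyfit(degree):
    """Chi-squared of a least-squares polynomial fit of given degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum(((y - np.polyval(coeffs, x)) / sigma) ** 2))

chi2_quad = chi2_of_polyfit(2)
chi2_cubic = chi2_of_polyfit(3)   # nested: quadratic = cubic with one coefficient at 0
delta = chi2_quad - chi2_cubic    # ~ chi2(1) if the cubic term is unneeded
cubic_term_justified = delta > 3.84
```

    For non-nested models this difference test is not valid, which is the abstract's warning; comparisons there require other tools such as information criteria or the covariance-based confidence levels the authors cite.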

  10. A Method to Apply Friction Modifier in Railway System

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kosuke; Suda, Yoshihiro; Iwasa, Takashi; Fujii, Takeshi; Tomeoka, Masao; Tanimoto, Masuhisa; Kishimoto, Yasushi; Nakai, Takuji

    Controlling the friction between wheel and rail is a direct and very effective measure to improve the curving performance of bogie trucks, because curving performance depends strongly on friction characteristics. The authors have proposed a method, “friction control”, which utilizes a friction modifier (KELTRACK™ HPF) with an onboard spraying system. With this method, not only the friction coefficient but also the friction characteristics can be controlled as desired. In this paper, results of fundamental experiments are reported which play an important role in realizing the new method.

  11. EMD Method Applied to Identification of Logging Sequence Strata

    NASA Astrophysics Data System (ADS)

    Zhao, Ni; Li, Rui

    2015-10-01

    In this work, we compare the Fourier transform, the wavelet transform, and empirical mode decomposition (EMD), and point out that the EMD method decomposes a complex signal into a series of component functions through curves of local mean value. The Intrinsic Mode Functions (IMFs - the component functions) together carry all the information of the original signal. EMD is therefore well suited to the interface identification of logging sequence strata. Well logging data carry rich geological information and are non-linear, non-stationary signals, which the EMD method handles well. By selecting a combination of sensitive parameters that reflects the regional geological structure and lithology, the combined parameter can be decomposed by EMD to study the correlation and the physical meaning of each intrinsic mode function. The approach identifies stratigraphy and cycle sequences effectively and provides a practical signal treatment method for sequence interfaces.
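    A bare-bones version of the EMD sifting step, building envelopes through local extrema and subtracting their mean, can be sketched as below. Production EMD implementations use cubic-spline envelopes and proper stopping criteria; this linear-interpolation sketch with a fixed sift count is only illustrative, and the two-tone test signal is invented.

```python
import numpy as np

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and lower
    envelopes interpolated through the local extrema."""
    n = np.arange(x.size)
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if maxima.size < 2 or minima.size < 2:
        return x  # too few extrema to build envelopes
    upper = np.interp(n, maxima, x[maxima])
    lower = np.interp(n, minima, x[minima])
    return x - 0.5 * (upper + lower)

def first_imf(x, n_sifts=8):
    """Approximate the first Intrinsic Mode Function by repeated sifting."""
    h = x.copy()
    for _ in range(n_sifts):
        h = sift_once(h)
    return h

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)  # fast + slow
imf1 = first_imf(signal)      # should capture the fast 40 Hz oscillation
residual = signal - imf1      # should resemble the slow 4 Hz trend
```

    In the logging application, successive IMFs of a composite log curve would separate short-period lithological cycles from longer-period sequence-scale trends in just this way.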

  12. Method development and survey of Sudan I–IV in palm oil and chilli spices in the Washington, DC, area

    PubMed Central

    Genualdi, Susie; MacMahon, Shaun; Robbins, Katherine; Farris, Samantha; Shyong, Nicole; DeJager, Lowri

    2016-01-01

    Sudan I, II, III and IV dyes are banned for use as food colorants in the United States and European Union because they are toxic and carcinogenic. These dyes have been illegally used as food additives in products such as chilli spices and palm oil to enhance their red colour. From 2003 to 2005, the European Union made a series of decisions requiring chilli spices and palm oil imported to the European Union to contain analytical reports declaring them free of Sudan I–IV. In order for the USFDA to investigate the adulteration of palm oil and chilli spices with unapproved colour additives in the United States, a method was developed for the extraction and analysis of Sudan dyes in palm oil, and previous methods were validated for Sudan dyes in chilli spices. Both LC-DAD and LC-MS/MS methods were examined for their limitations and effectiveness in identifying adulterated samples. Method validation was performed for both chilli spices and palm oil by spiking samples known to be free of Sudan dyes at concentrations close to the limit of detection. Reproducibility, matrix effects, and selectivity of the method were also investigated. Additionally, for the first time a survey of palm oil and chilli spices was performed in the United States, specifically in the Washington, DC, area. Illegal dyes, primarily Sudan IV, were detected in palm oil at concentrations from 150 to 24 000 ng ml−1. Low concentrations (< 21 μg kg−1) of Sudan dyes were found in 11 out of 57 spices and are most likely a result of cross-contamination during preparation and storage and not intentional adulteration. PMID:26824489

  13. Method development and survey of Sudan I-IV in palm oil and chilli spices in the Washington, DC, area.

    PubMed

    Genualdi, Susie; MacMahon, Shaun; Robbins, Katherine; Farris, Samantha; Shyong, Nicole; DeJager, Lowri

    2016-01-01

    Sudan I, II, III and IV dyes are banned for use as food colorants in the United States and European Union because they are toxic and carcinogenic. These dyes have been illegally used as food additives in products such as chilli spices and palm oil to enhance their red colour. From 2003 to 2005, the European Union made a series of decisions requiring chilli spices and palm oil imported to the European Union to contain analytical reports declaring them free of Sudan I-IV. In order for the USFDA to investigate the adulteration of palm oil and chilli spices with unapproved colour additives in the United States, a method was developed for the extraction and analysis of Sudan dyes in palm oil, and previous methods were validated for Sudan dyes in chilli spices. Both LC-DAD and LC-MS/MS methods were examined for their limitations and effectiveness in identifying adulterated samples. Method validation was performed for both chilli spices and palm oil by spiking samples known to be free of Sudan dyes at concentrations close to the limit of detection. Reproducibility, matrix effects, and selectivity of the method were also investigated. Additionally, for the first time a survey of palm oil and chilli spices was performed in the United States, specifically in the Washington, DC, area. Illegal dyes, primarily Sudan IV, were detected in palm oil at concentrations from 150 to 24 000 ng ml(-1). Low concentrations (< 21 µg kg(-1)) of Sudan dyes were found in 11 out of 57 spices and are most likely a result of cross-contamination during preparation and storage and not intentional adulteration. PMID:26824489

  14. Spectral methods applied to fluidized bed combustors. Final report

    SciTech Connect

    Brown, R.C.; Christofides, N.J.; Junk, K.W.; Raines, T.S.; Thiede, T.D.

    1996-08-01

    The objective of this project was to develop methods for characterizing fuels and sorbents from time-series data obtained during transient operation of fluidized bed boilers. These methods aimed at determining time constants for devolatilization and char burnout from carbon dioxide (CO{sub 2}) profiles, and time constants for the calcination and sulfation processes from CO{sub 2} and sulfur dioxide (SO{sub 2}) profiles.
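    Extracting a time constant from a transient concentration profile typically reduces to fitting an exponential decay; a minimal sketch on synthetic data (the amplitude and tau are invented, and real boiler profiles would need baseline subtraction and noise handling):

```python
import numpy as np

def time_constant(t, c):
    """Fit ln(c) = ln(A) - t/tau to a decaying profile; return tau."""
    slope, _ = np.polyfit(t, np.log(c), 1)
    return -1.0 / slope

t = np.linspace(0.0, 60.0, 30)        # s after a batch fuel injection
tau_true = 12.0                        # s, assumed char-burnout time constant
c = 0.8 * np.exp(-t / tau_true)        # synthetic CO2 rise above baseline, %
tau_fit = time_constant(t, c)
```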

  15. Optimal Scheduling Method of Controllable Loads in DC Smart Apartment Building

    NASA Astrophysics Data System (ADS)

    Shimoji, Tsubasa; Tahara, Hayato; Matayoshi, Hidehito; Yona, Atsushi; Senjyu, Tomonobu

    2015-12-01

    From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as solar collectors (SC) and photovoltaic generation (PV) have been gaining attention worldwide. Houses and buildings with PV and heat pumps (HPs) are increasingly common in residential areas due to time-of-use (TOU) electricity pricing, which is inexpensive late at night and expensive during the day. If fixed batteries and electric vehicles (EVs) are introduced on the premises, the electricity cost can be reduced even further. However, if occupants use these controllable loads arbitrarily, power demand in residential buildings may fluctuate in the future. Thus, the operation of controllable loads such as HPs, batteries, and EVs should be optimally scheduled in order to prevent the power flow from fluctuating rapidly. This paper proposes an optimal scheduling method for controllable loads whose purpose is not only minimization of the electricity cost for consumers, but also suppression of power-flow fluctuation on the supply side. Furthermore, a novel electricity pricing scheme is also suggested.
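    A toy version of TOU-based scheduling: pick the start hour of a deferrable heat-pump run that minimizes electricity cost, with an optional penalty on hour-to-hour power swings standing in for the fluctuation-suppression objective. All prices and loads are hypothetical, and the paper's actual formulation (batteries, EVs, a novel pricing scheme) is far richer than this sketch.

```python
# hypothetical TOU prices per hour: cheap at night, expensive in the daytime
prices = [12] * 7 + [25] * 10 + [20] * 4 + [12] * 3   # 24 hourly prices
base_load = [0.5] * 24                                 # kW, uncontrollable demand
hp_kw, hp_hours = 1.5, 3                               # heat pump: 1.5 kW, 3 contiguous hours

def schedule_cost(start, flatness_weight=0.0):
    """Cost of running the heat pump starting at `start`, plus an
    optional penalty on hour-to-hour power swings."""
    load = base_load[:]
    for h in range(start, start + hp_hours):
        load[h] += hp_kw
    cost = sum(p * l for p, l in zip(prices, load))
    swing = sum(abs(load[h] - load[h - 1]) for h in range(1, 24))
    return cost + flatness_weight * swing

best_start = min(range(24 - hp_hours + 1), key=schedule_cost)
```

    With the flatness weight set above zero, the scheduler trades a little cost for a smoother aggregate profile, which is the dual objective the abstract describes.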

  16. Newton like: Minimal residual methods applied to transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Wong, Y. S.

    1984-01-01

    A computational technique for the solution of the full potential equation is presented. The method consists of outer and inner iterations. The outer iterate is based on a Newton-like algorithm, and a preconditioned Minimal Residual method is used to seek an approximate solution of the system of linear equations arising at each inner iterate. The present iterative scheme is formulated so that the uncertainties and difficulties associated with many iterative techniques, namely the requirements of acceleration parameters and the treatment of additional boundary conditions for the intermediate variables, are eliminated. Numerical experiments based on the new method for transonic potential flows around the NACA 0012 airfoil at different Mach numbers and different angles of attack are presented, and these results are compared with those obtained by the Approximate Factorization technique. Extension to three-dimensional flow calculations and application in finite element methods for fluid dynamics problems by the present method are also discussed. The inexact Newton-like method produces a smoother reduction in the residual norm, and the number of supersonic points and the circulation are rapidly established as the number of iterations is increased.
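    The outer/inner structure can be sketched as a Newton-like outer loop whose linear systems are solved only approximately by minimal-residual inner iterations. The toy 2x2 nonlinear system below merely stands in for the discretized full potential equation; nothing here is taken from the paper's actual solver.

```python
import numpy as np

def F(x):
    """Toy nonlinear system with a root at (1, 1)."""
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

def J(x):
    """Jacobian of F."""
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

def minres_solve(A, b, iters=20):
    """Inner solver: minimal-residual iteration for A dx = b
    (inexact -- a fixed number of sweeps, not a direct solve)."""
    dx = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ dx
        Ar = A @ r
        denom = Ar @ Ar
        if denom == 0.0:
            break
        dx += (r @ Ar) / denom * r   # step length minimizing the new residual norm
    return dx

x = np.array([2.0, 0.5])             # initial guess
for _ in range(10):                  # Newton-like outer iterations
    x += minres_solve(J(x), -F(x))
```

    Because the inner solve is inexact, each outer step only approximately satisfies the Newton equation, yet the combined iteration still converges, which is the essence of the inexact Newton approach discussed in the abstract.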

  17. DC to DC battery charger

    SciTech Connect

    Carr, F.L.; Terrill, L.R.

    1987-01-20

    A DC to DC battery charger is described for a vehicle comprising: adapter plug means for making electrical connections to a first battery through a cigarette lighter socket in the vehicle; means of making electrical connections to a second battery to be charged; a DC to AC converter and an AC to DC rectifier for elevating the voltage from the first battery to a voltage above that of the second battery; integrated circuit means for generating a pulse width modulated current as a function of the charge condition of the second battery; transistor switch means supplied with the pulse width modulated current for developing a charging voltage; a choke coil and a capacitor serially connected to the transistor switch means; and a diode connected across the choke coil and the capacitor whereby the capacitor is charged during pulses of current from the transistor switch means through the choke coil. The choke coil reverses polarity at the termination of the pulses of current and continues to charge the battery through the diode. The DC rectified voltage is controlled by the integrated circuit means for regulating current through the choke coil.

  18. A transient method for measuring the DC streaming potential coefficient of porous and fractured rocks

    NASA Astrophysics Data System (ADS)

    Walker, E.; Glover, P. W. J.; Ruel, J.

    2014-02-01

    High-quality streaming potential coupling coefficient measurements have been carried out using a newly designed cell with both a steady state methodology and a new pressure transient approach. The pressure transient approach has shown itself to be particularly good at providing high-quality streaming potential coefficient measurements as each transient increase or decrease allows thousands of measurements to be made at different pressures to which a good linear regression can be fitted. Nevertheless, the transient method can be up to 5 times as fast as the conventional measurement approaches because data from all flow rates are taken in the same transient measurement rather than separately. Test measurements have been made on samples of Berea and Boise sandstone as a function of salinity (approximately 18 salinities between 10^-5 mol/dm^3 and 2 mol/dm^3). The data have also been inverted to obtain the zeta potential. The streaming potential coefficient becomes greater (more negative) for fluids with lower salinities, which is consistent with existing measurements. Our measurements are also consistent with the high-salinity streaming potential coefficient measurements made by Vinogradov et al. (2010). Both the streaming potential coefficient and the zeta potential have also been modeled using the theoretical approach of Glover (2012). This modeling allows the microstructural, electrochemical, and fluid properties of the saturated rock to be taken into account in order to provide a relationship that is unique to each particular rock sample. In all cases, we found that the experimental data were a good match to the theoretical model.
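    The key data-reduction step in the transient approach, fitting one linear regression to thousands of (pressure, voltage) pairs collected during a single transient, can be sketched as below. The data are synthetic and the coupling coefficient value is assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic pressure transient: voltage tracks pressure linearly
pressure = np.linspace(0.0, 2e5, 500)                      # Pa
c_true = -1.2e-8                                            # V/Pa, assumed coupling coefficient
voltage = c_true * pressure + rng.normal(0.0, 2e-4, 500)    # V, with electrode noise

# slope of the regression = streaming potential coupling coefficient
coupling, offset = np.polyfit(pressure, voltage, 1)
```

    Averaging over so many pressure levels in one sweep is what makes the transient method both fast and precise compared with point-by-point steady-state measurements.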

  19. Three-dimensional unstructured grid method applied to turbomachinery

    NASA Technical Reports Server (NTRS)

    Kwon, Oh Joon; Hah, Chunill

    1993-01-01

    This work has three objectives: to develop a three-dimensional flow solver based on unstructured tetrahedral meshes for turbomachinery flows; to validate the solver through comparisons with experimental data; and to apply the solver for better understanding of the flow through turbomachinery geometries and design improvement. The work comprised three elements: extensive modification of an existing external flow solver/grid generator (USM3D/VGRID) for internal flows; a three-dimensional, finite-volume solver based on Roe's flux-difference splitting and explicit Runge-Kutta time stepping; and three-dimensional unstructured tetrahedral mesh generation using an advancing-front technique. A discussion of these topics is presented in viewgraph form.

  20. Programmed pulsewidth modulated waveforms for electromagnetic interference mitigation in dc-dc converters

    SciTech Connect

    Wang, A.C.; Sanders, S.R.

    1993-10-01

    The regular switching action of a pulsewidth modulated (PWM) circuit generates conducted and radiated electromagnetic interference (EMI), and may also generate acoustical disturbances. Programmed pulsewidth modulation techniques have been applied using various methods to control the harmonics inherent in switched power circuits. In this paper, a method to generate an optimal programmed switching waveform for a dc-dc converter is presented. The switching waveform is optimized to reduce the amplitude of harmonic peaks in the EMI generated by the converter. Experimental results, a brief discussion of sensitivity, and a practical implementation of a circuit to generate the PWM waveform are given.
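    The harmonic peaks in question are easy to inspect with an FFT; the sketch below measures the spectrum of a plain fixed-frequency PWM waveform (all parameters invented). A programmed-PWM scheme such as the paper's would then choose the switch instants so that these discrete spectral lines are reduced or spread out.

```python
import numpy as np

fs = 1_000_000                       # sample rate, Hz (hypothetical)
t = np.arange(0, 10_000) / fs        # 10 ms window
f_sw = 100_000                       # switching frequency, Hz
duty = 0.5

# plain fixed-frequency PWM waveform (0/1 switching function)
pwm = (np.mod(t * f_sw, 1.0) < duty).astype(float)

# amplitude spectrum with the DC component removed
spectrum = np.abs(np.fft.rfft(pwm - pwm.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_f = freqs[np.argmax(spectrum)]   # dominant EMI line: the switching fundamental
```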

  1. Variance reduction methods applied to deep-penetration problems

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
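    As a self-contained illustration of why variance reduction is indispensable here, the sketch below estimates the probability that a particle's exponentially distributed free path exceeds 10 mean free paths. Analog sampling almost never scores; stretching the sampling distribution and weighting by the likelihood ratio (a simple form of importance sampling) recovers the answer efficiently. The problem and the biasing rate are invented for the example, not taken from the course notes.

```python
import math
import random

random.seed(42)
DEPTH = 10.0        # slab thickness, in mean free paths
N = 100_000

# Analog Monte Carlo: almost no histories reach the far side
analog = sum(random.expovariate(1.0) > DEPTH for _ in range(N)) / N

# Importance sampling: draw from a stretched exponential (rate 0.2)
# and weight each scoring history by the likelihood ratio p(x)/q(x)
est = 0.0
for _ in range(N):
    x = random.expovariate(0.2)
    if x > DEPTH:
        est += math.exp(-x) / (0.2 * math.exp(-0.2 * x))
est /= N

exact = math.exp(-DEPTH)   # uncollided transmission probability, ~4.5e-5
```

    The same likelihood-ratio bookkeeping underlies the exponential transform and other biasing schemes used in production deep-penetration codes.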

  2. DAKOTA reliability methods applied to RAVEN/RELAP-7.

    SciTech Connect

    Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea

    2013-09-01

    This report summarizes the result of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
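    The underlying failure-probability question, given uncertain inputs and a threshold on an output response, can be illustrated with plain Monte Carlo on a toy response function; the linear/Gaussian case even has a closed form to check against. Everything below (the response model, thresholds, distributions) is invented for illustration and has nothing to do with the actual RELAP-7 models; reliability methods like FORM would instead search for the most probable failure point as an optimization.

```python
import math
import random

random.seed(7)
THRESHOLD = 1370.0   # hypothetical failure threshold on the output response

def response(power, flow):
    """Toy linear response standing in for a thermal-hydraulics run."""
    return 600.0 + 9.0 * power - 4.0 * flow

# uncertain inputs: power ~ N(100, 3), flow ~ N(50, 2)
N = 200_000
hits = sum(
    response(random.gauss(100.0, 3.0), random.gauss(50.0, 2.0)) > THRESHOLD
    for _ in range(N)
)
p_fail = hits / N

# closed form for the linear/Gaussian case, for comparison
mu = 600.0 + 9.0 * 100.0 - 4.0 * 50.0     # mean response
sd = math.hypot(9.0 * 3.0, 4.0 * 2.0)     # std of the response
p_exact = 0.5 * math.erfc((THRESHOLD - mu) / (sd * math.sqrt(2.0)))
```

    For the very small failure probabilities typical of nuclear safety analyses, sampling this way becomes infeasible, which is precisely why the report turns to reliability methods.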

  3. The colour analysis method applied to homogeneous rocks

    NASA Astrophysics Data System (ADS)

    Halász, Amadé; Halmai, Ákos

    2015-12-01

    Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.

  4. Method for applying pyrolytic carbon coatings to small particles

    DOEpatents

    Beatty, Ronald L.; Kiplinger, Dale V.; Chilcoat, Bill R.

    1977-01-01

    A method for coating small diameter, low density particles with pyrolytic carbon is provided by fluidizing a bed of particles wherein at least 50 per cent of the particles have a density and diameter of at least two times the remainder of the particles and thereafter recovering the small diameter and coated particles.

  5. [Synchrotron-based characterization methods applied to ancient materials (I)].

    PubMed

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article aims at presenting the first results of a transdisciplinary research programme in heritage sciences. Based on the growing use and on the potentialities of micro- and nano-characterization synchrotron-based methods to study ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution will identify and test conceptual and methodological elements of convergence between physicochemical and historical sciences. PMID:25200450

  6. GENERAL CONSIDERATIONS FOR GEOPHYSICAL METHODS APPLIED TO AGRICULTURE

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Geophysics is the application of physical quantity measurement techniques to provide information on conditions or features beneath the earth’s surface. With the exception of borehole geophysical methods and soil probes like a cone penetrometer, these techniques are generally noninvasive with physica...

  7. System Identification and POD Method Applied to Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Tang, Deman; Kholodar, Denis; Juang, Jer-Nan; Dowell, Earl H.

    2001-01-01

    The representation of unsteady aerodynamic flow fields in terms of global aerodynamic modes has proven to be a useful method for reducing the size of the aerodynamic model over those representations that use local variables at discrete grid points in the flow field. Eigenmodes and Proper Orthogonal Decomposition (POD) modes have been used for this purpose with good effect. This suggests that system identification models may also be used to represent the aerodynamic flow field. Implicit in the use of a system identification technique is the notion that a relatively small state space model can be useful in describing a dynamical system. The POD model is first used to show that indeed a reduced order model can be obtained from a much larger numerical aerodynamic model (the vortex lattice method is used for illustrative purposes), and the results from the POD and the system identification methods are then compared. For the example considered, the two methods are shown to give comparable results in terms of accuracy and reduced model size. The advantages and limitations of each approach are briefly discussed. Both appear promising and complementary in their characteristics.
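    The core POD computation, extracting an energy-ranked modal basis from a matrix of flow snapshots via the singular value decomposition, can be sketched as below. The toy "flow field" (two traveling-wave structures plus noise) and the 99% energy criterion are assumptions for the example, not details from the paper.

```python
import numpy as np

# snapshot matrix: each column is the flow state at one time instant
rng = np.random.default_rng(5)
x = np.linspace(0.0, 2.0 * np.pi, 200)
snapshots = np.column_stack([
    np.sin(x - 0.1 * k) + 0.3 * np.sin(3.0 * (x + 0.2 * k))
    + 0.01 * rng.normal(size=x.size)
    for k in range(60)
])

# POD modes are the left singular vectors; singular values rank their energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of the energy
basis = U[:, :n_modes]                             # reduced-order basis
```

    Projecting the full-order dynamics onto `basis` yields the small state-space model that the abstract compares against system identification.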

  8. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    SciTech Connect

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  9. Paraxial WKB Method Applied to the Lower Hybrid Wave Propagation

    SciTech Connect

    Bertelli, N; Poli, E; Harvey, R; Wright, J C; Bonoli, P T; Phillips, C K; Simov, A P; Valeo, E

    2012-07-12

    The paraxial WKB (pWKB) approximation, also called the beam tracing method, has been employed to study the propagation of lower hybrid (LH) waves in a tokamak plasma. Analogous to the well-known ray tracing method, this approach reduces Maxwell's equations to a set of ordinary differential equations while, in addition, retaining the effects of the finite beam cross-section and, thus, of diffraction. A new code, LHBEAM (Lower Hybrid BEAM tracing), is presented, which solves the pWKB equations in tokamak geometry for arbitrary launching conditions and for analytic and experimental plasma equilibria. In addition, LHBEAM includes linear electron Landau damping for the evaluation of the absorbed power density and the reconstruction of the wave electric field in both physical and Fourier space. Illustrative LHBEAM calculations are presented along with a comparison with the ray tracing code GENRAY and the full wave solver TORIC-LH.

  10. Use Conditions and Efficiency Measurements of DC Power Optimizers for Photovoltaic Systems: Preprint

    SciTech Connect

    Deline, C.; MacAlpine, S.

    2013-10-01

    No consensus standard exists for estimating annual conversion efficiency of DC-DC converters or power optimizers in photovoltaic (PV) applications. The performance benefits of PV power electronics including per-panel DC-DC converters depend in large part on the operating conditions of the PV system, along with the performance characteristics of the power optimizer itself. This work presents a case study of three system configurations that take advantage of the capabilities of DC power optimizers. Measured conversion efficiencies of DC-DC converters are applied to these scenarios to determine the annual weighted operating efficiency. A simplified general method of reporting weighted efficiency is given, based on the California Energy Commission's CEC efficiency rating and several input/output voltage ratios. Efficiency measurements of commercial power optimizer products are presented using the new performance metric, along with a description of the limitations of the approach.
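The CEC weighting mentioned above can be sketched as a simple weighted sum. The six weights below are the standard CEC inverter-test weights (stated here from general knowledge, not from this preprint); the optimizer efficiency numbers are made up for illustration:

```python
# CEC weighted efficiency: a load-profile-weighted average of conversion
# efficiency measured at six power levels (fractions of rated power).
# Weights are the standard CEC inverter-test weights; applying them to a
# per-panel DC-DC optimizer is the idea the abstract describes.
CEC_WEIGHTS = {0.10: 0.04, 0.20: 0.05, 0.30: 0.12,
               0.50: 0.21, 0.75: 0.53, 1.00: 0.05}

def cec_weighted_efficiency(eff_at_level):
    """eff_at_level maps power fraction -> measured efficiency (0..1)."""
    return sum(w * eff_at_level[p] for p, w in CEC_WEIGHTS.items())

# Hypothetical optimizer measured at one input/output voltage ratio:
measured = {0.10: 0.955, 0.20: 0.968, 0.30: 0.975,
            0.50: 0.980, 0.75: 0.982, 1.00: 0.978}
print(round(cec_weighted_efficiency(measured), 4))
```

Because the weights sum to one, a flat efficiency curve returns its own value; curves that sag at part load are penalized according to how often those levels occur in the assumed load profile.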

  11. MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

    SciTech Connect

    R. ESTEP; ET AL

    2000-06-01

    Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
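The replicate-randomization idea is straightforward to sketch. The counts, calibration constant, and one-line "analysis" below are hypothetical stand-ins for the TGS/CTEN algorithms, chosen so the Monte Carlo estimate can be checked against the exact linear error propagation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Measured counts in three detector channels (toy data) and a toy
# "analysis algorithm" standing in for a complex reconstruction:
counts = np.array([1200.0, 950.0, 600.0])
calibration = 0.01  # grams per count, hypothetical

def analyze(c):
    return calibration * c.sum()

# Monte Carlo randomization: N replicate data sets drawn with the
# measured counts as Poisson means, each pushed through the analysis.
N = 2000
replicates = np.array([analyze(rng.poisson(counts)) for _ in range(N)])
std_estimate = replicates.std(ddof=1)

# For this linear toy analysis the propagated error is known exactly,
# so the replicate estimate can be checked against it:
exact = calibration * np.sqrt(counts.sum())
print(std_estimate, exact)
```

For a real nonlinear algorithm (neural network, tomographic reconstruction) the `analyze` call is the expensive step, which is exactly the N-extra-analyses drawback the abstract notes.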

  12. Differential correction method applied to measurement of the FAST reflector

    NASA Astrophysics Data System (ADS)

    Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng

    2016-08-01

    The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector which is composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid and directed toward a source. To achieve accurate control of the reflector shape, positions of 2226 nodes distributed around the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of total stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total station measurements of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show there is an improvement of 70%–80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.

  13. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    NASA Technical Reports Server (NTRS)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
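The comparison can be illustrated with a toy stand-in for the fuel-flow problem: on a nonlinear response, a least-squares linear model reaches only a modest correlation while even a tiny CART-style regression tree does much better. All data, features, and model sizes below are synthetic assumptions, not the FOQA study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear response: feature 0 matters quadratically, feature 1 is noise.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = X[:, 0] ** 2 + 0.05 * rng.normal(size=400)

# Multiple linear regression via least squares (with intercept).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred_lin = A @ coef

def fit_tree(X, y, depth, min_leaf=10):
    """Tiny CART-style regression tree; returns a prediction function."""
    if depth == 0 or len(y) < 2 * min_leaf:
        c = y.mean()
        return lambda Z: np.full(len(Z), c)
    best = None
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        xs, ys = X[order, j], y[order]
        for i in range(min_leaf, len(y) - min_leaf):
            # Sum of squared errors of a piecewise-constant split at i.
            sse = ys[:i].var() * i + ys[i:].var() * (len(y) - i)
            if best is None or sse < best[0]:
                best = (sse, j, 0.5 * (xs[i - 1] + xs[i]))
    _, j, t = best
    left = X[:, j] <= t
    f_left = fit_tree(X[left], y[left], depth - 1, min_leaf)
    f_right = fit_tree(X[~left], y[~left], depth - 1, min_leaf)
    return lambda Z: np.where(Z[:, j] <= t, f_left(Z), f_right(Z))

pred_tree = fit_tree(X, y, depth=3)(X)

r_lin = np.corrcoef(pred_lin, y)[0, 1]
r_tree = np.corrcoef(pred_tree, y)[0, 1]
print(r_lin, r_tree)
```

The gap between the two correlation coefficients mirrors, in miniature, the regression-versus-tree gap the abstract reports.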

  14. Atomistic Method Applied to Computational Modeling of Surface Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally.
The computational tool, which is based on

  15. System And Method Of Applying Energetic Ions For Sterilization

    DOEpatents

    Schmidt, John A.

    2002-06-11

    A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.

  16. System and method of applying energetic ions for sterilization

    DOEpatents

    Schmidt, John A.

    2003-12-23

    A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.

  17. Applied high resolution geophysical methods: Offshore geoengineering hazards

    SciTech Connect

    Trabant, P.K.

    1984-01-01

    This book is an examination of the purpose, methodology, equipment, and data interpretation of high-resolution geophysical methods, which are used to assess geological and manmade engineering hazards at offshore construction locations. It is a state-of-the-art review. Contents: 1. Introduction. 2. Marine geophysics, an overview. 3. Marine geotechnique, an overview. 4. Echo sounders. 5. Side scan sonar. 6. Subbottom profilers. 7. Seismic sources. 8. Single-channel seismic reflection systems. 9. Multifold acquisition and digital processing. 10. Marine magnetometers. 11. Marine geoengineering hazards. 12. Survey organization, navigation, and future developments. Appendix. Glossary. References. Index.

  18. SHUFFLE: A New Statistical Bootstrap Method: Applied to Cosmological Filaments

    NASA Astrophysics Data System (ADS)

    Bhavsar, Suketu P.; Bharadwaj, Somnath; Sheth, Jatush V.

    2003-05-01

    We introduce Shuffle, a powerful statistical procedure devised by Bhavsar and Ling [1] to determine the true physical extent of the filaments in the Las Campanas Redshift Survey [LCRS]. At its heart, Shuffle falls into the category of bootstrap-like methods [2]. We find that the longest physical filamentary structures in 5 of the 6 LCRS slices are longer than 50 h-1 Mpc but not quite extending to 70 h-1 Mpc. The -3 degree slice contains filamentary structure longer than 70 h-1 Mpc.

  19. Error behaviour of multistep methods applied to unstable differential systems

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1978-01-01

    The problem of modelling a dynamic system described by a system of ordinary differential equations which has unstable components for limited periods of time is discussed. It is shown that the global error in a multistep numerical method is the solution to a difference equation initial value problem, and the approximate solution is given for several popular multistep integration formulae. Inspection of the solution leads to the formulation of four criteria for integrators appropriate to unstable problems. A sample problem is solved numerically using three popular formulae and two different stepsizes to illustrate the appropriateness of the criteria.
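A minimal sketch of the setting: second-order Adams-Bashforth applied to the unstable test equation y' = λy with λ > 0, using two step sizes to expose the global error behaviour (an illustration, not the paper's difference-equation analysis):

```python
import numpy as np

# Second-order Adams-Bashforth on y' = lam * y, lam > 0, whose exact
# solution grows as exp(lam * t). Global error at t_end is examined.
def ab2(lam, h, t_end):
    n = int(round(t_end / h))
    y = np.empty(n + 1)
    y[0] = 1.0
    y[1] = np.exp(lam * h)          # seed the two-step method exactly
    for k in range(1, n):
        y[k + 1] = y[k] + h * (1.5 * lam * y[k] - 0.5 * lam * y[k - 1])
    return y[-1]

lam, t_end = 2.0, 1.0
exact = np.exp(lam * t_end)
err_h = abs(ab2(lam, 0.01, t_end) - exact)
err_h2 = abs(ab2(lam, 0.005, t_end) - exact)
print(err_h, err_h2, err_h / err_h2)  # ratio near 4: second-order accuracy
```

Even though the error ratio confirms second-order convergence, both absolute errors grow with the exponentially growing solution, which is the behaviour the criteria in the abstract are meant to address.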

  20. Algebraic multigrid methods applied to problems in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Mccormick, Steve; Ruge, John

    1989-01-01

    The development of algebraic multigrid (AMG) methods and their application to certain problems in structural mechanics are described with emphasis on two- and three-dimensional linear elasticity equations and the 'jacket problems' (three-dimensional beam structures). Various possible extensions of AMG are also described. The basic idea of AMG is to develop the discretization sequence based on the target matrix and not the differential equation. Therefore, the matrix is analyzed for certain dependencies that permit the proper construction of coarser matrices and attendant transfer operators. In this manner, AMG appears to be adaptable to structural analysis applications.

  1. Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.

    PubMed

    Ramírez, C L; Martí, M A; Roitberg, A E

    2016-01-01

    One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments such as an enzyme or aqueous solution, and to determine the corresponding free energy profile, which provides direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows obtaining the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are inappropriately chosen. In this short review, using the extensively studied chorismate-to-prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and, in turn, the accuracy of the free energy obtained with JR. Second, and in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows obtaining more accurate free energy profiles using faster pulling speeds and a smaller number of trajectories, and thus a lower computational cost. PMID:27497165
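The exponential work average at the heart of JR can be sketched with a toy Gaussian work distribution (not the chorismate system), for which the exact free energy difference is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(7)

# Jarzynski's Relationship: exp(-dF/kT) = < exp(-W/kT) > over
# nonequilibrium work values W. Toy Gaussian work distribution; for
# W ~ N(mu, sigma^2) the exact answer is dF = mu - sigma**2 / (2*kT).
kT, mu, sigma = 1.0, 5.0, 1.0
W = rng.normal(mu, sigma, size=200_000)

dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))
dF_exact = mu - sigma**2 / (2.0 * kT)

print(dF_jarzynski, dF_exact)
```

The exponential average lies below the mean work, as the second law requires (dF <= <W>); the convergence difficulty the abstract discusses arises because broad work distributions make the average dominated by rarely sampled low-work trajectories.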

  2. Statistical methods for texture analysis applied to agronomical images

    NASA Astrophysics Data System (ADS)

    Cointault, F.; Journaux, L.; Gouton, P.

    2008-02-01

    For agronomical research institutes, field experiments are essential and provide relevant information on crops, such as disease rate, yield components, and weed rate. Although generally accurate, they are performed manually and present numerous drawbacks, such as tedium, notably for wheat ear counting. In this case, the use of color and/or texture image processing to estimate the number of ears per square metre can be an improvement. Different image segmentation techniques based on feature extraction have therefore been tested using textural information with first- and higher-order statistical methods. The Run Length method gives the results closest to manual counts, with an average error of 3%. Nevertheless, a finer justification of the hypotheses made on the values of the classification and description parameters is necessary, especially for the number of classes and the size of the analysis windows, through the estimation of a cluster validity index. The first results show that the mean number of classes in a wheat image is 11, which proves that our choice of 3 is not well adapted. To complete these results, we are currently analysing each of the classes previously extracted to gather together all the classes characterizing the ears.
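A minimal horizontal run-length computation of the kind the Run Length method builds on might look as follows; the two-row image and the two features (short- and long-run emphasis) are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np
from itertools import groupby

# First-order statistics describe only the gray-level histogram; run-length
# statistics capture texture. Minimal horizontal run-length analysis:
def run_length_features(image):
    runs = []  # (gray level, run length) for every horizontal run
    for row in image:
        for level, group in groupby(row):
            runs.append((level, len(list(group))))
    lengths = np.array([l for _, l in runs], dtype=float)
    n = len(lengths)
    sre = np.sum(1.0 / lengths**2) / n   # short-run emphasis
    lre = np.sum(lengths**2) / n         # long-run emphasis
    return sre, lre

image = [[0, 0, 1, 1],
         [2, 2, 2, 2]]
sre, lre = run_length_features(image)
print(sre, lre)
```

Fine textures (many short runs) push SRE up, while coarse, streaky textures push LRE up; features like these, computed per analysis window, feed the classification step the abstract describes.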

  3. Applied methods to verify LP turbine performance after retrofit

    SciTech Connect

    Overby, R.; Lindberg, G.

    1996-12-31

    With increasing operational hours of power plants, many utilities may find it necessary to replace turbine components, i.e., low pressure turbines. In order to decide between different technical and economic solutions, the utility often takes the opportunity to choose between an OEM or non-OEM supplier. This paper will deal with the retrofitting of LP turbines. Depending on the scope of supply, the contract must define the amount of improvement and specifically how to verify this improvement. Unfortunately, today's test codes, such as ASME PTC 6 and 6.1, do not satisfactorily cover these cases. The methods used by Florida Power and Light (FP&L) and its supplier to verify the improvement of the low pressure turbine retrofit at the Martin No. 1 and Sanford No. 4 units will be discussed and the experience gained will be presented. In particular, the influence of the thermal cycle on the applicability of the available methods will be analyzed and recommendations given.

  4. Microcanonical ensemble simulation method applied to discrete potential fluids.

    PubMed

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition rate probabilities between macroscopic states, which has the advantage over conventional Monte Carlo NVT (MC-NVT) simulations that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the SW and square-shoulder fluid properties. PMID:26465582

  5. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method to detect atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values for sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).

  6. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition rate probabilities between macroscopic states, which has the advantage over conventional Monte Carlo NVT (MC-NVT) simulations that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the SW and square-shoulder fluid properties.
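The final step of the method, recovering the isochoric heat capacity from the measured β(U) by numerical differentiation, can be sketched as follows. The ideal-gas-like β(U) below is an assumed toy input (not the SW fluid), chosen because the exact answer Cv = 3N/2 is known:

```python
import numpy as np

# Post-processing step of the microcanonical method: given the inverse
# temperature beta(U), the isochoric heat capacity follows from a
# numerical derivative. With T = 1/beta,
#   Cv = dU/dT = -beta**2 / (d beta / d U).
# Toy beta(U) of ideal-gas form, beta = 3N/(2U), so Cv must equal 3N/2.
N = 100
U = np.linspace(50.0, 500.0, 400)
beta = 1.5 * N / U

dbeta_dU = np.gradient(beta, U)
Cv = -beta**2 / dbeta_dU

print(Cv[200])  # should be close to 1.5 * N = 150
```

In the actual method β(U) comes from the measured transition rates rather than a closed form, but the derivative step is the same.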

  7. The Movable Type Method Applied to Protein-Ligand Binding.

    PubMed

    Zheng, Zheng; Ucisik, Melek N; Merz, Kenneth M

    2013-12-10

    Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term "movable type". Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and then creating two databases: one with their associated pairwise distance-dependent energies and another associated with the probability of how these pairs can combine in terms of bonds, angles, dihedrals and non-bonded interactions. Combining these two databases coupled with the principles of statistical mechanics allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of configuration space of a selected region (the protein active site here) in one shot without resorting to brute force sampling schemes involving Monte Carlo, genetic algorithms or molecular dynamics simulations, making the methodology extremely efficient.
Importantly, this method explores the free

  8. The Movable Type Method Applied to Protein-Ligand Binding

    PubMed Central

    Zheng, Zheng; Ucisik, Melek N.; Merz, Kenneth M.

    2013-01-01

    Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term “movable type”. Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and then creating two databases: one with their associated pairwise distance-dependent energies and another associated with the probability of how these pairs can combine in terms of bonds, angles, dihedrals and non-bonded interactions. Combining these two databases coupled with the principles of statistical mechanics allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of configuration space of a selected region (the protein active site here) in one shot without resorting to brute force sampling schemes involving Monte Carlo, genetic algorithms or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the

  9. Data Reduction Methods Applied to the Fastrac Engine

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1999-01-01

    The Fastrac rocket engine is currently being developed for the X-34 technology demonstrator vehicle. The engine performance model must be calibrated to support accurate performance prediction. Data reduction is the process of estimating hardware characteristics from available test data, and is essential for effective performance model calibration and prediction. A new data reduction procedure was developed, implemented, and tested using data from Fastrac engine tests. The procedure selects hardware and test measurements to use in the reduction process based on examination of the model influence matrix condition number. Predicted hardware characteristics are recovered from the solution of a quadratic programming problem. Computational tests indicate that the new procedure provides a significant improvement in test data reduction capability. Enhancements include improved test data utilization and time history data reduction capability. The new method is generically applicable to other systems.
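The abstract says measurements are selected by examining the influence-matrix condition number; one way to act on such a criterion, sketched here with a made-up matrix and a greedy heuristic (the paper's actual selection procedure is not specified in this abstract), is:

```python
import numpy as np

# Greedy condition-number-guided selection: pick the subset of test
# measurements (rows of an assumed influence matrix) that stays
# well conditioned, so the subsequent reduction problem is stable.
def select_measurements(A, k):
    chosen, remaining = [], list(range(A.shape[0]))
    for _ in range(k):
        best = min(remaining,
                   key=lambda r: np.linalg.cond(A[chosen + [r], :]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Rows 0 and 1 are nearly collinear; a good subset avoids using both.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1e-9, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
picked = select_measurements(A, 3)
print(picked, np.linalg.cond(A[picked, :]))
```

Nearly redundant measurements inflate the condition number and would be skipped, which is the kind of ill-conditioning check the abstract attributes to the procedure.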

  10. Integrated Research Methods for Applied Urban Hydrogeology of Karst Sites

    NASA Astrophysics Data System (ADS)

    Epting, J.; Romanov, D. K.; Kaufmann, G.; Huggenberger, P.

    2008-12-01

    measures. Theories describing the evolution of karst systems are mainly based on conceptual models. Although these models are based on fundamental and well-established physical and chemical principles that allow studying important processes from initial small-scale fracture networks to the mature karst, systems for monitoring the evolution of karst phenomena are rare. Integrated process-oriented investigation methods are presented, comprising the combination of multiple data sources (lithostratigraphic information of boreholes, extensive groundwater monitoring, dye tracer tests, geophysics) with high-resolution numerical groundwater modeling and model simulations of karstification below the dam. Subsequently, different scenarios evaluated the future development of the groundwater flow regime, the karstification processes as well as possible remediation measures. The approach presented assists in optimizing investigation methods, including measurement and monitoring technologies with predictive character for similar subsidence problems within karst environments in urban areas.

  11. Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.

    PubMed

    Reyes Santos, Joost; Haimes, Yacov Y

    2004-06-01

    The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1990 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm.
Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model
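The extreme-risk measure f(4) described above is a conditional expectation over the lower tail of the return distribution; a minimal sample-based sketch, with assumed standard-normal returns and a 5% partition point, is:

```python
import numpy as np

rng = np.random.default_rng(1)

# f(4) in the PMRM sense: the conditional expectation of portfolio
# return over the lower (extreme-loss) tail beyond a chosen partition.
def lower_tail_expectation(returns, alpha=0.05):
    cutoff = np.quantile(returns, alpha)
    return returns[returns <= cutoff].mean()

returns = rng.standard_normal(500_000)
f4 = lower_tail_expectation(returns)

# For N(0, 1) the exact value is -phi(z_alpha)/alpha with z_0.05 about
# -1.645, i.e. roughly -2.06: far more pessimistic than the mean of 0,
# which is why f(4) complements volatility for extreme events.
print(f4)
```

Optimizing expected return jointly with this tail statistic, rather than with volatility alone, is the multiobjective formulation the abstract describes.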

  12. Applying dynamic methods in off-line signature recognition

    NASA Astrophysics Data System (ADS)

    Igarza, Juan Jose; Hernaez, Inmaculada; Goirizelaia, Inaki; Espinosa, Koldo

    2004-08-01

    In this paper we present work developed on off-line signature verification using Hidden Markov Models (HMMs). HMMs are a well-known technique used with other biometric features, for instance in speaker recognition and dynamic or on-line signature verification. Our goal here is to extend Left-to-Right (LR)-HMMs to the field of static or off-line signature processing, using results provided by image connectivity analysis. The chain encoding of perimeter points for each blob obtained by this analysis is an ordered set of points in space, running clockwise around the perimeter of the blob. We discuss two different ways of generating the models, depending on the way the blobs obtained from the connectivity analysis are ordered. In the first proposed method, blobs are ordered according to their perimeter length. In the second, blobs are ordered in their natural reading order, i.e. from top to bottom and left to right. Finally, two LR-HMM models are trained using the parameters obtained by these techniques. Verification results of the two techniques are compared and some improvements are proposed.
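The two blob orderings described above can be sketched on hypothetical blob records; the tuples, the row tolerance, and the tie-breaking rule are all illustrative assumptions, not details from the paper.

```python
# Each blob: (label, perimeter_length, top, left) -- made-up values.
blobs = [
    ("b1", 300, 10, 200),
    ("b2", 120, 12, 40),
    ("b3", 80, 60, 10),
]

# Method 1: order blobs by perimeter length (longest first here).
by_perimeter = sorted(blobs, key=lambda b: b[1], reverse=True)

# Method 2: natural reading order, top to bottom then left to right.
# Rows are grouped with a coarse vertical tolerance (assumed value).
ROW_TOL = 20
reading = sorted(blobs, key=lambda b: (b[2] // ROW_TOL, b[3]))

print([b[0] for b in by_perimeter])  # ['b1', 'b2', 'b3']
print([b[0] for b in reading])       # ['b2', 'b1', 'b3']
```

The two orderings generally disagree, which is why the paper trains and compares a separate LR-HMM for each.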

  13. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  14. Random particle methods applied to broadband fan interaction noise

    NASA Astrophysics Data System (ADS)

    Dieste, M.; Gabard, G.

    2012-10-01

    Predicting broadband fan noise is key to reducing noise emissions from aircraft and wind turbines. Complete CFD simulations of broadband fan noise generation remain too expensive to be used routinely for engineering design. A more efficient approach consists in synthesizing a turbulent velocity field that captures the main features of the exact solution. This synthetic turbulence is then used in a noise source model. This paper concentrates on predicting broadband fan interaction noise (also called leading edge noise) and demonstrates that a random particle mesh method (RPM) is well suited for simulating this source mechanism. The linearized Euler equations are used to describe sound generation and propagation. In this work, the definition of the filter kernel is generalized to include non-Gaussian filters that can directly follow more realistic energy spectra such as the ones developed by Liepmann and von Kármán. The velocity correlation and energy spectrum of the turbulence are found to be well captured by the RPM. The acoustic predictions are successfully validated against Amiet's analytical solution for a flat plate in a turbulent stream. A standard Langevin equation is used to model temporal decorrelation, but the presence of numerical issues leads to the introduction and validation of a second-order Langevin model.
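The standard (first-order) Langevin model for temporal decorrelation can be sketched as an Ornstein-Uhlenbeck update, whose autocorrelation decays as exp(-lag/tau). The time scale, variance, and step size below are illustrative values, not parameters from the paper.

```python
import math
import random

# First-order Langevin model: du = -(u/tau) dt + sqrt(2 sigma^2/tau) dW.
# Discretized exactly so the variance stays sigma^2 at every step.
random.seed(0)
tau, sigma, dt, n = 0.1, 1.0, 1e-3, 200000

a = math.exp(-dt / tau)             # exact one-step decay factor
b = sigma * math.sqrt(1.0 - a * a)  # noise amplitude preserving variance
u = 0.0
samples = []
for _ in range(n):
    u = a * u + b * random.gauss(0.0, 1.0)
    samples.append(u)

# Check: variance ~ sigma^2, autocorrelation at lag tau ~ exp(-1).
var = sum(x * x for x in samples) / n
lag = int(tau / dt)
acf = sum(samples[i] * samples[i + lag] for i in range(n - lag)) / ((n - lag) * var)
print(f"variance ≈ {var:.3f}, ACF at lag tau ≈ {acf:.3f} (exp(-1) ≈ 0.368)")
```

The exponential decorrelation produced by this model is exactly the temporal behavior that the paper's second-order variant refines to avoid numerical issues.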

  15. Taguchi methods applied to oxygen-enriched diesel engine experiments

    SciTech Connect

    Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J. ); Longman, D.E. )

    1992-01-01

    This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NO{sub x} due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NO{sub x} emissions for the six-cylinder engine than it was for the single-cylinder engine.
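A main-effects analysis of the kind described (four factors at three levels each) fits a Taguchi L9 orthogonal array. The sketch below uses the standard L9 layout with made-up emission responses; the factor names mirror the abstract but the numbers are purely illustrative.

```python
# Standard L9 orthogonal array: 9 runs, 4 factors, 3 levels (0..2).
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
# Hypothetical emission response for each run (e.g., NOx in g/kWh).
y = [5.1, 4.6, 4.0, 5.4, 4.9, 4.5, 5.8, 5.2, 4.8]

def main_effects(array, resp, factor):
    """Average response at each of the three levels of one factor."""
    out = []
    for level in range(3):
        vals = [r for row, r in zip(array, resp) if row[factor] == level]
        out.append(sum(vals) / len(vals))
    return out

for f, name in enumerate(["O2 content", "water content", "fuel rate", "timing"]):
    print(name, [round(m, 3) for m in main_effects(L9, y, f)])
```

Because each level of each factor appears in exactly three runs, the level averages isolate the main effects without modeling interactions, which matches the analysis scope stated in the abstract.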

  16. Taguchi methods applied to oxygen-enriched diesel engine experiments

    SciTech Connect

    Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J.; Longman, D.E.

    1992-12-01

    This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NO{sub x} due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NO{sub x} emissions for the six-cylinder engine than it was for the single-cylinder engine.

  17. Mesoscopic electronics beyond DC transport

    NASA Astrophysics Data System (ADS)

    di Carlo, Leonardo

    Since the inception of mesoscopic electronics in the 1980s, direct current (dc) measurements have underpinned experiments in quantum transport. Novel techniques complementing dc transport are becoming paramount to new developments in mesoscopic electronics, particularly as the road is paved toward quantum information processing. This thesis describes seven experiments on GaAs/AlGaAs and graphene nanostructures unified by experimental techniques going beyond traditional dc transport. Firstly, dc current induced by microwave radiation applied to an open chaotic quantum dot is investigated. Asymmetry of mesoscopic fluctuations of induced current in perpendicular magnetic field is established as a tool for separating the quantum photovoltaic effect from classical rectification. A differential charge sensing technique is next developed using integrated quantum point contacts to resolve the spatial distribution of charge inside a double quantum dot. An accurate method for determining interdot tunnel coupling and electron temperature using charge sensing is demonstrated. A two-channel system for detecting current noise in mesoscopic conductors is developed, enabling four experiments where shot noise probes transmission properties not available in dc transport and Johnson noise serves as an electron thermometer. Suppressed shot noise is observed in quantum point contacts at zero parallel magnetic field, associated with the 0.7 structure in conductance. This suppression evolves with increasing field into the shot-noise signature of spin-lifted mode degeneracy. Quantitative agreement is found with a phenomenological model for density-dependent mode splitting. Shot noise measurements of multi-lead quantum-dot structures in the Coulomb blockade regime distill the mechanisms by which Coulomb interaction and quantum indistinguishability correlate electron flow.
Gate-controlled sign reversal of noise cross correlation in two capacitively-coupled dots is observed, and shown to

  18. Analysis of self-oscillating dc-to-dc converters

    NASA Technical Reports Server (NTRS)

    Burger, P.

    1974-01-01

    The basic operational characteristics of dc-to-dc converters are analyzed along with the basic physical characteristics of power converters. A simple class of dc-to-dc power converters is chosen that can satisfy any set of operating requirements, and three different controlling methods in this class are described in detail. Necessary conditions for the stability of these converters are measured through analog computer simulation, and the resulting curves are related to other operational characteristics, such as ripple and regulation. Further research is suggested for the solution of absolute stability and efficient physical design of this class of power converters.
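The steady-state relations that tie ripple and regulation to the operating point can be sketched for one common member of this converter class, an ideal buck stage. The component values below are illustrative assumptions, not figures from the report.

```python
def buck_steady_state(vin, duty, f_sw, L, C):
    """Ideal continuous-conduction buck converter relations."""
    vout = duty * vin                            # transfer ratio Vout = D * Vin
    di = (vin - vout) * duty / (f_sw * L)        # inductor ripple current (A, pk-pk)
    dv = di / (8 * f_sw * C)                     # output ripple voltage (V, pk-pk)
    return vout, di, dv

# Illustrative operating point: 28 V bus, 50% duty, 50 kHz, 100 uH, 100 uF.
vout, di, dv = buck_steady_state(vin=28.0, duty=0.5, f_sw=50e3, L=100e-6, C=100e-6)
print(f"Vout = {vout:.1f} V, ripple current = {di:.2f} A, ripple voltage = {dv*1e3:.1f} mV")
```

Even this idealized model shows the coupling the abstract alludes to: raising the switching frequency or the filter values trades ripple against the size and dynamics that the control method must then stabilize.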

  19. 7 CFR 632.16 - Methods of applying planned land use and treatment.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 6 2013-01-01 2013-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply... administer a contract to perform the required treatment in accordance with 41 CFR chapters I and IV....

  20. 7 CFR 632.16 - Methods of applying planned land use and treatment.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 6 2012-01-01 2012-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply... administer a contract to perform the required treatment in accordance with 41 CFR chapters I and IV....

  1. 7 CFR 632.16 - Methods of applying planned land use and treatment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply... administer a contract to perform the required treatment in accordance with 41 CFR chapters I and IV....

  2. 7 CFR 632.16 - Methods of applying planned land use and treatment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 6 2011-01-01 2011-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply... administer a contract to perform the required treatment in accordance with 41 CFR chapters I and IV....

  3. Test plan for in situ stress measurement by the hydraulic fracturing method in boreholes RRL-6 and DC-4

    SciTech Connect

    Rundle, T.A.

    1983-10-07

    Hydrofracturing tests are to be performed to obtain experimental data regarding the magnitudes and orientations of the principal stresses in candidate repository horizons within the reference repository location (RRL). The tests are to be conducted in boreholes RRL-6 and DC-4 located in the reference repository location on the Hanford Site. This series of tests is to be limited to the performance of a maximum of 16 tests in each borehole. Basalt flows to be tested in borehole RRL-6 include the Rocky Coulee, Cohassett, McCoy Canyon, and Umtanum. Testing in borehole DC-4 will be in the Rocky Coulee and Cohassett basalt flows.

  4. Diagnostics of atmospheric-pressure pulsed-dc discharge with metal and liquid anodes by multiple laser-aided methods

    NASA Astrophysics Data System (ADS)

    Urabe, Keiichiro; Shirai, Naoki; Tomita, Kentaro; Akiyama, Tsuyoshi; Murakami, Tomoyuki

    2016-08-01

    The density and temperature of electrons and key heavy particles were measured in an atmospheric-pressure pulsed-dc helium discharge plasma with a nitrogen molecular impurity, generated using a system with a liquid or metal anode and a metal cathode. To obtain these parameters, we conducted experiments using several laser-aided methods: Thomson scattering spectroscopy to obtain the spatial profiles of electron density and temperature, Raman scattering spectroscopy to obtain the neutral molecular nitrogen rotational temperature, phase-modulated dispersion interferometry to determine the temporal variation of the electron density, and time-resolved laser absorption spectroscopy to analyze the temporal variation of the helium metastable atom density. The electron density and temperature measured by Thomson scattering varied from 2.4 × 10¹⁴ cm⁻³ and 1.8 eV at the center of the discharge to 0.8 × 10¹⁴ cm⁻³ and 1.5 eV near the outer edge of the plasma in the case of the metal anode, respectively. The electron density obtained with the liquid anode was approximately 20% smaller than that obtained with the metal anode, while the electron temperature was not significantly affected by the anode material. The molecular nitrogen rotational temperatures were 1200 K with the metal anode and 1650 K with the liquid anode at the outer edge of the plasma column. The density of helium metastable atoms decreased by a factor of two when using the liquid anode.
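As an order-of-magnitude cross-check (not a calculation from the paper), the reported peak values n_e = 2.4 × 10¹⁴ cm⁻³ and T_e = 1.8 eV imply a sub-micron Debye length and a plasma frequency in the hundred-GHz range:

```python
import math

e = 1.602e-19     # elementary charge (C)
eps0 = 8.854e-12  # vacuum permittivity (F/m)
m_e = 9.109e-31   # electron mass (kg)

n_e = 2.4e14 * 1e6  # reported peak electron density, converted to m^-3
T_e = 1.8           # reported electron temperature (eV)

# Debye length: lambda_D = sqrt(eps0 * kT_e / (n_e e^2)); with T_e in eV,
# kT_e = T_e * e, so one factor of e cancels.
debye = math.sqrt(eps0 * T_e / (e * n_e))

# Electron plasma frequency: f_pe = (1/2pi) sqrt(n_e e^2 / (eps0 m_e)).
f_pe = math.sqrt(n_e * e**2 / (eps0 * m_e)) / (2 * math.pi)

print(f"Debye length ≈ {debye*1e6:.2f} um, plasma frequency ≈ {f_pe/1e9:.0f} GHz")
```

Scales like these are what make laser-aided diagnostics (rather than probes) the practical choice for such small, dense atmospheric-pressure plasmas.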

  5. Near-infrared radiation curable multilayer coating systems and methods for applying same

    DOEpatents

    Bowman, Mark P; Verdun, Shelley D; Post, Gordon L

    2015-04-28

    Multilayer coating systems, methods of applying them, and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating, and curing the coating with near-infrared radiation.

  6. Radiation-Tolerant DC-DC Converters

    NASA Technical Reports Server (NTRS)

    Skutt, Glenn; Sable, Dan; Leslie, Leonard; Graham, Shawn

    2012-01-01

    A document discusses power converters suitable for space use that meet the DSCC MIL-PRF-38534 Appendix G radiation hardness level P classification. A method for qualifying commercially produced electronic parts for DC-DC converters per the Defense Supply Center Columbus (DSCC) radiation hardness assurance requirements was developed. Development and compliance testing of standard hybrid converters suitable for space use were completed for missions with total dose radiation requirements of up to 30 kRad. This innovation provides the same overall performance as standard hybrid converters, but includes assurance of radiation-tolerant design through component and design compliance testing. The availability of design-certified radiation-tolerant converters can significantly reduce total cost and delivery time for power converters for space applications that fit the appropriate DSCC classification (30 kRad).

  7. Full wave dc-to-dc converter using energy storage transformers

    NASA Technical Reports Server (NTRS)

    Moore, E. T.; Wilson, T. G.

    1969-01-01

    Full wave dc-to-dc converter, for an ion thrustor, uses energy storage transformers to provide a method of dc-to-dc conversion and regulation. The converter has a high degree of physical simplicity, is lightweight and has high efficiency.
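The energy-storage-transformer principle behind such converters can be sketched numerically: each switching cycle, the primary stores E = L·Ipk²/2 in the magnetizing inductance and then delivers it to the load. The inductance, voltage, and timing values below are illustrative assumptions, not figures from the paper.

```python
# Energy-storage (flyback-style) transfer, idealized and lossless.
L_mag = 50e-6   # magnetizing inductance (H), assumed
v_in = 28.0     # input voltage (V), assumed
t_on = 10e-6    # switch on-time (s), assumed
f_sw = 20e3     # switching frequency (Hz), assumed

i_pk = v_in * t_on / L_mag       # peak magnetizing current: Ipk = Vin * t_on / L
energy = 0.5 * L_mag * i_pk**2   # energy stored per cycle (J)
p_out = energy * f_sw            # ideal power delivered to the load (W)

print(f"Ipk = {i_pk:.1f} A, E/cycle = {energy*1e3:.2f} mJ, P = {p_out:.2f} W")
```

Regulation in this topology follows directly from the sketch: adjusting the on-time (and hence Ipk) sets the energy packet per cycle, which is what makes the circuit both simple and efficient.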

  8. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  9. Frequency Domain Analysis of Beat-Less Control Method for Converter-Inverter Driving Systems Applied to AC Electric Cars

    NASA Astrophysics Data System (ADS)

    Kimura, Akira

    In inverter-converter driving systems for AC electric cars, the DC input voltage of an inverter contains a ripple component with a frequency that is twice as high as the line voltage frequency, because of a single-phase converter. The ripple component of the inverter input voltage causes pulsations on torques and currents of driving motors. To decrease the pulsations, a beat-less control method, which modifies a slip frequency depending on the ripple component, is applied to the inverter control. In the present paper, the beat-less control method was analyzed in the frequency domain. In the first step of the analysis, transfer functions, which revealed the relationship among the ripple component of the inverter input voltage, the slip frequency, the motor torque pulsation and the current pulsation, were derived with a synchronous rotating model of induction motors. An analysis model of the beat-less control method was then constructed using the transfer functions. The optimal setting of the control method was obtained according to the analysis model. The transfer functions and the analysis model were verified through simulations.
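The origin of the ripple at twice the line frequency can be shown in a few lines: the instantaneous power drawn from a single-phase line at unity power factor is p(t) = VI·sin²(ωt) = (VI/2)(1 − cos 2ωt), so the DC link sees a pulsation at 2f. Amplitudes are normalized; the 50 Hz line frequency is an illustrative choice.

```python
import math

f_line = 50.0                 # line frequency (Hz), illustrative
w = 2 * math.pi * f_line
V, I = 1.0, 1.0               # normalized voltage/current amplitudes

# Sample one full line period of instantaneous power p(t) = V I sin^2(wt).
ts = [k / (f_line * 400) for k in range(400)]
p = [V * math.sin(w * t) * I * math.sin(w * t) for t in ts]

mean_p = sum(p) / len(p)              # average power = VI/2
ripple_peak = max(p) - mean_p         # pulsation amplitude = VI/2, at 2*f_line
print(f"average power = {mean_p:.3f}, ripple amplitude = {ripple_peak:.3f} (at {2*f_line:.0f} Hz)")
```

This 2f power pulsation is the ripple component on the inverter's DC input that the beat-less slip-frequency modification is designed to counteract.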

  10. Determination of efficiencies, loss mechanisms, and performance degradation factors in chopper controlled dc vehicle motors. Section 2: The time dependent finite element modeling of the electromagnetic field in electrical machines: Methods and applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hamilton, H. B.; Strangas, E.

    1980-01-01

    The time dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of electrical machines. The time-dependent finite-element method was used in combination with a likewise time-dependent construction of the grid for the air gap region. The Maxwell stress tensor was used to calculate the airgap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.

  11. Effects of the duty ratio on the niobium oxide film deposited by pulsed-DC magnetron sputtering methods.

    PubMed

    Eom, Ji Mi; Oh, Hyun Gon; Cho, Il Hwan; Kwon, Sang Jik; Cho, Eou Sik

    2013-11-01

    Niobium oxide (Nb2O5) films were deposited on p-type Si wafers and soda-lime glasses at room temperature using an in-line pulsed-DC magnetron sputtering system with various duty ratios. The different duty ratios were obtained by varying the reverse voltage time of the pulsed DC power from 0.5 to 2.0 µs at a fixed frequency of 200 kHz. From the structural and optical characteristics of the sputtered NbOx films, it was possible to obtain more uniform and coherent NbOx films at longer reverse voltage times, as a result of the cleaning effect on the Nb2O5 target surface. The electrical characteristics of metal-insulator-semiconductor (MIS) devices fabricated with the NbOx films show that the leakage currents are influenced by the reverse voltage time and the Schottky barrier diode characteristics. PMID:24245329
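The relation between reverse voltage time and duty ratio at a fixed pulse frequency is simple arithmetic: the reverse interval subtracts from the sputtering portion of each period. The sketch below uses the 200 kHz frequency and the 0.5 to 2.0 µs range quoted above.

```python
f = 200e3            # pulse frequency (Hz), as quoted
period = 1.0 / f     # 5.0 microseconds per pulse period

duties = {}
for t_rev_us in (0.5, 1.0, 1.5, 2.0):
    t_rev = t_rev_us * 1e-6
    duties[t_rev_us] = 1.0 - t_rev / period   # duty ratio = 1 - t_rev / T
    print(f"t_rev = {t_rev_us} us -> duty ratio = {duties[t_rev_us]:.0%}")
```

So the quoted reverse-time sweep corresponds to duty ratios from 90% down to 60%, with the longer reverse intervals giving the target-cleaning effect described in the abstract.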

  12. High-mobility ZrInO thin-film transistor prepared by an all-DC-sputtering method at room temperature

    PubMed Central

    Xiao, Peng; Dong, Ting; Lan, Linfeng; Lin, Zhenguo; Song, Wei; Luo, Dongxiang; Xu, Miao; Peng, Junbiao

    2016-01-01

    Thin-film transistors (TFTs) with a zirconium-doped indium oxide (ZrInO) semiconductor were successfully fabricated by an all-DC-sputtering method at room temperature. The ZrInO TFT, without any intentional annealing step, exhibited a high saturation mobility of 25.1 cm²V⁻¹s⁻¹. The threshold voltage shift was only 0.35 V for the ZrInO TFT under positive gate bias stress for 1 hour. Detailed studies showed that the room-temperature ZrInO thin film was in the amorphous state with low carrier density because of the strong bonding strength of Zr-O. The room-temperature process is attractive for its compatibility with almost all kinds of flexible substrates, and the DC sputtering process improves production efficiency and reduces fabrication cost. PMID:27118177
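Saturation mobility figures like the one above are conventionally extracted from the slope of √I_D versus V_G in the saturation regime, where I_D = (W/2L)·μ·C_i·(V_G − V_T)². The sketch below synthesizes an ideal transfer curve and recovers the mobility by a linear fit; the device geometry and gate capacitance are assumed values, not those of the paper.

```python
import math

# Assumed device parameters (illustrative only).
W, L = 1000e-6, 100e-6       # channel width/length (m)
Ci = 1.5e-4                  # gate capacitance per area (F/m^2)
mu_true, Vt = 25.1e-4, 1.0   # mobility (m^2/Vs) and threshold (V) used to make data

# Synthetic saturation-regime transfer curve.
vgs = [2.0 + 0.5 * k for k in range(9)]
ids = [(W / (2 * L)) * mu_true * Ci * (v - Vt) ** 2 for v in vgs]

# Least-squares slope of sqrt(Id) vs Vg, then mu = 2 L slope^2 / (W Ci).
ys = [math.sqrt(i) for i in ids]
n = len(vgs)
mx, my = sum(vgs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(vgs, ys)) / sum((x - mx) ** 2 for x in vgs)
mu = 2 * L * slope ** 2 / (W * Ci)

print(f"extracted mobility = {mu * 1e4:.1f} cm^2/Vs")  # ≈ 25.1
```

On ideal data the fit recovers the input mobility exactly; on measured curves the same procedure averages out noise in the quadratic region.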

  13. High-mobility ZrInO thin-film transistor prepared by an all-DC-sputtering method at room temperature.

    PubMed

    Xiao, Peng; Dong, Ting; Lan, Linfeng; Lin, Zhenguo; Song, Wei; Luo, Dongxiang; Xu, Miao; Peng, Junbiao

    2016-01-01

    Thin-film transistors (TFTs) with a zirconium-doped indium oxide (ZrInO) semiconductor were successfully fabricated by an all-DC-sputtering method at room temperature. The ZrInO TFT, without any intentional annealing step, exhibited a high saturation mobility of 25.1 cm²V⁻¹s⁻¹. The threshold voltage shift was only 0.35 V for the ZrInO TFT under positive gate bias stress for 1 hour. Detailed studies showed that the room-temperature ZrInO thin film was in the amorphous state with low carrier density because of the strong bonding strength of Zr-O. The room-temperature process is attractive for its compatibility with almost all kinds of flexible substrates, and the DC sputtering process improves production efficiency and reduces fabrication cost. PMID:27118177

  14. High-mobility ZrInO thin-film transistor prepared by an all-DC-sputtering method at room temperature

    NASA Astrophysics Data System (ADS)

    Xiao, Peng; Dong, Ting; Lan, Linfeng; Lin, Zhenguo; Song, Wei; Luo, Dongxiang; Xu, Miao; Peng, Junbiao

    2016-04-01

    Thin-film transistors (TFTs) with a zirconium-doped indium oxide (ZrInO) semiconductor were successfully fabricated by an all-DC-sputtering method at room temperature. The ZrInO TFT, without any intentional annealing step, exhibited a high saturation mobility of 25.1 cm²V⁻¹s⁻¹. The threshold voltage shift was only 0.35 V for the ZrInO TFT under positive gate bias stress for 1 hour. Detailed studies showed that the room-temperature ZrInO thin film was in the amorphous state with low carrier density because of the strong bonding strength of Zr-O. The room-temperature process is attractive for its compatibility with almost all kinds of flexible substrates, and the DC sputtering process improves production efficiency and reduces fabrication cost.

  15. Effect of oxidizer on grain size and low temperature DC electrical conductivity of tin oxide nanomaterial synthesized by gel combustion method

    SciTech Connect

    Rajeeva, M. P. Jayanna, H. S. Ashok, R. L.; Naveen, C. S.; Bothla, V. Prasad

    2014-04-24

    Nanocrystalline tin oxide with different grain sizes was synthesized using a gel combustion method by varying the fuel (C{sub 6}H{sub 8}O{sub 7}) to oxidizer (HNO{sub 3}) molar ratio while keeping the amount of fuel constant. The prepared samples were characterized using X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectroscopy (EDAX). The effect of the fuel to oxidizer molar ratio in the gel combustion method was investigated by inspecting the grain size of the nano SnO{sub 2} powder. The grain size was found to decrease as the amount of oxidizer increased from 0 to 6 moles in steps of 2. The X-ray diffraction patterns of the calcined product showed the formation of high-purity tetragonal tin (IV) oxide with grain sizes in the range of 12 to 31 nm, calculated by Scherrer's formula. The molar ratio and temperature dependence of the DC electrical conductivity of the SnO{sub 2} nanomaterial were studied using a Keithley source meter. The DC electrical conductivity of the SnO{sub 2} nanomaterial increases with temperature from 80 K to 300 K. From the study it was observed that the DC electrical conductivity of the SnO{sub 2} nanomaterial decreases with grain size at constant temperature.
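The Scherrer grain-size estimate used above is d = Kλ/(β·cos θ), with β the peak FWHM in radians. The peak width and position below are illustrative inputs chosen near a typical SnO2 (110) reflection, not values from the paper.

```python
import math

K = 0.9                 # Scherrer shape factor (dimensionless)
lam = 1.5406e-10        # Cu K-alpha wavelength (m)
fwhm_deg = 0.45         # peak FWHM (degrees 2-theta), assumed
two_theta_deg = 26.6    # peak position (degrees 2-theta), assumed

beta = math.radians(fwhm_deg)           # FWHM in radians
theta = math.radians(two_theta_deg / 2) # Bragg angle
d = K * lam / (beta * math.cos(theta))  # Scherrer grain size (m)

print(f"grain size ≈ {d*1e9:.1f} nm")
```

A half-degree-wide peak gives a grain size near 18 nm, squarely inside the 12 to 31 nm range the authors report; broader peaks map to smaller grains.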

  16. Efficient Design in a DC to DC Converter Unit

    NASA Technical Reports Server (NTRS)

    Bruemmer, Joel E.; Williams, Fitch R.; Schmitz, Gregory V.

    2002-01-01

    Space flight hardware requires high power conversion efficiencies due to limited power availability and the weight penalties of cooling systems. The International Space Station (ISS) Electric Power System (EPS) DC-DC Converter Unit (DDCU) power converter is no exception. This paper explores the design methods and tradeoffs that were utilized to accomplish high efficiency in the DDCU. An isolating DC to DC converter was selected for the ISS power system because of requirements for separate primary and secondary grounds and for a well-regulated secondary output voltage derived from a widely varying input voltage. A flyback-current-fed push-pull topology, or improved Weinberg circuit, was chosen for this converter because of its potential for high efficiency and reliability. To enhance efficiency, a non-dissipative snubber circuit for the very-low-Rds-on Field Effect Transistors (FETs) was utilized, redistributing the energy that could be wasted during the switching cycle of the power FETs. A unique, low-impedance connection system was utilized to improve contact resistance over a bolted connection. For improved consistency in performance and to lower internal wiring inductance and losses, a planar bus system is employed. All of these choices contributed to the design of a 6.25 kW regulated dc-to-dc converter that is 95 percent efficient. The methodology used in the design of this DC to DC Converter Unit may be directly applicable to other systems that require a conservative approach to efficient power conversion and distribution.
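The thermal stakes behind the quoted figures are worth making explicit: a 6.25 kW converter at 95% efficiency must still dissipate roughly 330 W, which is what the snubber, connection, and bus choices above are chipping away at.

```python
# Loss budget implied by the reported output power and efficiency.
p_out = 6250.0   # rated output power (W), as quoted
eff = 0.95       # reported efficiency

p_in = p_out / eff        # required input power
p_loss = p_in - p_out     # power dissipated as heat
print(f"input ≈ {p_in:.0f} W, dissipated ≈ {p_loss:.0f} W")
```

Each additional point of efficiency at this power level removes on the order of 65 W of heat that the spacecraft cooling system would otherwise have to reject.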

  17. Early Oscillation Detection Technique for Hybrid DC/DC Converters

    NASA Technical Reports Server (NTRS)

    Wang, Bright L.

    2011-01-01

    normal operation. This technique eliminates the probing problem of a gain/phase margin method by connecting the power input to a spectral analyzer. Therefore, it is able to evaluate stability for all kinds of hybrid DC/DC converters with or without remote sense pins, and is suitable for real-time and in-circuit testing. This frequency-domain technique is more sensitive to detect oscillation at early stage than the time-domain method using an oscilloscope.

  18. Analysis of Monte Carlo methods applied to blackbody and lower emissivity cavities.

    PubMed

    Pahl, Robert J; Shannon, Mark A

    2002-02-01

    Monte Carlo methods are often applied to the calculation of the apparent emissivities of blackbody cavities. However, for cavities with complex as well as some commonly encountered geometries, the emission Monte Carlo method experiences problems of convergence. The emission and absorption Monte Carlo methods are compared on the basis of ease of implementation and convergence speed when applied to blackbody sources. A new method to determine solution convergence compatible with both methods is developed, and the convergence speeds of the two methods are compared through the application of both methods to a right-circular cylinder cavity. It is shown that the absorption method converges faster and is easier to implement than the emission method when applied to most blackbody and lower emissivity cavities. PMID:11993915
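The absorption method favored by the paper can be sketched on a deliberately simplified cavity: a diffuse spherical wall of emissivity eps with an opening, where each reflected ray is assumed to escape with a fixed probability f per bounce (a crude geometric assumption, not the paper's right-circular cylinder ray trace). This toy model has a closed-form answer, which makes the Monte Carlo easy to check.

```python
import random

def apparent_emissivity(eps, f, n_rays=200000, seed=7):
    """Absorption-method MC: trace entering rays until absorbed or escaped.
    Each wall hit absorbs with probability eps; a reflected ray escapes
    through the opening with probability f (simplifying assumption)."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_rays):
        while True:
            if rng.random() < eps:   # absorbed at the wall
                absorbed += 1
                break
            if rng.random() < f:     # reflected ray leaves via the opening
                break
    return absorbed / n_rays

eps_a = apparent_emissivity(eps=0.7, f=0.05)
analytic = 0.7 / (1 - (1 - 0.7) * (1 - 0.05))  # geometric series for this model
print(f"MC ≈ {eps_a:.4f}, analytic ≈ {analytic:.4f}")
```

Note how a wall emissivity of 0.7 already yields an apparent emissivity near 0.98: repeated internal reflections are exactly why cavities make good blackbody sources, and why the absorption estimator converges quickly even for lower-emissivity walls.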

  19. Single event AC - DC electrospraying

    NASA Astrophysics Data System (ADS)

    Stachewicz, U.; Dijksman, J. F.; Marijnissen, J. C. M.

    2008-12-01

    Electrospraying is an innovative method to deposit very small amounts of, for example, biofluids (far less than 1 pl) that include DNA or protein molecules. An electric potential is applied between a nozzle filled with liquid and a counter electrode placed at 1-2 millimeters distance from the nozzle. In our set-up we use an AC field superposed on a DC field to control the droplet generation process. Our approach is to create single events of electrospraying triggered by one single AC pulse. During this pulse, the equilibrium meniscus (determined by surface tension, static pressure and the DC field) of the liquid changes rapidly into a cone and subsequently into a jet formed at the cone apex. Next, the jet breaks up into fine droplets and the spraying stops. The meniscus then returns to its equilibrium shape. So far we have obtained a stable and reproducible single-event process for ethanol and for ethylene glycol with water using glass pipettes. The results will be used to generate droplets on demand in a controlled way and deposit them on a pre-defined place on the substrate.

  20. High-Efficiency dc/dc Converter

    NASA Technical Reports Server (NTRS)

    Sturman, J.

    1982-01-01

    A high-efficiency dc/dc converter has been developed that provides the commonly used voltages of plus or minus 12 Volts from an unregulated dc source of 14 to 40 Volts. Unique features of the converter are its high efficiency at low power levels and its ability to provide an output either larger or smaller than the input voltage.

  1. Optimum Design of CMOS DC-DC Converter for Mobile Applications

    NASA Astrophysics Data System (ADS)

    Katayama, Yasushi; Edo, Masaharu; Denta, Toshio; Kawashima, Tetsuya; Ninomiya, Tamotsu

    In recent years, low-output-power CMOS DC-DC converters, which integrate the power-stage MOSFETs and a PWM controller in a CMOS process, have been used in many mobile applications. In this paper, we propose a method for calculating CMOS DC-DC converter efficiency and report an optimum design of the CMOS DC-DC converter based on this method. With this method, converter efficiencies are calculated directly from the converter specifications, the power-stage MOSFET dimensions and the device parameters. It can therefore be used to optimize the CMOS DC-DC converter design, such as the power-stage MOSFET dimensions and the switching frequency. The efficiency calculated by the proposed method agrees well with experimental results.
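The kind of efficiency model described, where losses follow directly from device dimensions, can be sketched with the classic width tradeoff: conduction loss falls as 1/width while gate-charge loss grows with width. All device parameters below are illustrative assumptions, not the paper's process data.

```python
def efficiency(width_mm, f_sw, vout=1.8, i_out=0.3,
               r_on_per_mm=0.5, c_gate_per_mm=1e-9, v_drive=3.6):
    """Toy CMOS buck efficiency model (assumed parameters).
    Conduction loss lumps both power FETs; switching loss is gate charge only."""
    r_on = r_on_per_mm / width_mm                             # on-resistance (ohm)
    p_cond = i_out ** 2 * r_on                                # conduction loss (W)
    p_gate = c_gate_per_mm * width_mm * v_drive ** 2 * f_sw   # gate-drive loss (W)
    p_out = vout * i_out
    return p_out / (p_out + p_cond + p_gate)

# Sweep FET width at 1 MHz: too narrow wastes conduction, too wide wastes gate charge.
best = max(((w, efficiency(w, 1e6)) for w in (0.5, 1, 2, 4, 8, 16)),
           key=lambda t: t[1])
print(f"best width = {best[0]} mm, efficiency = {best[1]:.1%}")
```

The optimum sits where the two loss terms balance, which is exactly the kind of dimension-versus-frequency tradeoff the proposed calculation method is meant to resolve at design time.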

  2. An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy

    ERIC Educational Resources Information Center

    Gamso, Nancy M.

    2011-01-01

    The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…

  3. An Optimal Control Strategy for DC Bus Voltage Regulation in Photovoltaic System with Battery Energy Storage

    PubMed Central

    Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M. A.

    2014-01-01

    This paper presents an evaluation of an optimal DC bus voltage regulation strategy for grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of DC bus voltage of the PV/BES system can be enhanced as compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid side VSC (G-VSC), two control methods, namely, the voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied for the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters are obtained for each of the power converters for comparison purposes. The PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods. PMID:24883374

  4. An optimal control strategy for DC bus voltage regulation in photovoltaic system with battery energy storage.

    PubMed

    Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M A

    2014-01-01

    This paper presents an evaluation of an optimal DC bus voltage regulation strategy for grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of DC bus voltage of the PV/BES system can be enhanced as compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid side VSC (G-VSC), two control methods, namely, the voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied for the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters is obtained for each of the power converters for comparison purposes. The PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods. PMID:24883374
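
    The parameter-optimization step can be illustrated with a toy version of the problem. The sketch below simulates a PI loop around a generic first-order plant standing in for the DC-bus dynamics and minimizes the integral of squared error over a small gain grid; the plant model, gain values and the crude grid search are all assumptions standing in for the paper's PSCAD/EMTDC models and simplex optimizer.

    ```python
    def ise_cost(kp, ki, r=1.0, a=5.0, dt=1e-3, steps=2000):
        """Integral of squared error for a PI loop around the first-order
        plant dv/dt = a*(u - v), a stand-in for the DC-bus dynamics."""
        v = integ = cost = 0.0
        for _ in range(steps):
            e = r - v
            integ += e * dt
            u = kp * e + ki * integ        # PI control law
            v += a * (u - v) * dt          # forward-Euler plant update
            cost += e * e * dt
        return cost

    # crude grid search standing in for the simplex (Nelder-Mead) optimizer
    best = min((ise_cost(kp, ki), kp, ki)
               for kp in (0.5, 1.0, 2.0, 4.0)
               for ki in (0.5, 2.0, 8.0))
    ```

    A real simplex optimizer would refine the gains continuously instead of picking from a grid, but the cost function it minimizes has the same shape as the one evaluated here.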

  5. DC-Compensated Current Transformer.

    PubMed

    Ripka, Pavel; Draxler, Karel; Styblíková, Renata

    2016-01-01

    Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830
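
    The compensation idea can be sketched as a discrete-time feedback loop: a fluxgate-style sensor reads the residual DC magnetization and an integral-type controller drives a compensation winding until it is nulled. The loop structure, names and values below are illustrative assumptions, not the authors' implementation.

    ```python
    def compensate_dc(i_dc, turns=600, gain=0.2, steps=200):
        """Digital feedback loop: step the compensation-winding current until
        the sensed residual DC ampere-turns are nulled."""
        i_comp = 0.0
        for _ in range(steps):
            residual = i_dc - i_comp * turns      # fluxgate-sensed DC offset
            i_comp += gain * residual / turns     # integral-type update
        return i_comp * turns                     # compensating ampere-turns

    null_at = compensate_dc(i_dc=2.0)
    ```

    The loop contracts the residual by a factor of (1 - gain) each step, and the converged compensation value is itself a measurement of the DC component, which is how a scheme like this can report the DC current simultaneously.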

  6. Preparation of silicon carbide SiC-based nanopowders by the aerosol-assisted synthesis and the DC thermal plasma synthesis methods

    SciTech Connect

    Czosnek, Cezary; Bućko, Mirosław M.; Janik, Jerzy F.; Olejniczak, Zbigniew; Bystrzejewski, Michał; Łabędź, Olga; Huczko, Andrzej

    2015-03-15

    Highlights: • Make-up of the SiC-based nanopowders is a function of the C:Si:O ratio in the precursor. • Two-stage aerosol-assisted synthesis offers conditions close to equilibrium. • DC thermal plasma synthesis yields kinetically controlled SiC products. - Abstract: Nanosized SiC-based powders were prepared from selected liquid-phase organosilicon precursors by the aerosol-assisted synthesis, the DC thermal plasma synthesis, and a combination of the two methods. The two-stage aerosol-assisted synthesis method ultimately provides conditions close to thermodynamic equilibrium. The single-stage thermal plasma method is characterized by short particle residence times in the reaction zone, which can lead to kinetically controlled products. The by-products and final nanopowders were characterized by powder XRD, infrared spectroscopy FT-IR, scanning electron microscopy SEM, and ²⁹Si MAS NMR spectroscopy. BET specific surface areas of the products were determined by standard physical adsorption of nitrogen at 77 K. The major component from all synthesis routes was found to be cubic silicon carbide β-SiC with average crystallite sizes ranging from a few to tens of nanometers. In some cases it was accompanied by free carbon, elemental silicon or silica nanoparticles. The final mesoporous β-SiC-based nanopowders have potential as affordable catalyst supports.

  7. Method of error analysis for phase-measuring algorithms applied to photoelasticity.

    PubMed

    Quiroga, J A; González-Cano, A

    1998-07-10

    We present a method of error analysis for phase-measuring algorithms applied to photoelasticity. We calculate the contributions to the measurement error of the different elements of a circular polariscope as perturbations of the Jones matrices associated with each element. The Jones matrix of the real polariscope can then be calculated as the sum of the nominal matrix and a series of contributions that depend on the errors associated with each element separately. We apply this method to the analysis of phase-measuring algorithms for the determination of isoclinics and isochromatics, including comparisons with real measurements. PMID:18285900

  8. Algebraic parameters identification of DC motors: methodology and analysis

    NASA Astrophysics Data System (ADS)

    Becedas, J.; Mamani, G.; Feliu, V.

    2010-10-01

    A fast, non-asymptotic, algebraic parameter identification method is applied to an uncertain DC motor to estimate its uncertain parameters: the viscous friction coefficient and the inertia. In this work, the methodology is developed and analysed: its convergence is examined, a comparative study between the traditional recursive least squares method and the algebraic identification method is carried out, and the behaviour of the estimator in a noisy system is analysed. Computer simulations were carried out to validate the suitability of the identification algorithm.
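
    The identification problem itself is easy to reproduce in miniature. The sketch below simulates the standard first-order DC motor mechanical model and recovers the parameter ratios by batch least squares, a non-recursive cousin of the recursive least squares baseline the paper compares against; the motor values and excitation signal are made up, and noise is omitted.

    ```python
    import math

    def identify_motor(dt=1e-3, steps=3000, K=0.5, J=0.01, B=0.002):
        """Simulate a DC motor J*dw/dt = K*i - B*w, then recover K/J and B/J
        by least squares on dw/dt = (K/J)*i - (B/J)*w."""
        w, rows = 0.0, []
        for k in range(steps):
            i = 1.0 + 0.5 * math.sin(2.0 * math.pi * 5.0 * k * dt)  # rich input
            dw = (K * i - B * w) / J
            rows.append((i, w, dw))
            w += dw * dt
        # 2x2 normal equations for theta = (K/J, -B/J) in dw = th0*i + th1*w
        s_ii = sum(r[0] * r[0] for r in rows)
        s_iw = sum(r[0] * r[1] for r in rows)
        s_ww = sum(r[1] * r[1] for r in rows)
        s_id = sum(r[0] * r[2] for r in rows)
        s_wd = sum(r[1] * r[2] for r in rows)
        det = s_ii * s_ww - s_iw * s_iw
        th0 = (s_id * s_ww - s_iw * s_wd) / det
        th1 = (s_ii * s_wd - s_iw * s_id) / det
        return th0, -th1  # estimates of K/J and B/J

    kj_hat, bj_hat = identify_motor()
    ```

    With noise-free data the estimates match the true ratios K/J = 50 and B/J = 0.2; the algebraic method's advantage, per the abstract, lies in fast non-asymptotic convergence when such ideal conditions do not hold.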

  9. 7 CFR 632.16 - Methods of applying planned land use and treatment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 6 2014-01-01 2014-01-01 false Methods of applying planned land use and treatment. 632.16 Section 632.16 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE LONG TERM CONTRACTING RURAL ABANDONED MINE PROGRAM Qualifications § 632.16 Methods of...

  10. Applying of interactive methods for astronomy education in a school project "International space colony TANHGRA"

    NASA Astrophysics Data System (ADS)

    Radeva, Veselka S.

    Several interactive methods applied in astronomy education during the creation of a project about a space colony are presented. The methods Pyramid, Brainstorm, Snow-slip (Snowball) and Aquarium give schoolchildren the opportunity to understand and absorb a large body of astronomical knowledge.

  11. Methods for Smoothing Expectancy Tables Applied to the Prediction of Success in College

    ERIC Educational Resources Information Center

    Perrin, David W.; Whitney, Douglas R.

    1976-01-01

    The gains in accuracy resulting from applying any of the smoothing methods appear sufficient to justify the suggestion that all expectancy tables used by colleges for admission, guidance, or planning purposes should be smoothed. These methods on the average, reduce the criterion measure (an index of inaccuracy) by 30 percent. (Author/MV)

  12. METHODS FOR EVALUATING THE BIOLOGICAL IMPACT OF POTENTIALLY TOXIC WASTE APPLIED TO SOILS

    EPA Science Inventory

    The study was designed to evaluate two methods that can be used to estimate the biological impact of organics and inorganics that may be in wastes applied to land for treatment and disposal. The two methods were the contact test and the artificial soil test. The contact test is a...

  13. An Empirical Study of Applying Associative Method in College English Vocabulary Learning

    ERIC Educational Resources Information Center

    Zhang, Min

    2014-01-01

    Vocabulary is the basis of any language learning. To many Chinese non-English majors it is difficult to memorize English words. This paper applied associative method in presenting new words to them. It is found that associative method did receive a better result both in short-term and long-term retention of English words. Compared with the…

  14. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
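
    A minimal MPPT simulation of the kind being compared can be sketched quickly. The PV curve below is a toy single-exponential shape, not the single-diode datasheet model the paper builds, and the perturb-and-observe tracker is the simplest of the MPPT methods such a framework would evaluate; all values are illustrative.

    ```python
    import math

    def pv_power(v, v_oc=21.0, i_sc=3.8):
        """Toy single-exponential PV curve; the paper instead extracts a
        single-diode model from the manufacturer's datasheet."""
        i = i_sc * (1.0 - math.exp((v - v_oc) / 1.5))
        return max(v * max(i, 0.0), 0.0)

    def perturb_observe(v0=10.0, dv=0.2, steps=200):
        """Minimal perturb-and-observe MPPT: keep stepping the operating
        voltage in whichever direction last increased the power."""
        v, step = v0, dv
        p_prev = pv_power(v)
        for _ in range(steps):
            v += step
            p = pv_power(v)
            if p < p_prev:      # power dropped, so reverse the perturbation
                step = -step
            p_prev = p
        return v, p_prev

    v_mp, p_mp = perturb_observe()
    ```

    The tracker settles into the characteristic three-point oscillation around the maximum power point; a fair comparison of MPPT methods, as the abstract notes, then requires the panel model to respond realistically to irradiation and temperature changes.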

  15. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  16. Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods

    ERIC Educational Resources Information Center

    Maase, Eric L.; High, Karen A.

    2008-01-01

    "Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…

  17. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    NASA Astrophysics Data System (ADS)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education by applying project management techniques. We practiced our management method in the seminar “Microcomputer Seminar” for 3rd-grade students in the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in managing the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.

  18. Analysis of Preconditioning and Relaxation Operators for the Discontinuous Galerkin Method Applied to Diffusion

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Shu, Chi-Wang

    2001-01-01

    The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator decreases dramatically as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time step.

  19. A study of two statistical methods as applied to shuttle solid rocket booster expenditures

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.; Huang, Y.; Graves, M.

    1974-01-01

    The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
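
    The Monte Carlo side of such a study is easy to reproduce in miniature. The sketch below draws a random loss on each launch and retires a surviving booster after a maximum number of consecutive uses; the attrition rate and the single-booster bookkeeping are illustrative simplifications, not the paper's actual parameters.

    ```python
    import random

    def boosters_needed(n_launches=440, p_loss=0.1, max_uses=20, seed=1):
        """Monte Carlo count of boosters expended over a mission: each launch
        loses the booster with probability p_loss, and a surviving booster is
        retired after max_uses consecutive flights."""
        random.seed(seed)
        used, uses = 1, 0
        for _ in range(n_launches):
            uses += 1
            if random.random() < p_loss or uses >= max_uses:
                used += 1      # booster lost or retired: bring in a fresh one
                uses = 0
        return used

    n = boosters_needed()
    ```

    Averaging this count over many seeds gives the expenditure statistics; the state probability method reaches the same distribution analytically, which is why the paper found it faster and more accurate once formulated.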

  20. Puncture point-traction method: A novel method applied for right internal jugular vein catheterization

    PubMed Central

    WU, TIANLIANG; ZANG, HONGCHENG

    2016-01-01

    The ultrasound probe and advancement of the needle during real-time ultrasound-assisted guidance of catheterization of the right internal jugular vein (RIJV) tend to collapse the vein, which reduces the success rate of the procedure. We have developed a novel puncture point-traction method (PPTM) to facilitate RIJV cannulation. The present study examined whether this method facilitated the performance of RIJV catheterization in anesthetized patients. In this study, 120 patients were randomly assigned to a group in which PPTM was performed (PPTM group, n=60) or a group in which it was not performed (non-PPTM group, n=60). One patient was excluded because of internal carotid artery puncture and 119 patients remained for analysis. The cross-sectional area (CSA), anteroposterior diameter (AD) and transverse diameter (TD) of the RIJV at the cricoid cartilage level following the induction of anesthesia and during catheterization were measured, and the number with obvious loss of resistance (NOLR), the number with easy aspiration of blood into syringe (NEABS) during advancement of the needle, and the number of first-pass punctures (NFPP) during catheterization were determined. In the non-PPTM group, the CSA was smaller during catheterization compared with that following the induction of anesthesia (P<0.01). In the PPTM group compared with the non-PPTM group during catheterization, the CSA was larger (P<0.01) and the AD (P<0.01) and TD (P<0.05) were wider; NOLR (P<0.01), NEABS (P<0.01) and NFPP (P<0.01) increased significantly. The findings from this study confirmed that the PPTM facilitated catheterization of the RIJV and improved the success rate of RIJV catheterization in anesthetized patients in the supine position. PMID:27347054

  1. The application of standardized control and interface circuits to three dc to dc power converters.

    NASA Technical Reports Server (NTRS)

    Yu, Y.; Biess, J. J.; Schoenfeld, A. D.; Lalli, V. R.

    1973-01-01

    Standardized control and interface circuits were applied to the three most commonly used dc to dc converters: the buck-boost converter, the series-switching buck regulator, and the pulse-modulated parallel inverter. The two-loop ASDTIC regulation control concept was implemented by using a common analog control signal processor and a novel digital control signal processor. This resulted in control circuit standardization and superior static and dynamic performance of the three dc-to-dc converters. Power components stress control, through active peak current limiting and recovery of switching losses, was applied to enhance reliability and converter efficiency.

  2. Real-time DC-dynamic biasing method for switching time improvement in severely underdamped fringing-field electrostatic MEMS actuators.

    PubMed

    Small, Joshua; Fruehling, Adam; Garg, Anurag; Liu, Xiaoguang; Peroulis, Dimitrios

    2014-01-01

    Mechanically underdamped electrostatic fringing-field MEMS actuators are well known for their fast switching operation in response to a unit step input bias voltage. However, the tradeoff for the improved switching performance is a relatively long settling time to reach each gap height in response to various applied voltages. Transient applied bias waveforms are employed to facilitate reduced switching times for electrostatic fringing-field MEMS actuators with high mechanical quality factors. Removing the underlying substrate of the fringing-field actuator creates the low mechanical damping environment necessary to effectively test the concept. The removal of the underlying substrate also has a substantial positive effect on the reliability of the device with regard to failure due to stiction. Although DC-dynamic biasing is useful in improving settling time, the required slew rates for typical MEMS devices may place aggressive requirements on the charge pumps for fully-integrated on-chip designs. Additionally, there may be challenges integrating the substrate removal step into the back-end-of-line commercial CMOS processing steps. Experimental validation of fabricated actuators demonstrates an improvement of 50x in switching time when compared to conventional step biasing results. Compared to theoretical calculations, the experimental results are in good agreement. PMID:25145811

  3. Real-Time DC-dynamic Biasing Method for Switching Time Improvement in Severely Underdamped Fringing-field Electrostatic MEMS Actuators

    PubMed Central

    Small, Joshua; Fruehling, Adam; Garg, Anurag; Liu, Xiaoguang; Peroulis, Dimitrios

    2014-01-01

    Mechanically underdamped electrostatic fringing-field MEMS actuators are well known for their fast switching operation in response to a unit step input bias voltage. However, the tradeoff for the improved switching performance is a relatively long settling time to reach each gap height in response to various applied voltages. Transient applied bias waveforms are employed to facilitate reduced switching times for electrostatic fringing-field MEMS actuators with high mechanical quality factors. Removing the underlying substrate of the fringing-field actuator creates the low mechanical damping environment necessary to effectively test the concept. The removal of the underlying substrate also has a substantial positive effect on the reliability of the device with regard to failure due to stiction. Although DC-dynamic biasing is useful in improving settling time, the required slew rates for typical MEMS devices may place aggressive requirements on the charge pumps for fully-integrated on-chip designs. Additionally, there may be challenges integrating the substrate removal step into the back-end-of-line commercial CMOS processing steps. Experimental validation of fabricated actuators demonstrates an improvement of 50x in switching time when compared to conventional step biasing results. Compared to theoretical calculations, the experimental results are in good agreement. PMID:25145811

  4. Applying Item Response Theory Methods to Design a Learning Progression-Based Science Assessment

    ERIC Educational Resources Information Center

    Chen, Jing

    2012-01-01

    Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…

  5. A Method of Measuring the Costs and Benefits of Applied Research.

    ERIC Educational Resources Information Center

    Sprague, John W.

    The Bureau of Mines studied the application of the concepts and methods of cost-benefit analysis to the problem of ranking alternative applied research projects. Procedures for measuring the different classes of project costs and benefits, both private and public, are outlined, and cost-benefit calculations are presented, based on the criteria of…

  6. Nonuniform covering method as applied to multicriteria optimization problems with guaranteed accuracy

    NASA Astrophysics Data System (ADS)

    Evtushenko, Yu. G.; Posypkin, M. A.

    2013-02-01

    The nonuniform covering method is applied to multicriteria optimization problems. The ɛ-Pareto set is defined, and its properties are examined. An algorithm for constructing an ɛ-Pareto set with guaranteed accuracy ɛ is described. The efficiency of implementing this approach is discussed, and numerical results are presented.
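
    One common reading of the ε-Pareto idea is a dominance filter with an ε slack: a point is discarded if some other point is, within ε, at least as good in every objective. The sketch below implements that filter for minimization problems; the exact definition used in the paper may differ, so treat the code and the sample points as an illustration only.

    ```python
    def dominates(q, p, eps=0.0):
        """q eps-dominates p (minimization): q is within eps of being at
        least as good in every objective, and strictly so in at least one."""
        return (all(qi <= pi + eps for qi, pi in zip(q, p))
                and any(qi < pi + eps for qi, pi in zip(q, p)))

    def eps_pareto(points, eps=0.1):
        """Keep the points that are not eps-dominated by any other point."""
        return [p for p in points
                if not any(dominates(q, p, eps) for q in points if q != p)]

    front = eps_pareto([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0),
                        (2.5, 2.5), (1.5, 3.0)])
    ```

    The covering method's contribution is generating candidate points with a guaranteed accuracy ε in the first place; the filter above is only the final bookkeeping step.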

  7. A chemical amplification method for the sequential estimation of phosphorus, arsenic and silicon at ng/ml levels by d.c. polarography.

    PubMed

    Kannan, R; Ramakrishna, T V; Rajagopalan, S R

    1985-05-01

    A method is described for the sequential determination of phosphorus, arsenic and silicon at ng/ml levels by d.c. polarography. These elements are converted into their heteropolymolybdates and separated by selective solvent extraction. Determination of the molybdenum in the extract gives an enhancement factor of 12 for determination of the hetero-atom. A further enhancement by a factor of 40 is achieved by determining the molybdenum by catalytic polarography in nitrate medium. Charging-current compensation is employed to improve precision and the detection limit. The detection limits for phosphorus, arsenic and silicon are 0.5, 4.7 and 3.1 μg/l, respectively, and the relative standard deviation is 2-2.5%. PMID:18963870

  8. Optical properties and surface morphology of Li-doped ZnO thin films deposited on different substrates by DC magnetron sputtering method

    NASA Astrophysics Data System (ADS)

    Mohamed, Galal A.; Mohamed, El-Maghraby; Abu El-Fadl, A.

    2001-12-01

    Thin films of lithium-doped zinc oxide, Zn1-xLixO with x=0.2 (ZnO:Li), have been prepared on sapphire, MgO and quartz substrates by the DC magnetron sputtering method at 5 mTorr. The substrate temperature was fixed at about 573 K. We have measured the transmission and reflection spectra and determined the absorption coefficient, the optical band-gap (Eg), the high-frequency dielectric constant ε∞ and the carrier concentration N for the as-prepared films at room temperature. The films show direct allowed optical transitions, with Eg values of 3.38, 3.43 and 3.29 eV for films deposited on sapphire, MgO and quartz substrates, respectively. The dependence of the obtained results on the substrate type is discussed.

  9. Construction of a dc-dc transformer - A model of transitory behavior under load

    NASA Astrophysics Data System (ADS)

    Louail, G.

    A numerical model is presented for the construction of high-performance dc-dc transformers for industrial applications, taking into account a variety of control techniques. Control logic to minimize fluctuations during load-dumping intervals is defined. Problems linked to the demagnetization of the core are investigated and solutions are proposed. Attention is given to the selection of a commutator for a given transformer application, and the functional characteristics of bipolar and MOS transistors are described. The principles are applied to the construction of a prototype second-order transformer that is amenable to modular use. Finally, two methods of numerical modeling are presented: the first with simplified hypotheses for use with a hand calculator, and the second more rigorous, using discretized equations in a static regime. It is shown that a sudden power surge is the most critical phase for a power commutator. Progressive loading logic is devised, and the fabrication of 150 A commutators is indicated.

  10. Simultaneous distribution of AC and DC power

    DOEpatents

    Polese, Luigi Gentile

    2015-09-15

    A system and method for the transport and distribution of both AC (alternating current) power and DC (direct current) power over wiring infrastructure normally used for distributing AC power only, for example, residential and/or commercial buildings' electrical wires is disclosed and taught. The system and method permits the combining of AC and DC power sources and the simultaneous distribution of the resulting power over the same wiring. At the utilization site a complementary device permits the separation of the DC power from the AC power and their reconstruction, for use in conventional AC-only and DC-only devices.

  11. Applying Padé via Lanczos to the finite element method for electromagnetic radiation problems

    NASA Astrophysics Data System (ADS)

    Slone, Rodney Daryl; Lee, Robert

    2000-03-01

    Recently there has been a great deal of interest in using the Padé via Lanczos (PVL) technique to analyze the transfer functions and impulse responses of large-scale linear circuits. In this paper, matrix-Padé via Lanczos (MPVL), which can be used on multiple-input multiple-output systems, is applied to solve models resulting from applying the finite element method (FEM) to electromagnetic wave propagation problems in the frequency domain. The resulting solution procedure of using MPVL to solve FEM equations allows for wideband frequency simulations with a reduction in total computation time. Several issues arise during this application, and each is addressed in detail. Numerical simulations using this method are shown along with traditional methods using an LU decomposition at each frequency point of interest. Comparisons in accuracy as well as computation time are also given.

  12. An applied study using systems engineering methods to prioritize green systems options

    SciTech Connect

    Lee, Sonya M; Macdonald, John M

    2009-01-01

    For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform the best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain insight into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
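
    The AHP step mentioned above reduces to computing the principal eigenvector of a pairwise-comparison matrix, whose normalized entries become the priority weights. The sketch below does this by power iteration; the three criteria and the judgment values are made up for illustration and are not from the study.

    ```python
    def ahp_weights(M, iters=100):
        """Priority weights from an AHP pairwise-comparison matrix via
        power iteration toward the principal eigenvector."""
        n = len(M)
        w = [1.0 / n] * n
        for _ in range(iters):
            w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
            s = sum(w)
            w = [x / s for x in w]         # renormalize to sum to 1
        return w

    # illustrative judgments: cost vs. energy savings vs. ease of installation
    M = [[1.0, 3.0, 5.0],
         [1.0 / 3.0, 1.0, 2.0],
         [1.0 / 5.0, 1.0 / 2.0, 1.0]]
    w = ahp_weights(M)
    ```

    With these judgments, cost receives the largest weight; scoring each green technology against the weighted criteria then yields the ranked priorities the study sought.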

  13. Method of applying coatings to substrates and the novel coatings produced thereby

    DOEpatents

    Hendricks, C.D.

    1987-09-15

    A method for applying novel coatings to substrates is provided. The ends of a multiplicity of rods of different materials are melted by focused beams of laser light. Individual electric fields are applied to each of the molten rod ends, thereby ejecting charged particles that include droplets, atomic clusters, molecules, and atoms. The charged particles are separately transported, by the accelerations provided by electric potentials produced by an electrode structure, to substrates where they combine and form the coatings. Layered and thickness-graded coatings, comprising hitherto unavailable compositions, are provided. 2 figs.

  14. Method of applying a cerium diffusion coating to a metallic alloy

    DOEpatents

    Jablonski, Paul D.; Alman, David E.

    2009-06-30

    A method of applying a cerium diffusion coating to a preferred nickel base alloy substrate has been discovered. A cerium oxide paste containing a halide activator is applied to the polished substrate and then dried. The workpiece is heated in a non-oxidizing atmosphere to diffuse cerium into the substrate. After cooling, any remaining cerium oxide is removed. The resulting cerium diffusion coating on the nickel base substrate demonstrates improved resistance to oxidation. Cerium coated alloys are particularly useful as components in a solid oxide fuel cell (SOFC).

  15. Method of representation of acoustic spectra and reflection corrections applied to externally blown flap noise

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1975-01-01

    A computer method for obtaining a rational function representation of an acoustic spectrum and for correcting reflection effects is introduced. The functional representation provides a means of compact storage of data and the nucleus of the data analysis method. The method is applied to noise from a full-scale externally blown flap system with a quiet 6:1 bypass ratio turbofan engine and a three-flap wing section designed to simulate the take-off condition of a conceptual STOL aircraft.
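
    A rational-function representation of a spectrum can be fitted by Levy's classical linearization, which turns the nonlinear ratio into a linear least-squares problem. The sketch below fits a first-over-first-order ratio to synthetic data; it is a generic stand-in, since the abstract does not spell out the paper's fitting procedure, and the function form and data are assumptions.

    ```python
    def levy_rational_fit(xs, ys):
        """Fit y ~ (a0 + a1*x) / (1 + b1*x) by Levy's linearization:
        least squares for (a0, a1, b1) in  y = a0 + a1*x - b1*x*y."""
        rows = [(1.0, x, -x * y) for x, y in zip(xs, ys)]
        n = 3
        # normal equations A c = b, solved by Gaussian elimination
        A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
        b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                f = A[j][i] / A[i][i]
                A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
                b[j] -= f * b[i]
        c = [0.0] * n
        for i in reversed(range(n)):
            c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
        return c  # a0, a1, b1

    xs = [0.5 * k for k in range(10)]
    ys = [(2.0 + 1.0 * x) / (1.0 + 0.5 * x) for x in xs]
    a0, a1, b1 = levy_rational_fit(xs, ys)
    ```

    Storing only the few fitted coefficients, rather than the full spectrum, is what gives the compact data representation the abstract describes.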

  16. Non-invasive imaging methods applied to neo- and paleontological cephalopod research

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2013-11-01

    Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to, and contribute to, current topics in cephalopod (paleo-)biology. The different methods are compared in terms of the time necessary to acquire the data, the amount of data produced, accuracy/resolution, the minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology and biology of extinct and extant cephalopods.

  17. Two methods of measuring muscle tone applied in patients with decerebrate rigidity.

    PubMed Central

    Tsementzis, S A; Gillingham, F J; Gordon, A; Lakie, M D

    1980-01-01

    Two methods were used to measure muscle tone in patients with decerebrate rigidity. In the first method, forces of square waveform were applied and the calculated compliance of the joint was used as an index of rigidity. Oscillatory transients were seen at the same frequency as the physiological tremor. The range of normal variation in compliance was large and the values measured in the patients fluctuated markedly, which limited the value of this index. In the second method, where forces of sinusoidal waveform were employed, the resonant frequency of the joint was measured and used as an index of rigidity. This index proved reliable and reproducible. PMID:7354353

  18. Determination of Essential Oil Composition of Prangos acaulis (DC) Bornm Obtained by Hydrodistillation and Supercritical Fluid Extraction Methods

    NASA Astrophysics Data System (ADS)

    Hadavand Mirzaei, Hossein; Hadi Meshkatalsadat, Mohammad; Soheilivand, Saeed

    The chemical composition of the essential oil of Prangos acaulis was extracted by the hydrodistillation (HD) and supercritical fluid extraction (SFE) methods from aerial parts at the full flowering stage, and the compositions were identified using GC/MS. The analyses reveal that the samples differ quantitatively and qualitatively. A total of 21 compounds, constituting 89.1% of the aerial-parts oil, were identified with the SFE method; the SFE oil was obtained under the conditions: pressure 120 bar, temperature 45°C and extraction time 45 min. With the HD method, 26 compounds constituting 98.74% of the oil were identified. According to our results, in both extracts the two compounds present in the largest quantities were α-pinene (13.7 versus 22.87% in the SFE and HD oil, respectively) and 3-ethylidene-2-methyl-1-hexen-4-yne (14.3 versus 21.36%).

  19. Marine organism repellent covering for protection of underwater objects and method of applying same

    SciTech Connect

    Fischer, K.J.

    1993-07-13

    A method is described of protecting the surface of underwater objects from fouling by growth of marine organisms thereon comprising the steps of: (A) applying a layer of waterproof adhesive to the surface to be protected; (B) applying to the waterproof adhesive layer, a deposit of cayenne pepper material; (C) applying a permeable layer of copper containing material to the adhesive layer in such a configuration as to leave certain areas of the outer surface of the adhesive layer exposed, through open portions of the permeable layer, to the ambient environment of the surface to be protected when such surface is submerged in water; (D) the permeable layer having the property of being a repellent to marine organisms.

  20. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. The method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in the spectral flux density of less than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude lower. This method is followed by a spectral method based on rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error of less than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  1. Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem

    SciTech Connect

    Alchalabi, R.M.; Turinsky, P.J.

    1996-12-31

    The work presented in this paper is concerned with the development of an efficient multigrid (MG) algorithm for the solution of an elliptic, generalized eigenvalue problem. It is applied specifically to the multigroup neutron diffusion equation, discretized using the Nodal Expansion Method (NEM). The underlying relaxation method is the power method, also known as the outer-inner method. The inner iterations are performed using multi-color line SOR, and the outer iterations are accelerated using the Chebyshev semi-iterative method. Furthermore, the MG algorithm uses the consistent homogenization concept to construct the restriction operator and a form function as the prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained by solving production-type benchmark problems.
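
    The outer (power) iteration described above can be sketched for a generic generalized eigenproblem A·φ = (1/k)·F·φ, the form the multigroup diffusion equation takes with a fission source operator F. Below is a minimal pure-Python sketch for a 2×2 system, without the multigrid, line-SOR, or Chebyshev machinery of the paper; the matrices and the direct inner solver are illustrative only.

```python
def solve2(A, b):
    # Direct solve of a 2x2 linear system by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def power_method(A, F, tol=1e-10, max_iter=500):
    """Outer (power) iteration for the generalized eigenproblem
    A*phi = (1/k)*F*phi, converging to the dominant eigenvalue k."""
    phi, k = [1.0, 1.0], 1.0
    for _ in range(max_iter):
        # fission source from the current flux
        s = [F[i][0] * phi[0] + F[i][1] * phi[1] for i in range(2)]
        # inner solve: invert the diffusion operator against the scaled source
        phi_new = solve2(A, [si / k for si in s])
        s_new = [F[i][0] * phi_new[0] + F[i][1] * phi_new[1] for i in range(2)]
        # eigenvalue update from the ratio of successive source sums
        k_new = k * sum(s_new) / sum(s)
        if abs(k_new - k) < tol:
            return k_new, phi_new
        k, phi = k_new, phi_new
    return k, phi
```

    Each outer iteration inverts the diffusion operator against the current fission source and updates k from the ratio of successive source sums; in the paper it is this inner solve that multi-color line SOR performs and that the MG algorithm accelerates.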

  2. DC source assemblies

    DOEpatents

    Campbell, Jeremy B; Newson, Steve

    2013-02-26

    Embodiments of DC source assemblies of power inverter systems of the type suitable for deployment in a vehicle having an electrically grounded chassis are provided. An embodiment of a DC source assembly comprises a housing, a DC source disposed within the housing, a first terminal, and a second terminal. The DC source assembly also comprises a first capacitor having a first electrode electrically coupled to the housing, and a second electrode electrically coupled to the first terminal. The DC source assembly further comprises a second capacitor having a first electrode electrically coupled to the housing, and a second electrode electrically coupled to the second terminal.

  3. Promising new baseflow separation and recession analysis methods applied to streamflow at Glendhu Catchment, New Zealand

    NASA Astrophysics Data System (ADS)

    Stewart, M. K.

    2015-06-01

    Understanding and modelling the relationship between rainfall and runoff has been a driving force in hydrology for many years. Baseflow separation and recession analysis have been two of the main tools for understanding runoff generation in catchments, but there are many different methods for each. The new baseflow separation method presented here (the bump and rise method or BRM) aims to accurately simulate the shape of tracer-determined baseflow or pre-event water. Application of the method by calibrating its parameters, using (a) tracer data or (b) an optimising method, is demonstrated for the Glendhu Catchment, New Zealand. The calibrated BRM algorithm is then applied to the Glendhu streamflow record. The new recession approach advances the thesis that recession analysis of streamflow alone gives misleading information on catchment storage reservoirs because streamflow is a varying mixture of components of very different origins and characteristics (at the simplest level, quickflow and baseflow as identified by the BRM method). Recession analyses of quickflow, baseflow and streamflow show that the steep power-law slopes often observed for streamflow at intermediate flows are artefacts due to mixing and are not representative of catchment reservoirs. Applying baseflow separation before recession analysis could therefore shed new light on water storage reservoirs in catchments and possibly resolve some current problems with recession analysis. Among other things it shows that both quickflow and baseflow reservoirs in the studied catchment have (non-linear) quadratic characteristics.
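
    The BRM algorithm itself is not specified in the abstract, but the general idea of separating quickflow from baseflow with a recursive digital filter can be illustrated with the widely used one-parameter (Lyne-Hollick type) filter. This is a hedged sketch of that generic technique, not of the BRM; the filter parameter `alpha` and the constraint handling are illustrative choices.

```python
def baseflow_filter(streamflow, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick style).

    Splits a streamflow series into quickflow and baseflow; `alpha`
    is the filter parameter (typically 0.9-0.95). Generic
    illustration only, not the BRM method of the paper.
    """
    quick = [0.0]
    for t in range(1, len(streamflow)):
        # high-pass filter on streamflow: rapid rises become quickflow
        q = alpha * quick[-1] + 0.5 * (1.0 + alpha) * (streamflow[t] - streamflow[t - 1])
        # constrain quickflow to the physically meaningful range
        quick.append(min(max(q, 0.0), streamflow[t]))
    base = [s - q for s, q in zip(streamflow, quick)]
    return base, quick
```

    On a constant hydrograph the filter assigns everything to baseflow; after a rainfall spike it routes the rapid rise to quickflow, which then recedes at rate `alpha`, mirroring the quickflow/baseflow split that the recession analysis above is applied to.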

  4. Agglomeration multigrid methods with implicit Runge-Kutta smoothers applied to aerodynamic simulations on unstructured grids

    NASA Astrophysics Data System (ADS)

    Langer, Stefan

    2014-11-01

    For unstructured finite volume methods an agglomeration multigrid with an implicit multistage Runge-Kutta method as a smoother is developed for solving the compressible Reynolds averaged Navier-Stokes (RANS) equations. The implicit Runge-Kutta method is interpreted as a preconditioned explicit Runge-Kutta method. The construction of the preconditioner is based on an approximate derivative. The linear systems are solved approximately with a symmetric Gauss-Seidel method. To significantly improve this solution method grid anisotropy is treated within the Gauss-Seidel iteration in such a way that the strong couplings in the linear system are resolved by tridiagonal systems constructed along these directions of strong coupling. The agglomeration strategy is adapted to this procedure by taking into account exactly these anisotropies in such a way that a directional coarsening is applied along these directions of strong coupling. Turbulence effects are included by a Spalart-Allmaras model, and the additional transport-type equation is approximately solved in a loosely coupled manner with the same method. For two-dimensional and three-dimensional numerical examples and a variety of differently generated meshes we show the wide range of applicability of the solution method. Finally, we exploit the GMRES method to determine approximate spectral information of the linearized RANS equations. This approximate spectral information is used to discuss and compare characteristics of multistage Runge-Kutta methods.

  5. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model

    PubMed Central

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139
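
    As a toy illustration of applying an optimization method to a single model parameter, the sketch below fits a hypothetical memory-decay parameter by minimizing squared error with a derivative-free golden-section search. The objective function, its target value, and the parameter name are invented for the example and have no connection to the actual ACT-R model or the Sugar Factory task.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Derivative-free golden-section search for a unimodal objective on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, the interval reduction ratio
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Hypothetical objective: squared error between a model prediction and an
# observed mean performance of 0.4 (both values invented for illustration)
def objective(decay):  # `decay` stands in for a cognitive model parameter
    predicted = 1.0 / (1.0 + decay)
    return (predicted - 0.4) ** 2
```

    In practice one would replace `objective` with a simulation of the cognitive model and compare against empirical data, as the authors do with more powerful derivative-based methods.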

  6. The Role of Applied Epidemiology Methods in the Disaster Management Cycle

    PubMed Central

    Malilay, Josephine; Heumann, Michael; Perrotta, Dennis; Wolkin, Amy F.; Schnall, Amy H.; Podgornik, Michelle N.; Cruz, Miguel A.; Horney, Jennifer A.; Zane, David; Roisman, Rachel; Greenspan, Joel R.; Thoroughman, Doug; Anderson, Henry A.; Wells, Eden V.; Simms, Erin F.

    2015-01-01

    Disaster epidemiology (i.e., applied epidemiology in disaster settings) presents a source of reliable and actionable information for decision-makers and stakeholders in the disaster management cycle. However, epidemiological methods have yet to be routinely integrated into disaster response and fully communicated to response leaders. We present a framework consisting of rapid needs assessments, health surveillance, tracking and registries, and epidemiological investigations, including risk factor and health outcome studies and evaluation of interventions, which can be practiced throughout the cycle. Applying each method can result in actionable information for planners and decision-makers responsible for preparedness, response, and recovery. Disaster epidemiology, once integrated into the disaster management cycle, can provide the evidence base to inform and enhance response capability within the public health infrastructure. PMID:25211748

  7. REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.

    SciTech Connect

    UMEDA, T.; MATSUFURU, H.

    2005-07-25

    We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests that one should perform to maintain the reliability of the results, and we also apply them to mock and lattice QCD data.

  8. Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.

    SciTech Connect

    Lestelle, Lawrence C.; Mobrand, Lars E.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  9. Improving HIV Surveillance Data for Public Health Action in Washington, DC: A Novel Multiorganizational Data-Sharing Method

    PubMed Central

    Smart, JC

    2016-01-01

    Background: The National HIV/AIDS Strategy calls for active surveillance programs for human immunodeficiency virus (HIV) to more accurately measure access to and retention in care across the HIV care continuum for persons living with HIV within their jurisdictions and to identify persons who may need public health services. However, traditional public health surveillance methods face substantial technological and privacy-related barriers to data sharing. Objective: This study developed a novel data-sharing approach to improve the timeliness and quality of HIV surveillance data in three jurisdictions where persons may often travel across the borders of the District of Columbia, Maryland, and Virginia. Methods: A deterministic algorithm of approximately 1000 lines was developed, including a person-matching system with Enhanced HIV/AIDS Reporting System (eHARS) variables. Person matching was defined in categories, from strongest to weakest: exact, very high, high, medium high, medium, medium low, low, and very low. The algorithm was verified using conventional component testing methods, manual code inspection, and comprehensive output file examination. Results were validated by the jurisdictions using internal review processes. Results: Of 161,343 uploaded eHARS records from the District of Columbia (N=49,326), Maryland (N=66,200), and Virginia (N=45,817), a total of 21,472 persons were matched across jurisdictions at various match strengths, in a matching process totaling 21 minutes and 58 seconds in the privacy device, leaving 139,871 persons uniquely identified with only one jurisdiction. No records matched as medium low or low. Over 80% of the matches were identified as either exact or very high matches. Three separate validation methods were conducted for this study, and all found ≥90% accuracy between records matched by this novel method and traditional matching methods. Conclusions: This study illustrated a novel data-sharing approach that may facilitate timelier and better
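
    The ~1000-line deterministic algorithm is not reproduced in the abstract; the sketch below only illustrates the idea of tiered deterministic person matching with ordered strength categories. The field names and tier rules here are hypothetical and far simpler than the eHARS matching logic.

```python
def match_strength(rec_a, rec_b):
    """Classify a candidate record pair into a tiered match strength.

    Toy illustration of deterministic, tiered person matching; the
    real eHARS algorithm uses many more variables and categories.
    """
    def same(field):
        # both records must carry the field, and the values must agree
        return rec_a.get(field) is not None and rec_a.get(field) == rec_b.get(field)

    if all(same(f) for f in ("last", "first", "dob", "ssn4")):
        return "exact"
    if same("ssn4") and same("dob") and same("last"):
        return "very high"
    if same("dob") and same("last") and same("first"):
        return "high"
    if same("dob") and (same("last") or same("first")):
        return "medium"
    return "no match"
```

    Running every cross-jurisdiction pair through such a classifier and keeping only pairs at or above a chosen tier is the essence of the deduplication step that left 139,871 records uniquely identified with one jurisdiction.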

  10. Some Recent Advances of Ultrasonic Diagnostic Methods Applied to Materials and Structures (Including Biological Ones)

    NASA Astrophysics Data System (ADS)

    Nobile, Lucio; Nobile, Stefano

    This paper gives an overview of some recent advances in ultrasonic methods applied to materials and structures (including biological ones), exploring typical applications of these emerging inspection technologies in civil engineering and medicine. Confirming this trend, some results of an experimental research program involving both destructive and non-destructive testing methods for the evaluation of the structural performance of existing reinforced concrete (RC) structures are discussed in terms of reliability. Ultrasonic testing can usefully supplement coring, permitting a less expensive and more representative evaluation of the concrete strength throughout the whole structure under examination.

  11. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    SciTech Connect

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the pulsed electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method, because surface charge has a bandwidth extending from zero to infinity, whereas the bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function for the direct method to apply properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is proposed. This paper presents our attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Owing to the page limit, the charge distribution produced by the simulation model is compared only to that obtained by the direct method for a set of simulated signals.

  12. Evaluation of a Wobbling Method Applied to Correcting Defective Pixels of CZT Detectors in SPECT Imaging.

    PubMed

    Xie, Zhaoheng; Li, Suying; Yang, Kun; Xu, Baixuan; Ren, Qiushi

    2016-01-01

    In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors, using information of related images. We build up an automated device that realizes the wobbling correction for small animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method is applied to various constellations of defective pixels. The corrected images are compared with the results of conventional interpolation method, and the correction effectiveness is evaluated quantitatively using the factor of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides a better image quality for correcting defective pixels, which could be used for all pixelated detectors for molecular imaging. PMID:27240368
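
    The conventional interpolation baseline against which the wobbling method is compared can be sketched as simple neighbour averaging, with PSNR as the quality metric named in the abstract. A minimal sketch only; the wobbling correction itself and the SSIM metric are not reproduced here.

```python
import math

def interpolate_bad_pixels(image, bad):
    """Replace each defective pixel with the mean of its valid 4-neighbours.

    `image` is a 2D list of floats, `bad` a set of (row, col) positions.
    Sketch of the conventional interpolation baseline, not the
    wobbling method of the paper.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for (r, c) in bad:
        neighbours = [image[r + dr][c + dc]
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= r + dr < h and 0 <= c + dc < w
                      and (r + dr, c + dc) not in bad]
        if neighbours:
            out[r][c] = sum(neighbours) / len(neighbours)
    return out

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two images of equal size."""
    mse = sum((a - b) ** 2 for ra, rb in zip(ref, test) for a, b in zip(ra, rb))
    mse /= len(ref) * len(ref[0])
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

    Higher PSNR against a reference image indicates a better correction; the paper reports that the wobbling method improves on this neighbour-interpolation baseline.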

  13. Lessons learned applying CASE methods/tools to Ada software development projects

    NASA Technical Reports Server (NTRS)

    Blumberg, Maurice H.; Randall, Richard L.

    1993-01-01

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

  14. Evaluation of a Wobbling Method Applied to Correcting Defective Pixels of CZT Detectors in SPECT Imaging

    PubMed Central

    Xie, Zhaoheng; Li, Suying; Yang, Kun; Xu, Baixuan; Ren, Qiushi

    2016-01-01

    In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors, using information of related images. We build up an automated device that realizes the wobbling correction for small animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method is applied to various constellations of defective pixels. The corrected images are compared with the results of conventional interpolation method, and the correction effectiveness is evaluated quantitatively using the factor of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides a better image quality for correcting defective pixels, which could be used for all pixelated detectors for molecular imaging. PMID:27240368

  15. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.

  16. Method to integrate clinical guidelines into the electronic health record (EHR) by applying the archetypes approach.

    PubMed

    Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro

    2013-01-01

    Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines, as decision support systems (DSS), attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rule identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline. PMID:23920682

  17. A Sensorless Speed Control System for DC Motor Drives

    NASA Astrophysics Data System (ADS)

    Georgiev, Tsolo; Mikhov, Mikho

    2009-01-01

    An approach to sensorless speed control of permanent magnet DC motor drives is presented in this paper. The motor speed has been estimated indirectly by the respective back EMF voltage. Using a discrete vector-matrix description of the controlled object, an optimal modal state observer has been synthesized, as well as an optimal modal controller. The results obtained show that the applied control method can ensure good performance.
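
    The indirect speed estimate rests on the standard permanent magnet DC motor voltage equation V = I·R + L·dI/dt + Ke·ω. The sketch below shows only this textbook relation; the paper's discrete state observer and modal controller are not reproduced, and all parameter values are illustrative.

```python
def estimate_speed(v_applied, current, resistance, ke, inductance=0.0, di_dt=0.0):
    """Estimate PM DC motor speed (rad/s) from the back-EMF.

    Motor equation: V = I*R + L*dI/dt + Ke*omega, hence
    omega = (V - I*R - L*dI/dt) / Ke. In steady state the L*dI/dt
    term vanishes. A textbook sketch, not the observer of the paper.
    """
    back_emf = v_applied - current * resistance - inductance * di_dt
    return back_emf / ke

# Example (hypothetical values): 24 V applied, 2 A through 0.5 ohm,
# Ke = 0.1 V*s/rad  ->  back-EMF = 23 V  ->  omega = 230 rad/s
```

    In a real drive, V and I are sampled quantities and R, L, Ke come from motor identification; the observer in the paper refines this raw estimate against a discrete vector-matrix model of the drive.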

  18. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).

  19. The method of averaging applied to pharmacokinetic/pharmacodynamic indirect response models.

    PubMed

    Dunne, Adrian; de Winter, Willem; Hsu, Chyi-Hung; Mariam, Shiferaw; Neyens, Martine; Pinheiro, José; Woot de Trixhe, Xavier

    2015-08-01

    The computational effort required to fit the pharmacodynamic (PD) part of a pharmacokinetic/pharmacodynamic (PK/PD) model can be considerable if the differential equations describing the model are solved numerically. This burden can be greatly reduced by applying the method of averaging (MAv) in the appropriate circumstances. The MAv gives an approximate solution, which is expected to be a good approximation when the PK profile is periodic (i.e. repeats its values in regular intervals) and the rate of change of the PD response is such that it is approximately constant over a single period of the PK profile. This paper explains the basis of the MAv by means of a simple mathematical derivation. The NONMEM® implementation of the MAv using the abbreviated FORTRAN function FUNCA is described and explained. The application of the MAv is illustrated by means of an example involving changes in glycated hemoglobin (HbA1c%) following administration of canagliflozin, a selective sodium glucose co-transporter 2 inhibitor. The PK/PD model applied to these data is fitted with NONMEM® using both the MAv and the standard method using a numerical differential equation solver (NDES). Both methods give virtually identical results but the NDES method takes almost 8 h to run both the estimation and covariance steps, whilst the MAv produces the same results in less than 30 s. An outline of the NONMEM® control stream and the FORTRAN code for the FUNCA function is provided in the appendices. PMID:26142076
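
    The principle of the MAv can be demonstrated on a generic indirect response model dR/dt = k_in·(1 − I(t)) − k_out·R with a periodic inhibition input: when k_out is slow relative to the dosing period, replacing I(t) by its average over one period gives nearly the same response at far lower cost. All functions and parameter values below are invented for illustration and are unrelated to the canagliflozin model or the NONMEM® implementation.

```python
import math

def simulate_response(kin, kout, inhibition, dt=0.01, t_end=200.0):
    """Forward-Euler integration of the indirect response model
    dR/dt = kin*(1 - I(t)) - kout*R, starting at steady state R0 = kin/kout."""
    r = kin / kout
    t = 0.0
    while t < t_end:
        r += dt * (kin * (1.0 - inhibition(t)) - kout * r)
        t += dt
    return r

# Periodic inhibition driven by a once-daily PK profile (hypothetical form)
period = 24.0
def inhibition_full(t):
    return 0.5 * (1.0 + math.sin(2 * math.pi * t / period)) / 2.0

# Method of averaging: replace I(t) by its mean over one period (0.25 here)
def inhibition_avg(t):
    return 0.25
```

    With k_out = 0.01 per hour and a 24-hour period, the response changes little over a single PK cycle, so the averaged and full simulations agree to within a few percent; this is exactly the regime in which the MAv approximation is expected to hold.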

  20. Non-destructive research methods applied on materials for the new generation of nuclear reactors

    NASA Astrophysics Data System (ADS)

    Bartošová, I.; Slugeň, V.; Veterníková, J.; Sojak, S.; Petriska, M.; Bouhaddane, A.

    2014-06-01

    The paper is aimed at non-destructive experimental techniques applied to materials for the new generation of nuclear reactors (GEN IV). Along with the development of these reactors, materials have to be developed in order to guarantee the high-standard properties needed for construction: high temperature resistance, radiation resistance, and resistance to other negative effects, while changes in their mechanical properties should remain minimal. Materials that fulfil these requirements are analysed in this work; the ferritic-martensitic (FM) steels and ODS steels are studied in detail. Microstructural defects, which can occur in structural materials and can accumulate during irradiation due to neutron flux or alpha, beta and gamma radiation, were analysed using different spectroscopic methods, namely positron annihilation spectroscopy and Barkhausen noise, applied to measurements of three different FM steels (T91, P91 and E97) as well as one ODS steel (ODS Eurofer).

  1. Single trial EEG classification applied to a face recognition experiment using different feature extraction methods.

    PubMed

    Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei

    2015-08-01

    Research on brain machine interface (BMI) has been developed very fast in recent years. Numerous feature extraction methods have successfully been applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG based BMI systems regarding familiarity of human faces cognition. In this work, we have implemented and compared the classification performances of four common feature extraction methods, namely, common spatial pattern, principal component analysis, wavelet transform and interval features. High resolution EEG signals were collected from fifteen healthy subjects stimulated by equal number of familiar and novel faces. Principal component analysis outperforms other methods with average classification accuracy reaching 94.2% leading to possible real life applications. Our findings thereby may contribute to the BMI systems for face recognition. PMID:26737964
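
    Of the four feature-extraction methods compared, principal component analysis performed best; its core step, extracting the dominant principal component, can be sketched in pure Python via power iteration on the sample covariance matrix. This is a minimal sketch of the PCA step only, not of the EEG preprocessing or the classifier used in the study.

```python
import math

def first_principal_component(data, iters=200):
    """First principal component of `data` (list of samples, each a list
    of features), found by power iteration on the sample covariance
    matrix. Minimal sketch of the PCA feature-extraction step."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix (d x d)
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # multiply by the covariance matrix and renormalize
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

    Projecting each EEG epoch onto the leading components yields the low-dimensional feature vectors that are then fed to a classifier; in practice one would use an optimized eigensolver rather than hand-rolled power iteration.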

  2. A reflective lens: applying critical systems thinking and visual methods to ecohealth research.

    PubMed

    Cleland, Deborah; Wyborn, Carina

    2010-12-01

    Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research. PMID:21207106

  3. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy, both at the approximation level and for solving partial differential equations, provided the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  4. Condition monitoring methods applied to degradation of chlorosulfonated polyethylene cable jacketing materials.

    SciTech Connect

    Assink, Roger Alan; Gillen, Kenneth Todd; Bernstein, Robert; Celina, Mathias Christopher

    2005-05-01

    Three promising polymer material condition monitoring (CM) methods were applied to eight commercial chlorosulfonated polyethylene (CSPE) cable jacket materials aged under both elevated temperature and high-energy radiation conditions. The CM methods examined, cross-sectional modulus profiling, solvent uptake, and NMR T₂ relaxation time measurements of solvent-swelled samples, are closely related, since they are all strongly influenced by changes in the overall crosslink density of the materials. Each approach was found to correlate well with ultimate tensile elongation measurements, the most widely used method for following the degradation of elastomeric materials. In addition, approximately universal failure criteria were found to be applicable for the modulus profiling and solvent uptake measurements, independent of the CSPE material examined and its degradation environment. For an arbitrarily assumed elongation 'failure' criterion of 50% absolute, the CSPE materials typically reached 'failure' when the modulus increased to approximately 35 MPa and the uptake factor in p-xylene decreased to approximately 1.6.

  5. Relativistic convergent close-coupling method applied to electron scattering from mercury

    SciTech Connect

    Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor

    2010-08-15

    We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds for the (6s6p) ³P₀,₁,₂ states. These cross sections are associated with the formation of negative-ion (Hg⁻) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and with other relativistic theories.

  6. Finite volume and finite element methods applied to 3D laminar and turbulent channel flows

    SciTech Connect

    Louda, Petr; Příhoda, Jaromír; Sváček, Petr; Kozel, Karel

    2014-12-10

    The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to the development of various secondary flow patterns, where the accuracy of the simulation is influenced by the numerical viscosity of the scheme and by turbulence modeling. In this work some developments of a stabilized finite element method are presented. Its results are compared with those of an implicit finite volume method, also described here, for laminar and turbulent flows. It is shown that numerical viscosity can cause errors of the same magnitude as those introduced by different turbulence models. The finite volume method is also applied to 3D turbulent flow around a backward-facing step, and good agreement with 3D experimental results is obtained.

  7. High resolution digital holographic synthetic aperture applied to deformation measurement and extended depth of field method.

    PubMed

    Claus, Daniel

    2010-06-01

    This paper discusses the potential of the synthetic-aperture method in digital holography to increase the resolution, to perform high accuracy deformation measurement, and to obtain a three-dimensional topology map. The synthetic aperture method is realized by moving the camera with a motorized x-y stage. In this way a greater sensor area can be obtained resulting in a larger numerical aperture (NA). A larger NA enables a more detailed reconstruction combined with a smaller depth of field. The depth of field can be increased by applying the extended depth of field method, which yields an in-focus reconstruction of all longitudinal object regions. Moreover, a topology map of the object can be obtained. PMID:20517390
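
    The resolution gain from the synthetic aperture follows from simple geometry: stitching camera positions widens the effective sensor, which raises the numerical aperture and shrinks the achievable lateral resolution. A minimal sketch of that relationship, with illustrative numbers (the sensor width, stand-off distance and wavelength are assumptions, not values from the paper):

```python
import math

def numerical_aperture(width_mm, distance_mm):
    """Half-angle NA of a flat sensor of given width at a given stand-off distance."""
    half = width_mm / 2.0
    return half / math.sqrt(half**2 + distance_mm**2)

def lateral_resolution_um(wavelength_um, na):
    """Abbe-style lateral resolution estimate: wavelength / (2 NA)."""
    return wavelength_um / (2.0 * na)

wavelength = 0.633                            # HeNe laser line, micrometres
single = numerical_aperture(10.0, 200.0)      # one 10 mm sensor at 200 mm
synthetic = numerical_aperture(40.0, 200.0)   # 4x wider synthetic aperture

print(f"single    NA={single:.3f}, resolution={lateral_resolution_um(wavelength, single):.2f} um")
print(f"synthetic NA={synthetic:.3f}, resolution={lateral_resolution_um(wavelength, synthetic):.2f} um")
```

    Enlarging the synthetic aperture fourfold roughly quadruples the NA at this geometry, cutting the lateral resolution limit by the same factor while also shrinking the depth of field, which is why the extended-depth-of-field step is then needed.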

  8. A Low-Waste Electrospray Method for Applying Chemicals and Finishing Agents to Textiles

    SciTech Connect

    Alexander, D.A.; Zhang, X.

    1999-08-01

    This electrospray technology works by applying the desired chemicals onto a substrate as electrically generated, charged sprays. By imposing a potential difference between the application nozzle and the target, it is possible to precisely direct and control the spray. This electrospray method of application gives a small droplet size and a relatively uniform size distribution, with the added advantage of an easily controllable spray angle. It potentially offers substantial improvement over traditional methods in the area of application uniformity, resulting in improved product quality. Additionally, since the chemicals are electrically directed straight onto the fiber with a minimum of overspray, the electrospray method holds promise in the area of waste reduction, resulting in lowered production cost.

  9. Critical conditions of saddle-node bifurcations in switching DC-DC converters

    NASA Astrophysics Data System (ADS)

    Fang, Chung-Chieh

    2013-08-01

    Although the existence of multiple periodic orbits in some DC-DC converters has been known for decades, linking these multiple periodic orbits with the saddle-node bifurcation (SNB) is rarely reported. The SNB occurs in popular DC-DC converters, but it is generally reported as a strange instability. Recently, design-oriented critical conditions for instability have attracted great interest. In this article, averaged, sampled-data and harmonic balance analyses are applied, and they lead to equivalent results. Many new critical conditions are derived. They facilitate future research on the instability associated with multiple periodic orbits and the sudden voltage jumps or disappearances of periodic orbits observed in DC-DC converters. The effects of various converter parameters on the instability can be readily seen from the derived critical conditions. New Nyquist-like plots are also proposed to predict or prevent the occurrence of the instability.
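
    The mechanism behind these sudden disappearances can be illustrated by the saddle-node normal form x' = mu + x**2, in which a stable and an unstable equilibrium (standing in for the converter's stable and unstable periodic orbits) merge and vanish as the parameter mu crosses its critical value. This is a generic textbook sketch, not the article's converter model:

```python
import math

def snb_fixed_points(mu):
    """Equilibria of the saddle-node normal form  x' = mu + x**2.

    For mu < 0 there are two equilibria (one stable, one unstable);
    at mu = 0 they collide; for mu > 0 none exist -- the analogue of
    the sudden disappearance of periodic orbits seen in converters.
    """
    if mu > 0:
        return []
    r = math.sqrt(-mu)
    return [-r, r]   # x = -sqrt(-mu) is stable, x = +sqrt(-mu) is unstable

for mu in (-0.25, 0.0, 0.25):
    print(mu, snb_fixed_points(mu))
```

    The critical condition is simply mu = 0 here; the article's contribution is deriving the analogous conditions in terms of actual converter parameters.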

  10. The piecewise linear discontinuous finite element method applied to the RZ and XYZ transport equations

    NASA Astrophysics Data System (ADS)

    Bailey, Teresa S.

    In this dissertation we discuss the development, implementation, analysis and testing of the Piecewise Linear Discontinuous Finite Element Method (PWLD) applied to the particle transport equation in two-dimensional cylindrical (RZ) and three-dimensional Cartesian (XYZ) geometries. We have designed this method to be applicable to radiative-transfer problems in radiation-hydrodynamics systems for arbitrary polygonal and polyhedral meshes. For RZ geometry, we have implemented this method in the Capsaicin radiative-transfer code being developed at Los Alamos National Laboratory. In XYZ geometry, we have implemented the method in the Parallel Deterministic Transport code being developed at Texas A&M University. We discuss the importance of the thick diffusion limit for radiative-transfer problems, and perform a thick-diffusion-limit analysis on our discretized system for both geometries. This analysis predicts that the PWLD method will perform well in this limit for many problems of physical interest with arbitrary polygonal and polyhedral cells. Finally, we test our method on a variety of test problems to determine some of its useful properties and show that it compares favorably to existing methods; these tests also show that our method performs well in the thick diffusion limit, as predicted by our analysis. Based on PWLD's solid finite-element foundation, the desirable properties it shows under analysis, and the excellent performance it demonstrates on test problems even with highly distorted spatial grids, we conclude that it is an excellent candidate for radiative-transfer problems that need a robust method that performs well in thick diffusive problems or on distorted grids.

  11. Radiation effects on DC-DC Converters

    NASA Technical Reports Server (NTRS)

    Zhang, Dexin; Attia, John O.; Kankam, Mark D. (Technical Monitor)

    2000-01-01

    DC-DC switching converters are circuits that convert a DC voltage of one value to another by switching action. They are increasingly being used in space systems. Most of the popular DC-DC switching converters utilize power MOSFETs. However, power MOSFETs, when subjected to radiation, are susceptible to degradation of device characteristics or catastrophic failure. This work focuses on the effects of total ionizing dose on converter performance. Four fundamental switching converters (buck converter, buck-boost converter, Cuk converter, and flyback converter) were built using Harris IRF250 power MOSFETs. These converters were designed to convert an input of 60 volts to an output of about 12 volts at a switching frequency of 100 kHz. The four converters were irradiated with a Co-60 gamma source at a dose rate of 217 rad/min, and their performance was examined during exposure to the radiation. The experimental results show that the output voltage of the converters increases as total dose increases. However, the increases in output voltage differed among the four converters, with the buck and Cuk converters showing the highest increases and the flyback converter the lowest. We observed significant increases in output voltage for the Cuk converter at a total dose of 24 krad(Si).

  12. River basin soil-vegetation condition assessment applying mathematic simulation methods

    NASA Astrophysics Data System (ADS)

    Mishchenko, Natalia; Trifonova, Tatiana; Shirkin, Leonid

    2013-04-01

    The meticulous attention now paid to changes in vegetation cover productivity is connected in part to global climate transformation. At the same time, the anthropogenic transformation of ecosystems, driven mainly by changes in land use structure and by human impact on soil fertility, develops to a great extent independently of climatic processes and can seriously influence vegetation cover productivity not only at local and regional levels but also globally. This research presents the results of an analysis of how land use structure and soil cover condition influence the productive potential of river basin ecosystems. The analysis is carried out using integrated characteristics of ecosystem functioning, processed satellite imagery and mathematical simulation methods. The possibility of constructing a permanent functional simulator defining the connection between macroparameters of the "phytocenosis-soil" system condition on the basis of the basin approach is shown. Ecosystems of river catchment basins of various orders located in the European part of Russia were chosen as research objects. For the integrated assessment of ecosystem soil and vegetation conditions, the following characteristics have been applied: 1. Soil-productional potential, characterizing the ability of a natural or natural-anthropogenic ecosystem, in given soil-bioclimatic conditions, for long-term reproduction. This indicator accounts for specific phytomass characteristics and ecosystem produce, humus content in soil and bioclimatic parameters. 2. The normalized difference vegetation index (NDVI), applied as an efficient, remotely sensed monitoring indicator characterizing the spatio-temporal variability of soil-productional potential. To design the mathematical simulator, functional simulation methods and principles based on regression, correlation and factor analysis have been applied in the research. Values of the coefficients defining the designed static model of phytoproductivity distribution have been
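
    The NDVI used above as a monitoring indicator is computed band-wise from near-infrared and red reflectances as (NIR - Red) / (NIR + Red). A short sketch with illustrative reflectance values (the numbers are assumptions, not data from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index, element-wise on reflectance arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against zero denominators

# Illustrative reflectances: dense vegetation, sparse vegetation, bare soil
nir_band = np.array([0.50, 0.30, 0.25])
red_band = np.array([0.08, 0.15, 0.22])
print(ndvi(nir_band, red_band))   # higher values indicate denser, healthier cover
```

    Values near +0.7 typically indicate dense vegetation, values near zero bare soil, which is what makes NDVI a convenient remotely sensed proxy for the soil-productional potential discussed above.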

  13. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    NASA Technical Reports Server (NTRS)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, in permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high-Reynolds-number flow simulations, but is less common for low-Reynolds-number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the mushy zone during processing and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The

  14. Methods for evaluating the biological impact of potentially toxic waste applied to soils

    SciTech Connect

    Neuhauser, E.F.; Loehr, R.C.; Malecki, M.R.

    1985-12-01

    The study was designed to evaluate two methods that can be used to estimate the biological impact of organics and inorganics that may be present in wastes applied to land for treatment and disposal. The two methods were the contact test and the artificial soil test. The contact test is a 48-hour test using an adult worm, a small glass vial, and filter paper to which the test chemical or waste is applied. The test is designed to provide close contact between the worm and a chemical, similar to the situation in soils, and provides a rapid estimate of the relative toxicity of chemicals and industrial wastes. The artificial soil test uses a mixture of sand, kaolin, peat, and calcium carbonate as a representative soil. Different concentrations of the test material are added to the artificial soil, adult worms are added, and worm survival is evaluated after two weeks. These studies have shown that earthworms can distinguish between a wide variety of chemicals with a high degree of accuracy.

  15. Homotopic approach and pseudospectral method applied jointly to low thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Guo, Tieding; Jiang, Fanghua; Li, Junfeng

    2012-02-01

    The homotopic approach and the pseudospectral method are two popular techniques for low thrust trajectory optimization. A hybrid scheme is proposed in this paper by combining the above two together to cope with various difficulties encountered when they are applied separately. Explicitly, a smooth energy-optimal problem is first discretized by the pseudospectral method, leading to a nonlinear programming problem (NLP). Costates, especially their initial values, are then estimated from Karush-Kuhn-Tucker (KKT) multipliers of this NLP. Based upon these estimated initial costates, homotopic procedures are initiated efficiently and the desirable non-smooth fuel-optimal results are finally obtained by continuing the smooth energy-optimal results through a homotopic algorithm. Two main difficulties, one due to absence of reasonable initial costates when the homotopic procedures are being initiated and the other due to discontinuous bang-bang controls when the pseudospectral method is applied to the fuel-optimal problem, are both resolved successfully. Numerical results of two scenarios are presented in the end, demonstrating feasibility and well performance of this hybrid technique.
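
    The continuation idea itself can be shown on a one-dimensional root-finding toy: blend an easy problem g into the hard problem f through H(x, t) = (1 - t) g(x) + t f(x), and warm-start Newton's method at each step from the previous step's solution, just as the smooth energy-optimal solution seeds the non-smooth fuel-optimal one. A minimal sketch (the functions f and g below are illustrative assumptions, not the paper's optimal-control problem):

```python
def newton(func, dfunc, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration for a scalar root."""
    x = x0
    for _ in range(max_iter):
        fx = func(x)
        if abs(fx) < tol:
            return x
        x -= fx / dfunc(x)
    return x

# "Hard" problem: root of f(x) = x**3 - x - 2  (root near x = 1.5214)
f  = lambda x: x**3 - x - 2
df = lambda x: 3 * x**2 - 1
# "Easy" problem with a known root at x = 1
g  = lambda x: x - 1
dg = lambda x: 1.0

x = 1.0                          # exact root of the easy problem
steps = 20
for k in range(1, steps + 1):
    t = k / steps
    H  = lambda x, t=t: (1 - t) * g(x) + t * f(x)
    dH = lambda x, t=t: (1 - t) * dg(x) + t * df(x)
    x = newton(H, dH, x)         # warm-start from the previous continuation step

print(x)                         # root of the hard problem, reached by continuation
```

    At each step Newton only needs to bridge a small change in t, which is exactly why the paper's homotopic procedure becomes robust once the pseudospectral/KKT stage supplies reasonable initial costates.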

  16. A new method to improve network topological similarity search: applied to fold recognition

    PubMed Central

    Lhota, John; Hauptman, Ruth; Hart, Thomas; Ng, Clara; Xie, Lei

    2015-01-01

    Motivation: Similarity search is the foundation of bioinformatics. It plays a key role in establishing structural, functional and evolutionary relationships between biological sequences. Although the power of similarity search has increased steadily in recent years, a high percentage of sequences remain uncharacterized in the protein universe. Thus, new similarity search strategies are needed to efficiently and reliably infer the structure and function of new sequences. The existing paradigm for studying protein sequence, structure, function and evolution has been established on the assumption that the protein universe is discrete and hierarchical. Cumulative evidence suggests that the protein universe is continuous. As a result, conventional sequence homology search methods may not be able to detect novel structural, functional and evolutionary relationships between proteins from weak and noisy sequence signals. To overcome the limitations of existing similarity search methods, we propose a new algorithmic framework—Enrichment of Network Topological Similarity (ENTS)—to improve the performance of large-scale similarity searches in bioinformatics. Results: We apply ENTS to a challenging unsolved problem: protein fold recognition. Our rigorous benchmark studies demonstrate that ENTS considerably outperforms state-of-the-art methods. As the concept of ENTS can be applied to any similarity metric, it may provide a general framework for similarity search on any set of biological entities, given their representation as a network. Availability and implementation: Source code freely available upon request. Contact: lxie@iscb.org PMID:25717198

  17. Applying a fuzzy-set-based method for robust estimation of coupling loss factors

    NASA Astrophysics Data System (ADS)

    Nunes, R. F.; Ahmida, K. M.; Arruda, J. R. F.

    2007-10-01

    Finite element models have been used by many authors to provide accurate estimations of coupling loss factors. Although much progress has been achieved in this area, little attention has been paid to the influence of uncertain parameters in the finite element model used to estimate these factors. It is well known that, in the mid-frequency range, uncertainty is a major issue. In this context, a spectral element method combined with a special implementation of a fuzzy-set-based method, which is called the transformation method, is proposed as an alternative to compute coupling loss factors. The proposed technique is applied to a frame-type junction, which can consist of two beams connected at an arbitrary angle. In this context, two problems are investigated. In the first one, the influence of the confidence intervals of the coupling loss factors on the estimated energy envelopes assuming a unit power input is considered. In the other problem the influence of the envelope of the input power obtained considering the confidence intervals of the coupling loss factors is also taken into account. The estimates of the intervals are obtained by using the spectral element method combined with a fuzzy-set-based method. Results using a Monte Carlo analysis for the estimation of the coupling loss factors under the influence of uncertain parameters are shown for comparison and verification of the fuzzy method.
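
    The transformation method at the heart of this approach can be sketched in its reduced form: each fuzzy parameter is decomposed into alpha-cut intervals, the model is evaluated at every corner combination of those intervals, and the output envelope at each alpha level is the min/max over the corners (exact when the model is monotonic in each parameter). The toy coupling model below is an illustrative assumption, not the paper's spectral element model:

```python
from itertools import product

def alpha_cut(center, spread, alpha):
    """Interval of a triangular fuzzy number at membership level alpha in [0, 1]."""
    half = spread * (1.0 - alpha)
    return (center - half, center + half)

def reduced_transformation(model, fuzzy_params, alphas=(0.0, 0.5, 1.0)):
    """Output interval of `model` at each alpha level (reduced transformation method).

    fuzzy_params: list of (center, spread) triangular fuzzy numbers.
    Evaluates the model at every corner of the parameter box, which is exact
    only when the model is monotonic in each parameter.
    """
    envelopes = {}
    for a in alphas:
        cuts = [alpha_cut(c, s, a) for c, s in fuzzy_params]
        outs = [model(*corner) for corner in product(*cuts)]
        envelopes[a] = (min(outs), max(outs))
    return envelopes

# Hypothetical coupling model: fraction of power transmitted through a junction
model = lambda k1, k2: k1 / (k1 + k2)
env = reduced_transformation(model, [(2.0, 0.5), (3.0, 1.0)])
print(env)   # alpha = 1.0 collapses to the crisp value
```

    The resulting alpha-level envelopes play the role of the confidence intervals on the coupling loss factors that the paper propagates into the energy and input-power estimates.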

  18. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, these methods might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot's kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
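
    The coincidence step, rotating one frame so its characteristic line lies along the other's, reduces to finding the rotation that takes one unit direction vector onto another, which Rodrigues' formula gives in closed form. A self-contained sketch (the two direction vectors are illustrative, not the paper's measured lines):

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix R with R @ a parallel to b, via Rodrigues' formula."""
    a = np.array(a, dtype=float); a /= np.linalg.norm(a)
    b = np.array(b, dtype=float); b /= np.linalg.norm(b)
    v = np.cross(a, b)                 # rotation axis (unnormalized)
    c = float(np.dot(a, b))            # cosine of the rotation angle
    if np.isclose(c, -1.0):            # antiparallel: rotate pi about any axis orthogonal to a
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix of v
    return np.eye(3) + K + (K @ K) / (1.0 + c)

a = [1.0, 0.0, 0.0]                    # characteristic-line direction in frame A (illustrative)
b = [0.0, 1.0, 1.0]                    # the same line's direction seen in frame B (illustrative)
R = rotation_aligning(a, b)
print(np.allclose(R @ np.array(a), np.array(b) / np.sqrt(2.0)))  # True
```

    Chaining such rotations with the translation between the line base points gives the full transformation matrix, with no point-cloud system of equations to invert.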

  19. Evaluation of Methods for Sampling, Recovery, and Enumeration of Bacteria Applied to the Phylloplane

    PubMed Central

    Donegan, Katherine; Matyac, Carl; Seidler, Ramon; Porteous, Arlene

    1991-01-01

    Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery methods that are commonly used or that have potential applications. In these experiments, Erwinia herbicola and Enterobacter cloacae were applied in greenhouses to Blue Lake bush beans (Phaseolus vulgaris) and Cayuse oats (Avena sativa). Sampling indicated that the variance in bacterial counts among leaves increased over time and that this increase caused an overestimation of the mean population size by bulk leaf samples relative to single leaf samples. An increase in the number of leaves in a bulk sample, above a minimum number, did not significantly reduce the variance between samples. Experiments evaluating recovery methods demonstrated that recovery of bacteria from leaves was significantly better with stomacher blending, than with blending, sonication, or washing and that the recovery efficiency was constant over a range of sample inoculum densities. Delayed processing of leaf samples, by storage in a freezer, did not significantly lower survival and recovery of microorganisms when storage was short term and leaves were not stored in buffer. The drop plate technique for enumeration of bacteria did not significantly differ from the spread plate method. Results of these sampling, recovery, and enumeration experiments indicate a need for increased development and standardization of methods used by researchers as there are significant differences among, and also important limitations to, some of the methods used. PMID:16348404

  20. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, these methods might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot's kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  1. Cork-resin ablative insulation for complex surfaces and method for applying the same

    NASA Technical Reports Server (NTRS)

    Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)

    1980-01-01

    A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.

  2. Method for applying a photoresist layer to a substrate having a preexisting topology

    DOEpatents

    Morales, Alfredo M.; Gonzales, Marcela

    2004-01-20

    The present invention describes a method for preventing a photoresist layer from delaminating, or peeling away, from the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent media used to formulate the photoresist layer above the surface of the coated substrate as the applied photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer. By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.

  3. The effects of subsampling gene trees on coalescent methods applied to ancient divergences.

    PubMed

    Simmons, Mark P; Sloan, Daniel B; Gatesy, John

    2016-04-01

    Gene-tree-estimation error is a major concern for coalescent methods of phylogenetic inference. We sampled eight empirical studies of ancient lineages with diverse numbers of taxa and genes for which the original authors applied one or more coalescent methods. We found that the average pairwise congruence among gene trees varied greatly both between studies and also often within a study. We recommend that presenting plots of pairwise congruence among gene trees in a dataset be treated as a standard practice for empirical coalescent studies so that readers can readily assess the extent and distribution of incongruence among gene trees. ASTRAL-based coalescent analyses generally outperformed MP-EST and STAR with respect to both internal consistency (congruence between analyses of subsamples of genes with the complete dataset of all genes) and congruence with the concatenation-based topology. We evaluated the approach of subsampling gene trees that are, on average, more congruent with other gene trees as a method to reduce artifacts caused by gene-tree-estimation errors on coalescent analyses. We suggest that this method is well suited to testing whether gene-tree-estimation error is a primary cause of incongruence between concatenation- and coalescent-based results, to reconciling conflicting phylogenetic results based on different coalescent methods, and to identifying genes affected by artifacts that may then be targeted for reciprocal illumination. We provide scripts that automate the process of calculating pairwise gene-tree incongruence and subsampling trees while accounting for differential taxon sampling among genes. Finally, we assert that multiple tree-search replicates should be implemented as a standard practice for empirical coalescent studies that apply MP-EST. PMID:26768112
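
    The subsampling step itself is straightforward once a pairwise incongruence matrix (e.g., normalized Robinson-Foulds distances) has been computed: rank each gene tree by its mean distance to all the others and retain the most congruent fraction. A sketch under that assumption (the distance matrix below is a toy example; the authors' scripts additionally account for differential taxon sampling among genes, which this does not):

```python
import numpy as np

def most_congruent(dist, keep_frac=0.5):
    """Indices of the gene trees to keep, ranked by mean pairwise incongruence.

    dist: symmetric (n x n) matrix of pairwise tree distances
          (e.g. normalized Robinson-Foulds), with zeros on the diagonal.
    """
    n = dist.shape[0]
    mean_d = dist.sum(axis=1) / (n - 1)   # mean distance to the other trees
    k = max(1, int(round(keep_frac * n)))
    return np.argsort(mean_d)[:k]         # lowest mean incongruence first

# Toy example: tree 2 conflicts strongly with the rest
D = np.array([[0.0, 0.2, 0.9, 0.1],
              [0.2, 0.0, 0.8, 0.3],
              [0.9, 0.8, 0.0, 0.9],
              [0.1, 0.3, 0.9, 0.0]])
print(most_congruent(D, keep_frac=0.5))   # keeps the two most congruent trees
```

    Re-running the coalescent analysis on the retained subset and comparing with the full-data result is the internal-consistency check the authors advocate.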

  4. What methods are used to apply positive deviance within healthcare organisations? A systematic review

    PubMed Central

    Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca

    2016-01-01

    Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency

  5. A New Evaluation Method for Groundwater Quality Applied in Guangzhou Region, China: Using Fuzzy Method Combining Toxicity Index.

    PubMed

    Liu, Fan; Huang, Guanxing; Sun, Jichao; Jing, Jihong; Zhang, Ying

    2016-02-01

    Groundwater quality assessment is essential from a drinking-water safety point of view. In this paper, a new evaluation method called toxicity combined fuzzy evaluation (TCFE) is put forward, based on the fuzzy synthetic evaluation (FSE) method and toxicity data from the Agency for Toxic Substances and Disease Registry. A comparison of TCFE and FSE for groundwater quality assessment in the Guangzhou region has also been carried out. The assessment results are divided into 5 water quality levels; level I is the best, while level V is the worst. Results indicate that levels I, II, and III together accounted for 69.33% of samples under the FSE method; this proportion rose to 81.33% under the TCFE method. In addition, 66.7% of level IV samples in the FSE method became level I (50%), level II (25%), and level III (25%) in the TCFE method, and 29.41% of level V samples became level I (50%) and level III (50%). This trend was caused by the weight change after incorporating the toxicity index. By analyzing the changes in the weights of different indicators, it could be concluded that the samples whose rating improved mainly exceeded the corresponding standards for regular indicators, while the samples whose rating deteriorated mainly exceeded the standards for toxic indicators. The comparison between the two results revealed that the TCFE method can represent the health implications of toxic indicators reasonably. As a result, the TCFE method is more scientific in view of drinking safety. PMID:26803098
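
    The FSE backbone shared by both methods is a weighted composition of membership degrees: a membership matrix R scores each indicator against the five quality levels, a weight vector w blends the indicator rows into the evaluation vector b = w · R, and the maximum-membership principle assigns the level. A minimal sketch with illustrative memberships and weights (not data from the study; TCFE would additionally inflate the weight of the toxic indicator):

```python
import numpy as np

# Membership of each indicator in the five quality levels I..V (rows: indicators)
R = np.array([[0.6, 0.3, 0.1, 0.0, 0.0],    # e.g. a regular indicator such as chloride
              [0.1, 0.5, 0.3, 0.1, 0.0],    # e.g. nitrate
              [0.0, 0.2, 0.6, 0.2, 0.0]])   # e.g. a toxic trace metal

# Indicator weights (sum to 1); TCFE re-weights these using toxicity data
w = np.array([0.3, 0.3, 0.4])

b = w @ R                       # fuzzy evaluation vector over the five levels
level = int(np.argmax(b)) + 1   # maximum-membership principle
print(b, "-> level", level)
```

    Because each row of R sums to one and the weights sum to one, b is itself a membership distribution over the levels, which is what makes the re-weighting in TCFE shift samples between levels.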

  6. DC-to-DC switching converter

    NASA Technical Reports Server (NTRS)

    Cuk, Slobodan M. (Inventor); Middlebrook, Robert D. (Inventor)

    1980-01-01

    A dc-to-dc converter having nonpulsating input and output current uses two inductances, one in series with the input source, the other in series with the output load. An electrical energy transferring device with storage, namely a storage capacitance, is used with suitable switching means between the inductances to accomplish DC level conversion. For isolation between the source and load, the capacitance may be divided into two capacitors coupled by a transformer, and for reducing ripple, the inductances may be coupled. With proper design of the coupling between the inductances, the current ripple can be reduced to zero at either the input or the output, or the reduction achievable in that way may be divided between the input and output.
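
    The converter described here (the Ćuk topology) has, under the usual idealizations, the same steady-state conversion ratio as the buck-boost: |Vout/Vin| = D / (1 - D), where D is the switch duty cycle, with the output polarity inverted. A sketch assuming ideal, lossless components in continuous conduction mode:

```python
def cuk_gain(duty):
    """Ideal steady-state voltage ratio |Vout/Vin| of a Cuk converter.

    Assumes continuous conduction mode and lossless components; the
    actual output polarity is inverted relative to the input.
    """
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return duty / (1.0 - duty)

for d in (0.25, 0.5, 0.75):
    print(f"D = {d}: |Vout| = {cuk_gain(d):.2f} x Vin")
```

    D below 0.5 steps the voltage down, D above 0.5 steps it up; the coupled-inductor ripple cancellation described in the abstract does not change this ideal ratio.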

  7. Development of an in situ calibration method for current-to-voltage converters for high-accuracy SI-traceable low dc current measurements

    NASA Astrophysics Data System (ADS)

    Eppeldauer, George P.; Yoon, Howard W.; Jarrett, Dean G.; Larason, Thomas C.

    2013-10-01

    For photocurrent measurements with low uncertainties, wide-dynamic-range reference current-to-voltage converters and a new converter calibration method have been developed at the National Institute of Standards and Technology (NIST). The high-value feedback resistors of a reference converter were calibrated in situ on a high-resistivity printed circuit board placed in an electrically shielded box, electrically isolated from the operational amplifier using jumpers. The feedback resistors, prior to their installation, were characterized, selected and heat treated. The circuit board was cleaned with solvents, and the in situ resistors were calibrated using measurement systems for 10 kΩ to 10 GΩ standard resistors. We demonstrate that dc currents from 1 nA to 100 µA can be measured with uncertainties of 55 × 10^-6 (k = 2) or lower, which are lower than those of any commercial device by factors of 10 to 30 at the same current setting. The internal (NIST) validations of the reference converter are described.
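
    The measurement principle is Ohm's law applied to the feedback element: the unknown current through the converter's feedback resistor produces the output voltage, so I = Vout / Rf, and the relative standard uncertainties of the voltage and resistance calibrations combine in quadrature when they are uncorrelated. A sketch with illustrative uncertainty components, not NIST's actual budget:

```python
import math

def current_from_converter(v_out, r_feedback):
    """Input current of an ideal current-to-voltage (transimpedance) converter."""
    return v_out / r_feedback

def combined_rel_uncertainty(*rel_components):
    """Root-sum-square combination of uncorrelated relative standard uncertainties."""
    return math.sqrt(sum(u * u for u in rel_components))

i = current_from_converter(1.0, 1.0e9)          # 1 V across a 1 GOhm feedback resistor -> 1 nA
u_c = combined_rel_uncertainty(10e-6, 25e-6)    # illustrative voltmeter and resistor components
print(f"I = {i:.3e} A, expanded uncertainty (k=2) = {2 * u_c * 1e6:.1f} x 10^-6")
```

    With these assumed components the expanded (k = 2) relative uncertainty lands in the same few-times-10^-5 regime the abstract reports, which is why the in situ resistor calibration dominates the achievable accuracy.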

  8. Radiation Effects on DC-DC Converters

    NASA Technical Reports Server (NTRS)

    Zhang, De-Xin; AbdulMazid, M. D.; Attia, John O.; Kankam, Mark D. (Technical Monitor)

    2001-01-01

    In this work, several DC-DC converters were designed and built: Buck, Buck-Boost, Cuk, Flyback, and full-bridge zero-voltage-switched (FB-ZVS). The total ionizing dose (TID) radiation and single event effects on the converters were investigated. The experimental results of the TID tests show that the output voltages of the Buck, Buck-Boost, Cuk, and Flyback converters increase as the total dose increases when the power MOSFET IRF250 is used as the switching transistor. The change in output voltage with total dose is highest for the Buck converter and lowest for the Flyback converter. The trend of increasing output voltage with total dose in the present work agrees with the literature, and the trends of the experimental results also agree with those obtained from PSPICE simulation. The FB-ZVS converter with the IRF250 power MOSFET did not show a significant change of output voltage with total dose, and the converter with the FSF254R4 radiation-hardened power MOSFET likewise showed no significant change. These results were confirmed by PSPICE simulation, which showed that the FB-ZVS converter with IRF250 power MOSFETs was not affected by the increase in total ionizing dose. Single Event Effects (SEE) radiation tests were also performed on FB-ZVS converters. For the FB-ZVS converter with the IRF250 power MOSFET irradiated with krypton ions of 150 MeV energy and LET of 41.3 MeV·cm²/mg, the output voltage increased with increasing fluence; for krypton ions of 600 MeV energy and LET of 33.65 MeV·cm²/mg, two out of four transistors of the converter were permanently damaged. The dc-dc converter with FSF254R4 radiation-hardened power MOSFETs did not show a significant change in output voltage with fluence while being irradiated by krypton ions with energy of 1.20 GeV and LET of 25

  9. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    PubMed Central

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  10. Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.

    2009-01-01

    The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
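A super-convergence rate like 2p+1 is typically verified by computing an observed order of accuracy from errors on two successively refined grids; a generic sketch (the error values below are illustrative, not the paper's data):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence order from errors on two grids whose
    spacing differs by `refinement` (e.g. h and h/2):
    order = log(e_coarse / e_fine) / log(refinement)."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# For p = 2, super-convergence predicts order 2p + 1 = 5, so halving h
# should shrink the error by roughly 2^5 = 32.
print(observed_order(3.2e-4, 1.0e-5))
```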

  11. SANS contrast variation method applied in experiments on ferrofluids at MURN instrument of IBR-2 reactor

    NASA Astrophysics Data System (ADS)

    Balasoiu, Maria; Kuklin, Alexander

    2012-03-01

    Separate determination of the nuclear and magnetic contributions to the scattering intensity by means of a contrast variation method, applied in small angle neutron scattering experiments with nonpolarized neutrons on ferrofluids in the early 1990s at the MURN instrument, is reviewed. The nuclear scattering contribution yields the colloidal particle dimensions, the surfactant shell structure and the degree of solvent penetration into the macromolecular layer. The magnetic scattering part is consistent with models in which the particle surface is assumed to have a nonmagnetic layer. Details of the experimental "Grabcev method" for obtaining separate nuclear and magnetic contributions to the small angle neutron scattering intensity of unpolarized neutrons are emphasized for the case of a high quality, ultrastable benzene-based ferrofluid with magnetite nanoparticles.

  12. Proposal of a checking parameter in the simulated annealing method applied to the spin glass model

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Chiaki

    2016-02-01

    We propose a checking parameter, based on the breaking of the Jarzynski equality, for the simulated annealing method using the Monte Carlo method. This parameter makes it possible to detect whether the system has reached the global minimum of the free energy under gradual temperature reduction, and thus to investigate the efficiency of annealing schedules. We apply this parameter to the ±J Ising spin glass model; its application to the Gaussian Ising spin glass model is also mentioned. We argue that the breaking of the Jarzynski equality is induced by the system being trapped in local minima of the free energy. By performing Monte Carlo simulations of the ±J Ising spin glass model and a glassy spin model proposed by Newman and Moore, we demonstrate the usefulness of this parameter.
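The Jarzynski equality states that ⟨exp(-βW)⟩ = exp(-βΔF) over repeated annealing runs, so the deviation between the two sides can serve as a checking parameter of the kind proposed: it vanishes when the equality holds and grows when trapping in local minima breaks it. A schematic sketch (the estimator form is the standard Jarzynski estimator; the function names are ours):

```python
import math

def jarzynski_free_energy(work_samples, beta):
    """Jarzynski estimator: Delta_F = -(1/beta) * ln < exp(-beta*W) >,
    averaged over repeated annealing runs."""
    n = len(work_samples)
    avg = sum(math.exp(-beta * w) for w in work_samples) / n
    return -math.log(avg) / beta

def jarzynski_check(work_samples, beta, delta_f_exact):
    """Checking parameter: deviation of the estimator from the exact
    free-energy difference; ~0 when the equality is satisfied."""
    return jarzynski_free_energy(work_samples, beta) - delta_f_exact

# Quasi-static (reversible) limit: every run performs exactly W = Delta_F,
# so the checking parameter vanishes.
print(jarzynski_check([1.5, 1.5, 1.5], beta=2.0, delta_f_exact=1.5))
```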

  13. Errors associated with standard nodal diffusion methods as applied to mixed oxide fuel problems

    SciTech Connect

    Brantley, P. S., LLNL

    1998-07-24

    The evaluation of the disposition of plutonium using light water reactors is receiving increased attention. However, mixed-oxide (MOX) fuel assemblies possess much higher absorption and fission cross- sections when compared to standard UO2 assemblies. Those properties yield very high thermal flux gradients at the interfaces between MOX and UO2 assemblies. It has already been reported that standard flux reconstruction methods (that recover the homogeneous intranodal flux shape using the converged nodal solution) yield large errors in the presence of MOX assemblies. In an accompanying paper, we compare diffusion and simplified PN calculations of a mixed-oxide benchmark problem to a reference transport calculation. In this paper, we examine the errors associated with standard nodal diffusion methods when applied to the same benchmark problem. Our results show that a large portion of the error is associated with the quadratic leakage approximation (QLA) that is commonly used in the standard nodal codes.

  14. Data processing method applying principal component analysis and spectral angle mapper for imaging spectroscopic sensors

    NASA Astrophysics Data System (ADS)

    García-Allende, P. B.; Conde, O. M.; Mirapeix, J.; Cubillas, A. M.; López-Higuera, J. M.

    2007-07-01

    A data processing method for hyperspectral images is presented. Each image contains the whole diffuse reflectance spectrum of the analyzed material for all spatial positions along a specific line of vision. The method is composed of two blocks: a data compression unit and a classification unit. Data compression is performed by means of Principal Component Analysis (PCA), and the spectral interpretation algorithm used for classification is the Spectral Angle Mapper (SAM). This classification strategy applying PCA and SAM has been successfully tested on on-line raw material characterization in the tobacco industry. In this application the desired raw material (tobacco leaves) must be discriminated from unwanted spurious materials, such as plastic, cardboard, leather, candy paper, etc. Hyperspectral images are recorded by a spectroscopic sensor consisting of a monochromatic camera and a passive Prism-Grating-Prism device. Performance results are compared with a spectral interpretation algorithm based on Artificial Neural Networks (ANN).
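The Spectral Angle Mapper classifies a spectrum by the angle it makes with each reference spectrum, which makes it insensitive to overall illumination scaling. A minimal pure-Python sketch of the SAM stage only (the PCA compression step is omitted, and the reference spectra below are invented for illustration, not the paper's data):

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra; 0 means identical shape,
    and scaling a spectrum by a constant leaves the angle unchanged."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references):
    """Assign the class whose reference spectrum makes the smallest
    spectral angle with the pixel spectrum."""
    return min(references, key=lambda label: spectral_angle(pixel, references[label]))

# Illustrative 4-band reflectance spectra (hypothetical values).
refs = {"tobacco": [0.1, 0.3, 0.6, 0.8], "plastic": [0.7, 0.7, 0.7, 0.7]}
# A pixel that is a brighter copy of the tobacco spectrum still matches it.
print(sam_classify([0.2, 0.6, 1.2, 1.6], refs))
```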

  15. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    PubMed Central

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on

  16. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    PubMed

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). 
This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
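At its core, SCM chooses nonnegative donor weights summing to one that best reproduce the treated unit's pre-intervention outcomes; the weighted donor pool then serves as the counterfactual. A toy sketch using projected gradient descent onto the simplex (the donor series and solver details are ours, not the paper's; real applications also match on covariates):

```python
def project_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum(w) = 1}."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0.0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def scm_weights(donors, treated, steps=1000, lr=0.003):
    """Projected gradient descent for min_w ||treated - sum_j w_j donor_j||^2
    with w on the simplex; `donors` is a list of pre-period outcome series."""
    k, n = len(donors), len(treated)
    w = [1.0 / k] * k
    for _ in range(steps):
        synth = [sum(w[j] * donors[j][t] for j in range(k)) for t in range(n)]
        grad = [sum(2.0 * (synth[t] - treated[t]) * donors[j][t] for t in range(n))
                for j in range(k)]
        w = project_simplex([w[j] - lr * grad[j] for j in range(k)])
    return w

# Toy pre-intervention outcome series (invented): donor 0 tracks the
# treated unit exactly, so it should receive essentially all the weight.
donors = [[5.0, 6.0, 7.0], [10.0, 2.0, 1.0], [1.0, 9.0, 2.0]]
treated = [5.0, 6.0, 7.0]
w = scm_weights(donors, treated)
print([round(x, 3) for x in w])
```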

  17. Experimental validation of applied strain sensors: importance, methods and still unsolved challenges

    NASA Astrophysics Data System (ADS)

    Habel, Wolfgang R.; Schukar, Vivien G.; Mewis, Franziska; Kohlhoff, Harald

    2013-09-01

    Fiber-optic strain sensors are increasingly used in very different technical fields. Sensors are provided with specifications defined by the manufacturer or ascertained by the interested user. If deformation sensors are to be used to evaluate the long-term behavior of safety-relevant structures or to monitor critical structure components, their performance and signal stability must be of high quality to enable reliable data recording. The measurement system must therefore be validated according to established technical rules and standards before and after its application. In some cases, not all details of the complex characteristics and performance of applied fiber-optic sensors are sufficiently understood or can be validated, owing to a lack of knowledge and of methods to check the sensors' behavior. This contribution therefore focuses on the importance of rigorous validation in avoiding a decline or even deterioration of the sensors' function. Methods for validating applied sensors are discussed, revealing weaknesses in the validation of embedded or integrated fiber-optic deformation and/or strain sensors. An outlook on the research work still to be carried out to ensure well-accepted practical use of fiber-optic sensors is given.

  18. Variational method applied to two-component Ginzburg-Landau theory

    NASA Astrophysics Data System (ADS)

    Romaguera, Antonio R. de C.; Silva, K. J. S.

    2013-09-01

    In this paper, we apply a variational method to two-component superconductors, as in the MgB2 materials, using the two-component Ginzburg-Landau (GL) theory. We expand the order parameter in a series of eigenfunctions containing one or two terms in each component. We also assume azimuthal symmetry for the set of eigenfunctions used in the mathematical procedure. The extension of the GL theory to two components leads to the quantization of the magnetic flux in fractions of ϕ₀. We consider two kinds of component interaction potentials: Γ₁|Ψ_I|²|Ψ_II|² and Γ₂(Ψ_I* Ψ_II + Ψ_I Ψ_II*). The simplicity of the method allows one to implement it in a broad range of physical systems, such as hybrid magnetic-superconducting mesoscopic systems, texturized thin films, metallic hydrogen superfluid, and mesoscopic superconductors near inhomogeneous magnetic fields, simply by replacing the vector potential by its corresponding expression. As an example, we apply our results to a disk of radius R and thickness t.

  19. Nonlinear Phenomena and Resonant Parametric Perturbation Control in QR-ZCS Buck DC-DC Converters

    NASA Astrophysics Data System (ADS)

    Hsieh, Fei-Hu; Liu, Feng-Shao; Hsieh, Hui-Chang

    The purpose of this study is to investigate the chaotic phenomena in current-mode controlled quasi-resonant zero-current-switching (QR-ZCS) DC-DC buck converters and to control the chaos by the resonant parametric perturbation method. First, MATLAB/SIMULINK is used to derive a mathematical model for QR-ZCS DC-DC buck converters and to simulate them, observing the waveforms of the output voltage, the inductor current, and the phase-plane portraits from period-doubling bifurcation to chaos as the load resistance is varied. Second, applying resonant parametric perturbation control to the QR-ZCS buck DC-DC converter, the simulation results show that the converter is brought from the chaotic state into the stable period-1 state and that the ripple amplitude of the converter under chaos is reduced, verifying the validity of the proposed method.

  20. Integrated Geophysical Methods Applied to Geotechnical and Geohazard Engineering: From Qualitative to Quantitative Analysis and Interpretation

    NASA Astrophysics Data System (ADS)

    Hayashi, K.

    2014-12-01

    Engineers need more quantitative information. In order to apply geophysical methods to engineering design works, quantitative interpretation is very important. The presentation introduces several case studies from different countries around the world (Fig. 2) from the integrated and quantitative points of view.

  1. Regulation of a lightweight high efficiency capacitor diode voltage multiplier dc-dc converter

    NASA Technical Reports Server (NTRS)

    Harrigill, W. T., Jr.; Myers, I. T.

    1976-01-01

    A method for the regulation of a capacitor diode voltage multiplier dc-dc converter has been developed which has only minor penalties in weight and efficiency. An auxiliary inductor, which handles only a fraction of the total power, is used to control the output voltage through a pulse-width modulation method in a buck-boost circuit.
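In an ideal buck-boost stage of this kind, the output magnitude relates to the duty cycle as |V_out| = D/(1-D)·V_in, so the regulator must command D = |V_out|/(V_in + |V_out|). A small sketch (idealized continuous-conduction relation; the function name is ours):

```python
def required_duty(v_in, v_out_target):
    """Duty cycle an ideal buck-boost stage needs so that
    |V_out| = D/(1-D) * V_in, i.e. D = |V_out| / (V_in + |V_out|)."""
    v_out = abs(v_out_target)
    return v_out / (v_in + v_out)

# Stepping 28 V down to 14 V needs D = 1/3; boosting to 56 V needs D = 2/3.
print(required_duty(28.0, 14.0))
print(required_duty(28.0, 56.0))
```

A pulse-width-modulation regulator closes the loop by nudging D toward this value as V_in and the load vary.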

  2. Regulation of a lightweight high efficiency capacitor diode voltage multiplier dc-dc converter

    NASA Technical Reports Server (NTRS)

    Harrigill, W. T., Jr.; Myers, I. T.

    1976-01-01

    A method for the regulation of a capacitor diode voltage multiplier dc-dc converter has been developed which has only minor penalties in weight and efficiency. An auxiliary inductor, which handles only a fraction of the total power, is used to control the output voltage through a pulse-width modulation method in a buck-boost circuit.

  3. A ``local observables'' method for wave mechanics applied to atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Bowman, Peter J.

    2008-12-01

    An alternative method of deriving the values of the observables of atomic systems is presented. Rather than using operators and eigenvalues, the local observables method uses the continuity equation together with current densities derived from wave functions that are solutions of the Dirac or Pauli equation. The method is applied to atomic hydrogen using the usual language of quantum mechanics rather than that of geometric algebra, with which the method is often associated. The picture of the atom that emerges is one in which the electron density as a whole rotates about a central axis. The results challenge some assumptions of conventional quantum mechanics. Electron spin is shown to be a property of the dynamical motion of the electron and not an intrinsic property of the electron; the ground state of hydrogen is shown to have an orbital angular momentum of ℏ; and excited states are shown to have angular momenta that differ from the eigenvalues of the usual quantum mechanical operators. The uncertainty relations are found not to be applicable to the orthogonal components of the angular momentum. No doubled electron-spin gyromagnetic ratio is required to account for the observed magnetic moments, and the behavior of the atom in a magnetic field is described entirely in kinetic terms.

  4. A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.

    PubMed

    Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi

    2016-03-01

    Diabetic Retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease that, by damaging the retina, ultimately results in blindness. Since Microaneurysms (MAs) appear as a first sign of DR in the retina, early detection of this lesion is an essential step in the automatic detection of DR. In this paper, a new MA detection method is presented. The proposed approach consists of two main steps. In the first step, MA candidates are detected based on local application of a Markov random field (MRF) model. In the second step, these candidate regions are categorized to identify the correct MAs using 23 features based on shape, intensity, and the Gaussian distribution of MA intensity. The proposed method is evaluated on DIARETDB1, a standard and publicly available database in this field, resulting in an average sensitivity of 0.82 for a confidence level of 75 as ground truth. The results show that our method is able to detect low-contrast MAs against the background while its performance remains comparable to other state-of-the-art approaches. PMID:26779642

  5. Applying an optical space-time coding method to enhance light scattering signals in microfluidic devices.

    PubMed

    Mei, Zhe; Wu, Tsung-Feng; Pion-Tonachini, Luca; Qiao, Wen; Zhao, Chao; Liu, Zhiwen; Lo, Yu-Hwa

    2011-09-01

    An "optical space-time coding method" was applied to microfluidic devices to detect the forward and large-angle light scattering signals for unlabelled bead and cell detection. Because of the enhanced sensitivity of this method, silicon PIN photoreceivers can be used to detect both forward scattering (FS) and large-angle (45-60°) scattering (LAS) signals, the latter of which has traditionally been detected by a photomultiplier tube. This method yields significant improvements in coefficients of variation (CV), producing CVs of 3.95% to 10.05% for FS and 7.97% to 26.12% for LAS with 15 μm, 10 μm, and 5 μm beads. These are among the best values ever demonstrated with microfluidic devices. The optical space-time coding method also enables us to measure the speed and position of each particle, producing valuable information for the design and assessment of microfluidic lab-on-a-chip devices such as flow cytometers and complete blood count devices. PMID:21915241
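The coefficients of variation quoted above are the spread of the scattering-signal amplitudes expressed as a percentage of their mean; a minimal sketch (the sample amplitudes below are invented for illustration):

```python
import math

def coefficient_of_variation(samples):
    """CV (%) = 100 * sample standard deviation / mean."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample (n-1) variance, as is usual for measured populations.
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return 100.0 * math.sqrt(var) / mean

# Tightly clustered forward-scattering amplitudes give a small CV.
print(coefficient_of_variation([100.0, 104.0, 96.0, 102.0, 98.0]))
```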

  6. KRON's Method Applied to the Study of Electromagnetic Interference Occurring in Aerospace Systems

    NASA Astrophysics Data System (ADS)

    Leman, S.; Reineix, A.; Hoeppe, F.; Poiré, Y.; Mahoudi, M.; Démoulin, B.; Üstüner, F.; Rodriquez, V. P.

    2012-05-01

    In this paper, spacecraft and aircraft mock-ups are used to assess the performance of KRON-based tools applied to the simulation of large EMC systems. These tools aim to assist engineers in the design phase of complex systems by effectively evaluating the EM disturbances between antennas, electronic equipment, and Portable Electronic Devices (PEDs) found in large systems. We use a topological analysis of the system to model independent sub-volumes such as antennas, cables, equipment, PEDs and cavity walls. Each of these sub-volumes is modelled by an appropriate method, which can be based on, for example, analytical expressions, transmission line theory or other numerical tools such as the full-wave FDFD method. This representation, associated with the electrical tensorial method of G. Kron, leads to reasonable simulation times (typically a few minutes) and accurate results. Because the equivalent sub-models are built separately, the main originality of this method is that each sub-volume can easily be replaced by another without rebuilding the entire system. Comparisons between measurements and simulations will also be presented.

  7. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130

  8. High sensitivity ancilla assisted nanoscale DC magnetometry

    NASA Astrophysics Data System (ADS)

    Liu, Yixiang; Ajoy, Ashok; Marseglia, Luca; Saha, Kasturi; Cappellaro, Paola

    2016-05-01

    Sensing slowly varying magnetic fields is particularly relevant to many real-world scenarios, where the signals of interest are DC or close to static. Nitrogen Vacancy (NV) centers in diamond are a versatile platform for such DC magnetometry on nanometer length scales. Using NV centers, the standard technique for measuring DC magnetic fields is the Ramsey protocol, where sensitivities can reach better than 1 μT/√Hz but are limited by the sensor's fast dephasing time T2*. In this work we instead present a method of sensing DC magnetic fields that is intrinsically limited by the much longer T2 coherence time. The method exploits a strongly coupled ancillary nuclear spin to achieve high DC field sensitivities potentially exceeding those of the Ramsey method. In addition, through this method we sense the perpendicular component of the DC magnetic field, which in conjunction with the parallel component sensed by the Ramsey method provides a valuable tool for vector DC magnetometry at the nanoscale.
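In the standard Ramsey protocol mentioned above, a DC field B along the NV axis accumulates a phase φ = γBτ during the free-precession time τ, so the field is recovered as B = φ/(γτ). A schematic sketch (function name ours; γ is the NV electron gyromagnetic ratio, approximately 2π × 28 GHz/T):

```python
import math

# NV electron gyromagnetic ratio, rad s^-1 T^-1 (approx. 2*pi * 28 GHz/T).
GAMMA_NV = 2 * math.pi * 28.0e9

def ramsey_field(phase_rad, tau_s, gamma=GAMMA_NV):
    """DC magnetic field inferred from the Ramsey phase phi = gamma*B*tau.
    Unambiguous only while |phi| < pi, and tau is bounded by T2*."""
    return phase_rad / (gamma * tau_s)

# A pi/2 phase accumulated over tau = 1 microsecond corresponds to a few uT.
b = ramsey_field(math.pi / 2, 1e-6)
print(b * 1e6, "uT")
```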

  9. A Method for Selecting Structure-switching Aptamers Applied to a Colorimetric Gold Nanoparticle Assay

    PubMed Central

    Martin, Jennifer A.; Smith, Joshua E.; Warren, Mercedes; Chávez, Jorge L.; Hagen, Joshua A.; Kelley-Loughnane, Nancy

    2015-01-01

    Small molecules provide rich targets for biosensing applications due to their physiological implications as biomarkers of various aspects of human health and performance. Nucleic acid aptamers have been increasingly applied as recognition elements on biosensor platforms, but selecting aptamers toward small molecule targets requires special design considerations. This work describes the modifications and critical steps of a method designed to select structure-switching aptamers to small molecule targets. Binding sequences from a DNA library hybridized to complementary DNA capture probes on magnetic beads are separated from nonbinders via a target-induced change in conformation. This method is advantageous because sequences binding the support matrix (beads) will not be further amplified, and it does not require immobilization of the target molecule. However, the melting temperature of the capture probe and library is kept at or slightly above room temperature, such that sequences that dehybridize based on thermodynamics will also be present in the supernatant solution. This effectively limits the partitioning efficiency (the ability to separate target-binding sequences from nonbinders), and therefore many selection rounds are required to remove background sequences. The reported method differs from previous structure-switching aptamer selections through its implementation of negative selection steps, simplified enrichment monitoring, and extension of the length of the capture probe following selection enrichment to provide enhanced stringency. The selected structure-switching aptamers are advantageous in a gold nanoparticle assay platform that reports the presence of a target molecule through the conformational change of the aptamer. The gold nanoparticle assay was applied because it provides a simple, rapid colorimetric readout that is beneficial in a clinical or deployed environment. Design and optimization considerations are presented for the assay as proof-of-principle work in buffer to

  10. DISCO-SCA and Properly Applied GSVD as Swinging Methods to Find Common and Distinctive Processes

    PubMed Central

    Van Deun, Katrijn; Van Mechelen, Iven; Thorrez, Lieven; Schouteden, Martijn; De Moor, Bart; van der Werf, Mariët J.; De Lathauwer, Lieven; Smilde, Age K.; Kiers, Henk A. L.

    2012-01-01

    Background In systems biology it is common to obtain for the same set of biological entities information from multiple sources. Examples include expression data for the same set of orthologous genes screened in different organisms and data on the same set of culture samples obtained with different high-throughput techniques. A major challenge is to find the important biological processes underlying the data and to disentangle therein processes common to all data sources and processes distinctive for a specific source. Recently, two promising simultaneous data integration methods have been proposed to attain this goal, namely generalized singular value decomposition (GSVD) and simultaneous component analysis with rotation to common and distinctive components (DISCO-SCA). Results Both theoretical analyses and applications to biologically relevant data show that: (1) straightforward applications of GSVD yield unsatisfactory results, (2) DISCO-SCA performs well, (3) provided proper pre-processing and algorithmic adaptations, GSVD reaches a performance level similar to that of DISCO-SCA, and (4) DISCO-SCA is directly generalizable to more than two data sources. The biological relevance of DISCO-SCA is illustrated with two applications. First, in a setting of comparative genomics, it is shown that DISCO-SCA recovers a common theme of cell cycle progression and a yeast-specific response to pheromones. The biological annotation was obtained by applying Gene Set Enrichment Analysis in an appropriate way. Second, in an application of DISCO-SCA to metabolomics data for Escherichia coli obtained with two different chemical analysis platforms, it is illustrated that the metabolites involved in some of the biological processes underlying the data are detected by one of the two platforms only; therefore, platforms for microbial metabolomics should be tailored to the biological question. Conclusions Both DISCO-SCA and properly applied GSVD are promising integrative methods for

  11. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an N-by-1 or 1-by-N array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method, which we call "time-frequency moments singular value decomposition" (TFM-SVD). In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. Our results indicate that the performance of a combined system including this transform and classifiers is comparable with that obtained using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method together with artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering, looking for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
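    The core idea of TFM-SVD, replacing the single SV of a one-dimensional signal with the SVs of a fixed-structure matrix built from time- and frequency-domain statistics, can be sketched as follows; the exact moments and matrix layout used in the paper may differ.

```python
import numpy as np

def tfm_svd_features(x):
    """Sketch of the TFM-SVD idea: pack statistical moments of the time
    series and of its spectrum into a fixed-structure matrix and return its
    singular values as features (the paper's exact layout may differ)."""
    def moments(v):
        m, s = v.mean(), v.std()
        z = (v - m) / (s + 1e-12)
        return [m, s, np.mean(z**3), np.mean(z**4)]  # mean, std, skew, kurtosis
    spec = np.abs(np.fft.rfft(x))          # magnitude spectrum
    M = np.array([moments(x), moments(spec)])   # 2x4 fixed-structure matrix
    return np.linalg.svd(M, compute_uv=False)   # two SVs instead of one

t = np.linspace(0, 1, 500, endpoint=False)
feats = tfm_svd_features(np.sin(2 * np.pi * 7 * t))
print(feats)   # a short feature vector usable as classifier input
```

    The feature vector can then feed a clustering method or an ANN classifier, as in the BCG application.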

  12. Evaluation of cleaning methods applied in home environments after renovation and remodeling activities.

    PubMed

    Yiin, Lih-Ming; Lu, Shou-En; Sannoh, Sulaiman; Lim, Benjamin S; Rhoads, George G

    2004-10-01

    We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R&R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD), in the 1995 "Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing," using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R&R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P < 0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P < 0.001) and marginally on windowsills (P = 0.077). Such relations were different between the two cleaning methods significantly on floors (P < 0.001) and marginally on windowsills (P = 0.066), with the TSP/HEPA method being favored for higher baseline levels and the non-TSP/non-HEPA method for lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend

  13. Applying the seismic interferometry method to vertical seismic profile data using tunnel excavation noise as source

    NASA Astrophysics Data System (ADS)

    Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos

    2013-04-01

    In the frame of research conducted to develop efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provided improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities were characterized as a source of seismic signal and used as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wave field generated by tunnelling (the tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one per receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflective wave field was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission
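    The interferometric step described above, cross-correlating each geophone trace with a pilot trace recorded near the tunnel face so that the unknown noise source collapses into a virtual shot record, can be sketched on synthetic data. The delays, noise levels and sampling setup below are illustrative, not the survey's actual geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
src = rng.normal(size=n)      # continuous tunnelling noise (unknown wavelet)

# Traces at 3 borehole geophones: the same noise delayed by the traveltime
# from the tunnel face to each receiver, plus incoherent noise.
delays = [40, 55, 75]         # samples (illustrative)
traces = np.array([np.roll(src, d) + 0.3 * rng.normal(size=n) for d in delays])
pilot = src + 0.3 * rng.normal(size=n)   # pilot recorded near the tunnel face

# Interferometry step: cross-correlation with the pilot turns the noise
# into a band-limited virtual wavelet whose lag is the relative traveltime.
virtual_shot = [np.correlate(tr, pilot, mode="full") for tr in traces]
lags = np.array([np.argmax(v) - (n - 1) for v in virtual_shot])
print(lags)   # recovered traveltimes, in samples
```

    In the real survey the resulting virtual shot gathers are then processed like a conventional reflection seismic dataset.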

  14. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    PubMed

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is repeated 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
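    The block-resampling step can be sketched on a toy point survey; the landscape size, block dimensions and animal locations below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy survey: animal locations on a 100 x 100 landscape, denser in the east.
x = np.concatenate([rng.uniform(0, 50, 150), rng.uniform(50, 100, 450)])
y = rng.uniform(0, 100, 600)

def resample_pairs(n_blocks=100, w=20.0, h=10.0):
    """Sketch of the resampling step: drop a w x h block at a random
    position, split it into two adjacent equal sub-blocks, and count
    animals in each (these paired counts are what the isodar relates)."""
    pairs = []
    for _ in range(n_blocks):
        x0 = rng.uniform(0, 100 - w)
        y0 = rng.uniform(0, 100 - h)
        left = np.sum((x >= x0) & (x < x0 + w / 2) & (y >= y0) & (y < y0 + h))
        right = np.sum((x >= x0 + w / 2) & (x < x0 + w) & (y >= y0) & (y < y0 + h))
        pairs.append((left, right))
    return np.array(pairs)

pairs = resample_pairs()
print(pairs.shape)   # 100 paired abundance estimates
```

    In the full method, habitat covariates are also extracted per sub-block, so that different functional forms of the isodar can be fitted to the paired counts.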

  15. Resampling Method for Applying Density-Dependent Habitat Selection Theory to Wildlife Surveys

    PubMed Central

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is repeated 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large

  16. Applying the Taguchi method to optimize sumatriptan succinate niosomes as drug carriers for skin delivery.

    PubMed

    González-Rodríguez, Maria Luisa; Mouram, Imane; Cózar-Bernal, Ma Jose; Villasmil, Sheila; Rabasco, Antonio M

    2012-10-01

    Niosomes formulated from different nonionic surfactants (Span® 60, Brij® 72, Span® 80, or Eumulgin® B 2) with cholesterol (CH) at molar ratios of 3:1 or 4:1 with respect to surfactant were prepared with different sumatriptan amounts (10 and 15 mg) and stearylamine (SA). The thin-film hydration method was employed to produce the vesicles, and the time allowed for hydrating the lipid film (1 or 24 h) was introduced as a variable. These factors were selected as variables and their levels were introduced into two L18 orthogonal arrays. The aim was to optimize the manufacturing conditions by applying the Taguchi methodology. Response variables were vesicle size, zeta potential (Z), and drug entrapment. From the Taguchi analysis, drug concentration and hydration time were the parameters with the greatest influence on size, with the niosomes made with Span® 80 being the smallest vesicles. The presence of SA in the vesicles had a relevant influence on Z values. All the factors except the surfactant-CH ratio had an influence on encapsulation. Formulations were optimized by applying the marginal means methodology. The results showed a good correlation between the mean and signal-to-noise ratio parameters, indicating the feasibility of the robust methodology for optimizing this formulation. Also, the extrusion process exerted a positive influence on drug entrapment. PMID:22806266
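    The Taguchi analysis step, computing a larger-is-better signal-to-noise ratio per run and comparing marginal means across factor levels, can be sketched as follows. The responses and the factor assignment below are invented for illustration, not the paper's L18 data.

```python
import numpy as np

# Hypothetical entrapment efficiencies (%) for 6 runs, with a two-level
# factor A (hydration time: 1 h vs 24 h) varied across runs.
runs = {
    # run: (level of factor A, [replicate responses])
    1: (1, [52.1, 50.8]), 2: (1, [47.5, 48.2]), 3: (1, [55.0, 54.1]),
    4: (2, [61.3, 60.2]), 5: (2, [58.9, 59.6]), 6: (2, [64.4, 63.0]),
}

def sn_larger_is_better(y):
    """Taguchi larger-is-better S/N ratio: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Marginal means of S/N per factor level; for a larger-is-better response
# the level with the higher mean S/N is preferred.
sn = {run: sn_larger_is_better(y) for run, (_, y) in runs.items()}
for level in (1, 2):
    m = np.mean([sn[r] for r, (lvl, _) in runs.items() if lvl == level])
    print(f"hydration level {level}: mean S/N = {m:.2f} dB")
```

    With a full L18 array the same marginal-means comparison is done for every factor, and the best level of each factor defines the optimized formulation.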

  17. Combustion reaction kinetics of guarana seed residue applying isoconversional methods and consecutive reaction scheme.

    PubMed

    Lopes, Fernanda Cristina Rezende; Tannous, Katia; Rueda-Ordóñez, Yesid Javier

    2016-11-01

    This work studies the decomposition kinetics of guarana seed residue using a thermogravimetric analyzer under a synthetic air atmosphere, applying heating rates of 5, 10, and 15°C/min from room temperature to 900°C. Three thermal decomposition stages were identified: dehydration (25.1-160°C), oxidative pyrolysis (240-370°C), and combustion (350-650°C). The activation energies, reaction model, and pre-exponential factor were determined through four isoconversional methods, master plots, and linearization of the conversion rate equation, respectively. A scheme of two consecutive reactions was applied, validating the kinetic parameters of the first-order reaction and two-dimensional diffusion models for the oxidative pyrolysis stage (149.57 kJ/mol, 6.97 × 10^10 1/s) and the combustion stage (77.98 kJ/mol, 98.61 1/s), respectively. The comparison between theoretical and experimental conversion and conversion rate showed good agreement, with average deviation lower than 2%, indicating that these results could be used for modeling of guarana seed residue. PMID:27513645
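    The isoconversional idea can be illustrated with the Ozawa-Flynn-Wall linearization, one of the standard isoconversional methods: at a fixed conversion, ln β is linear in 1/T across heating rates, and the slope yields the activation energy. The sketch below recovers a known Ea from synthetic temperatures; the numbers are illustrative, not the paper's.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Ozawa-Flynn-Wall: at fixed conversion, ln(beta) = C - 1.052*Ea/(R*T),
# so Ea follows from the slope of ln(beta) vs 1/T across heating rates.
# Synthetic check: generate T at a fixed conversion from a known Ea.
Ea_true = 150e3                      # J/mol (illustrative value)
betas = np.array([5.0, 10.0, 15.0])  # heating rates, degC/min, as in the study
C = 30.0                             # arbitrary intercept
T = 1.052 * Ea_true / (R * (C - np.log(betas)))   # K, at the fixed conversion

slope, intercept = np.polyfit(1.0 / T, np.log(betas), 1)
Ea_est = -slope * R / 1.052          # J/mol
print(f"recovered Ea = {Ea_est / 1e3:.1f} kJ/mol")
```

    Repeating the fit at many conversion levels gives the activation-energy profile that isoconversional analysis reports.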

  18. A new mechanical characterization method for microactuators applied to shape memory films

    SciTech Connect

    Ackler, H D; Krulevitch, P; Ramsey, P B; Seward, K P

    1999-03-01

    We present a new technique for the mechanical characterization of microactuators and apply it to shape memory alloy (SMA) thin films. A test instrument was designed which utilizes a spring-loaded transducer to measure displacements with a resolution of 1.5 µm and forces with a resolution of 0.2 mN. Employing an out-of-plane loading method for SMA thin films, a strain resolution of 30 µε and a stress resolution of 2.5 MPa were achieved. Four mm long, 2 µm thick NiTiCu ligaments suspended across open windows were bulk micromachined for use in the out-of-plane stress and strain measurements. Static analysis showed that 63% of the applied strain was recovered while ligaments were subjected to tensile stresses of 870 MPa. This corresponds to 280 µm of actual displacement against a load of 52 mN. Fatigue analysis of the ligaments showed a 33% degradation in recoverable strain (from 0.3% to 0.2%) over 2 × 10^4 cycles for an initial strain of 2.8%.

  19. A general optimization method applied to a vdW-DF functional for water

    NASA Astrophysics Data System (ADS)

    Fritz, Michelle; Soler, Jose M.; Fernandez-Serra, Marivi

    In particularly delicate systems, like liquid water, ab initio exchange and correlation functionals are simply not accurate enough for many practical applications. In these cases, fitting the functional to reference data is a sensible alternative to empirical interatomic potentials. However, a global optimization requires functional forms that depend on many parameters, and the usual trial-and-error strategy becomes cumbersome and suboptimal. We have developed a general and powerful optimization scheme called data projection onto parameter space (DPPS) and applied it to the optimization of a van der Waals density functional (vdW-DF) for water. In an arbitrarily large parameter space, DPPS solves for a vector of unknown parameters from a given set of known data, while poorly sampled subspaces are determined by the physically motivated functional shape of ab initio functionals using Bayes' theorem. We present a new GGA exchange functional that has been optimized with the DPPS method for 1-body, 2-body, and 3-body energies of water systems, and we report tests of the optimized functional on ice cohesion energies and ab initio liquid water simulations. We found that our optimized functional improves the description of both liquid water and ice when compared to other versions of GGA exchange.
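    A toy stand-in for the DPPS idea, solving for a parameter vector from limited reference data while a Bayesian prior pins the poorly sampled parameter directions to the ab initio values, can be sketched as a regularized least-squares problem. Everything below is illustrative: the actual DPPS scheme operates on functional forms, not this linear toy model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: energies are linear in the functional's unknown parameters p,
# E_i = A[i] @ p. With fewer data than parameters, some directions in
# parameter space are unconstrained; a prior pulls those toward the
# "ab initio" values p0 (a crude stand-in for the Bayesian step in DPPS).
n_data, n_par = 8, 12                 # fewer data than parameters
A = rng.normal(size=(n_data, n_par))  # sensitivities of data to parameters
p0 = rng.normal(size=n_par)           # ab initio parameter values
E_ref = A @ (p0 + 0.5 * rng.normal(size=n_par))   # reference energies

lam = 0.1   # prior strength
# minimize ||A p - E_ref||^2 + lam * ||p - p0||^2
p = np.linalg.solve(A.T @ A + lam * np.eye(n_par),
                    A.T @ E_ref + lam * p0)
print(np.linalg.norm(A @ p - E_ref))   # residual on the reference data
```

    The penalized solution fits the reference data at least as well as the ab initio starting point while staying close to it in the directions the data do not constrain.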

  20. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    PubMed

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. In particular, when done as part of a multiple-timescale design, in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM. PMID:26541560

  1. Photonic simulation method applied to the study of structural color in Myxomycetes.

    PubMed

    Dolinko, Andrés; Skigin, Diana; Inchaussandague, Marina; Carmaran, Cecilia

    2012-07-01

    We present a novel simulation method to investigate the multicolored effect of Diachea leucopoda (Physarales order, Myxomycetes class), a microorganism that has a characteristic pointillistic iridescent appearance. It has been shown that this appearance is of structural origin and is produced within the peridium (the protective layer that encloses the mass of spores), which is basically a corrugated sheet of a transparent material. The main characteristics of the observed color were explained in terms of interference effects using a simple model of a homogeneous planar slab. In this paper we apply a novel simulation method to investigate the electromagnetic response of such a structure in more detail, i.e., taking into account the inhomogeneities of the biological material within the peridium and its curvature. We show that both features, which could not be considered within the simplified model, affect the observed color. The proposed method has great potential for the study of biological structures, which present a high degree of complexity in their geometrical shapes as well as in the materials involved. PMID:22772212
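    The simple planar-slab interference model mentioned above can be sketched with the standard normal-incidence Airy reflectance formula; the refractive index and thickness below are illustrative placeholders, not measured peridium values.

```python
import numpy as np

def slab_reflectance(wavelength_nm, n_film=1.45, d_nm=600.0):
    """Normal-incidence Airy reflectance of a homogeneous transparent slab
    in air; n_film and d_nm are illustrative placeholder values."""
    r12 = (1.0 - n_film) / (1.0 + n_film)   # air -> film Fresnel coefficient
    r23 = -r12                              # film -> air
    beta = 2.0 * np.pi * n_film * d_nm / wavelength_nm  # phase across the film
    phase = np.exp(2j * beta)
    r = (r12 + r23 * phase) / (1.0 + r12 * r23 * phase)
    return np.abs(r) ** 2

lam = np.linspace(400.0, 700.0, 301)   # visible range, nm
R = slab_reflectance(lam)
# Wavelengths where R peaks determine the perceived (structural) hue.
print(f"peak reflectance {R.max():.3f} at {lam[np.argmax(R)]:.0f} nm")
```

    The full simulation method goes beyond this by including material inhomogeneity and the curvature of the corrugated sheet, which is exactly where the slab model breaks down.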

  2. Impact of gene patents on diagnostic testing: a new patent landscaping method applied to spinocerebellar ataxia

    PubMed Central

    Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui

    2011-01-01

    Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences on legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called ‘gene patents' by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing. PMID:21811306

  3. IBA-Europhysics Prize in Applied Nuclear Science and Nuclear Methods in Medicine

    NASA Astrophysics Data System (ADS)

    MacGregor, I. J. Douglas

    2014-03-01

    The Nuclear Physics Board of the European Physical Society is pleased to announce that the 2013 IBA-Europhysics Prize in Applied Nuclear Science and Nuclear Methods in Medicine is awarded to Prof. Marco Durante, Director of the Biophysics Department at GSI Helmholtz Center (Darmstadt, Germany); Professor at the Technical University of Darmstadt (Germany) and Adjunct Professor at the Temple University, Philadelphia, USA. The prize was presented in the closing Session of the INPC 2013 conference by Mr. Thomas Servais, R&D Manager for Accelerator Development at the IBA group, who sponsor the IBA Europhysics Prize. The Prize Diploma was presented by Dr. I J Douglas MacGregor, Chair-elect of the EPS Nuclear Physics Division and Chair of the IBA Prize committee.

  4. Applying knowledge-anchored hypothesis discovery methods to advance clinical and translational research: the OAMiner project

    PubMed Central

    Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N

    2012-01-01

    The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. PMID:22647689

  5. Borehole-to-borehole geophysical methods applied to investigations of high level waste repository sites

    SciTech Connect

    Ramirez, A.L.

    1983-01-01

    This discussion focuses on the use of borehole to borehole geophysical measurements to detect geological discontinuities in High Level Waste (HLW) repository sites. The need for these techniques arises from: (a) the requirement that a HLW repository's characteristics and projected performance be known with a high degree of confidence; and (b) the inadequacy of other geophysical methods in mapping fractures. Probing configurations which can be used to characterize HLW sites are described. Results from experiments in which these techniques were applied to problems similar to those expected at repository sites are briefly discussed. The use of a procedure designed to reduce uncertainty associated with all geophysical exploration techniques is proposed; key components of the procedure are defined.

  6. A Technique of Two-Stage Clustering Applied to Environmental and Civil Engineering and Related Methods of Citation Analysis.

    ERIC Educational Resources Information Center

    Miyamoto, S.; Nakayama, K.

    1983-01-01

    A method of two-stage clustering of literature based on citation frequency is applied to 5,065 articles from 57 journals in environmental and civil engineering. Results of related methods of citation analysis (hierarchical graph, clustering of journals, multidimensional scaling) applied to same set of articles are compared. Ten references are…

  7. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. Several uncertainty analysis and/or prediction methods are available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals that occur in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance with regard to the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from the uniform interval and quantile regression methods. An important part of the basis on which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable methodological characteristic, i.e. applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
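    Two of the validation measures named above, PICP (coverage) and MPI (interval width), are straightforward to compute; below is a minimal sketch with synthetic 90% prediction intervals around simulated flows.

```python
import numpy as np

def picp(obs, lower, upper):
    """Prediction Interval Coverage Probability: fraction of observations
    inside the interval (should approach the nominal level, e.g. 0.90)."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    return np.mean((obs >= lower) & (obs <= upper))

def mpi(lower, upper):
    """Mean Prediction Interval width: narrower is better at equal coverage."""
    return np.mean(np.asarray(upper) - np.asarray(lower))

# Toy 90% intervals around simulated discharge (all numbers illustrative)
rng = np.random.default_rng(4)
sim = rng.gamma(2.0, 5.0, size=1000)           # simulated flows
obs = sim + rng.normal(0.0, 2.0, size=1000)    # "observed" flows
lo, hi = sim - 1.645 * 2.0, sim + 1.645 * 2.0  # Gaussian 90% band

print(f"PICP = {picp(obs, lo, hi):.2f}, MPI = {mpi(lo, hi):.2f}")
```

    In a method comparison, the interval with PICP closest to the nominal level and the smallest MPI is preferred; ARIL and the proposed WMPI refine this by relating interval width to the flow magnitude and model quality.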

  8. Pattern recognition method applied to the forecast of strong earthquakes in South American seismic prone areas

    SciTech Connect

    Benavidez, A.

    1986-01-01

    The pattern recognition method is applied to the Andean seismic region that extends from southern latitudes 2° to 27° in the South American continent, to set a criterion for predicting the potential sites of strong-earthquake epicenters in the zone. It is assumed that two hypotheses hold. First, strong-earthquake epicenters typically cluster around the intersections of morphostructural lineaments. Second, the rules of recognition obtained for neighboring zones which exhibit distinctive neotectonic evolution, state of stress, spatial earthquake distribution and geological development may be different, in spite of the fact that the morphostructural zoning does not reflect a separation between them. Hence, the region is divided into two broad-scale tectonic segments located above slabs of similar scale in the Nazca plate, in which subduction takes place almost subhorizontally (dipping at an angle of about 10°) between latitudes 2°S and 15°S, and at a steeper angle (of approximately 30°) within latitudes 15°S to 27°S. The morphostructural zoning is carried out for both zones, with the determination of the lineaments and the corresponding disjunctive knots, which are defined as the objects of recognition when applying the pattern recognition method. The Cora-3 algorithm is used as the computational procedure for the search for the rules of recognition of dangerous and non-dangerous sites for each zone. The criteria contain in each case several characteristic features that represent the topography, geology and tectonics of each region. Also, it is shown that they have a physical meaning that mostly reflects the style of tectonic deformation in the related regions.

  9. Tester periodically registers dc amplifier characteristics

    NASA Technical Reports Server (NTRS)

    Cree, D.; Wenzel, G. E.

    1966-01-01

    Motor-driven switcher-recorder periodically registers the zero drift and gain drift signals of a dc amplifier subjected to changes in environment. A time coding method is used since several measurements are shared on a single recorder trace.

  10. Analysis of a class of pulse modulated dc-to-dc power converters

    NASA Technical Reports Server (NTRS)

    Burger, P.

    1975-01-01

    The basic operational characteristics of dc-to-dc converters are analyzed. The basic physical characteristics of power converters are identified. A simple class of dc-to-dc power converters is chosen which could satisfy any set of operating requirements. Three different controlling methods in this class are described in detail. Necessary conditions for the stability of these converters are measured through analog computer simulation. These curves are related to other operational characteristics, such as ripple and regulation. Finally, further research is suggested for the solution of the physical design of absolutely stable, reliable, and efficient power converters of this class.

  11. Analysis of dolines using multiple methods applied to airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Bauer, Christian

    2015-12-01

    Delineating dolines is not a straightforward process, especially in densely vegetated areas. This paper deals quantitatively with the surface karst morphology of a Miocene limestone occurrence in the Styrian Basin, Austria. The study area is an isolated karst mountain with a smooth morphology (a former planation surface of Pliocene age), densely vegetated (mixed forest) and with a surface area of 1.3 km². It is located near the city of Wildon and is named "Wildoner Buchkogel". The aim of this study was to test three different approaches for automatically delineating dolines. The data basis was a high-resolution digital terrain model (DTM) derived from airborne laser scanning (ALS), with a raster resolution of 1 × 1 m. The three methods for doline boundary delineation are: (a) the "traditional" method based on the outermost closed contour line; (b) boundary extraction based on a drainage correction algorithm (filling up pits); and (c) boundary extraction based on hydrologic modelling (watersheds). Extracted features are integrated in a GIS environment and analysed statistically regarding spatial distribution, shape geometry, elongation direction and volume. The three methods lead to different doline boundaries, and the investigated parameters therefore show significant variations. The applied methods have been compared with respect to their application purpose. Depending on the delineation process, between 118 and 189 dolines could be defined. The high density of surface karst features demonstrates that solutional processes are major factors in the landscape development of the Wildoner Buchkogel. Furthermore, the relation to the landscape evolution of the Grazer Bergland is discussed.
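    Method (b), pit filling, can be sketched with a standard iterative depression-filling algorithm on a toy DEM; dolines then appear where the filled surface exceeds the terrain. This is a generic illustration, not the specific drainage-correction algorithm used in the study.

```python
import numpy as np

def fill_pits(dem):
    """Fill closed depressions in a DEM by iteratively lowering a water
    surface from the grid boundary (Planchon-Darboux-style reconstruction).
    Doline extents appear where (filled - dem) > 0."""
    filled = np.full_like(dem, np.inf, dtype=float)
    # Water can always drain off the grid edge, so edges stay at terrain height.
    filled[0, :] = dem[0, :]; filled[-1, :] = dem[-1, :]
    filled[:, 0] = dem[:, 0]; filled[:, -1] = dem[:, -1]
    changed = True
    while changed:
        prev = filled.copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = np.roll(filled, shift, axis=(0, 1))   # 4-connected neighbours
            cand = np.maximum(dem, nb)   # water cannot drop below the terrain
            filled = np.minimum(filled, cand)
        changed = not np.array_equal(prev, filled)
    return filled

# Tiny synthetic terrain with one closed depression (a "doline") at the centre
dem = np.full((7, 7), 10.0)
dem[2:5, 2:5] = 8.0
dem[3, 3] = 6.0
depth = fill_pits(dem) - dem
print(depth.max())   # depression depth after filling
```

    On a real 1 × 1 m ALS-derived DTM the same idea is applied with an efficient implementation, and the cells with positive fill depth are vectorized into doline polygons for the GIS analysis.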

  12. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    SciTech Connect

    Nikhil V. Bhagwat

    2005-12-17

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases), and its execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets the algorithm took significantly longer to execute. One reason could be the way in which the random samples are generated: in the present study, a random sample is a complete solution in itself, which requires assignment of tasks to stations, and the time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, as the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the randomly generated solutions (samples). The total idle time of the samples could be used as another performance index. The ULINO method is known to use a combination of bounds to arrive at good solutions; this approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be significant for industries wherein the cost associated with creating a new station is not uniform; for such industries, the results obtained with the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of these can be effectively used as alternate objective functions.
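The sampling step discussed above, where each random sample is a complete task-to-station assignment, can be sketched as follows. The task times, the cycle time of 8, the seed, and the absence of precedence constraints are all illustrative assumptions, not data from the study.

```python
import random

def random_assignment(times, cycle, seed=1):
    """One random 'sample': assign every task to a station in a random
    order, opening a new station whenever the cycle time would be exceeded."""
    rng = random.Random(seed)
    order = list(range(len(times)))
    rng.shuffle(order)                       # no precedence constraints here
    stations, load = [[]], 0.0
    for task in order:
        if load + times[task] > cycle:       # current station is full
            stations.append([])
            load = 0.0
        stations[-1].append(task)
        load += times[task]
    return stations

stations = random_assignment([4, 3, 5, 2, 6], cycle=8)
# len(stations) is the performance index (number of stations);
# every station's total task time is <= 8
```

The linear dependence of this loop on the number of tasks is exactly why sample generation dominates the run time for larger problem instances.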

  13. Applying Seismic Methods to National Security Problems: Matched Field Processing With Geological Heterogeneity

    SciTech Connect

    Myers, S; Larsen, S; Wagoner, J; Henderer, B; McCallen, D; Trebes, J; Harben, P; Harris, D

    2003-10-29

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization of full three-dimensional (3D) finite difference modeling, as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project, in support of LLNL's national-security mission, benefits the U.S. military and intelligence community. Fiscal year (FY) 2003 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A three-seismic-array vehicle tracking testbed was installed on site.

  14. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
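The smoothing and peak-scoring steps described above can be sketched as follows. The moving-average kernel, its width, and the exact distance weighting are illustrative assumptions, not the authors' parameters; the synthetic trace stands in for a recorded CDP segment.

```python
import numpy as np

def cdp_features(signal, kernel_width=5):
    kernel = np.ones(kernel_width) / kernel_width        # moving-average denoising
    smooth = np.convolve(signal, kernel, mode="same")
    # interior local maxima of the smoothed trace
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    main = max(peaks, key=lambda i: smooth[i])           # most important maximum
    # one coefficient per peak: relative amplitude, down-weighted by the
    # temporal distance to the main peak (illustrative weighting)
    return [(smooth[i] / smooth[main]) / (1 + abs(i - main)) for i in peaks]

# Synthetic trace with a dominant and a secondary deflection.
x = np.arange(100)
sig = np.exp(-(x - 30) ** 2 / 20) + 0.5 * np.exp(-(x - 70) ** 2 / 20)
feats = cdp_features(sig)
# the dominant peak always scores exactly 1.0; secondary peaks score lower
```

The resulting coefficient vector is what would be fed to the gradient boosting classifier in the final step.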

  15. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  16. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    SciTech Connect

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentration was <1%.
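The calibration/prediction structure described above can be sketched with a plain (unweighted) classical least-squares model with a nonzero intercept. This is a simplified sketch of the general approach, not the authors' weighted formulation; the two-component spectra and concentrations are synthetic.

```python
import numpy as np

def calibrate(C, A):
    """Estimate pure-component spectra K and a baseline b from standard
    mixtures: A ≈ C @ K + b.  C: (n, k) concentrations, A: (n, w) spectra."""
    Caug = np.hstack([C, np.ones((C.shape[0], 1))])   # append intercept column
    coef, *_ = np.linalg.lstsq(Caug, A, rcond=None)
    return coef[:-1], coef[-1]                        # K: (k, w), baseline: (w,)

def predict(K, b, a):
    """Estimate component concentrations of an unknown spectrum a."""
    c, *_ = np.linalg.lstsq(K.T, a - b, rcond=None)
    return c

# Synthetic two-component check: recover known concentrations exactly.
rng = np.random.default_rng(0)
K_true = rng.random((2, 40))                          # two pure spectra, 40 points
C_cal = rng.random((6, 2))                            # six calibration mixtures
A_cal = C_cal @ K_true + 0.1                          # constant nonzero baseline
K, b = calibrate(C_cal, A_cal)
c_hat = predict(K, b, 0.3 * K_true[0] + 0.7 * K_true[1] + 0.1)
# c_hat ≈ [0.3, 0.7]
```

The estimated rows of K play the role of the "calculated spectra" mentioned in the abstract, and can be compared against measured pure-component spectra.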

  17. A new feature extraction method for signal classification applied to cord dorsum potential detection

    NASA Astrophysics Data System (ADS)

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  18. Independent Identification Method applied to EDMOND and SonotaCo databases

    NASA Astrophysics Data System (ADS)

    Rudawska, R.; Matlovic, P.; Toth, J.; Kornos, L.; Hajdukova, M.

    2015-10-01

    In recent years, networks of low-light-level video cameras have contributed many new meteoroid orbits. As a result of cooperation and data sharing among national networks and the International Meteor Organization Video Meteor Database (IMO VMDB), the European Video Meteor Network Database (EDMOND; [2, 3]) has been created. Its current version contains 145 830 orbits collected from 2001 to 2014. Another productive camera network has been that of the Japanese SonotaCo consortium [5], which has at present made available 168 030 meteoroid orbits collected from 2007 to 2013. In our survey we used the EDMOND and SonotaCo databases together in order to identify the meteor showers present in both (Figures 1 and 2). For this purpose we applied the recently introduced independent identification method [4]. In the first step of the survey we used a criterion based on the orbital parameters (e, q, i, ω, and Ω) to find groups around each meteor within a similarity threshold. Mean parameters of the groups were calculated using the Welch method [6] and compared using a new function based on geocentric parameters (the radiant coordinates, solar longitude, and Vg). Similar groups were merged into final clusters (representing meteor showers) and compared with the IAU Meteor Data Center list of meteor showers [1]. This poster presents the results obtained by the proposed methodology.

  19. A Hamiltonian replica exchange method for building protein-protein interfaces applied to a leucine zipper

    NASA Astrophysics Data System (ADS)

    Cukier, Robert I.

    2011-01-01

    Leucine zippers consist of alpha helical monomers dimerized (or oligomerized) into alpha superhelical structures known as coiled coils. Forming the correct interface of a dimer from its monomers requires an exploration of configuration space focused on the side chains of one monomer that must interdigitate with sites on the other monomer. The aim of this work is to generate good interfaces in short simulations starting from separated monomers. Methods are developed to accomplish this goal based on an extension of the previously introduced Hamiltonian temperature replica exchange method (HTREM) [Su and Cukier, J. Phys. Chem. B 113, 9595 (2009)], which scales the Hamiltonian in both potential and kinetic energies and was used for the simulation of dimer melting curves. The new method, HTREM_MS (MS designates mean square), focused on interface formation, adds restraints to the Hamiltonians for all but the physical system, which is characterized by the normal molecular dynamics force field at the desired temperature. The restraints in the nonphysical systems serve to prevent the monomers from separating too far, and have the dual aims of enhancing the sampling of close-in configurations and breaking unwanted correlations in the restrained systems. The method is applied to a 31-residue truncation of the 33-residue leucine zipper (GCN4-p1) of the yeast transcriptional activator GCN4. The monomers are initially separated by a distance that is beyond their capture length. HTREM simulations show that the monomers oscillate between dimerlike and monomerlike configurations, but do not form a stable interface. HTREM_MS simulations result in the dimer interface being faithfully reconstructed on a 2 ns time scale. A small number of systems (one physical and two restrained with modified potentials and higher effective temperatures) are sufficient. An in silico mutant that should not dimerize because it lacks charged residues that provide electrostatic stabilization of the dimer
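The swap step underlying any Hamiltonian replica exchange scheme can be sketched in its generic Metropolis form. This is not the paper's HTREM_MS scaling; the acceptance rule below is the textbook criterion, and H_phys/H_restr are hypothetical one-dimensional toy potentials.

```python
import math
import random

def swap_accepted(H_i, H_j, x_i, x_j, beta, rng=random.random):
    """Metropolis test for exchanging configurations x_i, x_j between
    replicas that carry different Hamiltonians H_i, H_j."""
    delta = (H_i(x_j) + H_j(x_i)) - (H_i(x_i) + H_j(x_j))
    return rng() < min(1.0, math.exp(-beta * delta))

# Toy 1-D check: a harmonic "physical" potential and a restrained copy.
H_phys = lambda x: 0.5 * x * x
H_restr = lambda x: 0.5 * x * x + 2.0 * (x - 1.0) ** 2   # hypothetical restraint

# A swap that lowers the combined energy (delta < 0) is always accepted:
always = swap_accepted(H_phys, H_restr, 1.0, 0.0, beta=1.0, rng=lambda: 0.999)
# always == True
```

In HTREM_MS only the non-physical replicas carry restraint terms like the one sketched here, so accepted swaps feed compact configurations into the unbiased physical system.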

  20. Revising the spectral method as applied to the mantle dynamics modeling.

    NASA Astrophysics Data System (ADS)

    Petrunin, A. G.; Kaban, M. K.; Rogozhina, I.; Trubytsyn, V. P.

    2012-04-01

    The spectral method is widely used for modeling instantaneous flow and stress field distribution in a spherical shell. This method provides a high-accuracy semi-analytical solution of the Navier-Stokes and Poisson equations when the viscosity is only depth- (radial-) dependent. However, the distribution of viscosity in the real Earth is essentially three-dimensional. In this case, non-linear coupling of different spherical harmonic modes does not allow a straightforward semi-analytical solution. In this study, we present a numerical approach built on a substantially revised method originally proposed by Zhang and Christensen (1993) for solving the Navier-Stokes equation in the spectral domain when lateral variations of viscosity (LVV) are present. We demonstrate a number of numerical algorithms that efficiently calculate instantaneous Stokes flow in a sphere, taking into account the effects of LVV, self-gravitation and compressibility. In particular, the Newton-Raphson procedure applied to the shooting method shows the ability to solve the boundary value problem necessary for cross-linking solutions on spheres. In contrast to the traditionally used propagator method, our approach allows continuous integration over depth without introducing internal interfaces. The Clenshaw-based recursion algorithms for computing associated Legendre functions and Horner's scheme for computing partial sums allow avoiding the problems near the poles typical for spherical harmonic methods and obtaining a fast and robust solution on a sphere for high degree and order. Since benchmarking techniques for 3-D spherical codes are not substantially developed, we employ different approaches to test the proposed numerical algorithm. First, we show that the algorithm produces correct results for a radially symmetric viscosity distribution. Second, an iterative scheme for the LVV case is validated by comparing the solution for the tetrahedral symmetric (l=3,m
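The Horner partial-sum evaluation mentioned above can be illustrated on a plain truncated series. This is a generic sketch of the scheme, not the authors' spherical-harmonic implementation.

```python
def horner(coeffs, x):
    """Evaluate sum_l coeffs[l] * x**l using Horner's scheme:
    one multiply and one add per term, no explicit powers."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# horner([1, 2, 3], 0.5) == 1 + 2*0.5 + 3*0.25 == 2.75
```

Avoiding explicit powers both speeds up the summation and improves its numerical robustness, which matters at high degree and order.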

  1. Forback DC-to-DC converter

    NASA Technical Reports Server (NTRS)

    Lukemire, Alan T. (Inventor)

    1995-01-01

    A pulse-width modulated DC-to-DC power converter including a first inductor, i.e. a transformer or an equivalent fixed inductor equal to the inductance of the secondary winding of the transformer, coupled across a source of DC input voltage via a transistor switch which is rendered alternately conductive (ON) and nonconductive (OFF) in accordance with a signal from a feedback control circuit is described. A first capacitor capacitively couples one side of the first inductor to a second inductor which is connected to a second capacitor which is coupled to the other side of the first inductor. A circuit load shunts the second capacitor. A semiconductor diode is additionally coupled from a common circuit connection between the first capacitor and the second inductor to the other side of the first inductor. A current sense transformer generating a current feedback signal for the switch control circuit is directly coupled in series with the other side of the first inductor so that the first capacitor, the second inductor and the current sense transformer are connected in series through the first inductor. The inductance values of the first and second inductors, moreover, are made identical. Such a converter topology results in a simultaneous volt-second balance in the first inductance and ampere-second balance in the current sense transformer.
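The volt-second balance invoked above is a general steady-state property: the average voltage across an inductor over one switching period is zero. A minimal numeric check for a plain buck stage (illustration only, not the patented Forback topology; the 28 V input and 0.4 duty cycle are made-up values):

```python
# For a buck converter in steady state, volt-second balance gives
#   (Vin - Vout) * D * T = Vout * (1 - D) * T   =>   Vout = D * Vin

def buck_vout(vin, duty):
    return duty * vin

vin, duty = 28.0, 0.4
vout = buck_vout(vin, duty)              # 11.2 V

# The two volt-second products match (the period T cancels):
on_vs = (vin - vout) * duty              # switch ON interval
off_vs = vout * (1 - duty)               # switch OFF interval
# on_vs ≈ off_vs ≈ 6.72
```

The patented topology claims this balance simultaneously in the first inductance and, in ampere-second form, in the current sense transformer.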

  2. Forback DC-to-DC converter

    NASA Technical Reports Server (NTRS)

    Lukemire, Alan T. (Inventor)

    1993-01-01

    A pulse-width modulated DC-to-DC power converter including a first inductor, i.e. a transformer or an equivalent fixed inductor equal to the inductance of the secondary winding of the transformer, coupled across a source of DC input voltage via a transistor switch which is rendered alternately conductive (ON) and nonconductive (OFF) in accordance with a signal from a feedback control circuit is described. A first capacitor capacitively couples one side of the first inductor to a second inductor which is connected to a second capacitor which is coupled to the other side of the first inductor. A circuit load shunts the second capacitor. A semiconductor diode is additionally coupled from a common circuit connection between the first capacitor and the second inductor to the other side of the first inductor. A current sense transformer generating a current feedback signal for the switch control circuit is directly coupled in series with the other side of the first inductor so that the first capacitor, the second inductor and the current sense transformer are connected in series through the first inductor. The inductance values of the first and second inductors, moreover, are made identical. Such a converter topology results in a simultaneous volt-second balance in the first inductance and ampere-second balance in the current sense transformer.

  3. Applying Sequential Analytic Methods to Self-Reported Information to Anticipate Care Needs

    PubMed Central

    Bayliss, Elizabeth A.; Powers, J. David; Ellis, Jennifer L.; Barrow, Jennifer C.; Strobel, MaryJo; Beck, Arne

    2016-01-01

    Purpose: Identifying care needs for newly enrolled or newly insured individuals is important under the Affordable Care Act. Systematically collected patient-reported information can potentially identify subgroups with specific care needs prior to service use. Methods: We conducted a retrospective cohort investigation of 6,047 individuals who completed a 10-question needs assessment upon initial enrollment in Kaiser Permanente Colorado (KPCO), a not-for-profit integrated delivery system, through the Colorado State Individual Exchange. We used responses from the Brief Health Questionnaire (BHQ) to develop a predictive model for receiving care in the top 25 percent of cost, then applied cluster analytic techniques to identify different high-cost subpopulations. Per-member, per-month cost was measured from 6 to 12 months following BHQ response. Results: BHQ responses significantly predictive of high-cost care included self-reported health status, functional limitations, medication use, presence of 0–4 chronic conditions, self-reported emergency department (ED) use during the prior year, and lack of prior insurance. Age, gender, and deductible-based insurance product were also predictive. The largest possible range of predicted probabilities of being in the top 25 percent of cost was 3.5 percent to 96.4 percent. Within the top cost quartile, examples of potentially actionable clusters of patients included those with high morbidity, prior utilization, depression risk and financial constraints; previously uninsured individuals with high morbidity and few financial constraints; and relatively healthy, previously insured individuals with medication needs. Conclusions: Applying sequential predictive modeling and cluster analytic techniques to patient-reported information can identify subgroups of individuals within heterogeneous populations who may benefit from specific interventions to optimize initial care delivery. PMID:27563684
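The second stage of the sequence described above, clustering the high-cost stratum into subgroups, can be sketched with a tiny k-means on made-up BHQ-style features (rows are members, columns could be, say, a morbidity score and prior ED use). This is a generic illustration, not the study's clustering technique, and the toy version does no empty-cluster handling.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: assign to nearest centre, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) for c in range(k)])
    return labels, centers

# Four hypothetical high-cost members: two low-score, two high-score.
X = np.array([[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]])
labels, centers = kmeans(X, 2)
# the two low-score members share one cluster, the two high-score ones the other
```

In the study's setting, each resulting cluster would then be inspected for an actionable intervention profile.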

  4. The expanding photosphere method applied to SN 1992am at cz = 14 600 km/s

    NASA Technical Reports Server (NTRS)

    Schmidt, Brian P.; Kirshner, Robert P.; Eastman, Ronald G.; Hamuy, Mario; Phillips, Mark M.; Suntzeff, Nicholas B.; Maza, Jose; Filippenko, Alexei V.; Ho, Luis C.; Matheson, Thomas

    1994-01-01

    We present photometry and spectroscopy of Supernova (SN) 1992am for five months following its discovery by the Calan Cerro-Tololo Inter-American Observatory (CTIO) SN search. These data show SN 1992am to be a type II-P supernova, displaying hydrogen in its spectrum and the typical shoulder in its light curve. The photometric data and the distance from our own analysis are used to construct the supernova's bolometric light curve. Using the bolometric light curve, we estimate SN 1992am ejected approximately 0.30 solar mass of Ni-56, an amount four times larger than that of other well studied SNe II. SN 1992am's host galaxy lies at a redshift of cz = 14 600 km/s, making it one of the most distant SNe II discovered, and an important application of the Expanding Photosphere Method. Since z = 0.05 is large enough for redshift-dependent effects to matter, we develop the technique to derive luminosity distances with the Expanding Photosphere Method at any redshift, and apply this method to SN 1992am. The derived distance, D = 180 (+30, -25) Mpc, is independent of all other rungs in the extragalactic distance ladder. The redshift of SN 1992am's host galaxy is sufficiently large that uncertainties due to perturbations in the smooth Hubble flow should be smaller than 10%. The Hubble ratio derived from the distance and redshift of this single object is H_0 = 81 (+17, -15) km/s/Mpc. In the future, with more of these distant objects, we hope to establish an independent and statistically robust estimate of H_0 based solely on type II supernovae.
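The core of the Expanding Photosphere Method can be sketched in its low-redshift form, ignoring the redshift corrections the paper develops: the angular radius grows as theta = v (t − t0) / D, so each epoch's theta/v is linear in t with slope 1/D. The epochs, velocities, and the 180 Mpc placement below are synthetic numbers chosen for illustration.

```python
import numpy as np

MPC_KM = 3.0857e19                       # kilometres per megaparsec

# Synthetic observations of a supernova placed at exactly 180 Mpc
# that exploded at t0 = 0 (hypothetical values, illustration only).
D_true = 180.0 * MPC_KM                  # km
t = np.array([10.0, 30.0]) * 86400.0     # epochs, seconds since explosion
v = np.array([9000.0, 7000.0])           # photospheric velocities, km/s
theta = v * t / D_true                   # angular radii, radians

# Each epoch gives theta/v = (t - t0)/D; two epochs solve for D and t0.
x = theta / v
D = (t[1] - t[0]) / (x[1] - x[0])        # km
t0 = t[0] - D * x[0]                     # seconds
# D / MPC_KM ≈ 180, t0 ≈ 0
```

With more than two epochs the same relation is fit by least squares, and at z ≈ 0.05 the redshift-dependent corrections discussed in the abstract must be applied on top.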

  5. A new method to identify earthquake swarms applied to seismicity near the San Jacinto Fault, California

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Shearer, Peter M.

    2016-02-01

    Understanding earthquake clustering in space and time is important but also challenging because of complexities in earthquake patterns and the large and diverse nature of earthquake catalogs. Swarms are of particular interest because they likely result from physical changes in the crust, such as slow slip or fluid flow. Both swarms and clusters resulting from aftershock sequences can span a wide range of spatial and temporal scales. Here we test and implement a new method to identify seismicity clusters of varying sizes and discriminate them from randomly occurring background seismicity. Our method searches for the closest neighboring earthquakes in space and time and compares the number of neighbors to the background events in larger space/time windows. Applying our method to California's San Jacinto Fault Zone (SJFZ), we find a total of 89 swarm-like groups. These groups range in size from 0.14 to 7.23 km and last from 15 minutes to 22 days. The most striking spatial pattern is the larger fraction of swarms at the northern and southern ends of the SJFZ than its central segment, which may be related to more normal-faulting events at the two ends. In order to explore possible driving mechanisms, we study the spatial migration of events in swarms containing at least 20 events by fitting with both linear and diffusion migration models. Our results suggest that SJFZ swarms are better explained by fluid flow because their estimated linear migration velocities are far smaller than those of typical creep events while large values of best-fitting hydraulic diffusivity are found.
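The neighbour-counting idea described above can be sketched on toy data: an event is flagged as swarm-like when its close space-time neighbourhood is denser than the surrounding background window predicts. The window sizes, the density factor, and the synthetic catalogue are all illustrative assumptions, not the authors' calibrated statistics.

```python
import numpy as np

def swarm_like(xy, t, r_near=1.0, t_near=1.0, r_far=10.0, t_far=10.0, factor=5.0):
    xy, t = np.asarray(xy, float), np.asarray(t, float)
    d = np.hypot(xy[:, None, 0] - xy[None, :, 0],
                 xy[:, None, 1] - xy[None, :, 1])      # pairwise distances (km)
    dt = np.abs(t[:, None] - t[None, :])               # pairwise time gaps (days)
    near = ((d < r_near) & (dt < t_near)).sum(1) - 1   # exclude self
    far = ((d < r_far) & (dt < t_far)).sum(1) - 1
    # expected near-window count if events were spread like the background
    expected = far * (r_near / r_far) ** 2 * (t_near / t_far)
    return near > factor * np.maximum(expected, 1e-9)

# 8 tightly clustered events plus 8 scattered background events.
cluster = [(0.1 * k, 0.0) for k in range(8)]
bg = [(20.0 + 3 * k, 5.0 * k) for k in range(8)]
flags = swarm_like(cluster + bg, list(np.linspace(0, 0.5, 8)) + list(range(8)))
# the first 8 flags are True, the last 8 False
```

Flagged events would then be linked into groups and examined for migration patterns, as in the linear and diffusion fits described above.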

  6. Development of Crop Yield Estimation Method by Applying Seasonal Climate Prediction in Asia-Pacific Region

    NASA Astrophysics Data System (ADS)

    Shin, Y.; Lee, E.

    2015-12-01

    Under the influence of recent climate change, abnormal weather conditions such as floods and droughts have occurred frequently all over the world. The occurrence of abnormal weather in major crop production areas leads to soaring world grain prices because it reduces crop yields. Developing a crop yield estimation method is therefore an important means of responding to the global food crises caused by abnormal weather. However, due to problems with the reliability of seasonal climate prediction, research on its application to agricultural productivity has not made much progress yet. The objective of this study is to develop a long-term crop yield estimation method for major crop production countries worldwide using the multi-model seasonal climate prediction data collected by the APEC Climate Center. These are 6-month-lead seasonal predictions produced by six state-of-the-art global coupled ocean-atmosphere models (MSC_CANCM3, MSC_CANCM4, NASA, NCEP, PNU, POAMA). First, we produce customized climate data through temporal and spatial downscaling methods for use as climatic input to the global-scale crop model. Next, we evaluate the uncertainty of the climate predictions by applying the multi-model seasonal predictions in the crop model. Because rice is the most important staple food crop in the Asia-Pacific region, we assess the reliability of the rice yields estimated using seasonal climate prediction for the main rice production countries. RMSE (Root Mean Square Error) and TCC (Temporal Correlation Coefficient) analyses are performed for the 14 major rice production countries in the Asia-Pacific region to evaluate the reliability of the rice yields according to the climate prediction models. We compare the rice yield data obtained from FAOSTAT with those estimated using the seasonal climate prediction data in Asia-Pacific countries. In addition, we show how the reliability of the seasonal climate predictions varies among the climate models in the Asia-Pacific countries where rice cultivation is carried out.
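The two skill scores named above are standard and easy to state precisely; the yield numbers below are made up for illustration, not FAOSTAT data.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed series."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def tcc(pred, obs):
    """Temporal correlation coefficient: Pearson correlation between the
    predicted and observed time series."""
    return float(np.corrcoef(pred, obs)[0, 1])

obs = [4.1, 4.3, 3.8, 4.6, 4.0]      # hypothetical rice yields, t/ha
pred = [4.0, 4.4, 3.9, 4.5, 4.2]     # hypothetical model estimates
# rmse(pred, obs) ≈ 0.13, tcc(pred, obs) ≈ 0.90
```

A low RMSE with a high TCC over the hindcast years is what would mark a climate model as reliable for a given country.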

  7. A new method to identify earthquake swarms applied to seismicity near the San Jacinto Fault, California

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Shearer, Peter M.

    2016-05-01

    Understanding earthquake clustering in space and time is important but also challenging because of complexities in earthquake patterns and the large and diverse nature of earthquake catalogues. Swarms are of particular interest because they likely result from physical changes in the crust, such as slow slip or fluid flow. Both swarms and clusters resulting from aftershock sequences can span a wide range of spatial and temporal scales. Here we test and implement a new method to identify seismicity clusters of varying sizes and discriminate them from randomly occurring background seismicity. Our method searches for the closest neighbouring earthquakes in space and time and compares the number of neighbours to the background events in larger space/time windows. Applying our method to California's San Jacinto Fault Zone (SJFZ), we find a total of 89 swarm-like groups. These groups range in size from 0.14 to 7.23 km and last from 15 min to 22 d. The most striking spatial pattern is the larger fraction of swarms at the northern and southern ends of the SJFZ than its central segment, which may be related to more normal-faulting events at the two ends. In order to explore possible driving mechanisms, we study the spatial migration of events in swarms containing at least 20 events by fitting with both linear and diffusion migration models. Our results suggest that SJFZ swarms are better explained by fluid flow because their estimated linear migration velocities are far smaller than those of typical creep events while large values of best-fitting hydraulic diffusivity are found.

  8. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
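The class-weighting idea behind a weighted random forest can be sketched with the common "balanced" heuristic: weight each class inversely to its frequency so the trees do not simply vote "non-sinkhole" everywhere. The 90/10 split below is invented, and the study's exact weighting scheme may differ; this formula matches scikit-learn's class_weight="balanced" convention.

```python
import numpy as np

def balanced_weights(labels):
    """Per-class weights: n_samples / (n_classes * class_count)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    w = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), w.tolist()))

# 90 non-sinkhole depressions (0) versus 10 sinkholes (1):
weights = balanced_weights([0] * 90 + [1] * 10)
# weights == {0: 100/180 ≈ 0.556, 1: 100/20 == 5.0}
```

With these weights, each misclassified sinkhole costs nine times as much as a misclassified non-sinkhole during tree growing.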

  9. Dissecting trait heterogeneity: a comparison of three clustering methods applied to genotypic data

    PubMed Central

    Thornton-Wells, Tricia A; Moore, Jason H; Haines, Jonathan L

    2006-01-01

    Background Trait heterogeneity, which exists when a trait has been defined with insufficient specificity such that it is actually two or more distinct traits, has been implicated as a confounding factor in traditional statistical genetics of complex human disease. In the absence of detailed phenotypic data collected consistently in combination with genetic data, unsupervised computational methodologies offer the potential for discovering underlying trait heterogeneity. The performance of three such methods – Bayesian Classification, Hypergraph-Based Clustering, and Fuzzy k-Modes Clustering – appropriate for categorical data was compared. Also tested was the ability of these methods to detect trait heterogeneity in the presence of locus heterogeneity and/or gene-gene interaction, which are two other complicating factors in discovering genetic models of complex human disease. To determine the efficacy of applying the Bayesian Classification method to real data, the reliability of its internal clustering metrics at finding good clusterings was evaluated using permutation testing. Results Bayesian Classification outperformed the other two methods, with the exception that the Fuzzy k-Modes Clustering performed best on the most complex genetic model. Bayesian Classification achieved excellent recovery for 75% of the datasets simulated under the simplest genetic model, while it achieved moderate recovery for 56% of datasets with a sample size of 500 or more (across all simulated models) and for 86% of datasets with 10 or fewer nonfunctional loci (across all simulated models). Neither Hypergraph Clustering nor Fuzzy k-Modes Clustering achieved good or excellent cluster recovery for a majority of datasets even under a restricted set of conditions. When using the average log of class strength as the internal clustering metric, the false positive rate was controlled very well, at three percent or less for all three significance levels (0.01, 0.05, 0.10), and the false

  10. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    PubMed

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-01

    necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess the method specificity, with regards to matrix interferences, and calibration curves were plotted to evaluate quantification. Besides, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. PMID:26553956

  11. Gliding Box method applied to trace element distribution of a geochemical data set

    NASA Astrophysics Data System (ADS)

    Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana

    2010-05-01

    The application of fractal theory to process geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the west area of Coruña province (NW Spain). Major elements and trace elements were determined by standard analytical techniques. It is well known that there are specific elements or arrays of elements, which are associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone. Moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calc-alkaline granites, gneiss and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band, with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially in the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken in a rectilinear grid. The sampling lines were perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normal. Coefficients of variation ranked as follows: Sb < As < Au. Significant correlation coefficients between Au, Sb, and As were found, even if these were low. The so-called 'gliding box' algorithm (GB), proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the 'box-counting' method for implementing multifractal analysis. The partitioning method applied in the GB algorithm constructs samples by gliding a box of certain size (a) over the grid map in all
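    The gliding-box partitioning step can be sketched as follows. This is a minimal illustration of the algorithm in its original lacunarity role (a synthetic binary map stands in for the geochemical grid; the multifractal extension replaces the second-moment statistic with moments of arbitrary order q):

```python
import numpy as np

def gliding_box_lacunarity(grid, box_size):
    """Gliding-box algorithm: slide a box of side `box_size` over every
    position of a 2D map and collect the box mass (sum of values inside),
    then form the lacunarity from the first two moments of the masses."""
    rows, cols = grid.shape
    masses = []
    for i in range(rows - box_size + 1):
        for j in range(cols - box_size + 1):
            masses.append(grid[i:i + box_size, j:j + box_size].sum())
    masses = np.array(masses, dtype=float)
    m1 = masses.mean()          # first moment of the box-mass distribution
    m2 = (masses ** 2).mean()   # second moment
    return m2 / m1 ** 2         # lacunarity Lambda(a) = M2 / M1^2

# Example: a sparse synthetic "anomaly" map
rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) > 0.8).astype(float)
lac = gliding_box_lacunarity(grid, box_size=4)
```

Because boxes overlap, GB produces many more samples per box size than box counting, which is the practical appeal mentioned above.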

  12. Statistical Track-Before-Detect Methods Applied to Faint Optical Observations of Resident Space Objects

    NASA Astrophysics Data System (ADS)

    Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.

    Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust methods in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. Not only is all of the available information from the image used, but TBD also avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo et al. Compared to other TBD methods applied so far to SSA, such as the stacking method or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to apply a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4. Although the advent of fast-cadence scientific CMOS sensors has made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurements

  13. Review of methods used by chiropractors to determine the site for applying manipulation

    PubMed Central

    2013-01-01

    Background With the development of increasing evidence for the use of manipulation in the management of musculoskeletal conditions, there is growing interest in identifying the appropriate indications for care. Recently, attempts have been made to develop clinical prediction rules, however the validity of these clinical prediction rules remains unclear and their impact on care delivery has yet to be established. The current study was designed to evaluate the literature on the validity and reliability of the more common methods used by doctors of chiropractic to inform the choice of the site at which to apply spinal manipulation. Methods Structured searches were conducted in Medline, PubMed, CINAHL and ICL, supported by hand searches of archives, to identify studies of the diagnostic reliability and validity of common methods used to identify the site of treatment application. To be included, studies were to present original data from studies of human subjects and be designed to address the region or location of care delivery. Only English language manuscripts from peer-reviewed journals were included. The quality of evidence was ranked using QUADAS for validity and QAREL for reliability, as appropriate. Data were extracted and synthesized, and were evaluated in terms of strength of evidence and the degree to which the evidence was favourable for clinical use of the method under investigation. Results A total of 2594 titles were screened from which 201 articles met all inclusion criteria. The spectrum of manuscript quality was quite broad, as was the degree to which the evidence favoured clinical application of the diagnostic methods reviewed. The most convincing favourable evidence was for methods which confirmed or provoked pain at a specific spinal segmental level or region. There was also high quality evidence supporting the use, with limitations, of static and motion palpation, and measures of leg length inequality. Evidence of mixed quality supported the use

  14. DC-Compensated Current Transformer †

    PubMed Central

    Ripka, Pavel; Draxler, Karel; Styblíková, Renata

    2016-01-01

    Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830

  15. Applying Automated MR-Based Diagnostic Methods to the Memory Clinic: A Prospective Study

    PubMed Central

    Klöppel, Stefan; Peter, Jessica; Ludl, Anna; Pilatus, Anne; Maier, Sabrina; Mader, Irina; Heimbach, Bernhard; Frings, Lars; Egger, Karl; Dukart, Juergen; Schroeter, Matthias L.; Perneczky, Robert; Häussermann, Peter; Vach, Werner; Urbach, Horst; Teipel, Stefan; Hüll, Michael; Abdulkadir, Ahmed

    2015-01-01

    Several studies have demonstrated that fully automated pattern recognition methods applied to structural magnetic resonance imaging (MRI) aid in the diagnosis of dementia, but these conclusions are based on highly preselected samples that significantly differ from that seen in a dementia clinic. At a single dementia clinic, we evaluated the ability of a linear support vector machine trained with completely unrelated data to differentiate between Alzheimer’s disease (AD), frontotemporal dementia (FTD), Lewy body dementia, and healthy aging based on 3D-T1 weighted MRI data sets. Furthermore, we predicted progression to AD in subjects with mild cognitive impairment (MCI) at baseline and automatically quantified white matter hyperintensities from FLAIR-images. Separating additionally recruited healthy elderly from those with dementia was accurate with an area under the curve (AUC) of 0.97 (according to Fig. 4). Multi-class separation of patients with either AD or FTD from other included groups was good on the training set (AUC > 0.9) but substantially less accurate (AUC = 0.76 for AD, AUC = 0.78 for FTD) on 134 cases from the local clinic. Longitudinal data from 28 cases with MCI at baseline and appropriate follow-up data were available. The computer tool discriminated progressive from stable MCI with AUC = 0.73, compared to AUC = 0.80 for the training set. A relatively low accuracy by clinicians (AUC = 0.81) illustrates the difficulties of predicting conversion in this heterogeneous cohort. This first application of a MRI-based pattern recognition method to a routine sample demonstrates feasibility, but also illustrates that automated multi-class differential diagnoses have to be the focus of future methodological developments and application studies. PMID:26401773

  16. Applying new data-entropy and data-scatter methods for optical digital signal analysis

    NASA Astrophysics Data System (ADS)

    McMillan, N. D.; Egan, J.; Denieffe, D.; Riedel, S.; Tiernan, K.; McGowan, G.; Farrell, G.

    2005-06-01

    This paper introduces for the first time a numerical example of the data-entropy 'quality-budget' method. The paper builds on an earlier theoretical investigation into the application of this information theory approach to opto-electronic system engineering. Currently the most widely used way of analysing such a system is the power budget; this established method, however, cannot integrate noise of different generic types, nor does it provide a measure of signal quality. The data-entropy budget first introduced by McMillan and Riedel, on the other hand, is able to handle diverse forms of noise. This is achieved by applying the dimensionless 'bit measure' in a quality-budget to integrate the analysis of all types of losses. This new approach therefore facilitates the assessment of both signal quality and power issues in a unified way. The software implementation of data-entropy has been utilised for testing on a fiber optic network. The results of various new quantitative data-entropy measures on the digital system are given and their utility discussed. A new data mining technique known as data-scatter, also introduced by McMillan and Riedel, provides a useful visualisation of the relationships between data sets and is discussed. The paper ends by giving some perspective on future work in which the data-entropy technique, providing the objective difference measure on the signals, and the data-scatter technique, providing qualitative information on the signals, are integrated together for optical communication applications.
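    The paper's quality-budget formulation itself is not reproduced here, but the core idea of expressing heterogeneous losses on a common dimensionless bit scale can be hinted at with the Shannon capacity formula; all noise figures below are illustrative assumptions, not values from the paper:

```python
import math

def snr_to_bits(snr_linear):
    """Shannon capacity per sample, in dimensionless bits,
    for a given linear signal-to-noise ratio."""
    return 0.5 * math.log2(1.0 + snr_linear)

# Two independent noise powers degrading a unit-power signal can be
# combined, and their cost expressed on the common bit scale.
thermal, shot = 0.01, 0.005            # hypothetical noise powers
bits_thermal_only = snr_to_bits(1.0 / thermal)
bits_both = snr_to_bits(1.0 / (thermal + shot))
penalty_bits = bits_thermal_only - bits_both   # cost of the extra noise
```

Because every noise source is converted to bits before being summed in the budget, sources of entirely different physical origin become directly comparable, which is the advantage claimed over the power budget.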

  17. Applying Automated MR-Based Diagnostic Methods to the Memory Clinic: A Prospective Study.

    PubMed

    Klöppel, Stefan; Peter, Jessica; Ludl, Anna; Pilatus, Anne; Maier, Sabrina; Mader, Irina; Heimbach, Bernhard; Frings, Lars; Egger, Karl; Dukart, Juergen; Schroeter, Matthias L; Perneczky, Robert; Häussermann, Peter; Vach, Werner; Urbach, Horst; Teipel, Stefan; Hüll, Michael; Abdulkadir, Ahmed

    2015-01-01

    Several studies have demonstrated that fully automated pattern recognition methods applied to structural magnetic resonance imaging (MRI) aid in the diagnosis of dementia, but these conclusions are based on highly preselected samples that significantly differ from that seen in a dementia clinic. At a single dementia clinic, we evaluated the ability of a linear support vector machine trained with completely unrelated data to differentiate between Alzheimer's disease (AD), frontotemporal dementia (FTD), Lewy body dementia, and healthy aging based on 3D-T1 weighted MRI data sets. Furthermore, we predicted progression to AD in subjects with mild cognitive impairment (MCI) at baseline and automatically quantified white matter hyperintensities from FLAIR-images. Separating additionally recruited healthy elderly from those with dementia was accurate with an area under the curve (AUC) of 0.97 (according to Fig. 4). Multi-class separation of patients with either AD or FTD from other included groups was good on the training set (AUC >  0.9) but substantially less accurate (AUC = 0.76 for AD, AUC = 0.78 for FTD) on 134 cases from the local clinic. Longitudinal data from 28 cases with MCI at baseline and appropriate follow-up data were available. The computer tool discriminated progressive from stable MCI with AUC = 0.73, compared to AUC = 0.80 for the training set. A relatively low accuracy by clinicians (AUC = 0.81) illustrates the difficulties of predicting conversion in this heterogeneous cohort. This first application of a MRI-based pattern recognition method to a routine sample demonstrates feasibility, but also illustrates that automated multi-class differential diagnoses have to be the focus of future methodological developments and application studies. PMID:26401773

  18. An alternative method of gas boriding applied to the formation of borocarburized layer

    SciTech Connect

    Kulka, M. Makuch, N.; Pertek, A.; Piasecki, A.

    2012-10-15

    The borocarburized layers were produced by tandem diffusion processes: carburizing followed by boriding. An alternative method of gas boriding was proposed. Two-stage gas boronizing in N{sub 2}-H{sub 2}-BCl{sub 3} atmosphere was applied to the formation of iron borides on a carburized substrate. This process consisted of two stages, which were alternately repeated: saturation by boron and diffusion annealing. The microstructure and microhardness of the produced layer were compared to those obtained with the continuous gas boriding in H{sub 2}-BCl{sub 3} atmosphere used earlier. The first objective of two-stage boronizing, acceleration of boron diffusion, has been efficiently achieved: despite the lower temperature and shorter duration of boronizing, an approximately 1.5 times larger iron borides' zone has been formed on carburized steel. The second objective, the complete elimination of the brittle FeB phase, has not been achieved; however, the amount of FeB phase has been considerably limited. Longer diffusion annealing should provide the boride layer with a single-phase microstructure, without the FeB phase. - Highlights: An alternative method of gas boriding in H{sub 2}-N{sub 2}-BCl{sub 3} atmosphere was proposed. The process consisted of two stages of short duration, alternately repeated: saturation by boron and diffusion annealing. The acceleration of boron diffusion was efficiently implemented. The amount of FeB phase in the boride zone was limited.

  19. Intelligent dc-dc Converter Technology Developed and Tested

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2001-01-01

    The NASA Glenn Research Center and the Cleveland State University have developed a digitally controlled dc-dc converter to research the benefits of flexible, digital control on power electronics and systems. Initial research and testing has shown that conventional dc-dc converters can benefit from improved performance by using digital-signal processors and nonlinear control algorithms.

  20. Finite element method for a class of viscoelastic flows in deforming domains applied to jet breakup

    NASA Astrophysics Data System (ADS)

    Keunings, R.

    1984-05-01

    A numerical method for solving a class of transient viscoelastic flows in domains with free boundaries is presented, based on a Galerkin finite element technique combined with a predictor/corrector scheme that allows for the prediction of the stress field, velocity field and flow domain as a function of time. The numerical procedure is applied to the analysis of surface tension driven breakup of liquid jets. The nonlinear growth of a periodic disturbance imposed on an infinitely long jet and leading to breakup was studied. In the Newtonian case, the birth of satellite drops is predicted when inertia forces are present. For an Oldroyd fluid, it is shown that elasticity accelerates the breakup process at short times, which is consistent with linear stability analyses. This tendency, however, is reversed at later times, when a pattern of drops connected by stable filaments is obtained. The stabilizing effect of elastic forces, known experimentally for many years, is thus predicted, and it is shown that the breakup mechanism of a viscoelastic jet cannot be described by linearized dynamics.

  1. Finite element method for a class of viscoelastic flows in deforming domains applied to jet breakup

    SciTech Connect

    Keunings, R.

    1984-05-01

    A numerical method for solving a class of transient viscoelastic flows in domains with free boundaries is based on a Galerkin/Finite Element technique combined with a predictor-corrector scheme that allows for the prediction of stress field, velocity field and flow domain as a function of time. The numerical procedure is applied to the analysis of surface-tension-driven breakup of liquid jets. We study the nonlinear growth of a periodic disturbance imposed on an infinitely long jet and leading to breakup. In the Newtonian case, we predict the birth of satellite drops when inertia forces are present. Results for an Oldroyd fluid show that elasticity accelerates the breakup process at short times which is consistent with linear stability analyses. However, this tendency is dramatically reversed at later times when a pattern of drops connected by remarkably stable filaments is obtained. We thus predict the stabilizing effect of elastic forces, known experimentally for many years, and show that the breakup mechanism of a viscoelastic jet cannot be described by linearized dynamics.

  2. Kinetics-based phase change approach for VOF method applied to boiling flow

    NASA Astrophysics Data System (ADS)

    Cifani, Paolo; Geurts, Bernard; Kuerten, Hans

    2014-11-01

    Direct numerical simulations of boiling flows are performed to better understand the interaction of boiling phenomena with turbulence. The multiphase flow is simulated by solving a single set of equations for the whole flow field according to the one-fluid formulation, using a VOF interface capturing method. Interface terms, related to surface tension, interphase mass transfer and latent heat, are added at the phase boundary. The mass transfer rate across the interface is derived from kinetic theory and subsequently coupled with the continuum representation of the flow field. The numerical model was implemented in OpenFOAM and validated against three cases: evaporation of a spherical uniformly heated droplet, growth of a spherical bubble in a superheated liquid, and two-dimensional film boiling. The computational model will be used to investigate the change in turbulence intensity in a fully developed channel flow due to interaction with boiling heat and mass transfer. In particular, we will focus on the influence of the vapor bubble volume fraction on enhancing heat and mass transfer. Furthermore, we will investigate kinetic energy spectra in order to identify the dynamics associated with the wakes of vapor bubbles.

  3. Variational multiparticle-multihole configuration mixing method applied to pairing correlations in nuclei

    SciTech Connect

    Pillet, N.; Berger, J.-F.; Caurier, E.

    2008-08-15

    Applying a variational multiparticle-multihole configuration mixing method whose purpose is to include correlations beyond the mean field in a unified way without particle number and Pauli principle violations, we investigate pairing-like correlations in the ground states of {sup 116}Sn, {sup 106}Sn, and {sup 100}Sn. The same effective nucleon-nucleon interaction, namely, the D1S parametrization of the Gogny force, is used to derive both the mean field and correlation components of nuclear wave functions. Calculations are performed using an axially symmetric representation. The structure of correlated wave functions, their convergence with respect to the number of particle-hole excitations, and the influence of correlations on single-particle level spectra and occupation probabilities are analyzed and compared with results obtained with the same two-body effective interaction from BCS, Hartree-Fock-Bogoliubov, and particle number projected after variation BCS approaches. Calculations of nuclear radii and the first theoretical excited 0{sup +} states are compared with experimental data.

  4. Electrohydrodynamic instabilities in thin viscoelastic films: AC and DC fields

    NASA Astrophysics Data System (ADS)

    Espin, Leonardo; Corbett, Andrew; Kumar, Satish; Kumar Research Group Team

    2012-11-01

    Electrohydrodynamic instabilities in thin liquid films are a promising route for the self-assembly of well-defined topographical features on the surfaces of materials. Here, we study the effect of viscoelasticity on these instabilities under the influence of AC and DC electric fields. Viscoelasticity is incorporated via a Jeffreys model and both perfect and leaky dielectric materials are considered. In the case of DC fields, asymptotic methods are employed to shed light on the nature of a singularity that arises when solvent viscosity is neglected (i.e., the Maxwell-fluid limit). In the case of AC fields, we apply a numerical procedure based on Floquet theory to determine the maximum growth rate and corresponding wavenumber as a function of the oscillation amplitude and frequency. Elasticity is found to increase both the maximum growth rate and the corresponding wavenumber, with the effects being the most pronounced when the oscillation period is comparable to the fluid relaxation time.

  5. Output feedback trajectory stabilization of the uncertainty DC servomechanism system.

    PubMed

    Aguilar-Ibañez, Carlos; Garrido-Moctezuma, Ruben; Davila, Jorge

    2012-11-01

    This work proposes a solution for the output feedback trajectory-tracking problem in the case of an uncertain DC servomechanism system. The system consists of a pendulum actuated by a DC motor and subject to a time-varying bounded disturbance. The control law consists of a Proportional Derivative controller and an uncertain estimator that allows compensating the effects of the unknown bounded perturbation. Because the motor velocity state is not available from measurements, a second-order sliding-mode observer permits the estimation of this variable in finite time. This last feature allows applying the Separation Principle. The convergence analysis is carried out by means of the Lyapunov method. Results obtained from numerical simulations and experiments in a laboratory prototype show the performance of the closed loop system. PMID:22884179
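    The flavor of the closed loop described above can be illustrated with a crude simulation: PD position control of a motor-driven pendulum under a bounded time-varying disturbance. All plant parameters and gains below are invented for the example, and the paper's uncertainty estimator and second-order sliding-mode observer are omitted (velocity is assumed measurable here):

```python
import numpy as np

# Simplified DC servo: pendulum on a motor shaft, J*theta'' =
# u + d - b*theta' - k*sin(theta), tracked with a plain PD law.
J, b, k = 0.01, 0.1, 0.5          # inertia, friction, gravity term (assumed)
kp, kd = 40.0, 4.0                # PD gains (assumed)
dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
theta_ref = 0.5 * np.sin(t)       # desired trajectory
theta, omega = 0.0, 0.0
err = np.empty_like(t)
for i, ti in enumerate(t):
    d = 0.05 * np.sin(3.0 * ti)               # bounded disturbance
    e = theta_ref[i] - theta
    de = 0.5 * np.cos(ti) - omega             # derivative of tracking error
    u = kp * e + kd * de                      # PD control law
    alpha = (u + d - b * omega - k * np.sin(theta)) / J
    theta += omega * dt                       # explicit Euler step
    omega += alpha * dt
    err[i] = e
```

Without the disturbance estimator, the PD loop only keeps the tracking error bounded rather than driving it to zero, which is exactly the gap the paper's estimator is designed to close.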

  6. Applying Terzaghi's method of slope characterization to the recognition of Holocene land slippage

    NASA Astrophysics Data System (ADS)

    Rogers, J. David; Chung, Jae-won

    2016-07-01

    -placed trenches across headscarp grabens can provide more detailed structure of old landslides and are usually a cost-effective approach. Additional subsurface exploration can often be employed to characterize landslides. Small diameter borings are usually employed for geotechnical investigations but can easily be applied to landslides, depending on the mean particle size diameter (D50). Downhole logging of large diameter holes is the best method to evaluate complex subsurface conditions, such as dormant bedrock landslides.

  7. Efficient Nonnegative Matrix Factorization by DC Programming and DCA.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Dinh, Tao Pham

    2016-06-01

    In this letter, we consider the nonnegative matrix factorization (NMF) problem and several NMF variants. Two approaches based on DC (difference of convex functions) programming and DCA (DC algorithm) are developed. The first approach follows the alternating framework that requires solving, at each iteration, two nonnegativity-constrained least squares subproblems for which DCA-based schemes are investigated. The convergence property of the proposed algorithm is carefully studied. We show that with suitable DC decompositions, our algorithm generates most of the standard methods for the NMF problem. The second approach directly applies DCA on the whole NMF problem. Two algorithms, one computing all variables and one deploying a variable selection strategy, are proposed. The proposed methods are then adapted to solve various NMF variants, including the nonnegative factorization, the smooth regularization NMF, the sparse regularization NMF, the multilayer NMF, the convex/convex-hull NMF, and the symmetric NMF. We also show that our algorithms include several existing methods for these NMF variants as special versions. The efficiency of the proposed approaches is empirically demonstrated on both real-world and synthetic data sets. It turns out that our algorithms compete favorably with five state-of-the-art alternating nonnegative least squares algorithms. PMID:27136704
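    The letter's DCA-based schemes are not reproduced here. For orientation, a minimal sketch of the classical Lee-Seung multiplicative-update NMF, the kind of baseline the alternating framework above generalizes (random data and all settings are illustrative):

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9):
    """Classical multiplicative updates for V ~ W @ H with W, H >= 0,
    minimizing the Frobenius-norm objective (not the DCA schemes
    of the letter)."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, H fixed
    return W, H

V = np.random.default_rng(1).random((20, 15))   # nonnegative data matrix
W, H = nmf_multiplicative(V, r=5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps the factors nonnegative automatically, since each update multiplies a nonnegative matrix by a ratio of nonnegative terms.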

  8. A novel wireless power and data transmission AC to DC converter for an implantable device.

    PubMed

    Liu, Jhao-Yan; Tang, Kea-Tiong

    2013-01-01

    This article presents a novel AC to DC converter implemented in standard CMOS technology, applied for wireless power transmission. This circuit combines the functions of the rectifier and the DC to DC converter, rather than using the rectifier to convert AC to DC and then supplying the required voltage with a regulator as in the traditional method. This modification reduces the power consumption and the area of the circuit. The circuit also transfers the loading condition back to the external circuit by load shift keying (LSK), indicating whether the input power is insufficient or excessive, which increases the efficiency of the total system. The AC to DC converter is fabricated with the TSMC 90 nm CMOS process. The circuit area is 0.071 mm(2). The circuit can produce a 1 V DC voltage with a maximum output current of 10 mA from an AC input ranging from 1.5 V to 2 V, at 1 MHz to 10 MHz. PMID:24110077

  9. Different spectrophotometric methods applied for the analysis of binary mixture of flucloxacillin and amoxicillin: A comparative study.

    PubMed

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2016-05-15

    Three different spectrophotometric methods were applied for the quantitative analysis of flucloxacillin and amoxicillin in their binary mixture, namely, ratio subtraction, absorbance subtraction and amplitude modulation. A comparative study was done listing the advantages and the disadvantages of each method. All the methods were validated according to the ICH guidelines and the obtained accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory prepared mixtures and assessed by applying the standard addition technique. So, they can be used for the routine analysis of flucloxacillin and amoxicillin in their binary mixtures. PMID:26950503
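    The ratio-subtraction principle named above can be illustrated on synthetic spectra: divide the mixture spectrum by the normalized spectrum of one component, read off the constant plateau in the region where the other component does not absorb, subtract it, and multiply back. All bands below are invented Gaussians, not measured spectra of the two drugs:

```python
import numpy as np

wl = np.linspace(200, 320, 500)                   # wavelength axis, nm

def band(center, width, amp):
    """Synthetic Gaussian absorption band."""
    return amp * np.exp(-((wl - center) / width) ** 2)

x = band(230, 12, 0.8)                            # analyte X spectrum
y = band(272, 15, 0.6)                            # interferent Y spectrum
mixture = x + y                                   # measured binary mixture
y_prime = y / y.max()                             # normalized divisor Y'
ratio = mixture / y_prime                         # = X/Y' + constant
# Where X does not absorb, the ratio is flat; its value is the constant.
plateau = ratio[wl > 300].mean()
x_recovered = (ratio - plateau) * y_prime         # back-multiplied X
```

The method thus needs a spectral window where only one component absorbs, which is the kind of specificity constraint the comparative study assesses.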

  10. Different spectrophotometric methods applied for the analysis of binary mixture of flucloxacillin and amoxicillin: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-05-01

    Three different spectrophotometric methods were applied for the quantitative analysis of flucloxacillin and amoxicillin in their binary mixture, namely, ratio subtraction, absorbance subtraction and amplitude modulation. A comparative study was done listing the advantages and the disadvantages of each method. All the methods were validated according to the ICH guidelines and the obtained accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory prepared mixtures and assessed by applying the standard addition technique. So, they can be used for the routine analysis of flucloxacillin and amoxicillin in their binary mixtures.

  11. Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Wilton, D. R.; Dobbins, J. A.

    2002-01-01

    In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown, in an EFIE formulation applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 Lambda(sub 0), a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required

  12. Joint inversion of TEM and DC in roadway advanced detection based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Jiulong; Li, Fei; Peng, Suping; Sun, Xiaoyun; Zheng, Jing; Jia, Jizhe

    2015-12-01

    The transient electromagnetic method (TEM) and the direct current method (DC) are two widely applied methods for practical roadway advanced detection, but both have their limitations. To take advantage of each method, a synchronous nonlinear joint inversion method based on TEM and DC data is proposed using a particle swarm optimization (PSO) algorithm. First, a model with two low-resistivity anomalies plus interference is constructed to test the performance of the method. Then independent inversion and joint inversion are carried out on this model. The joint inversion is demonstrated to improve the interpretation of the data, because it suppresses interference and separates the resistivity anomalies ahead of and behind the roadway working face. Finally, the proposed method was successfully applied in a coal mine in the Huainan coalfield in eastern China, demonstrating its practical usefulness.
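    The PSO engine at the heart of such a joint inversion can be sketched in a few lines. The following is a minimal, generic particle swarm minimizer applied to a toy two-term objective standing in for the combined TEM and DC misfits; the objective, weights, and parameter values are illustrative assumptions, not the authors' code.

```python
import random
random.seed(0)

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Generic particle swarm minimizer (a sketch, not the paper's implementation)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def joint_misfit(model):
    """Toy stand-in for a weighted sum of TEM and DC data misfits,
    both minimized at the same 'true' model (1, 1, 1)."""
    tem_misfit = sum((m - 1.0) ** 2 for m in model)
    dc_misfit = sum((m - 1.0) ** 4 for m in model)
    return 0.5 * tem_misfit + 0.5 * dc_misfit

best_model, best_val = pso_minimize(joint_misfit, dim=3)
```

    Weighting the two misfit terms in a single objective is what makes the inversion "joint": both data sets constrain the same model vector at every PSO iteration.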

  13. RISK D/C

    NASA Technical Reports Server (NTRS)

    Dias, W. C.

    1994-01-01

    RISK D/C is a prototype program which attempts to do program risk modeling for the Space Exploration Initiative (SEI) architectures proposed in the Synthesis Group Report. Risk assessment is made with respect to risk events, their probabilities, and the severities of potential results. The program allows risk mitigation strategies to be proposed for an exploration program architecture and to be ranked with respect to their effectiveness. RISK D/C allows for the fact that risk assessment in early planning phases is subjective. Although specific to the SEI in its present form, RISK D/C can be used as a framework for developing a risk assessment program for other specific uses. RISK D/C is organized into files, or stacks, of information, including the architecture, the hazard, and the risk event stacks. Although predefined, all stacks can be upgraded by a user. The architecture stack contains information concerning the general program alternatives, which are subsequently broken down into waypoints, missions, and mission phases. The hazard stack includes any background condition which could result in a risk event. A risk event is anything unfavorable that could happen during the course of a specific point within an architecture, and the risk event stack provides the probabilities, consequences, severities, and any mitigation strategies which could be used to reduce the risk of the event, and how much the risk is reduced. RISK D/C was developed for Macintosh series computers. It requires HyperCard 2.0 or later, as well as 2Mb of RAM and System 6.0.8 or later. A Macintosh II series computer is recommended due to speed concerns. The standard distribution medium for this package is one 3.5 inch 800K Macintosh format diskette. RISK D/C was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Macintosh and HyperCard are trademarks of Apple Computer, Inc.
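    The bookkeeping RISK D/C performs, scoring risk events by probability and severity and ranking mitigation strategies by effectiveness, can be illustrated with a short sketch. All event names, numbers, and the cost-effectiveness ranking rule below are hypothetical, not taken from the program.

```python
# Hypothetical risk events: probability of occurrence and severity of result.
risk_events = [
    {"name": "launch abort",  "probability": 0.02, "severity": 9},
    {"name": "comm loss",     "probability": 0.10, "severity": 4},
    {"name": "habitat leak",  "probability": 0.01, "severity": 10},
]

# Hypothetical mitigations: each scales one event's probability by `factor`.
mitigations = [
    {"name": "engine redundancy", "event": "launch abort", "factor": 0.5, "cost": 3.0},
    {"name": "backup antenna",    "event": "comm loss",    "factor": 0.3, "cost": 1.0},
]

def expected_risk(events):
    """Sum of probability x severity over all risk events."""
    return sum(e["probability"] * e["severity"] for e in events)

def risk_reduction(mitigation, events):
    """How much expected risk the mitigation removes from its target event."""
    e = next(ev for ev in events if ev["name"] == mitigation["event"])
    return e["probability"] * (1 - mitigation["factor"]) * e["severity"]

baseline = expected_risk(risk_events)
# Rank mitigations by risk reduction per unit cost, most effective first.
ranked = sorted(mitigations,
                key=lambda m: risk_reduction(m, risk_events) / m["cost"],
                reverse=True)
```

    With these invented numbers the cheap antenna backup outranks the costly engine redundancy, which is the kind of comparison the program's mitigation ranking supports.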

  14. The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake

    NASA Astrophysics Data System (ADS)

    Gomba, Giorgio; Eineder, Michael

    2015-04-01

    L-band remote sensing systems, like the future Tandem-L mission, are disrupted by the ionized upper part of the atmosphere called the ionosphere. The ionosphere is a region of the upper atmosphere composed of gases that are ionized by solar radiation. The extent of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave path and on its spatial variations. The main effect of the ionosphere on microwaves is an additional delay, which introduces a phase difference between SAR measurements, modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformation, vegetation structure, ice and glacier changes, and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles, and anthropogenic factors demand deformation measurements: one-, two-, or three-dimensional displacement maps with resolutions of a few hundred meters and accuracies at the centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] is presented and applied to an example dataset. The 2008 Kyrgyzstan earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows how the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band Interferometric SAR data," Radar Conference, 2010 IEEE, pp. 1459-1463, 10-14 May 2010. [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
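    The core idea of the range split-spectrum method described in [1,2] is that the interferometric phase at a sub-band frequency f separates into a non-dispersive part scaling as f/f0 and a dispersive ionospheric part scaling as f0/f, so two sub-band interferograms determine both components. A minimal numeric sketch of that separation follows; the sub-band frequencies and phase values are illustrative, not from the ALOS PALSAR dataset.

```python
# Model: phi(f) = phi_nd * (f / f0) + phi_iono * (f0 / f)
# Two sub-band measurements at fL and fH give a 2x2 linear system.

f0 = 1.27e9                 # L-band centre frequency (Hz), as for ALOS PALSAR
fL, fH = 1.256e9, 1.284e9   # illustrative sub-band centre frequencies (Hz)

# Synthesize "measured" sub-band phases from known components (radians),
# so the separation can be checked against ground truth.
phi_nd_true, phi_iono_true = 12.0, 3.0
phiL = phi_nd_true * fL / f0 + phi_iono_true * f0 / fL
phiH = phi_nd_true * fH / f0 + phi_iono_true * f0 / fH

# Solve the 2x2 system by Cramer's rule:
# [fL/f0  f0/fL] [phi_nd  ]   [phiL]
# [fH/f0  f0/fH] [phi_iono] = [phiH]
a11, a12 = fL / f0, f0 / fL
a21, a22 = fH / f0, f0 / fH
det = a11 * a22 - a12 * a21
phi_nd = (phiL * a22 - a12 * phiH) / det
phi_iono = (a11 * phiH - phiL * a21) / det
```

    In practice phiL and phiH come from interferograms formed on the lower and upper halves of the range bandwidth, and the recovered dispersive term is what gets subtracted to produce the compensated interferogram.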

  15. Shear-Sensitive Liquid Crystal Coating Method Applied Through Transparent Test Surfaces

    NASA Technical Reports Server (NTRS)

    Reda, Daniel C.; Wilder, Michael C.

    1999-01-01

    Research conducted at NASA Ames Research Center has shown that the color-change response of a shear-sensitive liquid crystal coating (SSLCC) to aerodynamic shear depends on both the magnitude of the local shear vector and its direction relative to the observer's in-plane line of sight. In conventional applications, the surface of the SSLCC exposed to aerodynamic shear is illuminated with white light from the normal direction and observed from an oblique above-plane view angle of order 30 deg. In this top-light/top-view mode, shear vectors with components directed away from the observer cause the SSLCC to exhibit color-change responses. At any surface point, the maximum color change (measured from the no-shear red or orange color) always occurs when the local vector is aligned with, and directed away from, the observer. The magnitude of the color change at this vector-observer-aligned orientation scales directly with shear stress magnitude. Conversely, any surface point exposed to a shear vector with a component directed toward the observer exhibits a non-color-change response, always characterized by a rusty-red or brown color, independent of both shear magnitude and direction. These unique, highly directional color-change responses of SSLCCs to aerodynamic shear allow for the full-surface visualization and measurement of continuous shear stress vector distributions. The objective of the present research was to investigate application of the SSLCC method through a transparent test surface. In this new back-light/back-view mode, the exposed surface of the SSLCC would be subjected to aerodynamic shear stress while the contact surface between the SSLCC and the solid, transparent wall would be illuminated and viewed in the same geometrical arrangement as applied in conventional applications. 
It was unknown at the outset whether or not color-change responses would be observable from the contact surface of the SSLCC, and, if seen, how these color-change responses might

  16. DC Breakdown Experiments

    SciTech Connect

    Calatroni, S.; Descoeudres, A.; Levinsen, Y.; Taborelli, M.; Wuensch, W.

    2009-01-22

    In the context of the CLIC (Compact Linear Collider) project, investigations of DC breakdown in ultra-high vacuum are carried out in parallel with high-power RF tests. In terms of saturation breakdown field, the best material tested so far is stainless steel, followed by titanium; copper shows a breakdown field four times weaker than stainless steel. The results indicate clearly that breakdown events are initiated by field emission current and that the breakdown field is limited by the cathode. In analogy to RF, the breakdown probability has been measured in DC, and the data show similar behaviour as a function of electric field.

  17. Report on Non-Contact DC Electric Field Sensors

    SciTech Connect

    Miles, R; Bond, T; Meyer, G

    2009-06-16

    This document reports on methods used to measure DC electrostatic fields in the range of 100 to 4000 V/m using a non-contact method. The project for which this report is written requires this capability. Non-contact measurements of DC fields is complicated by the effect of the accumulation of random space-charges near the sensors which interfere with the measurement of the field-of-interest and consequently, many forms of field measurements are either limited to AC measurements or use oscillating devices to create pseudo-AC fields. The intent of this document is to report on methods discussed in the literature for non-contact measurement of DC fields. Electric field meters report either the electric field expressed in volts per distance or the voltage measured with respect to a ground reference. Common commercial applications for measuring static (DC) electric fields include measurement of surface charge on materials near electronic equipment to prevent arcing which can destroy sensitive electronic components, measurement of the potential for lightning to strike buildings or other exposed assets, measurement of the electric fields under power lines to investigate potential health risks from exposure to EM fields and measurement of fields emanating from the brain for brain diagnostic purposes. Companies that make electric field sensors include Trek (Medina, NY), MKS Instruments, Boltek, Campbell Systems, Mission Instruments, Monroe Electronics, AlphaLab, Inc. and others. In addition to commercial vendors, there are research activities continuing in the MEMS and optical arenas to make compact devices using the principles applied to the larger commercial sensors.
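    One widely used non-contact approach in this literature is the rotating-vane field mill, which periodically screens the sense electrode with a grounded shutter so the charge induced by the DC field becomes a measurable AC current, i(t) = eps0 * E * dA(t)/dt. A minimal numeric sketch of that operating principle follows; the electrode area, chop frequency, and field value are illustrative, not from any specific commercial sensor.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def field_mill_peak_current(E, area, chop_hz):
    """Peak sense-electrode current for a sinusoidally chopped exposed area.

    Exposed area A(t) = area * (1 + sin(w*t)) / 2, so dA/dt peaks at area*w/2,
    and the induced current i(t) = EPS0 * E * dA/dt peaks accordingly.
    """
    w = 2.0 * math.pi * chop_hz
    return EPS0 * E * area * w / 2.0

# A 1000 V/m field, 10 cm^2 electrode, 100 Hz chopping gives a current
# in the nanoampere range -- hence the sensitive electrometer front ends
# used in commercial field mills.
i_peak = field_mill_peak_current(E=1000.0, area=1e-3, chop_hz=100.0)
```

    The chopping is what sidesteps the space-charge problem noted above: a slowly accumulating static charge contributes little at the chop frequency, while the field of interest appears as a synchronous AC signal.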

  18. [Protein assay by the modified Dumas method applied to preparations of plasma proteins].

    PubMed

    Blondel, P; Vian, L

    1993-01-01

    Protein quantification according to the Pharmacopoeia method, which is based on the Kjeldahl method, is time-consuming. The development of an automated analyzer using the modified Dumas method divides the analysis time by 15 (6 minutes versus over 90 minutes). The results show no statistical differences between the official method and this one. PMID:8154798

  19. The Intertextual Method for Art Education Applied in Japanese Paper Theatre--A Study on Discovering Intercultural Differences

    ERIC Educational Resources Information Center

    Paatela-Nieminen, Martina

    2008-01-01

    In art education we need methods for studying works of art and visual culture interculturally because there are many multicultural art classes and little consensus as to how to interpret art in different cultures. In this article my central aim was to apply the intertextual method that I developed in my doctoral thesis for Western art education to…

  20. 187Re - 187Os Nuclear Geochronometry: A New Dating Method Applied to Old Ores

    NASA Astrophysics Data System (ADS)

    Roller, Goetz

    2015-04-01

    187Re - 187Os nuclear geochronometry is a newly developed dating method especially (but not only) for PGE-hosting magmatic ore deposits. It combines ideas of nuclear astrophysics with geochronology. For this, the concept of sudden nucleosynthesis [1-3] is used to calculate so-called nucleogeochronometric rhenium-osmium two-point-isochron (TPI) ages. Here, the method is applied to the Sudbury Igneous Complex (SIC) and the Stillwater Complex (SC), using a set of two nuclear geochronometers, named the BARBERTON (Re/Os = 0.849, 187Os/186Os = 10.04 ± 0.015 [4]) and the IVREA (Re/Os = 0.951, 187Os/186Os = 1.9360 ± 0.0015 [5]) nuclear geochronometer. Calculated TPI ages are consistent with results from Sm-Nd geochronology, a previously published Re-Os molybdenum age of 2740 ± 80 Ma for the G-chromitite of the SC [6], and a Re-Os isochron age of 1689 ± 160 Ma for the Strathcona ores of the SIC [7]. This leads to an alternative explanation of the peculiar and enigmatic 187Os/186Osi isotopic signatures reported from both ore deposits. For example, for a TPI age of 2717 ± 100 Ma the Ultramafic Series of the SC contains both extremely low (subchondritic) 187Os/186Osi ratios (187Os/186Osi = 0.125 ± 0.067) and extremely radiogenic isotopic signatures (187Os/186Osi = 6.55 ± 1.7 [6]) in mineral separates (chromites) and whole-rock samples, respectively. Within the Strathcona ores of the SIC, even more pronounced radiogenic 187Os/186Os initial ratios can be calculated for TPI ages between 1586 ± 63 Ma (187Os/186Osi = 8.998 ± 0.045) and 1733 ± 84 Ma (187Os/186Osi = 8.901 ± 0.059). These results are in line with the recalculated Re-Os isochron age of 1689 ± 160 Ma (187Os/186Osi = 8.8 ± 2.3 [7]). In the light of nuclear geochronometry, the occurrence of such peculiar 187Os/186Osi isotopic signatures within one and the same lithological horizon is plausible if explained by mingling of the two nucleogeochronometric (BARBERTON and IVREA) reservoirs containing

  1. Novel methods for physical mapping of the human genome applied to the long arm of chromosome 5

    SciTech Connect

    McClelland, M.

    1991-01-01

    Our objective was to develop novel methods for mapping of the human genome. The techniques to be assessed were: (1) three methods for the production of unique sequence clones from the region of interest; (2) novel methods for the production and separation of multi-megabase DNA fragments; (3) methods for the production of "physical linking clones" that contain rare restriction sites; (4) application of these methods and available resources to map the region of interest. We have also concentrated on developing rare-cleavage tools based on restriction enzymes and methylases. In the last year we: studied the effect of methylation on enzymes used for genome mapping; collaborated on the characterization of two new isoschizomers; developed a reliable way to produce partial digests of DNA in agarose and applied it to the human genome; and applied a method to double the apparent specificity of "rare-cutter" endonucleases. 35 refs., 1 tab.

  2. DC arc weld starter

    DOEpatents

    Campiotti, Richard H.; Hopwood, James E.

    1990-01-01

    A system for starting an arc for welding uses three DC power supplies, a high voltage supply for initiating the arc, an intermediate voltage supply for sustaining the arc, and a low voltage welding supply directly connected across the gap after the high voltage supply is disconnected.

  3. DYLOS DC110

    EPA Science Inventory

    The Dylos DC1100 air quality monitor measures particulate matter (PM) to provide a continuous assessment of indoor air quality. The unit counts particles in two size ranges: large and small. According to the manufacturer, large particles have diameters between 2.5 and 10 micromet...

  4. Method of protecting the surface of a substrate. [by applying aluminide coating

    NASA Technical Reports Server (NTRS)

    Gedwill, M. A. (Inventor); Grisaffe, S. J.

    1974-01-01

    The surface of a metallic base system is initially coated with a metallic alloy layer that is ductile and oxidation resistant. An aluminide coating is then applied to the metallic alloy layer. The chemistry of the metallic alloy layer is such that the oxidation resistance of the subsequently aluminized outermost layer is not seriously degraded.

  5. Nutrient runoff losses from liquid dairy manure applied with low-disturbance methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Manure applied to cropland is a source of P and N in surface runoff and can contribute to impairment of surface waters. Immediate tillage incorporates manure into the soil, which may reduce nutrient loss in runoff, as well as N loss via NH3 volatilization. But tillage also incorporates crop residue,...

  6. Nutrient runoff losses from liquid dairy manure applied with low-disturbance methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Manure applied to cropland is a source of phosphorus and nitrogen in surface runoff and can contribute to impairment of surface waters. Immediate tillage incorporates manure into the soil, thus reducing nutrient loss in runoff, as well as nitrogen loss via ammonia volatilization. But tillage also in...

  7. Methods in (Applied) Folk Linguistics: Getting into the Minds of the Folk

    ERIC Educational Resources Information Center

    Preston, Dennis R.

    2011-01-01

    This paper deals with data gathering and interpretation in folk linguistics, but, as the parenthetical title suggests, it is not limited to any prejudged notion of what approaches or techniques might be most relevant to the wide variety of concerns encompassed by applied linguistics. In this article, the author conceives of folk linguistics…

  8. Simulation of rarefied gas flows on the basis of the Boltzmann kinetic equation solved by applying a conservative projection method

    NASA Astrophysics Data System (ADS)

    Dodulad, O. I.; Kloss, Yu. Yu.; Potapov, A. P.; Tcheremissine, F. G.; Shuvalov, P. V.

    2016-06-01

    Flows of a simple rarefied gas and gas mixtures are computed on the basis of the Boltzmann kinetic equation, which is solved by applying various versions of the conservative projection method, namely, a two-point method for a simple gas and gas mixtures with a small difference between the molecular masses and a multipoint method in the case of a large mass difference. Examples of steady and unsteady flows are computed in a wide range of Mach and Knudsen numbers.

  9. Integrating basic and applied research and the utility of Lattal and Perone's Handbook of research methods in human operant behavior.

    PubMed Central

    Stromer, R

    2000-01-01

    Lattal and Perone's Handbook of methods used in human operant research on behavioral processes will be a valuable resource for researchers who want to bridge laboratory developments with applied study. As a supplemental resource, investigators are also encouraged to examine the series of papers in the Journal of Applied Behavior Analysis that discuss basic research and its potential for application. Increased knowledge of behavioral processes in laboratory research could lead to innovative solutions to practical problems addressed by applied behavior analysts in the home, classroom, clinic, and community. PMID:10738963

  10. DC-DC powering for the CMS pixel upgrade

    NASA Astrophysics Data System (ADS)

    Feld, Lutz; Fleck, Martin; Friedrichs, Marcel; Hensch, Richard; Karpinski, Waclaw; Klein, Katja; Rittich, David; Sammet, Jan; Wlochal, Michael

    2013-12-01

    The CMS experiment plans to replace its silicon pixel detector with a new one with improved rate capability and an additional detection layer at the end of 2016. In order to cope with the increased number of detector modules the new pixel detector will be powered via DC-DC converters close to the sensitive detector volume. This paper reviews the DC-DC powering scheme and reports on the ongoing R&D program to develop converters for the pixel upgrade. Design choices are discussed and results from the electrical and thermal characterisation of converter prototypes are shown. An emphasis is put on system tests with up to 24 converters. The performance of pixel modules powered by DC-DC converters is compared to conventional powering. The integration of the DC-DC powering scheme into the pixel detector is described and system design issues are reviewed.

  11. Regulated dc-to-dc converter features low power drain

    NASA Technical Reports Server (NTRS)

    Thornwall, J.

    1968-01-01

    A regulated dc-to-dc converter requires negligible standby power for the operation of critical electronic equipment. The main operating circuitry consumes power intermittently according to load conditions, rather than constantly.

  12. The Null method applied to GNSS three-carrier phase ambiguity resolution

    NASA Astrophysics Data System (ADS)

    Fernández-Plazaola, U.; Martín-Guerrero, T. M.; Entrambasaguas-Muñoz, J. T.; Martín-Neira, M.

    2004-09-01

    The Null method is a technique to fix the ambiguity in L1 phase measurements of the global positioning system (GPS). The method is adapted to new global navigation satellite systems (GNSS) which offer phase measurements at three frequencies. In order to validate the efficiency of the adapted method, results obtained using a software simulator and an emulator are presented. The results are then compared to those obtained with the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Good performance of the Null method in new GNSS systems is shown.

  13. Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys

    PubMed Central

    DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A

    2014-01-01

    Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher’s goal. PMID:23855598
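    The paper's central recommendation, combining the propensity score weight with the survey weight, amounts to multiplying the two weights before forming weighted means. A simulation sketch of that combination follows; the data-generating model and survey weights are invented for illustration, and the propensity score is taken as known by construction (in practice it would be estimated, e.g., by logistic regression).

```python
import random
random.seed(1)

# Simulate a survey sample: confounder x drives both treatment and outcome.
n = 20000
rows = []
for _ in range(n):
    x = random.random()                          # confounder in [0, 1)
    p_treat = 0.2 + 0.6 * x                      # true propensity score
    t = 1 if random.random() < p_treat else 0    # treatment indicator
    y = 2.0 * t + 3.0 * x + random.gauss(0, 1)   # true treatment effect = 2.0
    sw = 1.0 + x                                 # illustrative survey weight
    rows.append((x, t, y, sw, p_treat))

def weighted_mean(vals, wts):
    return sum(v * w for v, w in zip(vals, wts)) / sum(wts)

# Combined weight: survey weight times the inverse-probability-of-treatment
# weight (1/p for treated, 1/(1-p) for untreated).
treated   = [(y, sw / p)       for x, t, y, sw, p in rows if t == 1]
untreated = [(y, sw / (1 - p)) for x, t, y, sw, p in rows if t == 0]

ate = weighted_mean(*zip(*treated)) - weighted_mean(*zip(*untreated))
```

    Dropping the survey weight `sw` from the combined weight would still remove confounding by x in the sample, but the estimate would then target the sample rather than the survey population, which is the distinction the paper stresses.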

  14. A new, safer method of applying antimetabolites during glaucoma filtering surgery.

    PubMed

    Melo, António B; Spaeth, George L

    2010-01-01

    Blebs resulting from glaucoma filtration surgery tend to result in lower intraocular pressure and to be associated with fewer complications when they are diffuse and spread over the globe rather than localized to the area over the scleral flap. One way to achieve this type of bleb morphology is by applying the antimetabolite to a larger area than the one usually used in the past, in which the antimetabolite was placed only over the area of the scleral flap. In this article, the authors present a safe and inexpensive technique, which consists of using sponges with long, colored tails. This allows applying antimetabolite as far under the Tenon's capsule as desired without the risk of losing the sponges in the sub-Tenon's space. PMID:20507025

  15. Learning and applying new quality improvement methods to the school health setting.

    PubMed

    Elik, Laurel L

    2013-11-01

    A school health registered nurse identified medication administration documentation errors by unlicensed assistive personnel (UAP) in a system of school health clinics in an urban setting. This nurse applied the Lean Six Sigma Define, Measure, Analyze, Improve, Control process-improvement methodology to improve the process. The UAP medication administration documentation error rate improved from 68% to 35%. This methodology may be used by school nurses to collaboratively look at ways to improve processes at the point of care. PMID:24386696

  16. Enhancement of Voltage Stability of DC Smart Grid During Islanded Mode by Load Shedding Scheme

    NASA Astrophysics Data System (ADS)

    Nassor, Thabit Salim; Senjyu, Tomonobu; Yona, Atsushi

    2015-10-01

    This paper presents the voltage stability of a DC smart grid based on renewable energy resources during grid connected and isolated modes. During the islanded mode the load shedding, based on the state of charge of the battery and distribution line voltage, was proposed for voltage stability and reservation of critical load power. The analyzed power system comprises a wind turbine, a photovoltaic generator, storage battery as controllable load, DC loads, and power converters. A fuzzy logic control strategy was applied for power consumption control of controllable loads and the grid-connected dual active bridge series resonant converters. The proposed DC Smart Grid operation has been verified by simulation using MATLAB® and PLECS® Blockset. The obtained results show the effectiveness of the proposed method.
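    The proposed scheme sheds non-critical load when the battery state of charge or the distribution-line voltage falls below acceptable limits. A minimal threshold-rule sketch of that idea follows; the thresholds, load names, and shed-largest-first policy are illustrative assumptions, since the paper itself uses a fuzzy logic controller rather than fixed rules.

```python
def loads_to_shed(soc, bus_voltage, loads, soc_min=0.3, v_min=0.95):
    """Return names of non-critical loads to disconnect, largest first.

    soc: battery state of charge (0..1)
    bus_voltage: per-unit DC distribution-line voltage
    loads: list of (name, power_kW, critical) tuples
    """
    if soc >= soc_min and bus_voltage >= v_min:
        return []                                  # grid healthy: shed nothing
    sheddable = sorted((l for l in loads if not l[2]),
                       key=lambda l: l[1], reverse=True)
    return [name for name, _, _ in sheddable]

loads = [("ventilation", 5.0, False),
         ("lighting", 2.0, False),
         ("control room", 1.0, True)]              # critical load is preserved

# Healthy islanded operation: no shedding.
assert loads_to_shed(0.8, 1.00, loads) == []
# Low state of charge and undervoltage: non-critical loads are dropped.
shed = loads_to_shed(0.2, 0.93, loads)
```

    The critical load never appears in the shed list, mirroring the paper's goal of reserving power for critical loads during islanded operation.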

  17. Synthesis of Few-Layer Graphene Using DC PE-CVD

    NASA Astrophysics Data System (ADS)

    Kim, Jeong Hyuk; Castro, Edward Joseph D.; Hwang, Yong Gyoo; Lee, Choong Hun

    2011-12-01

    Few-layer graphene (FLG) was successfully grown on polycrystalline Ni films or foils on a large scale using DC plasma-enhanced chemical vapor deposition (DC PE-CVD), as confirmed by Raman spectra of the samples. The size of the graphene films depends on the area of the Ni film as well as on the DC PE-CVD chamber size. Synthesis time affects the quality of the graphene produced; however, further analysis and experiments are needed to identify the optimum conditions for producing better-quality graphene. The applied plasma voltage, on the other hand, influenced the minimization of defects in the grown graphene. A method is also presented for producing a free-standing PMMA/graphene membrane on a FeCl3(aq) solution, which can then be transferred to a desired substrate.

  18. Purists need not apply: the case for pragmatism in mixed methods research.

    PubMed

    Florczak, Kristine L

    2014-10-01

    The purpose of this column is to describe several different ways of conducting mixed method research. The paradigms that underpin both qualitative and quantitative research are considered, along with a cursory review of classical pragmatism as it relates to conducting mixed methods studies. Finally, the idea of loosely coupled systems as a means to support mixed methods studies is proposed, along with several caveats for researchers who desire to use this new way of obtaining knowledge. PMID:25248767

  19. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  20. Applying Upwind Godunov Methods to Calculate Two-Phase Mixture Conservation Laws

    NASA Astrophysics Data System (ADS)

    Zeidan, D.

    2010-09-01

    This paper continues a previous work (ICNAAM 2009; AIP Conference Proceedings, 1168, 601-604) on solving a hyperbolic conservative model for compressible gas-solid mixture flow using upwind Godunov methods. The numerical resolution of the model with the Godunov first-order upwind and MUSCL-Hancock methods is reported. Both methods are based on the HLL Riemann solver in the framework of finite volume techniques. Calculation results are presented for a series of one-dimensional test problems. The results show that upwind Godunov methods are accurate and robust enough for two-phase mixture conservation laws.
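    The HLL building block the paper relies on is easiest to see on a scalar law. Below is a first-order Godunov-type finite-volume update with the HLL flux, applied to the inviscid Burgers equation as a stand-in for the two-phase mixture system; the grid, CFL number, and Riemann initial data are illustrative choices, not the paper's test problems.

```python
# Solve u_t + (u^2/2)_x = 0 with a first-order finite-volume scheme
# using the HLL numerical flux at each cell interface.

def flux(u):
    return 0.5 * u * u

def hll_flux(ul, ur):
    """HLL flux with simple wave-speed estimates sl, sr (f'(u) = u for Burgers)."""
    sl, sr = min(ul, ur), max(ul, ur)
    if sl >= 0:
        return flux(ul)
    if sr <= 0:
        return flux(ur)
    return (sr * flux(ul) - sl * flux(ur) + sl * sr * (ur - ul)) / (sr - sl)

# Riemann data: u = 1 on the left, 0 on the right; the exact solution is a
# shock moving at speed 1/2 (Rankine-Hugoniot).
nx = 200
dx = 1.0 / nx
dt = 0.4 * dx          # CFL = 0.4 with max wave speed 1
u = [1.0 if i < nx // 2 else 0.0 for i in range(nx)]
for _ in range(100):   # advance to t = 0.2; shock front near x = 0.6
    f = [hll_flux(u[i], u[i + 1]) for i in range(nx - 1)]
    u = ([u[0]] +
         [u[i] - dt / dx * (f[i] - f[i - 1]) for i in range(1, nx - 1)] +
         [u[-1]])
```

    For the mixture system the same update applies component-wise to the conserved state vector, with sl and sr estimated from the system's characteristic speeds.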

  1. A combined TLD/emulsion method of sampling dosimetry applied to Apollo missions

    NASA Technical Reports Server (NTRS)

    Schaefer, H. J.

    1979-01-01

    A system which simplifies the complex monitoring methods used to measure the astronaut's radiation exposure in space is proposed. The excess dose equivalents of trapped protons and secondary neutrons, protons, and alpha particles from local nuclear interactions are determined and a combined thermoluminescent dosimeter (TLD)/nuclear emulsion method which measures the absorbed dose with thermoluminescent dosimeter chips is presented.

  2. Applying Formal Methods and Object-Oriented Design to Existing Flight Software

    NASA Technical Reports Server (NTRS)

    Cheng, Betty; Auernheimer, Brent

    1993-01-01

    This paper describes a project applying formal methods to a portion of the shuttle on-orbit digital autopilot (DAP). Three objectives of the project were to: demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.

  3. A New Machine Classification Method Applied to Human Peripheral Blood Leukocytes.

    ERIC Educational Resources Information Center

    Rorvig, Mark E.; And Others

    1993-01-01

    Discusses pattern classification of images by computer and describes the Two Domain Method in which expert knowledge is acquired using multidimensional scaling of judgments of dissimilarities and linear mapping. An application of the Two Domain Method that tested its power to discriminate two patterns of human blood leukocyte distribution is…

  4. A novel method for applying reduced graphene oxide directly to electronic textiles from yarns to fabrics.

    PubMed

    Yun, Yong Ju; Hong, Won G; Kim, Wan-Joong; Jun, Yongseok; Kim, Byung Hoon

    2013-10-25

    Conductive, flexible, and durable reduced graphene oxide (RGO) textiles with a facile preparation method are presented. BSA proteins serve as universal adhesives for improving the adsorption of GO onto any textile, irrespective of the material and the surface conditions. Using this method, we successfully prepared various RGO textiles based on nylon-6 yarns, cotton yarns, polyester yarns, and nonwoven fabrics. PMID:23946273

  5. The quasi-exactly solvable potentials method applied to the three-body problem

    SciTech Connect

    Chafa, F.; Chouchaoui, A. . E-mail: akchouchaoui@yahoo.fr; Hachemane, M.; Ighezou, F.Z.

    2007-05-15

    The quasi-exactly solvable potentials method is used to determine the energies and the corresponding exact eigenfunctions for three families of potentials playing an important role in the description of interactions occurring between three particles of equal mass. The obtained results may also be used as a test in evaluating the performance of numerical methods.

  6. Applying Research Methods to a Gerontological Population: Matching Data Collection to Characteristics of Older Persons

    ERIC Educational Resources Information Center

    Weil, Joyce

    2015-01-01

    As Baby Boomers reach 65 years of age and methods of studying older populations are becoming increasingly varied (e.g., including mixed methods designs, on-line surveys, and video-based environments), there is renewed interest in evaluating methodologies used to collect data with older persons. The goal of this article is to examine…

  7. Applying cognitive developmental psychology to middle school physics learning: The rule assessment method

    NASA Astrophysics Data System (ADS)

    Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.

    2013-01-01

    Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two-factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.
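    As a concrete illustration of the rule assessment idea, the sketch below classifies a student's answers on two-factor torque items by their best agreement with Siegler-style rule predictions. This is a minimal sketch assuming a dominant factor (weight) and a subordinate factor (distance); the item encoding, the reduced rule set (I, II, IV), and the scoring are illustrative, not taken from the article.

```python
def correct_answer(item):
    """Which side goes down for a torque item ((w1, d1), (w2, d2))."""
    (w1, d1), (w2, d2) = item
    t1, t2 = w1 * d1, w2 * d2
    return 'left' if t1 > t2 else 'right' if t2 > t1 else 'same'

def rule_answer(rule, item):
    """Predicted answer under a given (simplified) Siegler rule."""
    (w1, d1), (w2, d2) = item
    if rule == 'I':    # attend to the dominant factor (weight) only
        return 'left' if w1 > w2 else 'right' if w2 > w1 else 'same'
    if rule == 'II':   # use the subordinate factor only to break weight ties
        if w1 != w2:
            return 'left' if w1 > w2 else 'right'
        return 'left' if d1 > d2 else 'right' if d2 > d1 else 'same'
    return correct_answer(item)  # rule 'IV': full integration of both factors

def assess(responses, items):
    """Assign the rule whose predictions best match the child's answers."""
    scores = {r: sum(rule_answer(r, it) == resp
                     for it, resp in zip(items, responses))
              for r in ('I', 'II', 'IV')}
    return max(scores, key=scores.get)
```

A student who always picks the heavier side (and says "same" on weight ties) would be assigned Rule I by `assess`.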

  8. The cost of applying current helicopter external noise reduction methods while maintaining realistic vehicle performance

    NASA Technical Reports Server (NTRS)

    Bowes, M. A.

    1978-01-01

    Analytical methods were developed and/or adopted for calculating helicopter component noise, and these methods were incorporated into a unified total vehicle noise calculation model. Analytical methods were also developed for calculating the effects of noise reduction methodology on helicopter design, performance, and cost. These methods were used to calculate changes in noise, design, performance, and cost due to the incorporation of engine and main rotor noise reduction methods. All noise reduction techniques were evaluated in the context of an established mission performance criterion which included consideration of hovering ceiling, forward flight range/speed/payload, and rotor stall margin. The results indicate that small, but meaningful, reductions in helicopter noise can be obtained by treating the turbine engine exhaust duct. Furthermore, these reductions do not result in excessive life cycle cost penalties. Currently available main rotor noise reduction methodology, however, is shown to be inadequate and excessively costly.

  9. Applying the stochastic Galerkin method to epidemic models with uncertainty in the parameters.

    PubMed

    Harman, David B; Johnston, Peter R

    2016-07-01

    Parameters in modelling are not always known with absolute certainty. In epidemic modelling, this is true of many of the parameters. It is important for this uncertainty to be included in any model. This paper looks at using the stochastic Galerkin method to solve an SIR model with uncertainty in the parameters. The results obtained from the stochastic Galerkin method are then compared with results obtained through Monte Carlo sampling. The computational cost of each method is also compared. It is shown that the stochastic Galerkin method produces good results, even at low-order expansions, at a much lower computational cost than Monte Carlo sampling. It is also shown that the stochastic Galerkin method does not always converge, and this non-convergence is explored. PMID:27091743
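    For context, the Monte Carlo side of the comparison can be sketched in a few lines: sample the uncertain parameter, integrate the SIR equations, and average a quantity of interest. The uniform distribution of beta, the step sizes, and the peak-infection output below are illustrative assumptions, not values from the paper.

```python
import random

def sir_step(s, i, r, beta, gamma, dt):
    # Forward-Euler update of the standard SIR equations.
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def monte_carlo_peak(n_samples=2000, seed=0):
    """Sample an uncertain contact rate beta and average the peak
    infected fraction over the samples (plain Monte Carlo)."""
    rng = random.Random(seed)
    peaks = []
    for _ in range(n_samples):
        beta = rng.uniform(0.25, 0.35)  # assumed uncertainty in beta
        gamma = 0.1
        s, i, r = 0.99, 0.01, 0.0
        peak = i
        for _ in range(2000):           # integrate to t = 200
            s, i, r = sir_step(s, i, r, beta, gamma, 0.1)
            peak = max(peak, i)
        peaks.append(peak)
    return sum(peaks) / len(peaks)
```

A stochastic Galerkin solver would instead expand (S, I, R) in polynomials of the random parameter and solve one larger coupled ODE system, which is where its cost advantage over repeated sampling comes from.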

  10. An analysis method for asymmetric resonator transmission applied to superconducting devices

    NASA Astrophysics Data System (ADS)

    Khalil, M. S.; Stoutimore, M. J. A.; Wellstood, F. C.; Osborn, K. D.

    2012-03-01

    We examine the transmission through nonideal microwave resonant circuits. The general analytical resonance line shape is derived for both inductive and capacitive coupling with mismatched input and output transmission impedances, and it is found that, for certain non-ideal conditions, the line shape is asymmetric. We describe an analysis method for extracting an accurate internal quality factor (Qi), the diameter correction method (DCM), and compare it to the conventional method used for millikelvin resonator measurements, the φ rotation method (φRM). We analytically find that the φRM deterministically overestimates Qi when the asymmetry of the resonance line shape is high, and that this error is eliminated with the DCM. A consistent discrepancy between the two methods is observed when they are used to analyze both simulations from a numerical linear solver and data from asymmetric coplanar superconducting thin-film resonators.

  11. Applying computational methods to interpret experimental results in tribology and enantioselective catalysis

    NASA Astrophysics Data System (ADS)

    Garvey, Michael T.

    Computational methods are rapidly becoming a mainstay in the field of chemistry. Advances in computational methods (both theory and implementation), the increasing availability of computational resources, and the advancement of parallel computing are some of the major forces driving this trend. It is now possible to perform density functional theory (DFT) calculations with chemical accuracy for model systems that can be interrogated experimentally. This allows computational methods to supplement or complement experimental methods. There are even cases where DFT calculations can give insight into processes and interactions that cannot be interrogated directly by current experimental methods. This work presents several examples of the application of computational methods to the interpretation and analysis of experimentally obtained results. First, tribological systems were investigated primarily with full-potential linearized augmented plane wave (FLAPW) method DFT calculations. Second, small organic molecules adsorbed on Pd(111) were studied using projector-augmented wave (PAW) method DFT calculations and scanning tunneling microscopy (STM) image simulations to investigate molecular interactions involved in enantioselective heterogeneous catalysis. A method for calculating pressure-dependent shear properties of model boundary-layer lubricants is demonstrated. The calculated values are compared with experimentally obtained results. For the case of methyl pyruvate adsorbed on Pd(111), DFT-calculated adsorption energies and structures are used along with STM simulations to identify species observed by STM imaging. A previously unobserved enol species is discovered to be present along with the expected keto species. The information about methyl pyruvate species on Pd(111) is combined with previously published studies of S-alpha-(1-naphthyl)-ethylamine (NEA) to understand the nature of their interaction upon coadsorption on Pd(111). DFT calculated structures and…

  12. A dc magnetic field distribution transducer (abstract)

    NASA Astrophysics Data System (ADS)

    Hristoforou, E.

    1991-04-01

    A new way of measuring magnetic field distribution is proposed, based on the change of the response of a magnetostrictive delay line (MDL) to a varying dc magnetic field. The principal idea runs as follows: an array of wires Ci, transmitting pulsed current Ie, crosses an array of MDLs Lj at 45°. The resulting pulsed field at the crossing points Pij excites an acoustic pulse in the lines, detected as a voltage Voij by short coils placed close to one end. If a dc magnetic field Hdc is applied at the point Pij, the acoustic pulse, and hence Voij, changes. Experimental results are given showing the dependence of V0 on the applied dc field under various values of Ie for a 1 mm wide Metglas 2605SC MDL. The function of Vom vs Hdc under various values of Ie is also given, where Vom is the maximum of the absolute positive and negative peaks of V0. The first derivative of this function equals zero for two values of Hdc, corresponding to approximately equal positive and negative peaks of V0. Thus, after dividing this function into four parts, a comparison of these two peaks with the experimental data is used to find the orientation and magnitude of the dc field on the MDL axis. It was also found that the V0 corresponding to an Hdc applied at an angle v to the MDL equals the response to a dc field of magnitude Hdc cos(v) applied along the length of the line. So, by adding another array of delay lines L'i, identical but orthogonal to the previous MDL array Lj and crossing the conducting wire array Ci at 45°, we can keep the same number of crossing points. Hence, measurements from the two delay lines Li and L'i corresponding to Pij give a 2-d vector of the dc magnetic field applied at this point. The uniformity and resolution of such a transducer can be improved by using the recently developed FeSiB wires after stress annealing. Future work is to be done to increase the frequency and the range of the measurable dc field.
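    The final step described in the abstract, combining the readings of two orthogonal delay lines into a 2-d field vector, amounts to treating the two Hdc cos(v) projections as Cartesian components of the field. A minimal sketch (the function name and units are hypothetical):

```python
import math

def field_vector(h_parallel, h_perpendicular):
    """Recover the 2-D dc field from two orthogonal MDL projections.

    Each delay line only senses the field component along its own axis
    (the Hdc*cos(v) dependence noted in the abstract), so two orthogonal
    readings act as the Cartesian components of the field at the
    crossing point.  Returns (magnitude, angle in degrees).
    """
    magnitude = math.hypot(h_parallel, h_perpendicular)
    angle = math.degrees(math.atan2(h_perpendicular, h_parallel))
    return magnitude, angle
```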

  13. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    PubMed Central

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-01-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765
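    The core Taguchi screening step described above, ranking decision variables by how much varying their levels changes the system response, can be sketched with a small orthogonal array. The L4 array and the response values in the example are illustrative; the paper applies the idea to river water quality decision variables.

```python
# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def factor_effects(responses):
    """Effect of each factor: range of its level-averaged responses.

    A larger range means varying that decision variable has a larger
    effect on the system, which is how the Taguchi method sequences
    variables before the optimization search.
    """
    n_factors = len(L4[0])
    effects = []
    for f in range(n_factors):
        level_means = []
        for level in (0, 1):
            vals = [resp for run, resp in zip(L4, responses) if run[f] == level]
            level_means.append(sum(vals) / len(vals))
        effects.append(abs(level_means[1] - level_means[0]))
    return effects
```

With responses driven almost entirely by the first factor, its effect dominates the ranking, and the optimization can focus its search on that variable first.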

  14. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  15. Evaluation of the Levy Method as Applied to Vibrations of a 45 deg Delta Wing

    NASA Technical Reports Server (NTRS)

    Kruszewski, Edwin T.; Waner, Paul G., Jr.

    1959-01-01

    The Levy method which deals with an idealized structure was used to obtain the natural modes and frequencies of a large-scale built-up 45 deg. delta wing. The results from this approach, both with and without the effects of transverse shear, were compared with the results obtained experimentally and also with those calculated by the Stein-Sanders method. From these comparisons it was concluded that the method as proposed by Levy gives excellent results for thin-skin delta wings, provided that corrections are made for the effect of transverse shear.

  16. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of odour-sensitive instruments (or electronic noses). These instruments need new approaches to data processing so that their information-rich signals can be fully exploited; here, we apply a novel time-series-based technique for processing such data. The dual-channel, large array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20 M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect, and partly augments it by utilizing not only polar (mucus layer) but also non-polar (artificial) coatings. Such a device presents several challenges to multi-variate data processing: a large, redundant dataset, spatio-temporal output, and a small sample space. By applying a novel convolution approach, it has been demonstrated that these problems can be overcome. The artificial mucosa signals have been classified using a probabilistic neural network, giving an accuracy of 85%. Even better results should be possible through the selection of other sensors with lower correlation.

  17. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.

  18. A Generalized Weizsacker-Williams Method Applied to Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Ahern, Sean C.; Poyser, William J.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

    A new "Generalized" Weizsacker-Williams method (GWWM) is used to calculate approximate cross sections for relativistic peripheral proton-proton collisions. Instead of a massless photon mediator, the method allows the mediator to have mass for short-range interactions. This method generalizes the Weizsacker-Williams method (WWM) from Coulomb interactions to GWWM for strong interactions. An elastic proton-proton cross section is calculated using GWWM with experimental data for the elastic p+p interaction, with a massive mediator now playing the role of the photon. The resulting calculated cross section is compared to existing data for the elastic proton-proton interaction. A good approximate fit is found between the data and the calculation.

  19. EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPLANE

    EPA Science Inventory

    Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...

  20. Computational performance of Free Mesh Method applied to continuum mechanics problems.

    PubMed

    Yagawa, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions in fluid and solid mechanics employing FMM as well as the Enriched Free Mesh Method (EFMM), a new version of FMM: compressible flow and the sounding mechanism in air-reed instruments as applications in fluid mechanics, and automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigen-frequency analysis of an engine block as applications in solid mechanics. PMID:21558753

  1. Computational performance of Free Mesh Method applied to continuum mechanics problems

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions in fluid and solid mechanics employing FMM as well as the Enriched Free Mesh Method (EFMM), a new version of FMM: compressible flow and the sounding mechanism in air-reed instruments as applications in fluid mechanics, and automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigen-frequency analysis of an engine block as applications in solid mechanics. PMID:21558753

  2. DC current monitor

    NASA Technical Reports Server (NTRS)

    Canter, Stanley (Inventor)

    1991-01-01

    A non-intrusive DC current monitor is presented which emulates the theoretical operation of an AC transformer. A conductor, carrying the current to be measured, acts as the primary of a DC current transformer. This current is passed through the center of a secondary coil, and core positioned thereabout, and produces a magnetic flux which induces a current in the secondary proportional to the current flowing in the primary. Means are provided to periodically reset the transformer core such that the measurement inaccuracies associated with core saturation are obviated. A reset current is caused to periodically flow through the secondary coil which produces a magnetic flux oppositely polarized to the flux created by the current in the primary, thus allowing ongoing measurements to be made.

  3. [Applying the method of correlative adaptometry for evaluating the treatment efficiency of obese patients].

    PubMed

    Vasil'ev, A V; Mal'tsev, G Iu; Khrushcheva, Iu V; Razzhevaĭkin, V N; Shpitonkov, M I

    2007-01-01

    Using the method of correlative adaptometry, a large set of physiological and biochemical data was processed from patients with different degrees of obesity during dietotherapy. It was shown that the weight of the correlation graphs of the most informative parameters was initially high, rose with the severity of the disease, and decreased during dietotherapy. It was concluded that correlative adaptometry is a promising method for evaluating nutritional status and the quality of dietotherapy. PMID:17561653

  4. Development of direct-inverse 3-D methods for applied aerodynamic design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1988-01-01

    Several inverse methods have been compared and initial results indicate that differences in results are primarily due to coordinate systems and fuselage representations and not to design procedures. Further, results from a direct-inverse method that includes 3-D wing boundary layer effects, wake curvature, and wake displacement are presented. These results show that boundary layer displacements must be included in the design process for accurate results.

  5. Comparative advantages and limitations of the basic metrology methods applied to the characterization of nanomaterials.

    PubMed

    Linkov, Pavel; Artemyev, Mikhail; Efimov, Anton E; Nabiev, Igor

    2013-10-01

    Fabrication of modern nanomaterials and nanostructures with specific functional properties is both scientifically promising and commercially profitable. The preparation and use of nanomaterials require adequate methods for the control and characterization of their size, shape, chemical composition, crystalline structure, energy levels, pathways and dynamics of physical and chemical processes during their fabrication and further use. In this review, we discuss different instrumental methods for the analysis and metrology of materials and evaluate their advantages and limitations at the nanolevel. PMID:23934544

  6. Discontinuous Galerkin finite element method applied to the 1-D spherical neutron transport equation

    SciTech Connect

    Machorro, Eric. E-mail: machorro@amath.washington.edu

    2007-04-10

    Discontinuous Galerkin finite element methods are used to estimate solutions to the non-scattering 1-D spherical neutron transport equation. Various trial and test spaces are compared in the context of a few sample problems whose exact solution is known. Certain trial spaces avoid unphysical behaviors that seem to plague other methods. Comparisons with diamond differencing and simple corner-balancing are presented to highlight these improvements.

  7. Adaptive output voltage tracking controller for uncertain DC/DC boost converter

    NASA Astrophysics Data System (ADS)

    Lee, Byoung-Seoup; Kim, Seok-Kyoon; Park, Jin-Hyuk; Lee, Kyo-Beum

    2016-06-01

    This paper presents a cascade output voltage control strategy for an uncertain DC/DC boost converter adopting an adaptive current controller in its inner loop. Considering the non-linearity, load uncertainties and parameter uncertainties of the converter, the proposed controller is designed following the conventional cascade voltage controller design method. The proposed method makes the following three contributions. First, a coordinate transformation is introduced for the inner loop, enabling avoidance of the singularity problem caused by the estimates of uncertain parameters. Second, a slight modification to the adaptation law is performed to guarantee closed-loop stability in the presence of the time-varying component of the load current. Third, the outer-loop controller is devised such that its performance can be adjusted without any parameter information. The closed-loop performance is demonstrated through simulations and experiments using the DSP28335 with a 3 kW DC/DC boost converter.
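    For orientation, the plant being controlled can be sketched with the standard averaged model of a boost converter; with a fixed duty cycle D its steady state is Vo = Vin/(1 - D), the operating point the paper's cascade voltage/current controller must reach despite load and parameter uncertainty. The component values and the forward-Euler integration below are illustrative assumptions, not taken from the paper.

```python
def boost_averaged(v_in=12.0, duty=0.5, L=1e-3, C=1e-3, R=10.0,
                   dt=1e-5, t_end=1.0):
    """Open-loop averaged model of a DC/DC boost converter.

    State: inductor current i_L and output (capacitor) voltage v_o.
    With a fixed duty cycle D the model settles to Vo = Vin/(1 - D)
    and i_L = Vo / (R * (1 - D)).  Values here are illustrative.
    """
    i_L, v_o = 0.0, v_in
    steps = int(t_end / dt)
    for _ in range(steps):
        di = (v_in - (1.0 - duty) * v_o) / L       # inductor dynamics
        dv = ((1.0 - duty) * i_L - v_o / R) / C    # capacitor dynamics
        i_L += dt * di
        v_o += dt * dv
    return i_L, v_o
```

With v_in = 12 V and D = 0.5 the model settles near 24 V; the cascade scheme in the paper closes an inner loop on i_L and an outer loop on v_o around exactly these dynamics.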

  8. Properties of double-layered Ga-doped Al-zinc-oxide/titanium-doped indium-tin-oxide thin films prepared by dc magnetron sputtering applied for Si-based thin film solar cells

    SciTech Connect

    Wang, Chao-Chun; Wuu, Dong-Sing; Lin, Yang-Shih; Lien, Shui-Yang; Huang, Yung-Chuan; Liu, Chueh-Yang; Chen, Chia-Fu; Nautiyal, Asheesh; Lee, Shuo-Jen

    2011-11-15

    In this article, Ga-doped Al-zinc-oxide (GAZO)/titanium-doped indium-tin-oxide (ITIO) bi-layer films were deposited onto glass substrates by direct current (dc) magnetron sputtering. The bottom ITIO film, with a thickness of 200 nm, was sputtered onto the glass substrate. The ITIO film was post-annealed at 350 °C for 10-120 min as a seed layer. The effect of post-annealing conditions on the morphologies, electrical, and optical properties of ITIO films was investigated. A GAZO layer with a thickness of 1200 nm was continuously sputtered onto the ITIO bottom layer. The results show that the properties of the GAZO/ITIO films were strongly dependent on the post-annealing conditions. The spectral haze (T_diffuse/T_total) of the GAZO/ITIO bi-layer films increases upon increasing the post-annealing time. The haze and resistivity of the GAZO/ITIO bi-layer films were improved by the post-annealing process. After optimizing the deposition and annealing parameters, the GAZO/ITIO bi-layer film has an average transmittance of 83.20% at the 400-800 nm wavelengths, a maximum haze of 16%, and a lowest resistivity of 1.04 × 10⁻³ Ω cm. Finally, the GAZO/ITIO bi-layer films, as a front electrode for silicon-based thin film solar cells, obtained a maximum efficiency of 7.10%. These encouraging experimental results have potential applications in GAZO/ITIO bi-layer film deposition by in-line sputtering without the wet-etching process and enable the production of highly efficient, low-cost thin film solar cells.

  9. A partly-contacted epitaxial lateral overgrowth method applied to GaN material

    NASA Astrophysics Data System (ADS)

    Xiao, Ming; Zhang, Jincheng; Duan, Xiaoling; Shan, Hengsheng; Yu, Ting; Ning, Jing; Hao, Yue

    2016-04-01

    We discuss a new crystal epitaxial lateral overgrowth (ELO) method, the partly-contacted ELO (PC-ELO) method, in which the overgrowth layer only partly contacts the underlying seed layer. We also describe special mask structures, with and without lithography, and give three essential conditions for achieving the PC-ELO method. What is remarkable in the PC-ELO method is that the tilt angle of the overgrowth stripes can be eliminated by contact with the seed layer. Moreover, we report an improved lithography-free monolayer microsphere mask variant of the PC-ELO method, which was used to grow GaN. From the results of scanning electron microscopy, cathodoluminescence, x-ray diffraction (XRD), transmission electron microscopy, and atomic force microscopy (AFM), the overgrowth layer shows no tilt angle relative to the seed layer and a high-quality coalescence front (with average linear dislocation density <6.4 × 10³ cm⁻¹). Wing-stripe peak splitting of the XRD rocking curve due to tilt is no longer detectable. After coalescence, AFM surface steps show rare discontinuities due to the low misorientation of the overgrowth regions.

  10. A partly-contacted epitaxial lateral overgrowth method applied to GaN material

    PubMed Central

    Xiao, Ming; Zhang, Jincheng; Duan, Xiaoling; Shan, Hengsheng; Yu, Ting; Ning, Jing; Hao, Yue

    2016-01-01

    We discuss a new crystal epitaxial lateral overgrowth (ELO) method, the partly-contacted ELO (PC-ELO) method, in which the overgrowth layer only partly contacts the underlying seed layer. We also describe special mask structures, with and without lithography, and give three essential conditions for achieving the PC-ELO method. What is remarkable in the PC-ELO method is that the tilt angle of the overgrowth stripes can be eliminated by contact with the seed layer. Moreover, we report an improved lithography-free monolayer microsphere mask variant of the PC-ELO method, which was used to grow GaN. From the results of scanning electron microscopy, cathodoluminescence, x-ray diffraction (XRD), transmission electron microscopy, and atomic force microscopy (AFM), the overgrowth layer shows no tilt angle relative to the seed layer and a high-quality coalescence front (with average linear dislocation density <6.4 × 10³ cm⁻¹). Wing-stripe peak splitting of the XRD rocking curve due to tilt is no longer detectable. After coalescence, AFM surface steps show rare discontinuities due to the low misorientation of the overgrowth regions. PMID:27033154

  11. A partly-contacted epitaxial lateral overgrowth method applied to GaN material.

    PubMed

    Xiao, Ming; Zhang, Jincheng; Duan, Xiaoling; Shan, Hengsheng; Yu, Ting; Ning, Jing; Hao, Yue

    2016-01-01

    We discuss a new crystal epitaxial lateral overgrowth (ELO) method, the partly-contacted ELO (PC-ELO) method, in which the overgrowth layer only partly contacts the underlying seed layer. We also describe special mask structures, with and without lithography, and give three essential conditions for achieving the PC-ELO method. What is remarkable in the PC-ELO method is that the tilt angle of the overgrowth stripes can be eliminated by contact with the seed layer. Moreover, we report an improved lithography-free monolayer microsphere mask variant of the PC-ELO method, which was used to grow GaN. From the results of scanning electron microscopy, cathodoluminescence, x-ray diffraction (XRD), transmission electron microscopy, and atomic force microscopy (AFM), the overgrowth layer shows no tilt angle relative to the seed layer and a high-quality coalescence front (with average linear dislocation density <6.4 × 10(3) cm(-1)). Wing-stripe peak splitting of the XRD rocking curve due to tilt is no longer detectable. After coalescence, AFM surface steps show rare discontinuities due to the low misorientation of the overgrowth regions. PMID:27033154

  12. Methods for Smoothing Expectancy Tables Applied to the Prediction of Success in College. Research Report No. 79.

    ERIC Educational Resources Information Center

    Perrin, David W.; Whitney, Douglas R.

    Six methods for smoothing expectancy tables were compared using data for entering students at 86 colleges and universities. Linear regression analyses were applied to ACT scores and high school grades to obtain predicted first term grade point averages (FGPA's) for students entering each institution in 1969-70. Expectancy tables were constructed…
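    The regression-based smoothing idea, replacing noisy cell-by-cell proportions with values read off a fitted line, can be sketched as follows. The single-predictor model and the band layout are simplifications for illustration; the report used ACT scores and high school grades jointly and compared six smoothing methods.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

def smoothed_expectancy(xs, ys, bands):
    """Smooth an expectancy table: instead of the raw mean outcome in
    each score band, tabulate the regression prediction at the band
    midpoint, so sparse bands borrow strength from the whole sample."""
    a, b = fit_line(xs, ys)
    return {band: a + b * (0.5 * (band[0] + band[1])) for band in bands}
```

For example, `smoothed_expectancy(scores, fgpas, [(15, 25), (25, 35)])` returns one smoothed predicted FGPA per score band rather than the raw (and possibly noisy) band averages.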

  13. Aiming for the Singing Teacher: An Applied Study on Preservice Kindergarten Teachers' Singing Skills Development within a Music Methods Course

    ERIC Educational Resources Information Center

    Neokleous, Rania

    2015-01-01

    This study examined the effects of a music methods course offered at a Cypriot university on the singing skills of 33 female preservice kindergarten teachers. To systematically measure and analyze student progress, the research design was both experimental and descriptive. As an applied study which was carried out "in situ," the normal…

  14. Comparison of 15 evaporation methods applied to a small mountain lake in the northeastern USA

    USGS Publications Warehouse

    Rosenberry, D.O.; Winter, T.C.; Buso, D.C.; Likens, G.E.

    2007-01-01

    Few detailed evaporation studies exist for small lakes or reservoirs in mountainous settings. A detailed evaporation study was conducted at Mirror Lake, a 0.15 km² lake in New Hampshire, northeastern USA, as part of a long-term investigation of lake hydrology. Evaporation was determined using 14 alternate evaporation methods during six open-water seasons and compared with values from the Bowen-ratio energy-budget (BREB) method, considered the standard. Values from the Priestley-Taylor, deBruin-Keijman, and Penman methods compared most favorably with BREB-determined values. Differences from BREB values averaged 0.19, 0.27, and 0.20 mm d⁻¹, respectively, and results were within 20% of BREB values during more than 90% of the 37 monthly comparison periods. All three methods require measurement of net radiation, air temperature, change in heat stored in the lake, and vapor pressure, making them relatively data intensive. Several of the methods had substantial bias when compared with BREB values and were subsequently modified to eliminate bias. Methods that rely only on measurement of air temperature, or air temperature and solar radiation, were relatively cost-effective options for measuring evaporation at this small New England lake, outperforming some methods that require measurement of a greater number of variables. It is likely that the atmosphere above Mirror Lake was affected by occasional formation of separation eddies on the lee side of nearby high terrain, although those influences do not appear to be significant to measured evaporation from the lake when averaged over monthly periods. © 2007 Elsevier B.V. All rights reserved.
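    Of the best-performing methods named above, the Priestley-Taylor estimate is straightforward to sketch: it scales the equilibrium evaporation term Δ/(Δ+γ)·(Rn − Q) by the empirical coefficient α ≈ 1.26. The Tetens form of the saturation vapour-pressure slope and the unit conversions are standard, but the numerical inputs in the usage note are illustrative, not Mirror Lake data.

```python
import math

def svp_slope(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC) at air
    temperature t_c, via the Tetens approximation."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def priestley_taylor(rn, q_heat, t_c, alpha=1.26, gamma=0.066):
    """Priestley-Taylor open-water evaporation (mm/day).

    rn     : net radiation (W/m^2)
    q_heat : change in heat stored in the lake (W/m^2)
    t_c    : air temperature (degC)
    gamma  : psychrometric constant (kPa/degC)
    """
    delta = svp_slope(t_c)
    le = alpha * delta / (delta + gamma) * (rn - q_heat)  # latent heat flux, W/m^2
    lam = 2.45e6  # latent heat of vaporization, J/kg
    return le / lam * 86400.0  # kg/m^2/s -> mm/day
```

For instance, `priestley_taylor(150.0, 0.0, 20.0)` gives an evaporation rate of a few mm/day, the order of magnitude reported for small temperate lakes.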

  15. A fictitious domain method for fluid/solid coupling applied to the lithosphere/asthenosphere interaction.

    NASA Astrophysics Data System (ADS)

    Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel

    2014-05-01

    A large variety of geodynamical problems can be viewed as solid/fluid interaction problems coupling two bodies with different physics. In particular, the lithosphere/asthenosphere mechanical interaction in subduction zones belongs to this kind of problem, where the solid lithosphere is embedded in the asthenospheric viscous fluid. In many fields (industry, civil engineering, etc.) in which deformations of the solid and fluid are "small", numerical modelers consider the exact discretization of both domains and fit the shape of the interface between the two domains as closely as possible, solving the discretized physics problems by the Finite Element Method (FEM). However, in the context of subduction, the lithosphere undergoes large deformation and can evolve into a complex geometry, leading to important deformation of the surrounding asthenosphere. To alleviate the precise meshing of complex geometries, numerical modelers have developed non-matching interface methods called Fictitious Domain Methods (FDM). The main idea of these methods is to extend the initial problem to a bigger (and simpler) domain. In our version of FDM, we determine the forces at the immersed solid boundary required to minimize (in the least-squares sense) the difference between fluid and solid velocities at this interface. This method is first-order accurate, and its stability depends on the ratio between the fluid background mesh size and the interface discretization. We present the formulation and provide benchmarks and examples showing the potential of the method: 1) a comparison with an analytical solution of a viscous flow around a rigid body; 2) an experiment of a rigid sphere sinking in a viscous fluid (in two- and three-dimensional cases); 3) a comparison with an analog subduction experiment.
Another presentation aims at describing the geodynamical application of this method to Andean subduction dynamics, studying cyclic slab folding on the 660 km discontinuity, and its relationship

  16. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    NASA Astrophysics Data System (ADS)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.

  17. Comparative analysis of optimisation methods applied to thermal cycle of a coal fired power plant

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Łukasz; Elsner, Witold

    2013-12-01

    The paper presents a thermodynamic optimization of a 900 MW power unit for ultra-supercritical parameters, modified according to the AD700 concept. The aim of the study was to verify two optimisation methods, i.e., finding the minimum of a constrained nonlinear multivariable function (fmincon) and the Nelder-Mead method, each with its own constraint functions. The analysis was carried out using IPSEpro software combined with MATLAB, where gross power generation efficiency was chosen as the objective function. It was shown that, in comparison with the Nelder-Mead method, the fmincon function gives reasonable results and a significant reduction of computational time. Unfortunately, with an increased number of decision parameters, the benefit measured by the increase in efficiency becomes smaller. An important drawback of the fmincon method is also a lack of repeatability when using different starting points. The obtained results led to the conclusion that the Nelder-Mead method is a better tool for optimisation of thermal cycles with a high degree of complexity, such as the coal-fired power unit.
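The comparison described above, between a gradient-based constrained solver and the derivative-free Nelder-Mead method, can be sketched outside MATLAB as well. A minimal Python analogue using SciPy (SLSQP standing in for fmincon, a toy quadratic standing in for the far more complex cycle-efficiency model; the objective and bounds here are assumptions for illustration only):

```python
from scipy.optimize import minimize

# Toy smooth surrogate for "negative efficiency": minimum at x = (1, 2).
def neg_efficiency(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

x0 = [0.0, 0.0]

# Gradient-based constrained solver (closest SciPy analogue of fmincon).
res_grad = minimize(neg_efficiency, x0, method="SLSQP",
                    bounds=[(-5.0, 5.0), (-5.0, 5.0)])

# Derivative-free direct search.
res_nm = minimize(neg_efficiency, x0, method="Nelder-Mead")

# On smooth problems the gradient-based solver typically needs far fewer
# function evaluations, mirroring the run-time advantage reported above;
# on noisy or less regular objectives Nelder-Mead is often more robust.
print(res_grad.x, res_grad.nfev)
print(res_nm.x, res_nm.nfev)
```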

  18. Heterogeneity among violence-exposed women: applying person-oriented research methods.

    PubMed

    Nurius, Paula S; Macy, Rebecca J

    2008-03-01

    Variability of experience and outcomes among violence-exposed people poses considerable challenges for developing effective prevention and treatment protocols. To address these needs, the authors present an approach to research and a class of methodologies referred to as person-oriented. Person-oriented tools support assessment of meaningful patterns among people that distinguish one group from another, identifying subgroups for whom different interventions are indicated. The authors review the conceptual base of person-oriented methods, outline their distinction from more familiar variable-oriented methods, present descriptions of selected methods as well as empirical applications of person-oriented methods germane to violence exposure, and conclude with a discussion of implications for future research and translation between research and practice. The authors focus on violence against women as a population, drawing on stress and coping theory as a theoretical framework. However, person-oriented methods hold utility for investigating diversity among violence-exposed people's experiences and needs across populations and theoretical foundations. PMID:18245574

  19. Boundary element method applied to a gas-fired pin-fin-enhanced heat pipe

    SciTech Connect

    Andraka, C.E.; Knorovsky, G.A.; Drewien, C.A.

    1998-02-01

    The thermal conduction of a portion of an enhanced surface heat exchanger for a gas fired heat pipe solar receiver was modeled using the boundary element and finite element methods (BEM and FEM) to determine the effect of weld fillet size on performance of a stud welded pin fin. A process that could be utilized by others for designing the surface mesh on an object of interest, performing a conversion from the mesh into the input format utilized by the BEM code, obtaining output on the surface of the object, and displaying visual results was developed. It was determined that the weld fillet on the pin fin significantly enhanced the heat performance, improving the operating margin of the heat exchanger. The performance of the BEM program on the pin fin was measured (as computational time) and used as a performance comparison with the FEM model. Given similar surface element densities, the BEM method took longer to get a solution than the FEM method. The FEM method creates a sparse matrix that scales in storage and computation as the number of nodes (N), whereas the BEM method scales as N² in storage and N³ in computation.
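The storage-scaling contrast in the last sentence is easy to make concrete: a BEM discretization yields a fully dense system, while an FEM discretization yields a sparse one with a bounded number of nonzeros per row. A small Python illustration (the 1D three-point Laplacian stencil is a stand-in chosen for simplicity, not the pin-fin model itself):

```python
import scipy.sparse as sp

n = 1000  # number of nodes (illustrative)

# BEM assembles a dense n x n influence matrix: storage grows as n^2.
dense_entries = n * n

# FEM assembles a sparse matrix with O(n) nonzeros; here a 1D Laplacian
# stencil contributes at most 3 entries per row.
fem = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
sparse_entries = fem.nnz

print(dense_entries, sparse_entries)  # 1000000 vs 2998
```

At n = 1000 the dense operator already needs over 300 times the storage of the sparse one, and the gap widens linearly with n.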

  20. Investigation to develop a method to apply diffusion barrier to high strength fibers

    NASA Technical Reports Server (NTRS)

    Veltri, R. D.; Paradis, R. D.; Douglas, F. C.

    1975-01-01

    A radio frequency powered ion plating process was used to apply the diffusion barriers of aluminum oxide, yttrium oxide, hafnium oxide and titanium carbide to a substrate tungsten fiber. Each of the coatings was examined as to its effect on both room temperature strength and tensile strength of the base tungsten fiber. The coated fibers were then overcoated with a nickel alloy to become single cell diffusion couples. These diffusion couples were exposed to 1093 C for 24 hours, cycled between room temperature and 1093 C, and given a thermal anneal for 100 hours at 1200 C. Tensile testing and metallographic examinations determined that the hafnium oxide coating produced the best high temperature diffusion barrier for tungsten of the four coatings.

  1. Advanced panel-type influence coefficient methods applied to subsonic flows

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Rubbert, P. E.

    1975-01-01

    An advanced technique for solving the linear integral equations of three-dimensional subsonic potential flows (steady, inviscid, irrotational and incompressible) about arbitrary configurations is presented. It involves assembling select, logically consistent networks whose construction comprises four tasks, which are described in detail: surface geometry definition; singularity strength definition; control point and boundary condition specification; and calculation of induced potential or velocity. The technique is applied to seven wing examples approached by four network types: source/analysis, doublet/analysis, source/design, and doublet/design. The results demonstrate the forgiveness of the model to irregular paneling and the practicality of combined analysis/design boundary conditions. The appearance of doublet strength mismatch is a valuable indicator of locally inadequate paneling.

  2. Fourier Transform Methods Applied To The Analysis Of Microphotometric Images Of Histologic Sections

    NASA Astrophysics Data System (ADS)

    Dytch, Harvey E.; Wied, George L.

    1989-06-01

    Implementation of one-dimensional Fourier transform techniques for the analysis of histologic sections is discussed, as is the motivation for their use. Features of the frequency domain representation derived by such a transform are shown to be related to several important diagnostic clues. The interpretation of the Fourier magnitude spectrum in histologically relevant terms is examined by means of Fourier transforms of idealized tissue simulations. Some of the perturbations of these ideal spectra produced by biologic reality are discussed. Three classic types of cervical epithelial tissue are modeled, and their representation as Fourier magnitude spectra interpreted in the light of the previous results: characteristic frequency domain signatures are obtained for each. It is concluded that these techniques may provide diagnostically important objective measures, and may be applied to otherwise intractable histologic specimens with crowded and overlapping nuclei.

  3. A combined evidence Bayesian method for human ancestry inference applied to Afro-Colombians.

    PubMed

    Rishishwar, Lavanya; Conley, Andrew B; Vidakovic, Brani; Jordan, I King

    2015-12-15

    Uniparental genetic markers, mitochondrial DNA (mtDNA) and Y chromosomal DNA, are widely used for the inference of human ancestry. However, the resolution of ancestral origins based on mtDNA haplotypes is limited by the fact that such haplotypes are often found to be distributed across wide geographical regions. We have addressed this issue here by combining two sources of ancestry information that have typically been considered separately: historical records regarding population origins and genetic information on mtDNA haplotypes. To combine these distinct data sources, we applied a Bayesian approach that considers historical records, in the form of prior probabilities, together with data on the geographical distribution of mtDNA haplotypes, formulated as likelihoods, to yield ancestry assignments from posterior probabilities. This combined evidence Bayesian approach to ancestry assignment was evaluated for its ability to accurately assign sub-continental African ancestral origins to Afro-Colombians based on their mtDNA haplotypes. We demonstrate that the incorporation of historical prior probabilities via this analytical framework can provide for substantially increased resolution in sub-continental African ancestry assignment for members of this population. In addition, a personalized approach to ancestry assignment that involves the tuning of priors to individual mtDNA haplotypes yields even greater resolution for individual ancestry assignment. Despite the fact that Colombia has a large population of Afro-descendants, the ancestry of this community has been understudied relative to populations with primarily European and Native American ancestry. Thus, the application of the kind of combined evidence approach developed here to the study of ancestry in the Afro-Colombian population has the potential to be impactful. The formal Bayesian analytical framework we propose for combining historical and genetic information also has the potential to be widely applied
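The combined-evidence rule described above is the standard Bayesian update: posterior probabilities proportional to (historical prior) × (haplotype-distribution likelihood). A toy sketch in Python; the population names and numbers are invented for illustration and are not the authors' data:

```python
def posterior(priors, likelihoods):
    """Posterior ancestry probabilities from historical priors and
    mtDNA haplotype likelihoods (toy illustration)."""
    unnorm = {pop: priors[pop] * likelihoods[pop] for pop in priors}
    z = sum(unnorm.values())  # normalising constant
    return {pop: p / z for pop, p in unnorm.items()}

# Hypothetical example with two candidate source regions.
priors = {"region A": 0.7, "region B": 0.3}        # from historical records
likelihoods = {"region A": 0.4, "region B": 0.1}   # haplotype frequency in each region
post = posterior(priors, likelihoods)
```

In this toy case the historical prior sharpens an otherwise ambiguous haplotype signal, which is the mechanism behind the increased resolution the abstract reports.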

  4. [New methods in training of paediatric emergencies: medical simulation applied to paediatrics].

    PubMed

    González Gómez, J M; Chaves Vinagre, J; Ocete Hita, E; Calvo Macías, C

    2008-06-01

    Patient safety constitutes one of the main objectives in health care. Among other recommendations, such as the creation of training centres and the development of patient safety programmes, of great importance is the creation of training programmes for work teams using medical simulation. Medical simulation is defined as "a situation or environment created to allow persons to experience a representation of a real event for the purpose of practice, learning, evaluation or to understand systems or human actions". In this way, abilities can be acquired in serious and uncommon situations with no risk of harm to the patient. This study revises the origins of medical simulation and the different types of simulation are classified. The main simulators currently used in Pediatrics are presented, and the design of a simulation course applied to the training of pediatric emergencies is described, detailing all its different phases. In the first non face-to-face stage, a new concept in medical training known as e-learning is applied. In the second phase, clinical cases are carried out using robotic simulation; this is followed by a debriefing session, which is a key element for acquiring abilities and skills. Lastly, the follow-up phase allows the student to connect with the teachers to consolidate the concepts acquired during the in-person phase. In this model, the aim is to improve scientific-technical abilities in addition to a series of related abilities such as controlling crisis situations, correct leadership of work teams, distribution of tasks, communication among the team members, etc., all of these within the present concept of excellence in care and medical professionalism. PMID:18559203

  5. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  6. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.

    PubMed

    Li, Zhilin; Song, Peng

    2013-06-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763
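The refinement criterion described above, refining only within a narrow band |φ(x, y, t)| ≤ δ around the interface, can be sketched as a mask over a Cartesian grid. A minimal Python illustration (the circular interface and grid size are assumptions for the example, not the paper's test cases):

```python
import numpy as np

def refinement_mask(phi, delta):
    """Mark cells lying within a band |phi| <= delta of the interface,
    i.e. of the zero level set of phi.

    phi : 2D array of level-set values at cell centres.
    """
    return np.abs(phi) <= delta

# Example: signed distance to a circle of radius 0.25 on the unit square.
x = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
phi = np.hypot(X - 0.5, Y - 0.5) - 0.25

mask = refinement_mask(phi, delta=0.05)  # True where finer cells are placed
```

Only the cells in the band are subdivided, so the cost of the finer mesh scales with the interface length rather than the domain area.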

  7. Comparison of Modal Analysis Methods Applied to a Vibro-Acoustic Test Article

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn; Pappa, Richard; Buehrle, Ralph; Grosveld, Ferdinand

    2001-01-01

    Modal testing of a vibro-acoustic test article referred to as the Aluminum Testbed Cylinder (ATC) has provided frequency response data for the development of validated numerical models of complex structures for interior noise prediction and control. The ATC is an all aluminum, ring and stringer stiffened cylinder, 12 feet in length and 4 feet in diameter. The cylinder was designed to represent typical aircraft construction. Modal tests were conducted for several different configurations of the cylinder assembly under ambient and pressurized conditions. The purpose of this paper is to present results from dynamic testing of different ATC configurations using two modal analysis software methods: Eigensystem Realization Algorithm (ERA) and MTS IDEAS Polyreference method. The paper compares results from the two analysis methods as well as the results from various test configurations. The effects of pressurization on the modal characteristics are discussed.

  8. Mapped orthogonal functions method applied to acoustic waves-based devices

    NASA Astrophysics Data System (ADS)

    Lefebvre, J. E.; Yu, J. G.; Ratolojanahary, F. E.; Elmaimouni, L.; Xu, W. J.; Gryba, T.

    2016-06-01

    This work presents the modelling of acoustic wave-based devices of various geometries through a mapped orthogonal functions method. A specificity of the method, namely the automatic incorporation of boundary conditions into equations of motion through position-dependent physical constants, is presented in detail. Formulations are given for two classes of problems: (i) problems with guided mode propagation and (ii) problems with stationary waves. The method's interest is demonstrated by several examples, a seven-layered plate, a 2D rectangular resonator and a 3D cylindrical resonator, showing how it is easy to obtain either dispersion curves and field profiles for devices with guided mode propagation or electrical response for devices with stationary waves. Extensions and possible further developments are also given.

  9. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia

    PubMed Central

    Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers’ estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements, since it only requires routine survey data.
We anticipate that the low-tech field requirements will make this method
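The "detection function" the abstract refers to maps distance from a listening post to the probability a call is detected; a half-normal form is a common choice in SECR models. A minimal sketch (the parameter values g0 and sigma are illustrative assumptions, not fitted estimates from this study):

```python
from math import exp

def halfnormal_detection(d, g0=0.9, sigma=500.0):
    """Probability that a call at distance d (metres) from a human
    listening post is detected.

    g0    : detection probability at zero distance (illustrative)
    sigma : spatial scale of detection decay, metres (illustrative)
    """
    return g0 * exp(-d * d / (2.0 * sigma * sigma))

# Detection probability falls off smoothly with distance.
p_near = halfnormal_detection(100.0)
p_far = halfnormal_detection(1000.0)
```

Fitting g0 and sigma jointly with density is what lets SECR turn raw detections and bearings into an unbiased density estimate.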

  10. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements, since it only requires routine survey data.
We anticipate that the low-tech field requirements will make this method

  11. Analysis of the discontinuous Galerkin method applied to the European option pricing problem

    NASA Astrophysics Data System (ADS)

    Hozman, J.

    2013-12-01

    In this paper we deal with the numerical solution of a one-dimensional Black-Scholes partial differential equation, an important scalar nonstationary linear convection-diffusion-reaction equation describing the pricing of European vanilla options. We present a derivation of the numerical scheme based on the space semidiscretization of the model problem by the discontinuous Galerkin method with nonsymmetric stabilization of diffusion terms and with interior and boundary penalties. The main attention is paid to the investigation of a priori error estimates for the proposed scheme. The appended numerical experiments illustrate the theoretical results and demonstrate the potency of the method.
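Numerical schemes for the Black-Scholes equation, such as the discontinuous Galerkin scheme above, are typically validated against the closed-form European option price. The standard formula (general background, not code from the paper) in Python:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call.

    s : spot price, k : strike, r : risk-free rate,
    sigma : volatility, t : time to maturity (years).
    """
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

price = bs_call(s=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0)  # about 10.45
```

Comparing a semidiscretized solution against this exact price at a set of spot values is the usual way the a priori error estimates are checked experimentally.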

  12. Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.

    SciTech Connect

    Weirs, V. Gregory

    2014-03-01

    This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.

  13. The Constant Intensity Cut Method applied to the KASCADE-Grande muon data

    NASA Astrophysics Data System (ADS)

    Arteaga-Velázquez, J. C.; Apel, W. D.; Badea, F.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Brüggemann, M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Kolotaev, Y.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Stümpert, M.; Toma, G.; Trinchero, G.; Ulrich, H.; Walkowiak, W.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.

    2009-12-01

    The constant intensity cut method is a very useful tool to reconstruct the cosmic ray energy spectrum in order to combine or compare extensive air shower data measured for different attenuation depths independently of the MC model. In this contribution the method is used to explore the muon data of the KASCADE-Grande experiment. In particular, with this technique, the measured muon number spectra for different zenith angle ranges are compared and summed up to obtain a single muon spectrum for the measured showers. Preliminary results are presented, along with estimations of the systematic uncertainties associated with the analysis technique.

  14. Applying Qualitative Methods in Organizations: A Note for Industrial/Organizational Psychologists

    ERIC Educational Resources Information Center

    Ehigie, Benjamin Osayawe; Ehigie, Rebecca Ibhaguelo

    2005-01-01

    Early approach to research in industrial and organizational (I/O) psychology was oriented towards quantitative techniques as a result of influences from the social sciences. As the focus of I/O psychology expands from psychological test development to other personnel functions, there has been an inclusion of qualitative methods in I/O psychology…

  15. Estimating and Interpreting Latent Variable Interactions: A Tutorial for Applying the Latent Moderated Structural Equations Method

    ERIC Educational Resources Information Center

    Maslowsky, Julie; Jager, Justin; Hemken, Douglas

    2015-01-01

    Latent variables are common in psychological research. Research questions involving the interaction of two variables are likewise quite common. Methods for estimating and interpreting interactions between latent variables within a structural equation modeling framework have recently become available. The latent moderated structural equations (LMS)…

  16. 3D magnetospheric parallel hybrid multi-grid method applied to planet-plasma interactions

    NASA Astrophysics Data System (ADS)

    Leclercq, L.; Modolo, R.; Leblanc, F.; Hess, S.; Mancini, M.

    2016-03-01

    We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet-plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid subsequently entering a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.

  17. The Feasibility of Applying PBL Teaching Method to Surgery Teaching of Chinese Medicine

    ERIC Educational Resources Information Center

    Tang, Qianli; Yu, Yuan; Jiang, Qiuyan; Zhang, Li; Wang, Qingjian; Huang, Mingwei

    2008-01-01

    The traditional classroom teaching mode is based on the content of the subject, takes the teacher as the center and gives priority to classroom instruction. While PBL (Problem Based Learning) teaching method breaches the traditional mode, combining the basic science with clinical practice and covering the process from discussion to self-study to…

  18. Damage detection using data-driven methods applied to moving-load responses

    NASA Astrophysics Data System (ADS)

    Cavadas, Filipe; Smith, Ian F. C.; Figueiras, Joaquim

    2013-08-01

    Developed economies depend on complex and extensive systems of infrastructure to maintain economic prosperity and quality of life. In recent years, the implementation of Structural health monitoring (SHM) systems on full-scale bridges has increased. The goal of these systems is to inform owners of the condition of structures, thereby supporting surveillance, maintenance and other management tasks. Data-driven methods, that involve tracking changes in signals only, are well-suited for analyzing measurements during continuous monitoring of structures. Also, information provided by the response of structures under moving loads is useful for condition assessment. This paper discusses the application of data-driven methods on moving-load responses in order to detect the occurrence and the location of damage. First, an approach for using moving-load responses as time series data is proposed. The work focuses on two data-driven methods - Moving principal component analysis (MPCA) and Robust regression analysis (RRA) - that have already been successful for damage detection during continuous monitoring. The performance of each method is assessed using data obtained by simulating the crossing of a point-load on a simple frame.

  19. Automatic and efficient methods applied to the binarization of a subway map

    NASA Astrophysics Data System (ADS)

    Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan

    2015-12-01

    The purpose of this paper is the study of efficient methods for image binarization, applied to metro maps. The goal is to binarize the maps while preventing noise from disturbing the reading of subway stations. Different methods were tested; among them, Otsu's method gave particularly interesting results. The difficulty of binarization is choosing the threshold so as to reconstruct an image as close as possible to reality. Vectorization is a step subsequent to binarization: it retrieves the coordinates of the points containing information and stores them in two matrices, X and Y. These matrices can then be exported to a CSV (Comma-Separated Values) file, enabling processing in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested "for" loops, and "for" loops are poorly supported by Matlab, especially when nested. This penalizes the computation time, but seems to be the only way to do this.
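    Otsu's thresholding, which the abstract singles out, can be written in a few vectorized lines (a standard textbook formulation, not the authors' Matlab code; the synthetic bimodal image is invented for illustration):

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's method: pick the threshold that maximizes the
        between-class variance of the grayscale histogram."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()                   # probability of each level
        omega = np.cumsum(p)                    # class-0 probability up to t
        mu = np.cumsum(p * np.arange(256))      # cumulative mean
        mu_t = mu[-1]                           # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
        return int(np.argmax(np.nan_to_num(sigma_b)))

    # binarize a synthetic bimodal image (two well-separated gray levels)
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(60, 10, 500), rng.normal(200, 10, 500)])
    img = np.clip(img, 0, 255).reshape(20, 50)
    t = otsu_threshold(img)
    binary = img > t
    ```

    Vectorizing the histogram statistics this way also sidesteps the nested-loop cost the abstract complains about.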

  20. Quantification of Greenhouse Gas Emissions from Soil Applied Swine Effluent by Different Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Greenhouse gas (CO2, CH4, and N2O) emissions were measured from a field experiment in which pre-plant swine effluent application methods where evaluated for no-till corn grain production. The treatments included a control, an inorganic fertilizer treatment that received 179 kg N ha-1 as urea ammoni...

  1. Applying NAEP to Improve Mathematics Content and Methods Courses for Preservice Elementary and Middle School Teachers

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Cifarelli, Victor V.; Pugalee, David; Lynch-Davis, Kathleen; Morge, Shelby; Salinas, Tracie

    2014-01-01

    This study explored how mathematics content and methods courses for preservice elementary and middle school teachers could be improved through the integration of a set of instructional materials based on the National Assessment of Educational Progress (NAEP). A set of eight instructional modules was developed and tested. The study involved 7…

  2. Development of direct-inverse 3-D method for applied aerodynamic design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1987-01-01

    The primary tasks performed were the continued development of inverse design procedures for the TAWFIVE code, the development of corresponding relofting and trailing edge closure procedures, and the testing of the methods for a variety of cases. The period from July 1, 1986 through December 31, 1986 is covered.

  3. Nanoemulsions prepared by a low-energy emulsification method applied to edible films

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Catastrophic phase inversion (CPI) was used as a low-energy emulsification method to prepare oil-in-water (O/W) nanoemulsions in a lipid (Acetem)/water/nonionic surfactant (Tween 60) system. CPIs in which water-in-oil emulsions (W/O) are transformed into oil-in-water emulsions (O/W) were induced by ...

  4. Teaching to Think: Applying the Socratic Method outside the Law School Setting

    ERIC Educational Resources Information Center

    Peterson, Evan

    2009-01-01

    An active learning process has the potential to provide educational benefits above and beyond what students might receive from more traditional, passive approaches. The Socratic Method is a unique approach to active learning that facilitates critical thinking, open-mindedness, and teamwork. By posing a series of guided questions to students, an…

  5. Greenhouse gas emissions from swine effluent applied to soil by different methods.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Greenhouse gas (CO2, CH4, and N2O) emissions were measured from a field experiment evaluating pre-plant swine effluent application methods for no-till corn grain production. The treatments included a control, an inorganic fertilizer treatment receiving 179 kg N ha-1 as urea ammonium nitrate (UAN), ...

  6. Effect of the applied drying method on the physical properties of purple carrot pomace

    NASA Astrophysics Data System (ADS)

    Janiszewska, E.; Witrowa-Rajchert, D.; Kidoń, M.; Czapski, J.

    2013-03-01

    The aim of the study was to determine the effect of different drying methods on selected physical properties of pomace obtained from purple carrot cv. Deep Purple. Drying was performed using four methods: convective, microwave-convective, infrared-convective and freeze-drying. The freeze-dried material had the lowest apparent density (422 kg m-3), which was caused by slight shrinkage, and indicated high porosity. Apparent density was almost three times greater in dried materials produced using the other drying methods as compared to the freeze-dried variants. Freeze-dried pomace adsorbed vapour more quickly than the other dried variants, which was caused by its high porosity and relatively low degree of structural damage. Rehydration characteristics were significantly affected by the drying method. The highest mass increase and losses of soluble substance were recorded for the freeze-dried samples. Conversely, the traditional convective drying method resulted in the lowest mass increase and soluble substance leaching. A positive linear correlation was found between the loss of soluble dry substance components and the absorbance of liquid obtained during rehydration.

  7. Applying Cognitive Behavioural Methods to Retrain Children's Attributions for Success and Failure in Learning

    ERIC Educational Resources Information Center

    Toland, John; Boyle, Christopher

    2008-01-01

    This study involves the use of methods derived from cognitive behavioral therapy (CBT) to change the attributions for success and failure of school children with regard to learning. Children with learning difficulties and/or motivational and self-esteem difficulties (n = 29) were identified by their schools. The children then took part in twelve…

  8. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

    In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the natural-neighbour concept to enforce nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, the NNRPIM imposes no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using radial point interpolators, with some differences that modify the method's performance. No polynomial basis is required in the construction of the NNRPIM interpolation functions, and the radial basis function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of natural and essential boundary conditions. One of the aims of this work is to validate the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial-stiffness method, and the efficient "forward-Euler" procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
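    The multiquadric radial point interpolation and its Kronecker delta property can be illustrated in one dimension (a minimal generic sketch; the NNRPIM itself adds natural-neighbour connectivity and integration, which are not reproduced here). Because the interpolant is solved to pass through the nodal values exactly, evaluating it at the nodes returns those values, which is what makes imposing essential boundary conditions straightforward:

    ```python
    import numpy as np

    def multiquadric(r, c):
        """Multiquadric RBF: phi(r) = sqrt(r^2 + c^2)."""
        return np.sqrt(r ** 2 + c ** 2)

    def rbf_interpolator(nodes, values, c=0.5):
        """1-D radial point interpolation with a multiquadric basis.
        Solving A @ coeffs = values forces exact nodal reproduction,
        i.e. the Kronecker delta property at the nodes."""
        r = np.abs(nodes[:, None] - nodes[None, :])
        coeffs = np.linalg.solve(multiquadric(r, c), values)
        def interp(x):
            rx = np.abs(np.atleast_1d(x)[:, None] - nodes[None, :])
            return multiquadric(rx, c) @ coeffs
        return interp

    nodes = np.linspace(0, 1, 9)
    values = np.sin(2 * np.pi * nodes)
    f = rbf_interpolator(nodes, values)
    ```

    Between nodes the interpolant approximates the underlying smooth function; at the nodes it reproduces the data exactly.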

  9. Advanced signal processing methods applied to guided waves for wire rope defect detection

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Rostami, Javad

    2016-02-01

    Steel wire ropes, which are usually composed of a polymer core enclosed by twisted wires, are used to hoist heavy loads. These loads are different structures such as clamshells, draglines, elevators, etc. Since the loading of these structures is dynamic, the ropes work under fluctuating forces in a corrosive environment. This consequently leads to progressive loss of the metallic cross-section due to abrasion and corrosion. These defects can be seen in the form of roughened and pitted rope surfaces, reduction in diameter, and broken wires. Their deterioration must therefore be monitored so that any unexpected damage or corrosion can be detected before it causes a fatal accident. This is of vital importance in the case of passenger transportation, particularly in elevators, where any failure may cause a catastrophic disaster. At present, the widely used methods for thorough inspection of wire ropes are visual inspection and magnetic flux leakage (MFL). The reliability of the first method is questionable, since it depends only on the operator's eyes, which cannot determine the integrity of internal wires. The latter method has the drawback of being a point-by-point, time-consuming inspection. Ultrasonic guided wave (UGW) based inspection, which has proved its capability in inspecting structures such as plates, tubes and pipes, can monitor the cross-section of wire ropes along their entire length from a single point. However, UGW has drawn less attention for defect detection in wire ropes. This paper reports the condition monitoring of a steel wire rope from a hoisting elevator with broken wires resulting from a corrosive environment and fatigue. Experiments were conducted to investigate the efficiency of using magnetostrictive-based UGW for rope defect detection. The obtained signals were analyzed by two time-frequency representation (TFR) methods, namely the Short Time Fourier Transform (STFT) and Wavelet analysis. The location of…
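    The STFT mentioned in the abstract can be sketched with a sliding Hann window in plain numpy (a generic illustration on a synthetic pulse-echo record, not the authors' data or code; the 100 kHz burst parameters and echo delay are invented):

    ```python
    import numpy as np

    def stft_mag(x, fs, nperseg=128, hop=32):
        """Magnitude STFT via a sliding Hann window."""
        win = np.hanning(nperseg)
        frames = [x[i:i + nperseg] * win
                  for i in range(0, len(x) - nperseg, hop)]
        spec = np.abs(np.fft.rfft(frames, axis=1))      # (frame, freq bin)
        times = (np.arange(len(frames)) * hop + nperseg // 2) / fs
        freqs = np.fft.rfftfreq(nperseg, 1 / fs)
        return times, freqs, spec

    # synthetic record: excitation burst plus a weaker, delayed defect echo
    fs = 1_000_000                                      # 1 MHz sampling
    t = np.arange(0, 2e-3, 1 / fs)
    def burst(t0, amp):
        return amp * np.exp(-((t - t0) / 20e-6) ** 2) * np.sin(2 * np.pi * 100e3 * t)
    x = burst(0.1e-3, 1.0) + burst(1.2e-3, 0.5)
    times, freqs, spec = stft_mag(x, fs)
    energy = spec.sum(axis=1)                           # energy per time slice
    late = times > 0.6e-3
    t_echo = times[late][np.argmax(energy[late])]       # arrival of the echo
    ```

    Locating the energy ridge in time is the basis for converting an echo arrival into a defect position once the guided-wave group velocity is known.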

  10. APPLYING EXPERIENCE SAMPLING METHODS TO PARTNER VIOLENCE RESEARCH: SAFETY AND FEASIBILITY IN A 90-DAY STUDY OF COMMUNITY WOMEN

    PubMed Central

    Sullivan, Tami P.; Khondkaryan, Enna; Dos Santos, Nancy P.; Peters, Erica N.

    2011-01-01

    An experience sampling method (ESM) has rarely been applied in studies of intimate partner violence (IPV), despite the benefits to be gained. Because ESM approaches and women who experience IPV present unique challenges for data collection, an empirical question exists: is it safe and feasible to apply ESM to community women who are currently experiencing IPV? A 90-day, design-driven feasibility study examined daily telephone data collection, daily paper diaries, and monthly retrospective semi-structured interview methods among a community sample of 123 women currently experiencing IPV to study within-person relationships between IPV and substance use. Findings suggest that ESM is a promising method for collecting data among this population and can elucidate daily dynamics of victimization as well as associated behaviors and experiences. Lessons learned from the application of ESM to this population are also discussed. PMID:21307033

  11. 21 CFR 74.1206 - D&C Green No. 6.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... consistent with current good manufacturing practice. (d) Labeling. The label of the color additive shall... ADDITIVES SUBJECT TO CERTIFICATION Drugs § 74.1206 D&C Green No. 6. (a) Identity. The color additive D&C... additive D&C Green No. 6 for use in coloring externally applied drugs shall conform to the...

  12. 21 CFR 74.1707a - Ext. D&C Yellow No. 7.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... color additive Ext. D&C Yellow No. 7 is principally the disodium salt of 8-hydroxy-5,7-di-nitro-2-naphthalenesulfonic acid. (2) Color additive mixtures for drug use made with Ext. D&C Yellow No. 7 may contain only... additive mixtures for coloring externally applied drugs. (b) Specifications. Ext. D&C Yellow No. 7...

  13. 21 CFR 74.1707a - Ext. D&C Yellow No. 7.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... color additive Ext. D&C Yellow No. 7 is principally the disodium salt of 8-hydroxy-5,7-di-nitro-2-naphthalenesulfonic acid. (2) Color additive mixtures for drug use made with Ext. D&C Yellow No. 7 may contain only... additive mixtures for coloring externally applied drugs. (b) Specifications. Ext. D&C Yellow No. 7...

  14. 21 CFR 74.1707a - Ext. D&C Yellow No. 7.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... color additive Ext. D&C Yellow No. 7 is principally the disodium salt of 8-hydroxy-5,7-di-nitro-2-naphthalenesulfonic acid. (2) Color additive mixtures for drug use made with Ext. D&C Yellow No. 7 may contain only... additive mixtures for coloring externally applied drugs. (b) Specifications. Ext. D&C Yellow No. 7...

  15. 21 CFR 74.1206 - D&C Green No. 6.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... consistent with current good manufacturing practice. (d) Labeling. The label of the color additive shall... ADDITIVES SUBJECT TO CERTIFICATION Drugs § 74.1206 D&C Green No. 6. (a) Identity. The color additive D&C... additive D&C Green No. 6 for use in coloring externally applied drugs shall conform to the...

  16. 21 CFR 74.1206 - D&C Green No. 6.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... consistent with current good manufacturing practice. (d) Labeling. The label of the color additive shall... ADDITIVES SUBJECT TO CERTIFICATION Drugs § 74.1206 D&C Green No. 6. (a) Identity. The color additive D&C... additive D&C Green No. 6 for use in coloring externally applied drugs shall conform to the...

  17. 21 CFR 74.1707a - Ext. D&C Yellow No. 7.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... color additive Ext. D&C Yellow No. 7 is principally the disodium salt of 8-hydroxy-5,7-di-nitro-2-naphthalenesulfonic acid. (2) Color additive mixtures for drug use made with Ext. D&C Yellow No. 7 may contain only... additive mixtures for coloring externally applied drugs. (b) Specifications. Ext. D&C Yellow No. 7...

  18. 21 CFR 74.1206 - D&C Green No. 6.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... consistent with current good manufacturing practice. (d) Labeling. The label of the color additive shall... ADDITIVES SUBJECT TO CERTIFICATION Drugs § 74.1206 D&C Green No. 6. (a) Identity. The color additive D&C... additive D&C Green No. 6 for use in coloring externally applied drugs shall conform to the...

  19. Ultrasound method applied to characterize healthy femoral diaphysis of Wistar rats in vivo.

    PubMed

    Fontes-Pereira, A; Matusin, D P; Rosa, P; Schanaider, A; von Krüger, M A; Pereira, W C A

    2014-05-01

    A simple experimental protocol applying a quantitative ultrasound (QUS) pulse-echo technique was used to measure the acoustic parameters of healthy femoral diaphyses of Wistar rats in vivo. Five quantitative parameters [apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), time slope of apparent backscatter (TSAB), integrated reflection coefficient (IRC), and frequency slope of integrated reflection (FSIR)] were calculated using the echoes from cortical and trabecular bone in the femurs of 14 Wistar rats. Signal acquisition was performed three times in each rat, with the ultrasound signal acquired along the femur's central region from three positions 1 mm apart from each other. The parameters estimated for the three positions were averaged to represent the femur diaphysis. The results showed that AIB, FSAB, TSAB, and IRC values were statistically similar, but the FSIR values from Experiments 1 and 3 were different. Furthermore, Pearson's correlation coefficient showed, in general, strong correlations among the parameters. The proposed protocol and calculated parameters demonstrated the potential to characterize the femur diaphysis of rats in vivo. The results are relevant because rats have a bone structure very similar to humans, and thus are an important step toward preclinical trials and subsequent application of QUS in humans. PMID:24838643

  20. Error-free pathology: applying lean production methods to anatomic pathology.

    PubMed

    Condel, Jennifer L; Sharbaugh, David T; Raab, Stephen S

    2004-12-01

    The current state of our health care system calls for dramatic changes. In their pathology department, the authors believe these changes may be accomplished by accepting the long-term commitment of applying a lean production system. The ideal state of zero pathology errors is one that should be pursued by consistently asking, "Why can't we?" The philosophy of lean production systems began in the manufacturing industry: "All we are doing is looking at the time from the moment the customer gives us an order to the point when we collect the cash. And we are reducing that time line by removing non-value added wastes". The ultimate goals in pathology and overall health care are not so different. The authors' intention is to provide the patient (customer) with the most accurate diagnostic information in a timely and efficient manner. Their lead histotechnologist recently summarized this philosophy: she indicated that she felt she could sleep better at night knowing she truly did the best job she could. Her chances of making an error (in cutting or labeling) were dramatically decreased in the one-by-one continuous flow work process compared with previous practices. By designing a system that enables employees to be successful in meeting customer demand, and by empowering the frontline staff in the development and problem solving processes, one can meet the challenges of eliminating waste and build an improved, efficient system. PMID:15555747

  1. Improved Methods for Identifying, Applying, and Verifying Industrial Energy Efficiency Measures

    NASA Astrophysics Data System (ADS)

    Harding, Andrew Chase

    Energy efficiency is the least expensive source of additional energy capacity for today's global energy expansion. Energy efficiency offers additional benefits of cost savings for consumers, reduced environmental impacts, and enhanced energy security. The challenges of energy efficiency include identifying potential efficiency measures, quantifying savings, determining cost effectiveness, and verifying the savings of installed measures. This thesis presents three separate chapters that address these challenges. The first is a paper presented at the 2014 Industrial Energy Technology Conference (IETC) that details a compressed air system project using the systems approach to identify cost-effective measures, energy intensity to project savings, and proper measurement and verification (M&V) practices to prove that the savings were achieved. The second is a discussion of proper M&V techniques, how these apply to international M&V protocols, and how M&V professionals can improve the accuracy and efficacy of their M&V activities. The third is an energy intensity analysis of a poultry processing facility at a unit-operations level, which details the M&V practices used to determine the intensities at each unit operation and compares these to previous works.

  2. A method for measuring vertical forces applied to the upper limb during sit-to-stand.

    PubMed

    Turner, H C; Yate, R M; Giddins, G E B; Miles, A W

    2004-01-01

    The aim of this study was to develop a basic measurement system to estimate the vertical loading of the upper limb during the sit-to-stand activity, with a view to increasing the understanding of the loading of the wrist in daily living activities. A chair was adapted and instrumented with strain gauges and position sensors so that the force applied through the upper limbs to the arms of the chair could be calculated. Four aspects of the chair's geometry could be varied. A force plate was positioned on the floor between the legs of the chair to record the corresponding foot loading. Twenty normal subjects (22-56 years, mean 32.7 years) participated in a pilot study in which loading through the upper and lower limbs was recorded for a range of chair geometries. The vertical force transmitted through each upper limb was typically 20-30 per cent of bodyweight. The vertical upper limb load averaged across all subjects showed a small reduction when either the seat height or the height of the chair arms was increased. PMID:15648670

  3. A quality function deployment method applied to highly reusable space transportation

    SciTech Connect

    Zapata, E.

    1997-01-01

    This paper describes a Quality Function Deployment (QFD) study currently in work, the goal of which is to add definition and insight to the development of long-term Highly Reusable Space Transportation (HRST). The objective here is twofold: first, to describe the process, the actual QFD experience as applied to the HRST study; second, to describe the preliminary results of this process, in particular the assessment of possible directions for future pursuit, such as promising candidate technologies or approaches that may finally open the space frontier. The iterative and synergistic nature of QFD provides opportunities in the process for the discovery of what is key in so far as it is useful, what is not, and what is merely true. Key observations on the QFD process will be presented. The importance of a customer definition, as well as the similarity of the process of developing a technology portfolio to product development, will be shown. Also, the relation of identified cost and operating drivers to future space vehicle designs that are robust to an uncertain future will be discussed. The results of this HRST evaluation in particular will be preliminary, given the somewhat long-term (or perhaps not?) nature of the task being considered. © 1997 American Institute of Physics.

  4. Testing of evaluation methods applied to raw infiltration data measured at very heterogeneous mountain forest soils

    NASA Astrophysics Data System (ADS)

    Jacka, Lukas; Pavlasek, Jirka; Pech, Pavel

    2016-04-01

    In order to obtain infiltration parameters and analytical expressions for cumulative infiltration and infiltration rate, raw infiltration data are often evaluated using various infiltration equations. Knowledge about the evaluation variability of these equations in the specific case of extremely heterogeneous soils provides important information for many hydrological and engineering applications. This contribution presents an evaluation of measured data using five well-established physically-based and empirical equations, and makes a comparison of these procedures. The evaluation procedures were applied to datasets measured on three different sites of hydrologically important mountain podzols. A total of 47 single-ring infiltration experiments were evaluated using these procedures. From the quality-of-fit perspective, all of the tested equations characterized most of the raw datasets properly. In a few cases, some of the physically-based equations led to poor fits of the datasets measured on the most heterogeneous site (characterized by the lowest depth of the organic horizon and a more bleached eluvial horizon than the other tested sites). For the parameters evaluated on this site, the sorptivity estimates and the saturated hydraulic conductivity (Ks) estimates differed distinctly between the tested procedures.

  5. An ISO-surface folding analysis method applied to premature neonatal brain development

    NASA Astrophysics Data System (ADS)

    Rodriguez-Carranza, Claudia E.; Rousseau, Francois; Iordanova, Bistra; Glenn, Orit; Vigneron, Daniel; Barkovich, James; Studholme, Colin

    2006-03-01

    In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.

  6. [Spectroscopic methods applied to component determination and species identification for coffee].

    PubMed

    Chen, Hua-zhou; Xu, Li-li; Qin, Qiang

    2014-06-01

    Spectroscopic analysis was applied to the determination of the nutrient quality of ground, instant and chicory coffees. By using inductively coupled plasma atomic emission spectrometry (ICP-ES), nine mineral elements were determined in solid coffee samples. Caffeine was determined by ultraviolet (UV) spectrometry and organic matter was investigated by Fourier transform infrared (FTIR) spectroscopy. Oxidation-reduction titration was utilized for measuring the oxalate. The differences between ground coffee and instant coffee were identified on the basis of the contents of caffeine, oxalate and mineral elements. Experimental evidence showed that caffeine in instant coffee was 2-3 times higher than in ground coffee. Oxalate in instant coffee was significantly higher than in ground coffee. The mineral elements Mg, P and Zn in ground coffee are lower than in instant coffee, while Cu is several times higher. The mineral content in chicory coffee is overall lower than in instant coffee. In addition, we determined the content of Ti for the different types of coffees, and simultaneously detected the elements Cu, Ti and Zn in chicory coffee. As a fast detection technique, FTIR spectroscopy has the potential to detect the differences between ground coffee and instant coffee, and is able to verify the presence of caffeine and oxalate. PMID:25358189

  7. Describing function method applied to solution of nonlinear heat conduction equation

    SciTech Connect

    Nassersharif, B. )

    1989-08-01

    Describing functions have traditionally been used to obtain the solutions of systems of ordinary differential equations. The describing function concept has been extended to include the non-linear, distributed parameter solid heat conduction equation. A four-step solution algorithm is presented that may be applicable to many classes of nonlinear partial differential equations. As a specific application of the solution technique, the one-dimensional nonlinear transient heat conduction equation in an annular fuel pin is considered. A computer program was written to calculate one-dimensional transient heat conduction in annular cylindrical geometry. It is found that the quasi-linearization used in the describing function method is as accurate as and faster than true linearization methods.

  8. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628

  9. The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2012-01-01

    The high-resolution wave-propagation method for computing the nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as the linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated and, 2) the inclusion of source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
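    For the scalar advection model problem, the flux-based wave decomposition described above reduces to upwind splitting of the jump at each cell interface. The following is a minimal first-order sketch of that idea (illustrative only; the paper's solver applies the decomposition to the full nonhydrostatic system via a Riemann solver):

    ```python
    import numpy as np

    def wave_propagation_advection(q0, u, dx, dt, nsteps):
        """First-order finite-volume wave-propagation update for scalar
        advection q_t + u*q_x = 0 on a periodic grid. The jump at each
        interface is the single "wave"; it is split into left- and
        right-going fluctuations according to the sign of u."""
        q = q0.copy()
        for _ in range(nsteps):
            dq = q - np.roll(q, 1)        # jump at each left interface
            amdq = min(u, 0.0) * dq       # left-going fluctuation
            apdq = max(u, 0.0) * dq       # right-going fluctuation
            q = q - dt / dx * (apdq + np.roll(amdq, -1))
        return q

    # advect a Gaussian pulse half a period around a periodic domain
    x = np.arange(0.0, 1.0, 0.01)
    q0 = np.exp(-((x - 0.25) / 0.05) ** 2)
    q = wave_propagation_advection(q0, u=1.0, dx=0.01, dt=0.005, nsteps=100)
    ```

    Because the update is in fluctuation (conservative) form, the cell average sum is preserved exactly, while the pulse translates at speed u with only first-order numerical diffusion.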

  10. Fuzzy Comprehensive Evaluation Method Applied in the Real Estate Investment Risks Research

    NASA Astrophysics Data System (ADS)

    ML(Zhang Minli), Zhang; Wp(Yang Wenpo), Yang

    Real estate investment is a high-risk, high-return economic activity; the key to real estate analysis is identifying the types of investment risk and effectively preventing each of them. As the financial crisis swept the world, the real estate industry also faced enormous risks, and how to evaluate real estate investment risks effectively and correctly has become a concern of many scholars [1]. In this paper, real estate investment risks are summarized and analyzed, comparative analysis methods are discussed, and a fuzzy comprehensive evaluation method is finally presented. The method is not only theoretically sound but also reliable in application, providing an effective means of real estate investment risk assessment and offering investors guidance on risk factors and forecasts.
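    In its textbook form, fuzzy comprehensive evaluation combines a factor-weight vector W with a membership matrix R into a rating vector B = W·R (a generic sketch with invented weights and memberships, not the paper's model):

    ```python
    import numpy as np

    def fuzzy_comprehensive_evaluation(weights, membership):
        """Basic fuzzy comprehensive evaluation: B = W @ R, using the
        weighted-average operator M(*, +). Each row of R gives one risk
        factor's membership in the rating grades."""
        W = np.asarray(weights, dtype=float)
        R = np.asarray(membership, dtype=float)
        B = W @ R
        return B / B.sum()                 # normalize to a rating distribution

    # hypothetical example: 3 risk factors, 3 grades (low/medium/high risk)
    W = [0.5, 0.3, 0.2]                    # factor weights (sum to 1)
    R = [[0.6, 0.3, 0.1],                  # market risk memberships
         [0.2, 0.5, 0.3],                  # policy risk memberships
         [0.1, 0.3, 0.6]]                  # financial risk memberships
    B = fuzzy_comprehensive_evaluation(W, R)
    ```

    The grade with the largest component of B (here the first, "low risk") is taken as the overall assessment by the maximum-membership principle.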

  11. An enhanced sine dwell method as applied to the Galileo core structure modal survey

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth S.; Trubert, Marc

    1990-01-01

    An incremental modal survey, performed in 1988 on the core structure of the Galileo spacecraft and its adapters to assess the dynamics of the new portions of the structure, is considered. Emphasis is placed on the enhancements of the sine dwell method employed in the test. For each mode, response data are acquired at 32 frequencies in a narrow band enclosing the resonance, utilizing the SWIFT technique. Due to the simplicity of the data processing involved, the diagnostic and modal-parameter data are available within several minutes after data acquisition; however, compared with straight curve-fitting approaches, the method requires more time for data acquisition.

  12. Method for finding mechanism and activation energy of magnetic transitions, applied to skyrmion and antivortex annihilation

    NASA Astrophysics Data System (ADS)

    Bessarab, Pavel F.; Uzdin, Valery M.; Jónsson, Hannes

    2015-11-01

    A method for finding minimum energy paths of transitions in magnetic systems is presented. The path is optimized with respect to orientation of the magnetic vectors while their magnitudes are fixed or obtained from separate calculations. The curvature of the configuration space is taken into account by: (1) using geodesics to evaluate distances and displacements of the system during the optimization, and (2) projecting the path tangent and the magnetic force on the tangent space of the manifold defined by all possible orientations of the magnetic vectors. The method, named geodesic nudged elastic band (GNEB), and its implementation are illustrated with calculations of complex transitions involving annihilation and creation of skyrmion and antivortex states. The lifetime of the latter was determined within harmonic transition state theory using a noncollinear extension of the Alexander-Anderson model.
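The two geometric ingredients named above, geodesic distances and projection onto the tangent space of the orientation manifold, can be sketched for a system of unit magnetic vectors; the helper names are hypothetical and this is not the authors' GNEB implementation:

```python
import numpy as np

# Minimal sketch of the curved-configuration-space operations for unit
# spins: (1) geodesic (great-circle) distance between orientations, and
# (2) projection of a force onto the tangent space of the unit sphere.

def geodesic_distance(s1, s2):
    """Great-circle distance between two unit vectors (one spin)."""
    return np.arccos(np.clip(np.dot(s1, s2), -1.0, 1.0))

def project_tangent(force, spins):
    """Remove the component of the force parallel to each spin, leaving
    only the part tangent to the manifold of orientations."""
    spins = spins / np.linalg.norm(spins, axis=1, keepdims=True)
    radial = np.sum(force * spins, axis=1, keepdims=True) * spins
    return force - radial

spins = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])   # two unit spins
force = np.array([[0.3, 0.0, 5.0], [2.0, 0.1, 0.0]])   # illustrative forces
f_t = project_tangent(force, spins)
# f_t is everywhere orthogonal to the spins, as the method requires.
```

In a nudged-elastic-band setting the same projection is applied to the path tangent, so that both the spring force and the magnetic force act only within the manifold of allowed orientations.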

  13. Autonomous Correction of Sensor Data Applied to Building Technologies Using Filtering Methods

    SciTech Connect

    Castello, Charles C; New, Joshua Ryan; Smith, Matt K

    2013-01-01

    Sensor data validity is extremely important in a number of applications, particularly building technologies, where collected data are used to determine performance. An example of this is Oak Ridge National Laboratory's ZEBRAlliance research project, which consists of four single-family homes located in Oak Ridge, TN. The homes are outfitted with a total of 1,218 sensors to determine the performance of a variety of different technologies integrated within each home. Issues such as missing or corrupt data arise with such a large number of sensors. This paper aims to eliminate these problems using: (1) Kalman filtering and (2) linear prediction filtering techniques. Five types of data are the focus of this paper: (1) temperature; (2) humidity; (3) energy consumption; (4) pressure; and (5) airflow. Simulations show the Kalman filtering method performed best in predicting temperature, humidity, pressure, and airflow data, while the linear prediction filtering method performed best with energy consumption data.
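Filling gaps in a sensor stream with Kalman filtering can be sketched with a scalar random-walk model; the noise parameters and readings below are illustrative, not those used in the ZEBRAlliance analysis:

```python
import numpy as np

# Minimal sketch: a scalar Kalman filter with a random-walk state model
# fills missing samples (np.nan) with the model prediction.
def kalman_fill(z, q=0.01, r=0.25):
    """z: measurements with np.nan marking missing samples.
    q: assumed process noise variance; r: assumed measurement noise variance.
    Returns the filtered estimate, with predictions at missing points."""
    x, p = 0.0, 1e6                     # diffuse initial state
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                       # predict (random walk: x is unchanged)
        if not np.isnan(zk):            # update only when a reading exists
            g = p / (p + r)             # Kalman gain
            x = x + g * (zk - x)
            p = (1.0 - g) * p
        out[k] = x
    return out

z = np.array([20.0, 20.1, np.nan, np.nan, 20.4, 20.5])  # e.g. temperature, C
est = kalman_fill(z)
```

A linear prediction filter would instead extrapolate each missing sample from a weighted combination of previous samples, which is why it can track trending signals such as cumulative energy consumption.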

  14. Method of preparing and applying single stranded DNA probes to double stranded target DNAs in situ

    DOEpatents

    Gray, Joe W.; Pinkel, Daniel

    1991-01-01

    A method is provided for producing single stranded non-self-complementary nucleic acid probes, and for treating target DNA for use therewith. Probe is constructed by treating DNA with a restriction enzyme and an exonuclease to form template/primers for a DNA polymerase. The digested strand is resynthesized in the presence of labeled nucleoside triphosphate precursor. Labeled single stranded fragments are separated from the resynthesized fragments to form the probe. Target DNA is treated with the same restriction enzyme used to construct the probe, and is treated with an exonuclease before application of the probe. The method significantly increases the efficiency and specificity of hybridization mixtures by increasing effective probe concentration by eliminating self-hybridization between both probe and target DNAs, and by reducing the amount of target DNA available for mismatched hybridizations.

  15. Method of preparing and applying single stranded DNA probes to double stranded target DNAs in situ

    DOEpatents

    Gray, J.W.; Pinkel, D.

    1991-07-02

    A method is provided for producing single stranded non-self-complementary nucleic acid probes, and for treating target DNA for use therewith. The probe is constructed by treating DNA with a restriction enzyme and an exonuclease to form template/primers for a DNA polymerase. The digested strand is resynthesized in the presence of labeled nucleoside triphosphate precursor. Labeled single stranded fragments are separated from the resynthesized fragments to form the probe. Target DNA is treated with the same restriction enzyme used to construct the probe, and is treated with an exonuclease before application of the probe. The method significantly increases the efficiency and specificity of hybridization mixtures by increasing effective probe concentration by eliminating self-hybridization between both probe and target DNAs, and by reducing the amount of target DNA available for mismatched hybridizations.

  16. Method for applying a high-temperature bond coat on a metal substrate, and related compositions and articles

    DOEpatents

    Hasz, Wayne Charles; Sangeeta, D

    2002-01-01

    A method for applying a bond coat on a metal-based substrate is described. A slurry which contains braze material and a volatile component is deposited on the substrate. The slurry can also include bond coat material. Alternatively, the bond coat material can be applied afterward, in solid form or in the form of a second slurry. The slurry and bond coat are then dried and fused to the substrate. A repair technique using this slurry is also described, along with related compositions and articles.

  17. Method for applying a high-temperature bond coat on a metal substrate, and related compositions and articles

    DOEpatents

    Hasz, Wayne Charles; Sangeeta, D

    2006-04-18

    A method for applying a bond coat on a metal-based substrate is described. A slurry which contains braze material and a volatile component is deposited on the substrate. The slurry can also include bond coat material. Alternatively, the bond coat material can be applied afterward, in solid form or in the form of a second slurry. The slurry and bond coat are then dried and fused to the substrate. A repair technique using this slurry is also described, along with related compositions and articles.

  18. Method of applying a bond coating and a thermal barrier coating on a metal substrate, and related articles

    DOEpatents

    Hasz, Wayne Charles; Borom, Marcus Preston

    2002-01-01

    A method for applying at least one bond coating on a surface of a metal-based substrate is described. A foil of the bond coating material is first attached to the substrate surface and then fused thereto, e.g., by brazing. The foil is often initially prepared by thermally spraying the bond coating material onto a removable support sheet, and then detaching the support sheet. Optionally, the foil may also include a thermal barrier coating applied over the bond coating. The substrate can be a turbine engine component.

  19. A Rayleigh wave back-projection method applied to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Roten, Daniel; Miyake, Hiroe; Koketsu, Kazuki

    2012-01-01

    We study the rupture process of the 2011 Tohoku megathrust by analyzing 384 regional strong-motion records using a novel back-projection method for Rayleigh waves with periods between 13 and 100 s. The proposed approach is based on isolating the signal at the selected period with a continuous wavelet transform, and generating the stack using arrival times predicted from detailed fundamental mode Rayleigh wave group velocity maps. We verify the method by back-projecting synthetic time series representing a point source off the coast of Tohoku, which we generate with a 3D finite difference method and a mesh based on the Japan Integrated Velocity Structure Model. Application of the method to K-NET/KiK-net records of the Mw 9.1 Tohoku earthquake reveals several Rayleigh wave emitters, which we attribute to different stages of rupture. Stage 1 is characterized by slow rupture down-dip from the hypocenter. The onset of stage 2 is marked by energetic Rayleigh waves emitted from the region between the JMA hypocenter and the trench within 60 s after hypocentral time. During stage 3 the rupture propagates bilaterally towards the north and south at rupture velocities between 3 and 3.5 km/s, reaching Iwate-oki 65 s and Ibaraki-oki 105 s after nucleation. In contrast to short-period back-projections from teleseismic P-waves, which place radiation sources below the Honshu coastline, Rayleigh wave emitters identified from our long-period back-projection are located 50-100 km west of the trench. This result supports the interpretation of frequency-dependent seismic wave radiation as suggested in previous studies.
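The core delay-and-stack idea behind such a back-projection can be sketched schematically; the stations, trial sources, and travel times below are made up, whereas the actual study stacks wavelet-filtered envelopes using travel times from group-velocity maps:

```python
import numpy as np

# Schematic sketch of delay-and-stack back-projection: for each trial
# source, undo each station's predicted travel time and stack; the true
# source aligns the arrivals and yields the largest beam power.
def backproject(envelopes, dt, travel_times):
    """envelopes: (n_sta, n_samp) array of waveform envelopes.
    travel_times: (n_src, n_sta) predicted travel times in seconds.
    Returns the peak stacked amplitude for each trial source."""
    n_src, n_sta = travel_times.shape
    n_samp = envelopes.shape[1]
    power = np.zeros(n_src)
    for i in range(n_src):
        stack = np.zeros(n_samp)
        for j in range(n_sta):
            shift = int(round(travel_times[i, j] / dt))
            stack[:n_samp - shift] += envelopes[j, shift:]  # remove the delay
        power[i] = np.max(stack)
    return power

dt = 1.0
tt = np.array([[2.0, 5.0],    # trial source A (consistent with the data)
               [4.0, 1.0]])   # trial source B
env = np.zeros((2, 20))
env[0, 2] = 1.0               # pulse reaches station 0 after 2 s
env[1, 5] = 1.0               # pulse reaches station 1 after 5 s
power = backproject(env, dt, tt)
# power peaks for trial source A, whose travel times align both pulses.
```

Sweeping the trial source over a grid and repeating the stack in sliding time windows is what produces the space-time image of Rayleigh wave emitters.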

  20. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the realization of the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.
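The signal-to-noise ratios used in Taguchi analysis can be sketched with the standard larger-is-better and nominal-is-best definitions; the replicate measurements below are made up, not data from the study:

```python
import numpy as np

# Minimal sketch of Taguchi signal-to-noise (S/N) ratios, computed from the
# replicate measurements of one orthogonal-array run. Numbers are made up.

def sn_larger_is_better(y):
    """S/N = -10 log10( mean(1 / y^2) ); used when larger output is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_nominal_is_best(y):
    """S/N = 10 log10( ybar^2 / s^2 ); used when hitting a target matters,
    e.g. a target photoresist thickness."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

thickness = [5.1, 5.0, 4.9, 5.0]   # hypothetical resist thickness replicates
sn = sn_nominal_is_best(thickness)
```

In a full Taguchi study the S/N ratio is computed for every row of the orthogonal array, and the parameter levels that maximize the average S/N are selected as the robust operating point.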