Triple voltage dc-to-dc converter and method
Su, Gui-Jia
2008-08-05
A circuit and method of providing three dc voltage buses and transforming power between a low voltage dc converter and a high voltage dc converter, by coupling a primary dc power circuit and a secondary dc power circuit through an isolation transformer, and by providing gating signals to power semiconductor switches in the primary and secondary circuits to control power flow between them by controlling a phase shift between the primary voltage and the secondary voltage. The primary dc power circuit and the secondary dc power circuit each further comprise at least two tank capacitances arranged in series as a tank leg, at least two resonant switching devices arranged in series with each other and in parallel with the tank leg, and at least one voltage source arranged in parallel with the tank leg and the resonant switching devices, said resonant switching devices including power semiconductor switches operated by the gating signals. Additional embodiments, having a center-tapped battery on the low voltage side and a plurality of modules on both the low voltage side and the high voltage side, are also disclosed for reducing ripple current and the size of the components.
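The phase-shift control summarized above is the mechanism used in isolated bidirectional (dual-active-bridge-style) converters; the relation below is a textbook sketch of how the phase shift sets the transferred power, not a formula stated in the patent, and the symbols (voltages V_1 and V_2, turns ratio n, switching frequency f_s, transformer leakage inductance L, phase shift phi) are illustrative:

```latex
% Power flow for a phase-shift-controlled isolated converter, |\phi| \le \pi/2
P = \frac{n V_1 V_2}{2 \pi^2 f_s L}\, \phi \left( \pi - |\phi| \right)
```

Power flows from primary to secondary for positive phi and reverses sign with it, which is why controlling the phase shift alone suffices to control bidirectional power flow.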
Center for Applied Linguistics, Washington DC, USA
ERIC Educational Resources Information Center
Sugarman, Julie; Fee, Molly; Donovan, Anne
2015-01-01
The Center for Applied Linguistics (CAL) is a private, nonprofit organization with over 50 years' experience in the application of research on language and culture to educational and societal concerns. CAL carries out its mission to improve communication through better understanding of language and culture by engaging in a variety of projects in…
Evaporation of nanofluid droplets with applied DC potential.
Orejon, Daniel; Sefiane, Khellil; Shanahan, Martin E R
2013-10-01
A considerable growth of interest in electrowetting (EW) has stemmed from the potential exploitation of this technique in numerous industrial and biological applications, such as microfluidics, lab-on-a-chip, electronic paper, and bioanalytical techniques. The application of EW to droplets of liquids containing nanoparticles (nanofluids) is a new area of interest. Understanding the effects of electrowetting at the fundamental level, and being able to manipulate deposits from nanofluid droplets, offers considerable potential. In this work, we study the complete evaporation of nanofluid droplets under DC conditions. Different evolutions of contact angle and contact radius, as well as deposit patterns, are revealed. When a DC potential is applied, continuous and smoother receding of the contact line during the drying out of TiO2 nanofluids, and more uniform patterning of the deposit, are observed, in contrast to the typical "stick-slip" behavior and ring stains. Furthermore, the mechanisms for nanoparticle interactions with the applied DC potential differ from those proposed for the EW of droplets under AC conditions. The more uniform patterns of particle deposits resulting from DC potential are a consequence of a shorter timescale for electrophoretic mobility than for evaporation-driven advection transport.
Reduction in plasma potential by applying negative DC cathode bias in RF magnetron sputtering
NASA Astrophysics Data System (ADS)
Isomura, Masao; Yamada, Toshinori; Osuga, Kosuke; Shindo, Haruo
2016-11-01
We applied a negative DC bias voltage to the cathode of an RF magnetron sputtering system and successfully reduced the plasma potential in both argon plasma and hydrogen-diluted argon plasma. The crystallinity of the deposited Ge films improves with increasing negative DC bias voltage. This indicates that the reduction in plasma potential is effective in reducing the plasma damage to deposited materials caused by the electric potential between the plasma and the substrates. In addition, the deposition rate is increased by the larger electric potential between the plasma and the cathode owing to the negative DC bias voltage. The present method thus enables higher-speed, lower-damage sputtering deposition. The increased electric potential between the plasma and the cathode suppresses the evacuation of electrons from the plasma and also enhances the generation of secondary electrons at the cathode; these effects probably suppress the electron loss from the plasma and result in the reduction in plasma potential.
NASA Technical Reports Server (NTRS)
Schoenfeld, A. D.; Yu, Y.
1973-01-01
Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc-to-dc power converter configurations: (1) the series switching buck regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.
Double layer capacitor/DC-DC converter system applied to constant power loads
Spyker, R.L.; Nelms, R.M.
1996-12-31
Ultracapacitors or double layer capacitors are a recent technology based on the well-known electrochemical phenomenon of extremely high capacitance per unit area in an electrode-electrolyte interface and the high surface area achievable in activated carbon fibers. Capacitors have been tested with a rated capacitance value of 470 F and a rated voltage of 2.3 V. Test voltages as high as 3 V (30% above rated) have been used without any short-term effect on measured capacitance. At 3 V the total energy storage capacity of one capacitor is 2100 J. With a total volume of 245 cm³, the specific energy of this capacitor is 8.5 J/cm³. To tap this entire energy store would require running the capacitor to zero voltage. Of course, few loads to which a capacitor bank might be connected can tolerate any drop in input voltage. To remedy this problem a DC/DC converter between the capacitor bank and load is proposed. This paper describes optimization of capacitor bank configurations when supplying a constant power load through a DC/DC converter.
Siemens programmable variable speed DC drives applied to wet and dry expansion engines
Markley, Daniel J.
1997-07-01
This document describes the technical details of the Siemens SIMOREG line of DC variable speed drives as applied to Fermilab wet and dry mechanical expander engines. The expander engines are used throughout the lab in Helium refrigerator installations.
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.; Wilson, H. B.
1991-01-01
The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.
DC to DC power converters and methods of controlling the same
Steigerwald, Robert Louis; Elasser, Ahmed; Sabate, Juan Antonio; Todorovic, Maja Harfman; Agamy, Mohammed
2012-12-11
A power generation system configured to provide direct current (DC) power to a DC link is described. The system includes a first power generation unit configured to output DC power. The system also includes a first DC to DC converter comprising an input section and an output section. The output section of the first DC to DC converter is coupled in series with the first power generation unit. The first DC to DC converter is configured to process a first portion of the DC power output by the first power generation unit and to provide an unprocessed second portion of the DC power output of the first power generation unit to the output section.
Design of piezoelectric transformer for DC/DC converter with stochastic optimization method
NASA Astrophysics Data System (ADS)
Vasic, Dejan; Vido, Lionel
2016-04-01
Piezoelectric transformers have been adopted in recent years owing to their many inherent advantages, such as safety, freedom from EMI, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistance. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, non-linear electronic circuits are connected before and after the transformer. Consequently, the output load is variable and, because of the output capacitance of the transformer, the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full-wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different transformer sizes and their characteristics. In other words, the method seeks the transformer size that achieves optimal efficiency while remaining suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum transformer size required, illustrated with a given specification. The transformer derived from the proposed design procedure can guarantee both good efficiency and a sufficient range of load variation.
A method for simulating a flux-locked DC SQUID
NASA Technical Reports Server (NTRS)
Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.
1993-01-01
The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
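The replication idea above, fitting a measured V-Phi curve with a Fourier series and evaluating it anywhere, can be sketched as follows. This is an illustrative reconstruction under the assumption of uniform sampling over exactly one flux quantum (flux normalized to [0, 1)); it is not the authors' actual code.

```python
import numpy as np

def fit_vphi(v_samples, n_harmonics=5):
    """Fit a V-Phi curve sampled uniformly over one flux quantum
    with a truncated one-sided Fourier series; returns the complex
    coefficients c[0..n_harmonics]."""
    n = len(v_samples)
    c = np.fft.rfft(v_samples) / n       # one-sided Fourier coefficients
    return c[: n_harmonics + 1]

def eval_vphi(coeffs, phi):
    """Evaluate the fitted series at arbitrary normalized flux values phi."""
    k = np.arange(len(coeffs))
    w = np.ones(len(coeffs))
    w[1:] = 2.0                          # double the positive-frequency terms
    return np.real((w * coeffs) @ np.exp(2j * np.pi * np.outer(k, phi)))
```

Because the fitted curve is just a short coefficient vector, it ports easily into circuit simulators such as SPICE, which is the portability benefit the abstract mentions.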
Circuit and Method for Communication Over DC Power Line
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2007-01-01
A circuit and method for transmitting and receiving on-off-keyed (OOK) signals with fractional signal-to-noise ratios uses available high-temperature silicon-on-insulator (SOI) components to move computational, sensing, and actuation abilities closer to high-temperature or high-ionizing-radiation environments such as vehicle engine compartments, deep-hole drilling environments, industrial control and monitoring of processes like smelting, and operations near nuclear reactors and in space. This device allows for the networking of multiple like nodes to each other and to a central processor, and it can do so with nothing more than the already in-situ power wiring of the system. The device's microprocessor allows it to make intelligent decisions within the vehicle operational loop and to effect control outputs to its associated actuators. The figure illustrates how each node converts digital serial data to 18-kHz OOK in transmit mode and vice versa in receive mode, though operation at lower frequencies, or up to a megahertz, is within reason using this method and these parts. This innovation's technique modulates a DC power bus with millivolt-level signals through a MOSFET (metal oxide semiconductor field effect transistor) and resistor by OOK. It receives and demodulates this signal from the DC power bus through capacitive coupling at high temperature and in high-ionizing-radiation environments. The demodulation of the OOK signal is accomplished by an asynchronous quadrature detection technique realized by a quasi-discrete Fourier transform, using the quadrature components (0 and 90 degree phases) of the carrier frequency as generated by the microcontroller as a function of the selected crystal frequency driving its oscillator. The detected signal is rectified using an absolute-value circuit containing no diodes (diodes being non-operational at high temperatures), only operational amplifiers. The absolute values of the two phases of the received signal
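The asynchronous quadrature detection described above can be sketched numerically: mix the received signal with 0- and 90-degree references at the carrier frequency, integrate over a bit period, and threshold the magnitude, so no carrier phase recovery is needed. This is an illustrative model of the technique, not the flight circuit; the sample rate, carrier, bit length, and threshold below are assumed values.

```python
import numpy as np

def ook_demodulate(x, fs, f_carrier, bit_samples, threshold_frac=0.25):
    """Asynchronous I/Q (quasi-DFT) detection of an OOK carrier.
    x: received samples; fs: sample rate; bit_samples: samples per bit."""
    n = np.arange(len(x))
    i = x * np.cos(2 * np.pi * f_carrier * n / fs)   # 0-degree mix
    q = x * np.sin(2 * np.pi * f_carrier * n / fs)   # 90-degree mix
    bits = []
    for start in range(0, len(x) - bit_samples + 1, bit_samples):
        seg = slice(start, start + bit_samples)
        mag = np.hypot(i[seg].sum(), q[seg].sum())   # phase-independent energy
        bits.append(1 if mag > threshold_frac * bit_samples else 0)
    return bits
```

Because the decision uses the magnitude of the (I, Q) pair, the demodulator works even when the incoming carrier phase is unknown, which is the point of the asynchronous scheme.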
Modelling of stress fields during LFEM DC casting of aluminium billets by a meshless method
NASA Astrophysics Data System (ADS)
Mavrič, B.; Šarler, B.
2015-06-01
Direct Chill (DC) casting of aluminium alloys is a widely established technology for efficient production of aluminium billets and slabs. The procedure is being further improved by the application of Low Frequency Electromagnetic Field (LFEM) in the area of the mold. Novel LFEM DC processing technique affects many different phenomena which occur during solidification, one of them being the stresses and deformations present in the billet. These quantities can have a significant effect on the quality of the cast piece, since they impact porosity, hot-tearing and cold cracking. In this contribution a novel local radial basis function collocation method (LRBFCM) is successfully applied to the problem of stress field calculation during the stationary state of DC casting of aluminium alloys. The formulation of the method is presented in detail, followed by the presentation of the tackled physical problem. The model describes the deformations of linearly elastic, inhomogeneous isotropic solid with a given temperature field. The temperature profile is calculated using the in-house developed heat and mass transfer model. The effects of low frequency EM casting process parameters on the vertical, circumferential and radial stress and on the deformation of billet surface are presented. The application of the LFEM appears to decrease the amplitudes of the tensile stress occurring in the billet.
Synthesis of silicon nanotubes by DC arc plasma method
Tank, C. M.; Bhoraskar, S. V.; Mathe, V. L.
2012-06-05
Plasma synthesis is a novel technique for producing nanomaterials, as it provides high production rates and promotes metastable reactions. Very thin-walled silicon nanotubes were synthesized in a DC direct-arc thermal plasma reactor. The effect of the synthesis parameters, i.e., arc current and the presence of hydrogen, on the morphology of Si nanoparticles is reported. The silicon nanotubes were characterized by Transmission Electron Microscopy (TEM), local Energy Dispersive X-ray analysis (EDAX), and Scanning Tunneling Microscopy (STM).
Su, Gui-Jia
2003-06-10
A multilevel DC link inverter and method for improving torque response and current regulation in permanent magnet motors and switched reluctance motors having a low inductance includes a plurality of voltage controlled cells connected in series for applying a resulting dc voltage comprised of one or more incremental dc voltages. The cells are provided with switches for increasing the resulting applied dc voltage as speed and back EMF increase, while limiting the voltage that is applied to the commutation switches to perform PWM or dc voltage stepping functions, so as to limit current ripple in the stator windings below an acceptable level, typically 5%. Several embodiments are disclosed, including inverters using IGBTs and inverters using thyristors. All of the inverters are operable in both motoring and regenerating modes.
Three dimensional finite element methods: Their role in the design of DC accelerator systems
Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.
2013-04-19
High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.
Method to eliminate flux linkage DC component in load transformer for static transfer switch.
He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing
2014-01-01
Many industrial and commercial sensitive loads are subject to voltage sags and interruptions. The static transfer switch (STS), based on thyristors, is applied to improve power quality and reliability. However, the transfer can result in severe inrush current in the load transformer because of the DC component in the magnetic flux generated during the transfer process. The inrush current, which can reach 2-30 p.u., can cause maloperation of protective relay devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes during the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, a method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiment results are presented to show the effectiveness of the transfer method.
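The matching criterion above, close each phase when the alternate source's prospective flux linkage equals the transformer's residual flux linkage, can be sketched numerically. The prospective flux linkage is the time integral of the source voltage in steady state (zero mean); the sketch below simply searches the sampled waveform for the best-matching instant. It is an illustration of the criterion, not the paper's algorithm.

```python
import numpy as np

def best_close_time(residual_flux, v_alt, t):
    """Pick the closing instant for one phase: the time where the
    alternate source's prospective (steady-state) flux linkage matches
    the transformer's residual flux linkage."""
    # trapezoidal running integral of the alternate-source voltage
    flux_alt = np.concatenate(
        [[0.0], np.cumsum(0.5 * (v_alt[1:] + v_alt[:-1]) * np.diff(t))]
    )
    flux_alt -= flux_alt.mean()          # steady-state flux linkage has zero mean
    return t[np.argmin(np.abs(flux_alt - residual_flux))]
```

Closing at this instant means the post-transfer flux continues from the residual value with no DC offset, which is what suppresses the inrush current.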
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuya; Juliussen, Hâvard; Matsuoka, Norikazu; Christiansen, Hanne H.
2010-05-01
Patterned ground is one of the most characteristic features of arctic periglacial landscapes, originating from various periglacial processes. On flat tundra surfaces composed of fine-grained soils, ice-wedge polygons are dominant, but mud boils and hummocks also develop. Their distribution is constrained by local ground material, hydrology, snow cover, vegetation and freeze/thaw regimes. Whereas there have been a large number of studies on patterned ground phenomena, the environmental factors distinguishing the types of patterned ground are not well understood. We applied DC resistivity tomography to understand the hydrological characteristics and freeze/thaw dynamics at adjoining ice-wedge and mud-boil sites in Adventdalen, Svalbard, where comprehensive periglacial process monitoring has been undertaken. Electrode arrays consisting of 81 nails spaced at 20 cm intervals were fixed at each site early in June 2009, immediately after the snow cover disappeared. The nails were stuck within the top 5 cm to resolve the top layer of the ground. Measurements were carried out repeatedly at approximately two-week intervals. Spring results from both sites are characterized by an increase in resistivity near the surface due to drying. This tendency is prominent in the ice-wedge polygon centre, where standing water remains until late spring. Time-lapse analyses indicate a distinct decrease in resistivity in the seasonally frozen layer at both sites, probably due to an increase in unfrozen water content by downward heat transfer. Summer profiles from both sites display a distinct resistivity boundary propagating downward with time, corresponding well with the thaw depth measured by mechanical probing. These data also show near-surface high-resistivity spots indicating the location of desiccation cracks. Profiles from the mud-boil site show higher resistivity in the thaw layer than those of the ice-wedge site, implying different drainage conditions between them. After seasonal freezing
MEG recordings of DC fields using the signal space separation method (SSS).
Taulu, S; Simola, J; Kajola, M
2004-11-30
Stationary SQUID sensors record time-varying magnetic fields only. Any DC sources, such as magnetic impurities on the scalp or physiological DC currents, are invisible in conventional MEG with stationary sources and sensors. However, movement of the subject relative to the measurement device transforms the DC fields into time-varying MEG signals, which are either signals of interest from biomagnetic sources, or movement artifacts when caused by magnetic residue on the head. These signals can be demodulated to DC by tracking the head movement and by using this recorded information to decompose the signals into a device-independent source model. To do this we have used the signal space separation method (SSS) along with a continuous head position monitoring system. From time variations of the recorded signal, a linear equation is obtained relating the averaged MEG signal variation, the DC-source in the head, and the varying external interference. In this way an unbiased estimate is obtained for the DC source as it is automatically separated from external interference. The method was tested by feeding DC current in an artificial current dipole on a phantom head and by continuously moving and rotating this phantom randomly with a motion amplitude of several centimeters. After the SSS based movement demodulation and reconstruction of the signal from inside of the helmet, the location of the DC current dipole in the phantom could be determined with an accuracy of 2 mm. It is concluded that the method enables localization of DC sources with MEG using voluntary head movements.
Investigation of an innovative method for DC flow suppression of double-inlet pulse tube coolers
NASA Astrophysics Data System (ADS)
Hu, J. Y.; Luo, E. C.; Wu, Z. H.; Dai, W.; Zhu, S. L.
2007-05-01
The use of the double-inlet mode in a pulse tube cooler opens up the possibility of DC flow circulating around the regenerator and the pulse tube. The DC flow sometimes deteriorates the performance of the cryocooler because such a steady flow adds an unwanted thermal load to the cold heat exchanger. This problem is still not well solved, although much effort has been made. Here we introduce a membrane-barrier method for DC flow suppression in double-inlet pulse tube coolers. An elastic membrane is installed between the pulse tube cooler inlet and the double-inlet valve to break the closed-loop flow path of the DC flow. The membrane is acoustically transparent but blocks the DC flow completely. Thus the DC flow is thoroughly suppressed while the merit of the double-inlet mode is retained. With this method, a temperature reduction of tens of kelvin was obtained in our single-stage pulse tube cooler and the lowest temperature reached 29.8 K.
Method for Estimating Low-Frequency Return Current of DC Electric Railcar
NASA Astrophysics Data System (ADS)
Hatsukade, Satoru
The estimation of the harmonic current of railcars is necessary for achieving compatibility between train signaling systems and railcar equipment. However, although several theoretical methods exist for estimating the harmonic current of railcars using switching functions, there is no theoretical method for estimating low-frequency currents below the power converter's carrier frequency. This paper describes a method for estimating the spectrum (frequency and amplitude) of the low-frequency return current of DC electric railcars. First, relationships between the return current and characteristics of the DC electric railcar, such as mass and acceleration, are determined. Then, mathematical (not numerical) calculation results for the low-frequency current are obtained from the time-current curve of a DC electric railcar by using Fourier series expansions. Finally, measurement results clearly show the effectiveness of the estimation method developed in this study.
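The Fourier-series step above, extracting harmonic amplitudes of the return current from a time-current curve over one operating cycle, can be sketched as follows. This is an illustrative numerical version of the expansion (the paper derives it analytically), assuming uniform sampling over one cycle.

```python
import numpy as np

def low_freq_spectrum(t, i, n_harmonics):
    """Harmonic frequencies and amplitudes of a return-current waveform
    i(t) sampled uniformly over one operating cycle of duration T."""
    dt = t[1] - t[0]
    T = dt * len(t)                       # cycle duration
    freqs = np.arange(1, n_harmonics + 1) / T
    # Fourier coefficient magnitude |c_k| = |(2/T) * integral i(t) e^{-2*pi*i*f*t} dt|
    amps = np.array([
        abs((i * np.exp(-2j * np.pi * f * t)).sum() * dt * 2.0 / T)
        for f in freqs
    ])
    return freqs, amps
```

Because the run cycle of a railcar is tens of seconds long, the resulting harmonics sit far below the converter carrier frequency, which is the regime the paper targets.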
NASA Technical Reports Server (NTRS)
Mendrek, M. J.; Higgins, R. H.; Danford, M. D.
1988-01-01
To investigate metal surface corrosion and the breakdown of metal protective coatings, the ac impedance method is applied to six systems of primer-coated and primer-topcoated 4130 steel. Two primers were used: a zinc-rich epoxy primer and a red lead oxide epoxy primer. The epoxy-polyamine topcoat was used in four of the systems. The EG&G PARC Model 368 ac impedance measurement system, along with dc measurements with the same system using the polarization resistance method, was used to monitor the changing properties of coated 4130 steel disks immersed in 3.5 percent NaCl solutions buffered at pH 5.4 over periods of 40 to 60 days. The corrosion system can be represented by an electronic analog called an equivalent circuit, consisting of resistors and capacitors in specific arrangements. This equivalent circuit parallels the impedance behavior of the corrosion system during a frequency scan. Values for the resistors and capacitors, which can be assigned in the equivalent circuit following a least-squares analysis of the data, describe changes that occur on the corroding metal surface and in the protective coatings. Two equivalent circuits have been determined that predict the correct Bode phase and magnitude of the experimental sample at different immersion times. The dc corrosion current density data are related to equivalent circuit element parameters. Methods for determining corrosion rate with ac impedance parameters are verified by the dc method.
Entropy viscosity method applied to Euler equations
Delchini, M. O.; Ragusa, J. C.; Berry, R. A.
2013-07-01
The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers and Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it efficiently smooths out oscillations and accurately resolves shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with the implicit time scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
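The general recipe of the method, using the residual of an entropy inequality to decide where to add dissipation, can be sketched for 1-D Burgers (entropy s = u²/2, entropy flux u³/3). This is an illustration of the recipe under assumed tuning constants c_e and c_max, not the authors' MOOSE implementation for the Euler equations.

```python
import numpy as np

def entropy_viscosity_burgers(u, dx, dt, u_prev, c_e=1.0, c_max=0.5):
    """Entropy-viscosity coefficient for 1-D Burgers on a uniform grid.
    The entropy residual is large near shocks (flagging dissipation)
    and small in smooth regions; a first-order bound caps it."""
    s, s_prev = 0.5 * u**2, 0.5 * u_prev**2
    entropy_flux = u**3 / 3.0
    residual = (s - s_prev) / dt + np.gradient(entropy_flux, dx)
    # normalize by the entropy variation; epsilon guards constant states
    nu_e = c_e * dx**2 * np.abs(residual) / (np.abs(s - s.mean()).max() + 1e-14)
    nu_max = c_max * dx * np.abs(u)       # first-order (upwind-like) bound
    return np.minimum(nu_e, nu_max)
```

In smooth or steady regions the residual vanishes, so the added viscosity vanishes too, preserving high-order accuracy away from shocks.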
Adaptable DC offset correction
NASA Technical Reports Server (NTRS)
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme, removes the DC offset from the incoming baseband signal using that scheme, and outputs the resulting reduced-DC baseband signal.
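The evaluate-then-remove flow described in the abstract can be sketched as follows; the patent does not specify the schemes, so this sketch assumes two common ones (block-mean subtraction for a static offset, a one-pole DC-blocking filter for a drifting one) and a hypothetical drift test and threshold.

```python
import numpy as np

def remove_dc(x, drift_threshold=0.01, alpha=0.995):
    """Adaptable DC-offset removal (illustrative sketch, not the patented
    circuit): estimate whether the offset is static or drifting, then pick
    the removal scheme accordingly."""
    half = len(x) // 2
    drift = abs(np.mean(x[:half]) - np.mean(x[half:]))
    if drift < drift_threshold:
        return x - np.mean(x)             # static offset: subtract block mean
    # drifting offset: DC blocker y[n] = x[n] - x[n-1] + alpha * y[n-1]
    y = np.empty(len(x), dtype=float)
    prev_x, prev_y = 0.0, 0.0
    for n, xn in enumerate(x):
        prev_y = xn - prev_x + alpha * prev_y
        prev_x = xn
        y[n] = prev_y
    return y
```

The mean-subtraction path preserves the signal exactly, while the filter path trades a little low-frequency response for the ability to track a moving offset.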
DC-offset effect cancelation method using mean-padding FFT for automotive UWB radar sensor
NASA Astrophysics Data System (ADS)
Ju, Yeonghwan; Kim, Sang-Dong; Lee, Jong-Hun
2011-06-01
To improve road safety and realize intelligent transportation, ultra-wideband (UWB) radar sensors in the 24 GHz domain are currently under development for many automotive applications. An automotive UWB radar sensor must be small, low-power, and inexpensive, and by employing a direct-conversion receiver it can meet these size and cost reduction requirements. We developed a UWB radar sensor for automotive applications whose receiver uses a direct-conversion architecture. Direct conversion, however, poses a DC-offset problem. In automotive UWB radar, the Doppler frequency is used to extract velocity, and the Doppler frequency of a vehicle can be detected using a zero-padding Fast Fourier Transform (FFT). However, a zero-padding FFT error occurs due to the DC offset in a sensor using a direct-conversion receiver, corrupting the velocity estimate. In this paper we propose a mean-padding method to reduce the zero-padding FFT error caused by the DC offset, and verify the proposed method with computer simulation and with experiments using the developed sensor. We present simulation and experimental results comparing the velocity measurement probability of the zero-padding FFT and the mean-padding FFT. The proposed algorithm was simulated in Matlab and tested with the designed automotive UWB radar sensor in a real road environment; the proposed method improved the velocity measurement probability.
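The core idea, padding with the sample mean instead of zeros so that a residual DC offset does not create a step discontinuity at the pad boundary, fits in a few lines. This is a sketch of the idea from the abstract, not the authors' Matlab code.

```python
import numpy as np

def mean_padded_fft(x, n_fft):
    """Pad x with its own mean (rather than zeros) up to n_fft before the
    FFT. With a DC offset present, zero-padding introduces a step at the
    pad boundary whose spectral leakage can mask the Doppler peak;
    mean-padding keeps the padded region at the offset level."""
    pad = np.full(n_fft - len(x), np.mean(x))
    return np.fft.fft(np.concatenate([x, pad]))
```

For a signal that is pure DC offset, mean-padding concentrates all energy in bin 0, whereas zero-padding would smear it across every bin.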
Forward modeling of marine DC resistivity method for a layered anisotropic earth
NASA Astrophysics Data System (ADS)
Yin, Chang-Chun; Zhang, Ping; Cai, Jing
2016-06-01
Since the ocean bottom is a sedimentary environment in which stratification is well developed, an anisotropic model is best for studying its geology. Beginning with Maxwell's equations for an anisotropic model, we introduce scalar potentials based on the divergence-free character of the electric and magnetic (EM) fields. We then continue the EM fields down into the deep earth and upward into the seawater and couple them at the ocean bottom to the transmitting source. By studying both the DC apparent resistivity curves and their polar plots, we can resolve the anisotropy of the ocean bottom. Forward modeling of a high-resistivity thin layer in an anisotropic half-space demonstrates that the marine DC resistivity method in shallow water is very sensitive to a resistive reservoir but is not influenced by airwaves. As such, it is very suitable for oil and gas exploration in shallow-water areas but, to date, most modeling algorithms for marine DC resistivity are based on isotropic models. In this paper, we investigate one-dimensional anisotropic forward modeling for the marine DC resistivity method, show the algorithm to have high accuracy, and thus provide a theoretical basis for 2D and 3D forward modeling.
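For a uniaxially anisotropic layer, the quantities a DC sounding actually resolves are textbook combinations of the longitudinal and transverse resistivities; the sketch below computes them for illustration (these standard relations are background to the paper, not its algorithm).

```python
import numpy as np

def anisotropy_params(rho_longitudinal, rho_transverse):
    """Effective (geometric-mean) resistivity and coefficient of anisotropy
    for a uniaxially anisotropic layer. A surface DC measurement over such
    a layer reads the geometric mean rho_m, not the two resistivities
    separately (the classic 'paradox of anisotropy')."""
    rho_m = np.sqrt(rho_longitudinal * rho_transverse)
    lam = np.sqrt(rho_transverse / rho_longitudinal)
    return rho_m, lam
```

This is why the paper relies on polar plots of apparent resistivity: the directional dependence, not a single sounding, is what exposes the anisotropy.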
Method and apparatus for generating radiation utilizing DC to AC conversion with a conductive front
Dawson, J.M.; Mori, W.B.; Lai, C.H.; Katsouleas, T.C.
1998-07-14
Method and apparatus are disclosed for generating radiation of high power, variable duration, and broad tunability over several orders of magnitude from a laser-ionized gas-filled capacitor array. The method and apparatus convert a DC electric field pattern into a coherent electromagnetic wave train when a relativistic ionization front passes between the capacitor plates. The frequency and duration of the radiation are controlled by the gas pressure and capacitor spacing. 4 figs.
Method and apparatus for generating radiation utilizing DC to AC conversion with a conductive front
Dawson, John M.; Mori, Warren B.; Lai, Chih-Hsiang; Katsouleas, Thomas C.
1998-01-01
Method and apparatus for generating radiation of high power, variable duration and broad tunability over several orders of magnitude from a laser-ionized gas-filled capacitor array. The method and apparatus convert a DC electric field pattern into a coherent electromagnetic wave train when a relativistic ionization front passes between the capacitor plates. The frequency and duration of the radiation are controlled by the gas pressure and capacitor spacing.
Clinical practice is not applied scientific method.
Cox, K
1995-08-01
Practice is often described as applied science, but real life is far too complex and interactive to be handled by analytical scientific methods. The limitations of usefulness of scientific method in clinical practice result from many factors. The complexity of the large number of ill-defined variables at many levels of the problem. Scientific method focuses on one variable at a time across a hundred identical animals to extract a single, generalizable 'proof' or piece of 'truth'. Clinical practice deals with a hundred variables at one time within one animal from among a clientele of non-identical animals in order to optimize a mix of outcomes intended to satisfy that particular animal's current needs and desires. Interdependence among the variables. Most factors in the illness, the disease, the patient and the setting are interdependent, and cannot be sufficiently isolated to allow their separate study. Practice as a human transaction involving at least two people is too complex to be analysed one factor at a time when the interaction stimulates unpredictable responses. Ambiguous data. Words have many usages. People not only assign different interpretations to the same words, they assign different 'meanings', especially according to the threat or hope they may imply. The perceptual data gleaned from physical examination may be difficult to specify exactly or to confirm objectively. The accuracy and precision of investigational data and their reporting can be low, and are frequently unknown. Differing goals between science and practice. Science strives for exact points of propositional knowledge, verifiable by logical argument using objective data and repetition of the experiment.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7661793
Method of measuring the dc electric field and other tokamak parameters
Fisch, Nathaniel J.; Kirtz, Arnold H.
1992-01-01
A method including externally imposing an impulsive momentum-space flux to perturb hot tokamak electrons thereby producing a transient synchrotron radiation signal, in frequency-time space, and the inference, using very fast algorithms, of plasma parameters including the effective ion charge state Z.sub.eff, the direction of the magnetic field, and the position and width in velocity space of the impulsive momentum-space flux, and, in particular, the dc toroidal electric field.
NASA Astrophysics Data System (ADS)
Boughariou, F.; Chouikhi, S.; Kallel, A.; Belgaroui, E.
2015-12-01
In this paper, we present a new theoretical and numerical formulation for the electrical and thermal breakdown phenomena induced by charge-packet dynamics in a low-density polyethylene (LDPE) insulating film under a high applied DC field. The theoretical formulation comprises the equations of bipolar charge transport together with the coupled thermo-electric equation, associated for the first time in such modeling with the bipolar transport problem. This coupled equation is solved with a finite-element numerical model. For the first time, all bipolar transport results are obtained under non-uniform temperature distributions in the sample bulk. The principal original results show the occurrence of a sudden, abrupt increase in local temperature associated with a very sharp increase in the external and conduction current densities appearing during the steady state. The coupling between these electrical and thermal instabilities physically reflects the local coupling between electrical conduction and thermal Joule heating. Results for the non-uniform temperature distributions induced by the non-uniform electrical conduction current are also presented at several instants. According to our formulation, the strong injection current is the principal factor in the electrical and thermal breakdown of the polymer insulating material, and this result is shown in this work. Our formulation is also validated experimentally.
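The thermal side of the coupling can be sketched with a simple 1-D finite-difference model (the paper uses finite elements; the conductivity, field, and material constants below are assumed round numbers, not the paper's): Joule heating sigma*E^2 drives a local temperature rise that peaks in the bulk when the electrodes are held at ambient.

```python
import numpy as np

# Illustrative 1-D finite-difference sketch of the thermal half of the coupling:
# dT/dt = alpha * d2T/dx2 + sigma * E^2 / (rho*c), fixed-temperature electrodes.
L, nx = 100e-6, 101               # film thickness (m), grid points
dx = L / (nx - 1)
alpha = 1.7e-7                    # thermal diffusivity of LDPE, m^2/s (approx.)
rho_c = 2.0e6                     # volumetric heat capacity, J/(m^3 K) (approx.)
sigma = 1e-8                      # assumed conductivity under high field, S/m
E = 1e8                           # applied field, V/m
q = sigma * E**2 / rho_c          # Joule source term, K/s

dt = 0.4 * dx**2 / alpha          # explicit stability limit
T = np.full(nx, 300.0)            # initial temperature, K
for _ in range(2000):
    T[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2 + q)
    T[0] = T[-1] = 300.0          # electrodes held at ambient

peak_rise = T.max() - 300.0       # hottest point sits mid-bulk
```

With a field-dependent sigma(T, E), as in the paper's bipolar transport model, this local hot spot feeds back into the conduction current, which is the runaway mechanism the abstract describes.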
Bootstrapping Methods Applied for Simulating Laboratory Works
ERIC Educational Resources Information Center
Prodan, Augustin; Campean, Remus
2005-01-01
Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
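The core bootstrapping idea behind such e-tools can be shown in a few lines: resample a measured laboratory sample with replacement to obtain a confidence interval without distributional assumptions. The data values here are invented for illustration:

```python
import random
import statistics

random.seed(0)
# Invented lab measurements (e.g., repeated titration volumes, mL)
sample = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]

boot_means = []
for _ in range(2000):
    # resample the data with replacement and record the statistic of interest
    resample = [random.choice(sample) for _ in sample]
    boot_means.append(statistics.mean(resample))

boot_means.sort()
ci_95 = (boot_means[50], boot_means[1949])  # percentile 95% confidence interval
```

The same loop works for any statistic (median, slope, standard deviation), which is what makes it suitable for simulated lab exercises.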
Perturbation approach applied to modal diffraction methods.
Bischoff, Joerg; Hehl, Karl
2011-05-01
Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry, and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on a perturbation method. PMID:21532698
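The underlying idea can be sketched with first-order eigenvalue perturbation theory: if the eigenpairs of A are known, the eigenvalues of A + dA are approximately lambda_i + v_i^T dA v_i. A minimal sketch for the symmetric case, where left and right eigenvectors coincide (a random matrix stands in for an actual diffraction operator):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2             # known problem
dA = 1e-4 * rng.standard_normal((6, 6)); dA = (dA + dA.T) / 2  # small change

w, V = np.linalg.eigh(A)                    # "previous" eigensolution, reused
# first-order estimate: lambda_i(A + dA) ~= lambda_i + v_i^T dA v_i
est = w + np.einsum('ji,jk,ki->i', V, dA, V)
exact = np.linalg.eigvalsh(A + dA)          # full recomputation, for comparison
err = np.max(np.abs(np.sort(est) - exact))  # residual is second order in dA
```

The estimate costs one matrix sandwich instead of a full eigendecomposition, which is the source of the run-time savings the paper reports.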
Applying Human Computation Methods to Information Science
ERIC Educational Resources Information Center
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
Applying Mixed Methods Techniques in Strategic Planning
ERIC Educational Resources Information Center
Voorhees, Richard A.
2008-01-01
In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…
Metal alloy coatings and methods for applying
Merz, Martin D.; Knoll, Robert W.
1991-01-01
A method of coating a substrate comprises plasma spraying a prealloyed feed powder onto a substrate, where the prealloyed feed powder comprises a significant amount of an alloy of stainless steel and at least one refractory element selected from the group consisting of titanium, zirconium, hafnium, niobium, tantalum, molybdenum, and tungsten. The plasma spraying of such a feed powder is conducted in an oxygen containing atmosphere and forms an adherent, corrosion resistant, and substantially homogenous metallic refractory alloy coating on the substrate.
METHOD OF APPLYING COPPER COATINGS TO URANIUM
Gray, A.G.
1959-07-14
A method is presented for protecting metallic uranium, which comprises anodic etching of the uranium in an aqueous phosphoric acid solution containing chloride ions, cleaning the etched uranium in aqueous nitric acid solution, promptly electro-plating the cleaned uranium in a copper electro-plating bath, and then electro-plating thereupon lead, tin, zinc, cadmium, chromium or nickel from an aqueous electro-plating bath.
Applying New Methods to Diagnose Coral Diseases
Kellogg, Christina A.; Zawada, David G.
2009-01-01
Coral disease, one of the major causes of reef degradation and coral death, has been increasing worldwide since the 1970s, particularly in the Caribbean. Despite increased scientific study, simple questions about the extent of disease outbreaks and the causative agents remain unanswered. A component of the U.S. Geological Survey Coral Reef Ecosystem STudies (USGS CREST) project is focused on developing and using new methods to approach the complex problem of coral disease.
METHOD OF APPLYING NICKEL COATINGS ON URANIUM
Gray, A.G.
1959-07-14
A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.
Scanning methods applied to bitemark analysis
NASA Astrophysics Data System (ADS)
Bush, Peter J.; Bush, Mary A.
2010-06-01
The 2009 National Academy of Sciences report on forensics focused criticism on pattern evidence subdisciplines in which statements of unique identity are utilized. One principle of bitemark analysis is that the human dentition is unique to the extent that a perpetrator may be identified based on dental traits in a bitemark. Optical and electron scanning methods were used to measure dental minutia and to investigate replication of detail in human skin. Results indicated that being a visco-elastic substrate, skin effectively reduces the resolution of measurement of dental detail. Conclusions indicate caution in individualization statements.
ALLOY COATINGS AND METHOD OF APPLYING
Eubank, L.D.; Boller, E.R.
1958-08-26
A method for providing uranium articles with a protective coating by a single dip coating process is presented. The uranium article is dipped into a molten zinc bath containing a small percentage of aluminum. The resultant product is a uranium article covered with a thin undercoat consisting of a uranium-aluminum alloy with a small amount of zinc, and an outer layer consisting of zinc and aluminum. The article may be used as is, or aluminum sheathing may then be bonded to the aluminum-zinc outer layer.
Counterdiffusion methods applied to protein crystallization.
Otálora, Fermín; Gavira, José Antonio; Ng, Joseph D; García-Ruiz, Juan Manuel
2009-11-01
Accumulated experience in recent years with counterdiffusion crystallization methods shows that they are a convenient and generally applicable way of optimizing solution crystal growth experiments. Irrespective of whether the objective of the experiment is to improve crystal quality or size, many experiments report a positive or neutral effect of counterdiffusion, while adverse effects are consistently absent. Thus counterdiffusion is viewed as a rational crystallization approach to minimize supersaturation and impurity levels at the crystal growth front and to ensure steadiness of both values. This control of the phase transition state is automatically achieved and sustained by a dynamic equilibrium between mass transport and aggregation kinetics. The approach can be implemented in any medium permitting diffusive mass transport (gels, capillaries, microfluidic devices or microgravity). The counterdiffusion technique has been exploited in many recent applications revealing interesting effects on nucleation and polymorphic precipitation, hence opening further possibilities for innovative screening of crystallization conditions.
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
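The parameter selection described above can be mimicked with a toy exhaustive search over the same three design variables (the cost model and every coefficient below are invented stand-ins, not the paper's vehicle simulation):

```python
import itertools

# Toy stand-in: choose battery weight (kg), heat-engine rating (kW), and
# power-split fraction to minimize a made-up life-cycle cost, subject to
# meeting a peak power demand. All numbers are invented.
PEAK_DEMAND_KW = 80.0

def peak_power(batt_kg, engine_kw):
    return 0.25 * batt_kg + engine_kw        # assumed battery power density

def life_cycle_cost(batt_kg, engine_kw, split):
    fuel = engine_kw * (1.0 - 0.4 * split)   # more electric share, less fuel
    return 40.0 * batt_kg + 900.0 * engine_kw + 5000.0 * fuel

best = None
for batt, eng, split in itertools.product(
        range(50, 401, 25), range(20, 101, 5), [i / 10 for i in range(11)]):
    if peak_power(batt, eng) < PEAK_DEMAND_KW:
        continue                              # infeasible: cannot meet demand
    cost = life_cycle_cost(batt, eng, split)
    if best is None or cost < best[0]:
        best = (cost, batt, eng, split)
```

The constraint handling here (discard infeasible points) is the crude analogue of the paper's fifth conclusion about battery-weight and engine-rating constraints; real studies use smooth penalty or constraint formulations instead.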
NASA Astrophysics Data System (ADS)
Riba, Jordi-Roger
2015-09-01
This paper analyzes the skin and proximity effects in different conductive nonmagnetic straight conductor configurations subjected to applied alternating currents and voltages. These effects have important consequences, including a rise of the ac resistance, which in turn increases power loss, thus limiting the rating for the conductor. Alternating current (ac) resistance is important in power conductors and bus bars for line frequency applications, as well as in smaller conductors for high frequency applications. Despite the importance of this topic, it is not usually analyzed in detail in undergraduate and even in graduate studies. To address this, this paper compares the results provided by available exact formulas for simple geometries with those obtained by means of two-dimensional finite element method (FEM) simulations and experimental results. The paper also shows that FEM results are very accurate and more general than those provided by the formulas, since FEM models can be applied in a wide range of electrical frequencies and configurations.
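For a quick numerical feel, the classical skin-depth formula and the standard high-frequency approximation for the ac/dc resistance ratio of a round wire can be evaluated directly (copper properties assumed):

```python
import math

MU0 = 4e-7 * math.pi
RHO_CU = 1.68e-8   # copper resistivity, ohm-m (room temperature)

def skin_depth(f_hz, rho=RHO_CU, mu_r=1.0):
    # delta = sqrt(rho / (pi * f * mu)) for a good conductor
    return math.sqrt(rho / (math.pi * f_hz * MU0 * mu_r))

def rac_over_rdc(a_m, f_hz):
    # high-frequency approximation for a round wire of radius a (a >> delta):
    # R_ac / R_dc ~= a / (2 * delta) + 1/4
    d = skin_depth(f_hz)
    assert a_m / d > 4, "approximation only holds well above the skin depth"
    return a_m / (2 * d) + 0.25

delta_50 = skin_depth(50.0)           # ~9.2 mm: relevant to line-frequency bus bars
ratio_1mhz = rac_over_rdc(5e-3, 1e6)  # a 5 mm radius conductor at 1 MHz
```

As the paper notes, closed forms like these exist only for simple geometries; FEM covers proximity effects and arbitrary cross sections.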
Reflections on Mixing Methods in Applied Linguistics Research
ERIC Educational Resources Information Center
Hashemi, Mohammad R.
2012-01-01
This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…
Control method for peak power delivery with limited DC-bus voltage
Edwards, John; Xu, Longya; Bhargava, Brij B.
2006-09-05
A method for driving a neutral point-clamped multi-level voltage source inverter supplying a synchronous motor is provided. A DC current is received at a neutral point-clamped multi-level voltage source inverter. The inverter has first, second, and third output nodes. The inverter also has a plurality of switches. A desired speed of a synchronous motor connected to the inverter by the first, second, and third output nodes is received by the inverter. The synchronous motor has a rotor, and the speed of the motor is defined by the rotational rate of the rotor. A position of the rotor is sensed, current flowing to the motor out of at least two of the first, second, and third output nodes is sensed, and predetermined switches are automatically activated by the inverter responsive to the sensed rotor position, the sensed current, and the desired speed.
PLURAL METALLIC COATINGS ON URANIUM AND METHOD OF APPLYING SAME
Gray, A.G.
1958-09-16
A method is described of applying protective coatings to uranium articles. It consists in applying chromium plating to such uranium articles by electrolysis in a chromic acid bath and subsequently applying, to this chromium-plated surface, an aluminum containing alloy. This aluminum containing alloy (for example one of aluminum and silicon) may then be used as a bonding alloy between the chromized surface and an aluminum can.
DC/DC Converter Stability Testing Study
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2008-01-01
This report presents study results on hybrid DC/DC converter stability testing methods. An input impedance measurement method and a gain/phase margin measurement method were evaluated and found effective for detecting front-end oscillation and feedback-loop oscillation. In particular, the power levels in certain channels of converter input noise were found to correlate strongly with the gain/phase margins. Spectral analysis of converter input noise is therefore a potential new method for evaluating the stability of all types of DC/DC converters.
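A gain/phase-margin readout of the kind used in loop-stability testing can be sketched numerically on an assumed loop transfer function (this is a stand-in, not a model of any actual hybrid DC/DC converter):

```python
import cmath
import math

K, wp = 10.0, 100.0          # assumed loop gain and pole frequency (rad/s)

def loop(w):
    """Assumed loop transfer function L(jw) = K / (jw * (1 + jw/wp))."""
    s = complex(0.0, w)
    return K / (s * (1 + s / wp))

# bisection on a log grid for the gain-crossover frequency where |L(jw)| = 1
lo, hi = 1e-2, 1e4
for _ in range(100):
    mid = math.sqrt(lo * hi)
    if abs(loop(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
w_c = math.sqrt(lo * hi)

# phase margin: distance of the loop phase from -180 degrees at crossover
phase_margin = 180.0 + math.degrees(cmath.phase(loop(w_c)))
```

In bench testing the same quantity is read off a measured loop response rather than a formula; the report's point is that input-noise spectra can stand in for that measurement.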
Zolper, John C.; Sherwin, Marc E.; Baca, Albert G.
2000-01-01
A method for making compound semiconductor devices including the use of a p-type dopant is disclosed wherein the dopant is co-implanted with an n-type donor species at the time the n-channel is formed and a single anneal at moderate temperature is then performed. Also disclosed are devices manufactured using the method. In the preferred embodiment n-MESFETs and other similar field effect transistor devices are manufactured using C ions co-implanted with Si atoms in GaAs to form an n-channel. C exhibits a unique characteristic in the context of the invention in that it exhibits a low activation efficiency (typically, 50% or less) as a p-type dopant, and consequently, it acts to sharpen the Si n-channel by compensating Si donors in the region of the Si-channel tail, but does not contribute substantially to the acceptor concentration in the buried p region. As a result, the invention provides for improved field effect semiconductor and related devices with enhancement of both DC and high-frequency performance.
Zolper, J.C.; Sherwin, M.E.; Baca, A.G.
2000-07-04
A method for making compound semiconductor devices including the use of a p-type dopant is disclosed wherein the dopant is co-implanted with an n-type donor species at the time the n-channel is formed and a single anneal at moderate temperature is then performed. Also disclosed are devices manufactured using the method. In the preferred embodiment n-MESFETs and other similar field effect transistor devices are manufactured using C ions co-implanted with Si atoms in GaAs to form an n-channel. C exhibits a unique characteristic in the context of the invention in that it exhibits a low activation efficiency (typically, 50% or less) as a p-type dopant, and consequently, it acts to sharpen the Si n-channel by compensating Si donors in the region of the Si-channel tail, but does not contribute substantially to the acceptor concentration in the buried p region. As a result, the invention provides for improved field effect semiconductor and related devices with enhancement of both DC and high-frequency performance.
ERIC Educational Resources Information Center
Ates, Salih
2005-01-01
This study was undertaken to explore the effectiveness of the learning-cycle method when teaching direct current (DC) circuits to university students. Four Physics II classes participated in the study, which lasted approximately two and a half weeks in the middle of the spring semester of 2003. Participants were 120 freshmen (55 females and 65…
Building "Applied Linguistic Historiography": Rationale, Scope, and Methods
ERIC Educational Resources Information Center
Smith, Richard
2016-01-01
In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…
Applying Mixed Methods Research at the Synthesis Level: An Overview
ERIC Educational Resources Information Center
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
Design and development of DC high current sensor using Hall-Effect method
NASA Astrophysics Data System (ADS)
Dewi, Sasti Dwi Tungga; Panatarani, C.; Joni, I. Made
2016-02-01
This paper reports a newly developed high-current DC sensor using the Hall-effect method, together with its measurement system. The Hall-effect sensor receives the magnetic field generated by a current-carrying conductor. The SS49E (Honeywell) linear sensor was employed to sense the magnetic field from the field concentrator. The voltage from the SS49E was then converted to digital form using a 10-bit analog-to-digital converter (ADC). The digital data were processed in a microcontroller and displayed as a current value on an LCD. In addition, the measurement was interfaced to a personal computer (PC) over the RS232 communication protocol and displayed in real-time graphical form on the PC. A performance test over the range ±40 A showed a maximum relative error of 5.26%. It is concluded that the sensor and the measurement system worked properly according to the design, with acceptable accuracy.
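The conversion chain described above (wire field, concentrator, Hall voltage, 10-bit ADC, current readout) can be sketched as follows; the concentrator gain, sensitivity, and geometry are assumed values for illustration, not the authors' design data:

```python
import math

MU0 = 4e-7 * math.pi

# Assumed numbers: 1 cm wire-to-sensor distance, a flux concentrator with an
# assumed gain of 50, 14 V/T sensitivity (1.4 mV/gauss, typical for a linear
# Hall IC), 10-bit ADC with 5 V reference, 2.5 V zero-field output.
R, GAIN, SENS, VREF, BITS, VZERO = 0.01, 50.0, 14.0, 5.0, 10, 2.5

def adc_code(i_amp):
    b_wire = MU0 * i_amp / (2 * math.pi * R)     # field of a long straight wire
    v_out = VZERO + SENS * GAIN * b_wire         # sensor output voltage
    return round(v_out / VREF * (2**BITS - 1))   # 10-bit quantization

def current_from_code(code):
    v_out = code * VREF / (2**BITS - 1)
    b_wire = (v_out - VZERO) / (SENS * GAIN)
    return 2 * math.pi * R * b_wire / MU0

i_rec = current_from_code(adc_code(25.0))        # round trip through the ADC
```

With these assumed numbers one ADC count corresponds to a few tenths of an ampere, which is in the same ballpark as the few-percent error the authors report over ±40 A.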
Slagter-Jäger, Jacoba G; Raney, Alexa; Lewis, Whitney E; DeBenedette, Mark A; Nicolette, Charles A; Tcherepanova, Irina Y
2013-01-01
Dendritic cells (DCs) transfected with total amplified tumor cell RNA have the potential to induce broad antitumor immune responses. However, analytical methods required for quantitatively assessing the integrity, fidelity, and functionality of the amplified RNA are lacking. We have developed a series of assays including gel electrophoresis, northern blot, capping efficiency, and microarray analysis to determine integrity and fidelity, and a model system to assess functionality after transfection into human DCs. We employed these tools to demonstrate that modifications to our previously reported total cellular RNA amplification process, including the use of the Fast Start High Fidelity (FSHF) PCR enzyme, T7 Powerswitch primer, post-transcriptional capping and incorporation of a type 1 cap, result in amplification of longer transcripts, greater translational competence, and a higher fidelity representation of the starting total RNA population. To study the properties of amplified RNA after transfection into human DCs, we measured protein expression levels of defined antigens coamplified with the starting total RNA populations and measured antigen-specific T cell expansion in autologous DC-T cell co-cultures in vitro. We conclude from these analyses that the improved RNA amplification process results in superior protein expression levels and a greater capacity of the transfected DCs to induce multifunctional antigen-specific memory T cells. PMID:23653155
Lawler, J.S.
2001-10-29
The brushless dc motor (BDCM) has high power density and efficiency relative to other motor types. These properties make the BDCM well suited for applications in electric vehicles, provided a method can be developed for driving the motor over the 4 to 6:1 constant power speed range (CPSR) required by such applications. The present state of the art for constant power operation of the BDCM is conventional phase advance (CPA) [1]. In this paper, we identify key limitations of CPA. It is shown that CPA has effective control over the developed power but that the current magnitude is relatively insensitive to power output and is inversely proportional to motor inductance. If the motor inductance is low, then the rms current at rated power and high speed may be several times larger than the current rating. The inductance required to maintain rms current within rating is derived analytically and is found to be large relative to that of BDCM designs using high-strength rare-earth magnets. Thus, the CPA requires a BDCM with a large equivalent inductance.
Applied AC and DC magnetic fields cause alterations in the mitotic cycle of early sea urchin embryos
Levin, M.; Ernst, S.G.
1995-09-01
This study demonstrates that exposure to 60 Hz magnetic fields (3.4-8.8 mT) and magnetic fields over the range DC-600 kHz (2.5-6.5 mT) can alter the early embryonic development of sea urchin embryos by inducing alterations in the timing of the cell cycle. Batches of fertilized eggs were exposed to the fields produced by a coil system. Samples of the continuous cultures were taken and scored for cell division. The times of both the first and second cell divisions were advanced by ELF AC fields and by static fields. The magnitude of the 60 Hz effect appears proportional to the field strength over the range tested. The relationship to field frequency was nonlinear and complex. For certain frequencies above the ELF range, the exposure resulted in a delay of the onset of mitosis. The advance of mitosis was also dependent on the duration of exposure and on the timing of exposure relative to fertilization.
The method of averages applied to the KS differential equations
NASA Technical Reports Server (NTRS)
Graf, O. F., Jr.; Mueller, A. C.; Starke, S. E.
1977-01-01
A new approach for the solution of artificial satellite trajectory problems is proposed. The basic idea is to apply an analytical solution method (the method of averages) to an appropriate formulation of the orbital mechanics equations of motion (the KS-element differential equations). The result is a set of transformed equations of motion that are more amenable to numerical solution.
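A toy scalar example (not the KS element equations themselves) shows the idea of the method of averages: replace a rapidly oscillating right-hand side by its period average and compare the solutions:

```python
import math

eps, x0, t_end, h = 0.05, 1.0, 20 * math.pi, 1e-3   # illustrative values

def f(t, x):
    # full oscillatory right-hand side; its period average is eps * x / 2
    return eps * x * math.cos(t) ** 2

x, t = x0, 0.0
while t < t_end - 1e-12:          # integrate the full equation with RK4
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

x_avg = x0 * math.exp(eps * t_end / 2)   # solution of the averaged equation
```

The averaged equation can be marched with far larger steps than the oscillation period, which is the payoff the paper seeks for satellite trajectories.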
Lareau, Caleb A.; White, Bill C.; Montgomery, Courtney G.; McKinney, Brett A.
2015-01-01
Recent studies have implicated the role of differential co-expression or correlation structure in gene expression data to help explain phenotypic differences. However, few attempts have been made to characterize the function of variants based on their role in regulating differential co-expression. Here, we describe a statistical methodology that identifies pairs of transcripts that display differential correlation structure conditioned on genotypes of variants that regulate co-expression. Additionally, we present a user-friendly, computationally efficient tool, dcVar, that can be applied to expression quantitative trait loci (eQTL) or RNA-Seq datasets to infer differential co-expression variants (dcVars). We apply dcVar to the HapMap3 eQTL dataset and demonstrate the utility of this methodology in uncovering novel functions of variants of interest, with examples from a height genome-wide association study and from cancer drug resistance. We provide evidence that differential correlation structure is a valuable intermediate molecular phenotype for further characterizing the function of variants identified in GWAS and related studies. PMID:26539209
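The core statistical step can be sketched as follows: simulate two transcripts whose correlation depends on genotype, then test the difference in correlations with Fisher's z-transformation (a standard test used here as an assumed stand-in for dcVar's exact statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # samples per genotype group

def correlated_pair(r, n):
    """Simulate two transcripts with population correlation r."""
    x = rng.standard_normal(n)
    y = r * x + np.sqrt(1 - r ** 2) * rng.standard_normal(n)
    return x, y

x0, y0 = correlated_pair(0.8, n)          # genotype 0: strongly co-expressed
x1, y1 = correlated_pair(0.0, n)          # genotype 1: co-expression lost
r0 = np.corrcoef(x0, y0)[0, 1]
r1 = np.corrcoef(x1, y1)[0, 1]

# Fisher z-test for a difference between two independent correlations
z = (np.arctanh(r0) - np.arctanh(r1)) / np.sqrt(1 / (n - 3) + 1 / (n - 3))
```

A large |z| flags the genotype as a candidate differential co-expression variant for the transcript pair.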
Evaluation of a complete denture trial method applying rapid prototyping.
Inokoshi, Masanao; Kanazawa, Manabu; Minakuchi, Shunsuke
2012-02-01
A new trial method for complete dentures using rapid prototyping (RP) was compared with the conventional method. Wax dentures were fabricated for 10 edentulous patients. Cone-beam CT was used to scan the wax dentures. Using 3D computer-aided design software, seven 3D denture images with different artificial teeth arrangements were made and seven trial dentures per patient were fabricated accordingly. Two prosthodontists performed a denture try-in for one patient using both conventional and RP methods. The prosthodontists and patients rated satisfaction for both methods using a visual analogue scale. Satisfaction ratings with both conventional and RP methods were compared using the Wilcoxon signed-rank test. Regarding prosthodontist's ratings, esthetics and stability were rated significantly higher with the conventional method than with the RP method, whereas chair time was rated significantly longer with the RP method than with the conventional method. Although further improvements are needed, the trial method applying RP seems promising.
Non-Symmetrized Hyperspherical Harmonics Method Applied to Light Hypernuclei
NASA Astrophysics Data System (ADS)
Ferrari Ruffino, F.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.
2016-03-01
We have adapted the non-symmetrized hyperspherical harmonics method (NSHH) in order to treat light hypernuclei. In the past, the method has been applied in atomic and nuclear contexts dealing exclusively with identical-particle systems. We have generalized and optimized the formalism in the presence of two different species of particles, namely nucleons and hyperons. Preliminary benchmark results with a modern realistic 2-body nucleon-hyperon interaction are provided.
Aircraft operability methods applied to space launch vehicles
NASA Astrophysics Data System (ADS)
Young, Douglas
1997-01-01
The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The ``building in'' of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.
Characterization of anomalies by applying methods of fractal analysis
Sakuma, M.; Kozma, R.; Kitamura, M.
1996-01-01
Fractal analysis is applied in a variety of research fields to characterize nonstationary data. Here, fractal analysis is used as a tool of characterization in time series. The fractal dimension is calculated by Higuchi's method, and the effect of small data size on accuracy is studied in detail. Three types of fractal-based anomaly indicators are adopted: (a) the fractal dimension, (b) the error of the fractal dimension, and (c) the chi-square value of the linear fitting of the fractal curve in the wave number domain. Fractal features of time series can be characterized by introducing these three measures. The proposed method is applied to various simulated fractal time series with ramp, random, and periodic noise anomalies and also to neutron detector signals acquired in a nuclear reactor. Fractal characterization can successfully supplement conventional signal analysis methods especially if nonstationary and non-Gaussian features of the signal become important.
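Higuchi's estimator mentioned above is compact enough to sketch directly; a straight line should give a dimension near 1 and white noise a dimension near 2:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    L = []
    for k in ks:
        lk = []
        for m in range(k):                      # k down-sampled sub-series
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            # normalize for the number of intervals and the step size
            lk.append(length * (n - 1) / ((len(idx) - 1) * k) / k)
        L.append(np.mean(lk))
    # the fractal dimension is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope

fd_line = higuchi_fd(np.arange(512))            # smooth ramp: FD -> 1
fd_noise = higuchi_fd(np.random.default_rng(3).standard_normal(512))  # FD ~ 2
```

The paper's small-data-size study amounts to examining how these estimates degrade as n shrinks, which the same function can probe by slicing the input.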
Aircraft operability methods applied to space launch vehicles
Young, D.
1997-01-01
The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program. © 1997 American Institute of Physics.
Hur, J.; Hyun, D.S.; Hong, J.P.
1998-09-01
A method of reducing cogging torque and improving average torque by changing the dead-zone angle of the trapezoidal magnetization distribution of the ring-type rotor magnet in a brushless DC motor (BLDCM) has been studied. Because the BLDCM has a 3-D overhang structure, 3-D analysis should be used for exact computation of its magnetic field. The 3-D equivalent magnetic circuit network method (3-D EMCN), which can analyze an accurate 3-D magnetic field, has been introduced. The cogging torque results obtained with 3-D EMCN are compared with those of the 3-D finite element method (3-D FEM) and with experimental data.
NASA Astrophysics Data System (ADS)
Kohara, Yusuke; Kubo, Naoya; Nishiyama, Tomofumi; Koizuka, Taiki; Alimudin, Mohammad; Rahmat, Amirul; Okamura, Hitoshi; Yamanokuchi, Tomoyuki; Nakamura, Kazuyuki
2016-04-01
Two new parallel bus coding methods for generating a DC-balanced code with additional bits are proposed to achieve the self-stabilization of the intermediate power level in Stacked-Vdd integrated circuits. They contribute to producing a uniform switching current in parallel inputs and outputs (I/Os). Type I coding minimizes the difference in the number of switchings between the upper and lower CMOS I/Os by 8B/10B coding followed by toggle conversion. Type II coding, in which the multi-value running disparity control feature is integrated into the bus-invert coding, requires only one redundant bit for any wider bus. Their DC-balanced feature and the stability effect of the intermediate power level in the Stacked-Vdd structure were experimentally confirmed from the measurement results obtained from the developed test chips.
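The paper's Type II scheme extends bus-invert coding with multi-valued running-disparity control; as background, a minimal sketch of the classic bus-invert encoder (Stan–Burleson style, not the paper's exact Type II algorithm) might look like this.

```python
def bus_invert(words, width=8):
    """Encode a stream of bus words with classic bus-invert coding.

    If more than half the bus lines would toggle relative to the previous
    transmitted word, the word is inverted and the extra invert line is
    asserted, bounding the switching activity per transfer to width/2 + 1.
    """
    mask = (1 << width) - 1
    prev = 0
    out = []
    for w in words:
        toggles = bin((w ^ prev) & mask).count("1")
        if toggles > width // 2:
            w = (~w) & mask
            out.append((w, 1))   # invert line asserted
        else:
            out.append((w, 0))
        prev = w
    return out

# 0x00 -> 0xFF would toggle all 8 lines, so the encoder inverts instead
encoded = bus_invert([0x00, 0xFF, 0x0F, 0xF0])
```

The decoder simply re-inverts any word whose invert bit is set, so the scheme costs one redundant line per bus, which is the property the paper's Type II coding preserves for arbitrary bus widths.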
Modeling of DC spacecraft power systems
NASA Technical Reports Server (NTRS)
Berry, F. C.
1995-01-01
Future spacecraft power systems must be capable of supplying power to various loads. This delivery of power may necessitate the use of high-voltage, high-power dc distribution systems to transmit power from the source to the loads. Using state-of-the-art power conditioning electronics such as dc-dc converters, complex series and parallel configurations may be required at the interface between the source and the distribution system and between the loads and the distribution system. This research will use state-variables to model and simulate a dc spacecraft power system. Each component of the dc power system will be treated as a multiport network, and a state model will be written with the port voltages as the inputs. The state model of a component will be solved independently from the other components using its state transition matrix. A state-space averaging method is developed first in general for any dc-dc switching converter, and then demonstrated in detail for the particular case of the boost power stage. General equations for both steady-state (dc) and dynamic effects (ac) are obtained, from which important transfer functions are derived and applied to a special case of the boost power stage.
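The state-space averaging idea above can be sketched for the boost power stage; the component values and duty cycle below are illustrative assumptions, and the averaged dynamics are the standard ideal-boost equations rather than the report's exact formulation.

```python
def boost_averaged_steady_state(vin, d, L, C, R, dt=2e-7, t_end=0.1):
    """Integrate the state-space averaged model of an ideal boost
    converter to steady state.

    States: inductor current iL, capacitor voltage vC.
    Averaged dynamics (duty cycle d):
        L diL/dt = vin - (1 - d) vC
        C dvC/dt = (1 - d) iL - vC / R
    """
    il, vc = 0.0, 0.0
    steps = round(t_end / dt)
    for _ in range(steps):
        dil = (vin - (1 - d) * vc) / L
        dvc = ((1 - d) * il - vc / R) / C
        il += dil * dt
        vc += dvc * dt
    return il, vc

il, vc = boost_averaged_steady_state(vin=12.0, d=0.5, L=100e-6, C=470e-6, R=10.0)
# analytic steady state: vC = vin/(1-d) = 24 V, iL = vin/((1-d)^2 R) = 4.8 A
```

Setting the averaged derivatives to zero recovers the familiar dc transfer function vC = vin/(1-d), which is the "steady-state (dc)" result the abstract refers to; linearizing the same equations about this point yields the ac transfer functions.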
Modeling of DC spacecraft power systems
NASA Astrophysics Data System (ADS)
Berry, F. C.
1995-07-01
Future spacecraft power systems must be capable of supplying power to various loads. This delivery of power may necessitate the use of high-voltage, high-power dc distribution systems to transmit power from the source to the loads. Using state-of-the-art power conditioning electronics such as dc-dc converters, complex series and parallel configurations may be required at the interface between the source and the distribution system and between the loads and the distribution system. This research will use state-variables to model and simulate a dc spacecraft power system. Each component of the dc power system will be treated as a multiport network, and a state model will be written with the port voltages as the inputs. The state model of a component will be solved independently from the other components using its state transition matrix. A state-space averaging method is developed first in general for any dc-dc switching converter, and then demonstrated in detail for the particular case of the boost power stage. General equations for both steady-state (dc) and dynamic effects (ac) are obtained, from which important transfer functions are derived and applied to a special case of the boost power stage.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
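A minimal sketch of Monte Carlo uncertainty propagation, the second method named above; the toy weight model and its coefficients are hypothetical, not the conceptual design program's actual equations.

```python
import math
import random

def propagate_mc(f, means, sigmas, n=20000, seed=1):
    """Propagate independent Gaussian input uncertainties through an
    arbitrary (possibly discontinuous) model f by Monte Carlo sampling,
    returning the mean and standard deviation of the output."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        samples.append(f(x))
    mu = sum(samples) / n
    var = sum((y - mu) ** 2 for y in samples) / (n - 1)
    return mu, math.sqrt(var)

# hypothetical toy weight model with two uncertain configuration
# design variables (wing area and aspect ratio)
def weight(x):
    area, aspect = x
    return 500.0 + 12.0 * area + 30.0 * aspect

mu, sigma = propagate_mc(weight, means=[30.0, 8.0], sigmas=[0.5, 0.2])
```

Because sampling makes no smoothness assumption about f, this approach tolerates the discontinuous design spaces mentioned in the abstract, at the cost of many model evaluations.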
Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes
NASA Technical Reports Server (NTRS)
Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.
1995-01-01
Report describes experimental study in which Taguchi Methods applied with view toward improving brazing of coolant tubes in nozzle of main engine of space shuttle. Dr. Taguchi's parameter design technique used to define proposed modifications of brazing process reducing manufacturing time and cost by reducing number of furnace brazing cycles and number of tube-gap inspections needed to achieve desired small gaps between tubes.
Hur, J.; Chun, Y.D.; Lee, J.; Hyun, D.S.
1998-09-01
The distribution of radial force density in a brushless permanent magnet DC motor is not uniform in the axial direction. The analysis of radial force density has to consider the 3-D shape of the teeth and overhang, because the radial force density causes vibration and acts nonuniformly on the tooth surfaces. For the analysis, a new 3-D equivalent magnetic circuit network method is used to account for rotor movement without remeshing. The radial force density is calculated and analyzed by the Maxwell stress tensor and the discrete Fourier transform (DFT), respectively. The results of the 3-D equivalent magnetic circuit method have been compared with those of 3-D FEM.
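The DFT step above can be sketched generically: decompose one spatial period of a sampled quantity into harmonic orders. The waveform below is synthetic, not motor data.

```python
import cmath
import math

def dft_harmonics(samples):
    """One-sided harmonic amplitudes of one period of a sampled quantity
    (e.g. radial force density around the air gap) via the DFT."""
    n = len(samples)
    amps = []
    for k in range(n // 2 + 1):
        c = sum(samples[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) / n
        # double everything except the DC (and, for even n, Nyquist) term
        if k == 0 or (n % 2 == 0 and k == n // 2):
            amps.append(abs(c))
        else:
            amps.append(2 * abs(c))
    return amps

# synthetic force-density wave: constant mean plus 4th and 8th order ripple
n = 64
wave = [10.0 + 2.0 * math.cos(4 * 2 * math.pi * j / n)
             + 0.5 * math.cos(8 * 2 * math.pi * j / n) for j in range(n)]
amps = dft_harmonics(wave)
```

The low-order harmonics recovered this way are the ones most relevant to vibration, since each spatial order couples to a distinct mechanical mode of the stator.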
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step.
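The matrix-free idea can be sketched as follows. This is a generic illustration, not the authors' code: the inner solver here is plain conjugate gradients (valid because the toy residual below has a symmetric positive definite Jacobian) rather than the Picard-preconditioned Krylov method of the paper.

```python
import math

def jfnk(residual, u0, tol=1e-10, newton_max=20, cg_max=200, eps=1e-7):
    """Jacobian-free Newton iteration for F(u) = 0.

    The Newton correction solves J du = -F(u) with matrix-free conjugate
    gradients; Jacobian-vector products are approximated by finite
    differences, so the Jacobian is never formed or stored."""
    n = len(u0)
    u = list(u0)

    def jv(ub, fb, v):
        # J(u) v ~ (F(u + per*v) - F(u)) / per, step scaled to ||v||
        vnorm = math.sqrt(sum(x * x for x in v))
        if vnorm == 0.0:
            return [0.0] * n
        per = eps / vnorm
        fp = residual([ui + per * vi for ui, vi in zip(ub, v)])
        return [(a - b) / per for a, b in zip(fp, fb)]

    for _ in range(newton_max):
        fu = residual(u)
        if math.sqrt(sum(f * f for f in fu)) < tol:
            break
        du = [0.0] * n
        r = [-f for f in fu]            # CG residual for J du = -F(u)
        p = list(r)
        rr = sum(x * x for x in r)
        for _ in range(cg_max):
            ap = jv(u, fu, p)
            alpha = rr / sum(x * y for x, y in zip(p, ap))
            du = [x + alpha * y for x, y in zip(du, p)]
            r = [x - alpha * y for x, y in zip(r, ap)]
            rr_new = sum(x * x for x in r)
            if math.sqrt(rr_new) < 1e-12:
                break
            p = [x + (rr_new / rr) * y for x, y in zip(r, p)]
            rr = rr_new
        u = [x + y for x, y in zip(u, du)]
    return u

# toy nonlinear diffusion-reaction residual on a 1-D grid:
# -u'' + u^3 = 1 with u = 0 at both ends and mesh spacing h
def make_residual(n, h):
    def residual(u):
        f = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            f.append((2 * u[i] - left - right) / h ** 2 + u[i] ** 3 - 1.0)
        return f
    return residual

n = 20
res = make_residual(n, 1.0 / (n + 1))
u = jfnk(res, [0.0] * n)
```

Only residual evaluations are needed, which is exactly why the paper can avoid forming or storing a Jacobian; a preconditioner (there, the Picard linearization) would slot into the inner solve.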
Staining methods applied to glycol methacrylate embedded tissue sections.
Cerri, P S; Sasso-Cerri, E
2003-01-01
The use of glycol methacrylate (GMA) avoids some technical artifacts, which are usually observed in paraffin-embedded sections, providing good morphological resolution. On the other hand, weak staining has been reported with different methods in plastic sections. In the present study, changes in the histological staining procedures have been assayed during the use of staining and histochemical methods in different GMA-embedded tissues. Samples of tongue, submandibular and sublingual glands, cartilage, portions of respiratory tract and nervous ganglion were fixed in 4% formaldehyde and embedded in glycol methacrylate. The sections of tongue and nervous ganglion were stained by H&E. Picrosirius, Toluidine Blue and Sudan Black B methods were applied, respectively, for identification of collagen fibers in submandibular gland, sulfated glycosaminoglycans in cartilage (metachromasia) and myelin lipids in nervous ganglion. The Periodic Acid-Schiff (PAS) method was used for detection of glycoconjugates in submandibular gland and cartilage, while combined AB/PAS methods were applied for detection of mucins in the respiratory tract. In addition, a combination of Alcian Blue (AB) and Picrosirius methods was also assayed in the sublingual gland sections. The GMA-embedded tissue sections showed optimal morphological integrity and were favorable to the staining methods employed in the present study. In the sections of tongue and nervous ganglion, a good contrast of basophilic and acidophilic structures was obtained by H&E. An intense eosinophilia was observed both in the striated muscle fibers and in the myelin sheaths, in which the lipids were preserved and revealed by Sudan Black B. In the cartilage matrix, a strong metachromasia was revealed by Toluidine Blue in the negatively-charged glycosaminoglycans. In the chondrocytes, glycogen granules were intensely positive to the PAS method. Extracellular glycoproteins were also PAS positive in the basal membrane and in the
The crowding factor method applied to parafoveal vision
Ghahghaei, Saeideh; Walker, Laura
2016-01-01
Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170
Applying Quantitative Genetic Methods to Primate Social Behavior.
Blomquist, Gregory E; Brent, Lauren J N
2014-02-01
Increasingly, behavioral ecologists have applied quantitative genetic methods to investigate the evolution of behaviors in wild animal populations. The promise of quantitative genetics in unmanaged populations opens the door for simultaneous analysis of inheritance, phenotypic plasticity, and patterns of selection on behavioral phenotypes all within the same study. In this article, we describe how quantitative genetic techniques provide studies of the evolution of behavior with information that is unique and valuable. We outline technical obstacles for applying quantitative genetic techniques that are of particular relevance to studies of behavior in primates, especially those living in noncaptive populations (e.g., the need for pedigree information and the handling of non-Gaussian phenotypes), and demonstrate how many of these barriers are now surmountable. We illustrate this by applying recent quantitative genetic methods to spatial proximity data, a simple and widely collected primate social behavior, from adult rhesus macaques on Cayo Santiago. Our analysis shows that proximity measures are consistent across repeated measurements on individuals (repeatable) and that kin have similar mean measurements (heritable). Quantitative genetics may hold lessons of considerable importance for studies of primate behavior, even those without a specific genetic focus.
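The repeatability result above is commonly computed as an intraclass correlation from a one-way ANOVA; the sketch below uses that simplified form with hypothetical proximity scores (not the Cayo Santiago data, and not the mixed-model machinery the full analysis would use).

```python
def repeatability(groups):
    """One-way ANOVA repeatability (intraclass correlation) for a
    balanced design: `groups` maps each individual to its k repeated
    measurements. R = (MS_among - MS_within) / (MS_among + (k-1) MS_within)."""
    k = len(next(iter(groups.values())))
    a = len(groups)
    grand = sum(sum(g) for g in groups.values()) / (a * k)
    ss_among = k * sum((sum(g) / k - grand) ** 2 for g in groups.values())
    ss_within = sum(sum((x - sum(g) / k) ** 2 for x in g)
                    for g in groups.values())
    ms_among = ss_among / (a - 1)
    ms_within = ss_within / (a * (k - 1))
    return (ms_among - ms_within) / (ms_among + (k - 1) * ms_within)

# hypothetical proximity scores: three repeated measures per individual
obs = {"A": [0.30, 0.32, 0.31], "B": [0.10, 0.12, 0.11], "C": [0.50, 0.49, 0.52]}
r = repeatability(obs)   # close to 1: individuals are highly consistent
```

Repeatability estimated this way sets an upper bound on narrow-sense heritability, which is why it is usually reported alongside the animal-model estimates.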
Methods for model selection in applied science and engineering.
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
The Lattice Boltzmann Method applied to neutron transport
Erasmus, B.; Van Heerden, F. A.
2013-07-01
In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport and the results are compared to a reference solution calculated using MCNP. (authors)
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk, in this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for symmetric, non-symmetric and indefinite problems, were demonstrated before [1,2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present the advances of the application of this method for the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras Iván "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)
About the method of investigation of applied unstable process
NASA Astrophysics Data System (ADS)
Romanova, O. V.; Sapega, V. F.
2003-04-01
Samples of Late Proterozoic (Riphean) rocks from the Arkhangelsk, Jaroslav and Leningrad regions were prepared by the developed sample-preparation method and examined by X-ray analysis. The presence of a mantle fluid process had previously been established in some of the samples (injecting tuffizites) (Kazak, Jakobsson, 1999). Unchanged Riphean rocks contain a set of low-temperature minerals such as illite, chlorite, vermiculite and goethite, indicating conditions of diagenesis at temperatures below 300 °C. The presence of corrensite, rectorite and illite-montmorillonite indicates that a low-temperature post-diagenetic process affected the original sedimentary rock. At the same time, the rocks involved in the fluid process contain minerals such as olivine, pyrope and graphite, indicating a high-temperature process of no less than 650-800 °C. Within these samples a set of low-temperature minerals also occurs, demonstrating the short duration and disequilibrium of the applied high-temperature process. Implementation of the X-ray method therefore provides an unambiguous criterion for establishing the fluid process, which as a rule is coupled with the development of kimberlite rock fields.
NASA Astrophysics Data System (ADS)
Vasilchenko, V. E.; Kharintsev, S. S.; Salakhov, M. Kh
2013-12-01
This paper presents a modified dc-pulsed low-voltage electrochemical etching method in which the duty cycle is self-tuned during etching. A higher yield of gold tips suitable for performing tip-enhanced Raman scattering (TERS) measurements is demonstrated. The improvement is caused by self-control of the etching rate along the full surface of the tip. The capability of the gold tips to enhance a Raman signal is exemplified by TERS spectroscopy of a single-walled carbon nanotube bundle, sulfur and vanadium oxide.
"Influence Method" applied to measure a moderated neutron flux
NASA Astrophysics Data System (ADS)
Rios, I. J.; Mayer, R. E.
2016-01-01
The "Influence Method" is conceived for the absolute determination of a nuclear particle flux in the absence of known detector efficiency. This method exploits the influence of the presence of one detector on the count rate of another detector when they are placed one behind the other, and defines statistical estimators for the absolute number of incident particles and for the efficiency. The method and its detailed mathematical description were recently published (Rios and Mayer, 2015 [1]). In this article we apply it to the measurement of the moderated neutron flux produced by an 241AmBe neutron source surrounded by a light water sphere, employing a pair of 3He detectors. For this purpose, the method is extended for application where particles arriving at the detector obey a Poisson distribution, and also for the case where efficiency is not constant over the energy spectrum of interest. Experimental distributions and derived parameters are compared with theoretical predictions of the method, and implications concerning the potential application to the absolute calibration of neutron sources are considered.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
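The combination of Gauss-Seidel sweeps with extrapolation based on previous iterants can be sketched generically; the componentwise Aitken delta-squared step below is one standard extrapolation choice, not necessarily the report's exact formula, and the safeguard bound plays the role of the report's limits on extrapolation magnitude.

```python
def gauss_seidel_aitken(a, b, x0, sweeps=300, extrapolate_every=5):
    """Gauss-Seidel iteration for A x = b, accelerated every few sweeps
    by a componentwise Aitken delta-squared extrapolation that estimates
    each component's limit from its last three iterates."""
    n = len(b)
    x = list(x0)
    history = []
    for sweep in range(1, sweeps + 1):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
        history.append(list(x))
        if sweep % extrapolate_every == 0 and len(history) >= 3:
            x2, x1, x_old = history[-1], history[-2], history[-3]
            for i in range(n):
                d1 = x1[i] - x_old[i]
                d2 = x2[i] - x1[i]
                denom = d2 - d1
                # safeguard: bound the extrapolation magnitude
                if abs(denom) > 1e-14 and abs(d2) < 20 * abs(denom):
                    x[i] = x2[i] - d2 * d2 / denom
    return x

# 1-D diffusion-like test system: tridiagonal (-1, 2, -1), unit source
n = 10
a = [[0.0] * n for _ in range(n)]
for i in range(n):
    a[i][i] = 2.0
    if i > 0:
        a[i][i - 1] = -1.0
    if i < n - 1:
        a[i][i + 1] = -1.0
b = [1.0] * n
x = gauss_seidel_aitken(a, b, [0.0] * n)
# exact solution of this discrete problem: x_i = (i + 1)(n - i) / 2
```

When the error is dominated by the slowest-decaying mode, Aitken's formula removes that mode in one step, which is exactly the regime where plain Gauss-Seidel stalls.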
Where do Students Go Wrong in Applying the Scientific Method?
NASA Astrophysics Data System (ADS)
Rubbo, Louis; Moore, Christopher
2015-04-01
Non-science majors completing a liberal arts degree are frequently required to take a science course. Ideally with the completion of a required science course, liberal arts students should demonstrate an improved capability in the application of the scientific method. In previous work we have demonstrated that this is possible if explicit instruction is spent on the development of scientific reasoning skills. However, even with explicit instruction, students still struggle to apply the scientific process. Counter to our expectations, the difficulty is not isolated to a single issue such as stating a testable hypothesis, designing an experiment, or arriving at a supported conclusion. Instead students appear to struggle with every step in the process. This talk summarizes our work looking at and identifying where students struggle in the application of the scientific method. This material is based upon work supported by the National Science Foundation under Grant No. 1244801.
NASA Astrophysics Data System (ADS)
Saito, Tatsuhito; Kondo, Keiichiro; Koseki, Takafumi
A DC-electrified railway system that is fed by diode rectifiers at a substation is unable to return electric power to the AC grid. Accordingly, braking cars have to restrict regenerative braking power when the power consumption of the powering cars is not sufficient. However, the characteristics of a DC-electrified railway system, including the powering cars, are not known, and a mathematical model for designing a controller has not yet been established. Hence, the objective of this study is to obtain a mathematical model for an analytical design method for the regenerative braking control system. The first part of this paper presents the static characteristics of the system to show the position of the equilibrium point. The system is then linearized at the equilibrium point to describe its dynamic characteristics. An analytical design method is then proposed on the basis of these characteristics. The proposed design method is verified by experimental tests with a 1 kW-class miniature model and by numerical simulations.
Matched-filtering line search methods applied to Suzaku data
NASA Astrophysics Data System (ADS)
Miyazaki, Naoto; Yamada, Shin'ya; Enoto, Teruaki; Axelsson, Magnus; Ohashi, Takaya
2016-10-01
A detailed search for emission and absorption lines and an assessment of their upper limits are performed for Suzaku data. The method utilizes a matched-filtering approach to maximize the signal-to-noise ratio for a given energy resolution, which could be applicable to many types of line search. We first applied it to well-known active galactic nuclei spectra that have been reported to have ultra-fast outflows, and find that our results are consistent with previous findings at the ˜3σ level. We proceeded to search for emission and absorption features in two bright magnetars 4U 0142+61 and 1RXS J1708-4009, applying the filtering method to Suzaku data. We found that neither source showed any significant indication of line features, even using long-term Suzaku observations or dividing their spectra into spin phases. The upper limits on the equivalent width of emission/absorption lines are constrained to be a few eV at ˜1 keV and a few hundreds of eV at ˜10 keV. This strengthens previous reports that persistently bright magnetars do not show proton cyclotron absorption features in soft X-rays and that, even if such features exist, they would be broadened or fall below the detection limit of X-ray CCDs.
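A matched-filtering line search of this kind can be sketched on synthetic data; the template width, line amplitude, and channel grid below are illustrative assumptions, not Suzaku parameters.

```python
import math
import random

def matched_filter_scan(spectrum, template):
    """Slide a unit-normalized line template across a residual spectrum
    and return the filter response at each position; the response is
    maximized where the data best match the template shape."""
    m = len(template)
    tnorm = math.sqrt(sum(t * t for t in template))
    return [sum(spectrum[i + j] * template[j] for j in range(m)) / tnorm
            for i in range(len(spectrum) - m + 1)]

random.seed(2)
n = 400
sigma_line = 3.0            # line width in channels (matches the filter)
center = 250
spectrum = [random.gauss(0.0, 1.0) for _ in range(n)]
for i in range(n):          # inject a Gaussian emission line at channel 250
    spectrum[i] += 8.0 * math.exp(-0.5 * ((i - center) / sigma_line) ** 2)

half = 12                   # template half-width in channels
template = [math.exp(-0.5 * (k / sigma_line) ** 2)
            for k in range(-half, half + 1)]
resp = matched_filter_scan(spectrum, template)
best = max(range(len(resp)), key=lambda i: resp[i])
line_channel = best + half  # convert window start to line center
```

Because the template is matched to the instrumental line width, the filter response at the true line position grows with the line's total counts while the noise response stays at unit variance, which is the signal-to-noise maximization the abstract describes.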
Doshi, J B; Ravetkar, S D; Ghole, V S; Rehani, K
2003-09-01
DPT, a combination vaccine against diphtheria, tetanus and pertussis, has been available for many years and is still included in the national immunisation schedules of many countries. Although highly potent, reactions to DPT vaccine are well known, mainly attributed to factors such as the pertussis component, the aluminium adjuvant and the lower purity of the tetanus and diphtheria toxoids. The last, most important aspect has become a matter of concern, especially for the preparation of next-generation combination vaccines with a greater number of antigens combined with DPT. Purity of a toxoid is expressed as Lf (Limes flocculation) per mg of protein nitrogen. The Kjeldahl method (KM) of protein nitrogen estimation suggested by WHO and the British Pharmacopoeia is time consuming and less specific. The need has been felt to explore an alternative method that is quicker and more specific for toxoid protein determination. The DC (detergent compatible) protein assay, an improved Lowry method, has been found to be much more advantageous than the Kjeldahl method.
ERIC Educational Resources Information Center
Dynarski, Mark; Betts, Julian; Feldman, Jill
2016-01-01
The DC Opportunity Scholarship Program (OSP), established in 2004, is the only federally-funded private school voucher program for low-income parents in the United States. This evaluation brief describes findings using data from more than 2,000 applicants' parents, who applied to the program from spring 2011 to spring 2013 following…
NASA Astrophysics Data System (ADS)
P, Rajeeva M.; S, Naveen C.; Lamani, Ashok R.; Bothla, V. Prasad; Jayanna, H. S.
2015-06-01
Nanocrystalline tin oxide (SnO2) materials of different particle sizes were synthesized by a gel combustion method, varying the oxidizer (HNO3) while keeping the fuel constant. The prepared samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDAX). The effect of the oxidizer in the gel combustion method was investigated by inspecting the particle size of the nano-SnO2 powder. The particle size was found to increase as the oxidizer was increased from 8 to 12 moles. The X-ray diffraction patterns of the calcined product showed the formation of high-purity tetragonal tin (IV) oxide with particle sizes in the range of 17 to 31 nm, calculated by Scherrer's formula. The particle-size and temperature dependence of the direct-current (DC) electrical conductivity of the SnO2 nanomaterial was studied using a Keithley source meter. The DC electrical conductivity of the SnO2 nanomaterial increases with temperature from 80 to 300 K and decreases with particle size at constant temperature.
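The Scherrer crystallite-size calculation used above is a one-line formula; the peak position and width below are illustrative values for an SnO2 reflection, not the paper's measured data.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from XRD peak broadening via the Scherrer
    equation D = K * lambda / (beta * cos(theta)), where beta is the
    peak FWHM in radians and theta is the Bragg angle (half of 2-theta).
    Defaults assume Cu K-alpha radiation and shape factor K = 0.9."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# illustrative: the SnO2 (110) reflection near 2-theta = 26.6 deg
size_nm = scherrer_size(26.6, 0.45)   # roughly 18 nm
```

Note that instrumental broadening should be subtracted from the measured FWHM before applying the formula; otherwise the crystallite size is underestimated.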
P, Rajeeva M.; S, Naveen C.; Lamani, Ashok R.; Jayanna, H. S.; Bothla, V Prasad
2015-06-24
Nanocrystalline tin oxide (SnO2) materials of different particle sizes were synthesized by a gel combustion method, varying the oxidizer (HNO3) while keeping the fuel constant. The prepared samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDAX). The effect of the oxidizer in the gel combustion method was investigated by inspecting the particle size of the nano-SnO2 powder. The particle size was found to increase as the oxidizer was increased from 8 to 12 moles. The X-ray diffraction patterns of the calcined product showed the formation of high-purity tetragonal tin (IV) oxide with particle sizes in the range of 17 to 31 nm, calculated by Scherrer's formula. The particle-size and temperature dependence of the direct-current (DC) electrical conductivity of the SnO2 nanomaterial was studied using a Keithley source meter. The DC electrical conductivity of the SnO2 nanomaterial increases with temperature from 80 to 300 K and decreases with particle size at constant temperature.
Six Sigma methods applied to cryogenic coolers assembly line
NASA Astrophysics Data System (ADS)
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler: the RM2. The name of the project is NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective has been set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team has been gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) has been applied to the test bench, and results after an R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes have been identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability have been identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been set up after a new calibration process for the test bench, a new filling procedure for the regenerator and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvements in process capability have enabled the introduction of a sample testing procedure before delivery.
THE EXOPLANET CENSUS: A GENERAL METHOD APPLIED TO KEPLER
Youdin, Andrew N.
2011-11-20
We develop a general method to fit the underlying planetary distribution function (PLDF) to exoplanet survey data. This maximum likelihood method accommodates more than one planet per star and any number of planet or target star properties. We apply the method to announced Kepler planet candidates that transit solar-type stars. The Kepler team's estimates of the detection efficiency are used and are shown to agree with theoretical predictions for an ideal transit survey. The PLDF is fit to a joint power law in planet radius, down to 0.5 R⊕, and orbital period, up to 50 days. The estimated number of planets per star in this sample is ~0.7-1.4, where the range covers systematic uncertainties in the detection efficiency. To analyze trends in the PLDF we consider four planet samples, divided between shorter and longer periods at 7 days and between large and small radii at 3 R⊕. The size distribution changes appreciably between these four samples, revealing a relative deficit of ~3 R⊕ planets at the shortest periods. This deficit is suggestive of preferential evaporation and sublimation of Neptune- and Saturn-like planets. If the trend and explanation hold, it would be spectacular observational support of the core accretion and migration hypotheses, and would allow refinement of these theories.
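To give a flavor of the maximum likelihood machinery in one dimension: for a pure power law p(x) ∝ x^(−α) above a cutoff, the ML estimate of α has a closed form. The paper fits a joint radius-period power law with detection efficiencies folded in; this stripped-down Pareto estimator is only illustrative:

```python
import math

def powerlaw_index_mle(values, vmin):
    """Maximum likelihood estimate of alpha for p(x) ~ x**(-alpha),
    x >= vmin (continuous Pareto form):
        alpha_hat = 1 + n / sum(ln(x_i / vmin))
    A 1-D simplification of fitting a planetary distribution function."""
    logs = [math.log(v / vmin) for v in values if v >= vmin]
    return 1.0 + len(logs) / sum(logs)
```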
Integral wave-migration method applied to electromagnetic data
Bartel, L.C.
1994-12-31
Migration of the electromagnetic (EM) wave field will be discussed as a solution of the wave equation in which surface magnetic field measurements are the known boundary values. This approach is similar to classical optical diffraction theory. Here, data are taken on an aperture, migrated (extrapolated), and deconvolved with a source function. The EM image is formed when the imaginary part of the Fourier-transformed migrated field at time zero is zero or at least a minimum. The integral formulation for migration is applied to model data for surface magnetic fields calculated for a grounded, vertical electric source (VES). The conductivity structure is determined by comparing the measured migrated fields to migrated fields calculated for a yet-to-be-determined conductivity structure. This comparison results in solving a Fredholm integral equation of the first kind for the conductivity structure. Solutions are obtained using the conjugate gradient method. The imaging method used here is similar to the EM holographic method reported earlier, except that here the magnitudes, as well as the phases, of the extrapolated fields are preserved so that material properties can be determined.
Understanding the impulse response method applied to concrete bridge decks
NASA Astrophysics Data System (ADS)
Clem, D. J.; Popovics, J. S.; Schumacher, T.; Oh, T.; Ham, S.; Wu, D.
2013-01-01
The Impulse Response (IR) method is a well-established form of non-destructive testing (NDT) in which the dynamic response of an element to an impact event (hammer blow) is measured with a geophone to draw conclusions about the element's integrity, stiffness, and/or support conditions. The existing ASTM Standard C1740-10 prescribes a set of parameters that can be used to evaluate the conditions above. These parameters are computed from the so-called `mobility' spectrum, which is obtained by dividing the measured bridge deck response by the measured impact force in the frequency domain. While applying the test method in the laboratory as well as on an actual in-service concrete bridge deck, the authors observed several limitations, which are presented and discussed in this paper. In order to better understand the underlying physics of the IR method, a Finite Element (FE) model was created. Parameters prescribed in the Standard were then computed from the FE data and are discussed. One main limitation appears to be the use of a fixed upper frequency of 800 Hz: test data from the real bridge deck as well as the FE model both show that most energy is found above that limit. This paper presents and discusses the limitations of the ASTM Standard found by the authors and suggests ways of improving it.
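The mobility spectrum at the heart of the IR method is the frequency-domain ratio of the measured response to the impact force. A minimal sketch of that computation (function and variable names are ours, not from ASTM C1740-10):

```python
import numpy as np

def mobility_spectrum(force, velocity, fs):
    """Mobility |V(f) / F(f)| from an impact-force record and the
    geophone (velocity) response, both sampled at fs Hz.
    Returns (frequencies, mobility magnitude)."""
    n = len(force)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    F = np.fft.rfft(force)
    V = np.fft.rfft(velocity)
    eps = 1e-12  # guard against division by zero in force bins
    return freqs, np.abs(V) / (np.abs(F) + eps)
```

With an idealized unit-impulse force, the mobility simply reproduces the response spectrum, peaking at the structure's resonant frequency.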
Method for applying photographic resists to otherwise incompatible substrates
NASA Technical Reports Server (NTRS)
Fuhr, W. (Inventor)
1981-01-01
A method for applying photographic resists to otherwise incompatible substrates, such as a baking enamel paint surface, is described, wherein the uncured enamel paint surface is coated with a non-curing lacquer which is, in turn, coated with a partially cured lacquer. The non-curing lacquer adheres to the enamel, and a photoresist material satisfactorily adheres to the partially cured lacquer. Once normal photoetching techniques are employed, the lacquer coats can be easily removed from the enamel, leaving the photoetched image. In the case of edge-lighted instrument panels, a coat of uncured enamel is placed over the cured enamel, followed by the lacquer coats and the photoresist, which is exposed and developed. Once the etched uncured enamel is cured, the lacquer coats are removed, leaving an etched panel.
Hargrove, Douglas L.
2004-09-14
A portable, hand-held meter used to measure direct current (DC) attenuation in low-impedance electrical signal cables and signal attenuators. A DC voltage is applied to the signal input of the cable and fed back to the control circuit through the signal cable and attenuators. The control circuit adjusts the voltage applied to the cable until the feedback voltage equals the reference voltage. The number of "units" of applied voltage required at the cable input is the system attenuation value of the cable and attenuators, which makes this meter unique. The meter may be used to calibrate data signal cables, attenuators, and cable-attenuator assemblies.
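The control loop described above can be sketched as a simple integrating servo: keep adjusting the applied DC voltage until the voltage fed back through the cable and attenuators matches the reference; the applied-to-reference ratio is then the system attenuation. A toy model, with the whole cable-plus-attenuator chain reduced to one hypothetical gain factor:

```python
def attenuation_units(cable_gain, v_ref=1.0, loop_gain=1.0, steps=2000):
    """Servo-loop sketch of the meter. `cable_gain` (< 1) stands in for
    the unknown end-to-end voltage gain of the cable and attenuators.
    Returns applied/reference voltage, i.e. the attenuation in 'units'."""
    v_applied = 0.0
    for _ in range(steps):
        v_feedback = cable_gain * v_applied        # signal after the cable
        v_applied += loop_gain * (v_ref - v_feedback)  # integrate the error
    return v_applied / v_ref
```

At convergence the feedback equals the reference, so the returned ratio is 1/cable_gain, exactly the attenuation the meter reads out.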
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
ERIC Educational Resources Information Center
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Random-breakage mapping method applied to human DNA sequences
NASA Technical Reports Server (NTRS)
Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)
1996-01-01
The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single-copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distances of the hybridization site from each end of the restriction fragment. By analyzing the positions of those discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded similar smear patterns, this uncertainty is removed by the random-breakage mapping method.
Urban drainage control applying rational method and geographic information technologies
NASA Astrophysics Data System (ADS)
Aldalur, Beatriz; Campo, Alicia; Fernández, Sandra
2013-09-01
The objective of this study is to develop a method of controlling urban drainage in the town of Ingeniero White, motivated by the problems arising as a result of floods, waterlogging, and the combination of southeasterly winds and high tides. The Rational Method was applied to the urban watersheds, together with tools of Geographic Information Technology (GIT). A Geographic Information System was developed on the basis of 28 panchromatic aerial photographs from 2005, georeferenced with control points measured with Global Positioning Systems (basin: 6 km2). Flow rates of basins and sub-basins were calculated, and it was verified that the existing open channels have a low slope with the presence of permanent water, generating stagnation of water favored by the presence of trash. For the output of storm drains, the use of an existing channel to evacuate the flow is proposed. The solution proposed in this work is complemented by the placement of three pumping stations: one on a channel to drain rain water, which will allow draining the excess water from the lower area where the city of Ingeniero White is located, and two others to drain the excess liquid from the port area.
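The Rational Method reduces each (sub-)basin to the peak-flow formula Q = C·i·A. In metric units, with rainfall intensity i in mm/h and drainage area A in km², a factor of 3.6 converts the product to m³/s. A sketch of that calculation (the runoff coefficient and intensity below are illustrative, not values from the Ingeniero White study):

```python
def rational_method_peak_flow(runoff_coeff, intensity_mm_per_h, area_km2):
    """Peak runoff Q in m^3/s from the Rational Method, Q = C*i*A.
    The 3.6 factor converts (mm/h * km^2) to m^3/s."""
    return runoff_coeff * intensity_mm_per_h * area_km2 / 3.6
```

For instance, a 6 km² basin (the study's basin size) with C = 0.5 under a 36 mm/h design storm would yield a 30 m³/s peak flow.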
NASA Technical Reports Server (NTRS)
Mclyman, C. W.
1983-01-01
Compact dc/dc inverter uses single integrated-circuit package containing six inverter gates that generate and amplify 100-kHz square-wave switching signal. Square-wave switching inverts 10-volt local power to isolated voltage at another desired level. Relatively high operating frequency reduces size of filter capacitors required, resulting in small package unit.
Early Oscillation Detection for DC/DC Converter Fault Diagnosis
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2011-01-01
The electrical power system of a spacecraft plays a very critical role in space mission success. Such a modern power system may contain numerous hybrid DC/DC converters, both inside the power system electronics (PSE) units and onboard most of the flight electronics modules. One of the faulty conditions of DC/DC converters that poses serious threats to mission safety is the random occurrence of oscillation, related to inherent instability characteristics of the DC/DC converters and design deficiencies of the power systems. To ensure the highest reliability of the power system, oscillations in any form shall be promptly detected during part-level testing, system integration tests, flight health monitoring, and on-board fault diagnosis. The popular gain/phase margin analysis method is capable of predicting stability levels of DC/DC converters, but it is limited to verification of designs and to part-level testing on some of the models. This method has to inject noise signals into the control loop circuitry, thus interrupting the DC/DC converter's normal operation and increasing the risk of degrading and damaging the flight unit. A novel technique to detect oscillations at an early stage in flight hybrid DC/DC converters was developed.
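One generic way to flag converter oscillation from output-ripple telemetry, without injecting a test signal into the control loop, is to look for a single spectral line that dominates the noise floor. This is a plausible stand-in for illustration only, not the paper's actual technique:

```python
import numpy as np

def detect_oscillation(signal, fs, threshold=10.0):
    """Flag an oscillation when one spectral peak exceeds `threshold`
    times the median spectral level of the (mean-removed) signal.
    Returns (is_oscillating, peak_frequency_hz)."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    floor = np.median(spec[1:]) + 1e-12   # robust noise-floor estimate
    peak = np.argmax(spec[1:]) + 1        # skip the DC bin
    return (spec[peak] / floor) > threshold, freqs[peak]
```

Broadband switching noise keeps the peak-to-median ratio small, while a sustained instability tone pushes it far above the threshold.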
Advanced Signal Processing Methods Applied to Digital Mammography
NASA Technical Reports Server (NTRS)
Stauduhar, Richard P.
1997-01-01
without further support. Task 5: Better modeling does indeed make an improvement in the detection output. After the proposal ended, we came up with some new theoretical explanations that help in understanding when the D4 filter should be better. This work is currently in the review process. Task 6: N/A. This no longer applies in view of Tasks 4-5. Task 7: Comprehensive plans for further work have been completed. These plans are the subject of two proposals, one to NASA and one to HHS. These proposals represent plans for a complete evaluation of the methods for identifying normal mammograms, augmented with significant further theoretical work.
An introduction to quantum chemical methods applied to drug design.
Stenta, Marco; Dal Peraro, Matteo
2011-06-01
The advent of molecular medicine allowed identifying the malfunctioning of subcellular processes as the source of many diseases. Since then, drugs are not only discovered, but actually designed to fulfill a precise task. Modern computational techniques, based on molecular modeling, play a relevant role both in target identification and drug lead development. By flanking and integrating standard experimental techniques, modeling has proven itself as a powerful tool across the drug design process. The success of computational methods depends on a balance between cost (computation time) and accuracy. Thus, the integration of innovative theories and more powerful hardware architectures allows molecular modeling to be used as a reliable tool for rationalizing the results of experiments and accelerating the development of new drug design strategies. We present an overview of the most common quantum chemistry computational approaches, providing for each one a general theoretical introduction to highlight limitations and strong points. We then discuss recent developments in software and hardware resources, which have allowed state-of-the-art of computational quantum chemistry to be applied to drug development.
Applying sociodramatic methods in teaching transition to palliative care.
Baile, Walter F; Walters, Rebecca
2013-03-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented.
NASA Astrophysics Data System (ADS)
Feng, J. J.; Yan, P. X.; Yang, Q.; Chen, J. T.; Yan, D.
2008-10-01
High-yield preparation of polycrystalline Si nanotubes (SiNTs) filled with single-crystal Sn was achieved by the DC arc discharge method. The Sn/Si nanocables were identified by X-ray diffraction (XRD), field-emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM) and photoluminescence (PL). The results show that the Sn/Si coaxial nanocables have homogeneous diameters of about 20-30 nm and lengths ranging from several tens to several hundreds of nanometers. Most of them are composed of an oval-shaped tip and a tapered hollow body. The possible growth mechanism is the vapor-liquid-solid (VLS) model. The PL spectrum shows two characteristic emissions, at 491 nm (blue emission) and 572 nm (yellow emission). The origin of the luminescence is also discussed.
NASA Astrophysics Data System (ADS)
Li, Bin; Zhang, Qin-Jian; Shi, Yan-Chao; Li, Jia-Jun; Li, Hong; Lu, Fan-Xiu; Chen, Guang-Chao
2014-08-01
A nano-crystalline diamond film is grown by the dc arcjet chemical vapor deposition method. The film is characterized by scanning electron microscopy, high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD) and Raman spectroscopy, respectively. The nanocrystalline grains have an average size of 80 nm as measured by XRD, further confirmed by Raman and HRTEM. The observed novel pineapple-like morphology of the growth surface is constructed from cubo-octahedral growth zones with a smooth faceted top surface and coarse side surfaces. The as-grown film possesses a (100)-dominant surface containing a little amorphous sp2 component, far different from nano-crystalline films with the usual cauliflower-like morphology.
Naveen, C. S.; Jayanna, H. S.; Lamani, Ashok R.; Rajeeva, M. P.
2014-04-24
ZnO nanoparticles of different sizes were prepared by varying the molar ratio of glycine and zinc nitrate hexahydrate as fuel and oxidizer (F/O = 0.8, 1.11, 1.7) in a simple solution combustion method. The powder samples were characterized by UV-visible spectrophotometry, X-ray diffractometry and scanning electron microscopy (SEM). DC electrical conductivity measurements at room temperature and in the temperature range of 313-673 K were carried out on the prepared thick films; the conductivity was found to increase with temperature, which confirms the semiconducting nature of the samples. Activation energies were calculated, and it was found that the F/O molar ratio of 1.7 has a low E{sub AL} (low-temperature activation energy) and a high E{sub AH} (high-temperature activation energy) compared with the other samples.
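The activation energies follow from an Arrhenius fit, σ = σ₀·exp(−Ea/(kB·T)), so Ea is read off the slope of ln σ versus 1/T over the chosen temperature range. A sketch of that regression (the data in the test are synthetic, not the ZnO measurements):

```python
import math

def activation_energy_eV(temps_K, conductivities):
    """Activation energy (eV) from a least-squares fit of
    ln(sigma) = ln(sigma0) - Ea / (k_B * T)  over the given range."""
    k_B = 8.617333e-5  # Boltzmann constant in eV/K
    x = [1.0 / t for t in temps_K]
    y = [math.log(s) for s in conductivities]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * k_B  # slope is -Ea/k_B
```

Fitting the low- and high-temperature segments separately yields the two quantities the abstract calls E_AL and E_AH.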
Neural network method applied to particle image velocimetry
NASA Astrophysics Data System (ADS)
Grant, Ian; Pan, X.
1993-12-01
realised. An important class of neural network is the multi-layer perceptron. The neurons are distributed on surfaces and linked by weighted interconnections. In the present paper we demonstrate how this type of net can be developed into a competitive, adaptive filter which will identify PIV image pairs in a number of commonly occurring flow types. Previous work by the authors in particle tracking analysis (1, 2) has shown the efficiency of statistical windowing techniques in flows without systematic (in time or space) variations. The effectiveness of the present neural net is illustrated by applying it to digital simulations of turbulent and rotating flows. Work reported by Cenedese et al (3) has taken a different approach in examining the potential for neural net methods applied to PIV.
Technology Transfer Automated Retrieval System (TEKTRAN)
A comprehensive method for Vitamin D analysis has been developed by using the best aspects of currently available published methods. The comprehensive method can be applied to a wide range of food samples including dry breakfast cereal, diet supplement drinks, powdered infant formula, cheese and ot...
A GIS modeling method applied to predicting forest songbird habitat
Dettmers, Randy; Bart, Jonathan
1999-01-01
We have developed an approach for using "presence" data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be applied over large areas. Our method consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators "and" and "or." We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this method by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models. The performance of these
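The first step, maximizing the difference between the presence and study-area cumulative distribution functions, can be cast as finding the value interval over which the presence CDF gains most on the background CDF. A small sketch with hypothetical data (not the Ohio songbird variables):

```python
def optimal_range(presence, background):
    """Value range (lo, hi] of one habitat variable maximizing the gain
    of F_presence over F_background, i.e. where presence observations
    concentrate relative to the whole study area. Sketch of step 1 of
    the presence-data modeling approach."""
    thresholds = sorted(set(presence) | set(background))

    def cdf(data, t):
        return sum(1 for v in data if v <= t) / len(data)

    best_gain, lo, hi = -1.0, thresholds[0], thresholds[0]
    run_min_val, run_min_t = 0.0, thresholds[0] - 1  # D = 0 before any data
    for t in thresholds:
        d = cdf(presence, t) - cdf(background, t)    # CDF difference D(t)
        if d - run_min_val > best_gain:              # best D(hi) - D(lo) so far
            best_gain, lo, hi = d - run_min_val, run_min_t, t
        if d < run_min_val:
            run_min_val, run_min_t = d, t
    return lo, hi
```

Tracking the running minimum of the CDF difference makes this a single pass, the same trick used for maximum-subarray problems.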
Nessa, Fazilatun; Ismail, Zhari; Karupiah, Sundram; Mohamed, Nornisah
2005-09-01
A selective and sensitive reversed-phase (RP) high-performance liquid chromatographic method is developed for the quantitative analysis of five naturally occurring flavonoids of Blumea balsamifera DC, namely dihydroquercetin-7,4'-dimethyl ether (DQDE), blumeatin (BL), quercetin (QN), 5,7,3',5'-tetrahydroxyflavanone (THFE), and dihydroquercetin-4'-methyl ether (DQME). These compounds have been isolated using various chromatographic methods. The five compounds are completely separated within 35 min using an RP C18, Nucleosil column and with an isocratic methanol-0.5% phosphoric acid (50:50, v/v) mobile phase at the flow rate of 0.9 mL/min. The separation of the compounds is monitored at 285 nm using UV detection. Identifications of specific flavonoids are made by comparing their retention times with those of the standards. Reproducibility of the method is good, with coefficients of variation of 1.48% for DQME, 2.25% for THFE, 2.31% for QN, 2.23% for DQDE, and 1.51% for BL. The average recoveries of pure flavonoids upon addition to lyophilized powder and subsequent extraction are 99.8% for DQME, 99.9% for THFE, 100.0% for BL, 100.6% for DQDE, and 97.4% for QN. PMID:16212782
A Review of System Identification Methods Applied to Aircraft
NASA Technical Reports Server (NTRS)
Klein, V.
1983-01-01
Airplane identification, equation error method, maximum likelihood method, parameter estimation in frequency domain, extended Kalman filter, aircraft equations of motion, aerodynamic model equations, criteria for the selection of a parsimonious model, and online aircraft identification are addressed.
Genualdi, Susie; MacMahon, Shaun; Robbins, Katherine; Farris, Samantha; Shyong, Nicole; DeJager, Lowri
2016-01-01
Sudan I, II, III and IV dyes are banned for use as food colorants in the United States and European Union because they are toxic and carcinogenic. These dyes have been illegally used as food additives in products such as chilli spices and palm oil to enhance their red colour. From 2003 to 2005, the European Union made a series of decisions requiring chilli spices and palm oil imported to the European Union to contain analytical reports declaring them free of Sudan I–IV. In order for the USFDA to investigate the adulteration of palm oil and chilli spices with unapproved colour additives in the United States, a method was developed for the extraction and analysis of Sudan dyes in palm oil, and previous methods were validated for Sudan dyes in chilli spices. Both LC-DAD and LC-MS/MS methods were examined for their limitations and effectiveness in identifying adulterated samples. Method validation was performed for both chilli spices and palm oil by spiking samples known to be free of Sudan dyes at concentrations close to the limit of detection. Reproducibility, matrix effects, and selectivity of the method were also investigated. Additionally, for the first time a survey of palm oil and chilli spices was performed in the United States, specifically in the Washington, DC, area. Illegal dyes, primarily Sudan IV, were detected in palm oil at concentrations from 150 to 24 000 ng ml−1. Low concentrations (< 21 μg kg−1) of Sudan dyes were found in 11 out of 57 spices and are most likely a result of cross-contamination during preparation and storage and not intentional adulteration. PMID:26824489
Optimal Scheduling Method of Controllable Loads in DC Smart Apartment Building
NASA Astrophysics Data System (ADS)
Shimoji, Tsubasa; Tahara, Hayato; Matayoshi, Hidehito; Yona, Atsushi; Senjyu, Tomonobu
2015-12-01
From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources, such as solar collectors (SCs) and photovoltaic generation (PV), have been gaining attention worldwide. Houses or buildings with PV and heat pumps (HPs) have recently come into wide use in residential areas due to the time-of-use (TOU) electricity pricing scheme, which is essentially inexpensive during the middle of the night and expensive during the daytime. If fixed batteries and electric vehicles (EVs) can be introduced on the premises, the electricity cost can be reduced even further. However, if the occupants use these controllable loads arbitrarily, power demand in residential buildings may fluctuate in the future. Thus, the operation of controllable loads such as HPs, batteries and EVs should be optimally scheduled in buildings in order to prevent the power flow from fluctuating rapidly. This paper proposes an optimal scheduling method for controllable loads whose purpose is not only the minimization of electricity cost for the consumers, but also the suppression of fluctuations of power flow on the power supply side. Furthermore, a novel electricity pricing scheme is also suggested in this paper.
NASA Astrophysics Data System (ADS)
Walker, E.; Glover, P. W. J.; Ruel, J.
2014-02-01
High-quality streaming potential coupling coefficient measurements have been carried out using a newly designed cell with both a steady-state methodology and a new pressure-transient approach. The pressure-transient approach has shown itself to be particularly good at providing high-quality streaming potential coefficient measurements, as each transient increase or decrease allows thousands of measurements to be made at different pressures, to which a good linear regression can be fitted. Moreover, the transient method can be up to 5 times as fast as conventional measurement approaches because data from all flow rates are taken in the same transient measurement rather than separately. Test measurements have been made on samples of Berea and Boise sandstone as a function of salinity (approximately 18 salinities between 10-5 mol/dm3 and 2 mol/dm3). The data have also been inverted to obtain the zeta potential. The streaming potential coefficient becomes greater (more negative) for fluids with lower salinities, which is consistent with existing measurements. Our measurements are also consistent with the high-salinity streaming potential coefficient measurements made by Vinogradov et al. (2010). Both the streaming potential coefficient and the zeta potential have also been modeled using the theoretical approach of Glover (2012). This modeling allows the microstructural, electrochemical, and fluid properties of the saturated rock to be taken into account in order to provide a relationship that is unique to each particular rock sample. In all cases, we found that the experimental data were a good match to the theoretical model.
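The core data-reduction step, fitting a straight line to the (pressure, voltage) pairs collected during a transient, yields the coupling coefficient C = dV/dP. A minimal sketch of that regression (the numbers in the test are made up, not the Berea/Boise data):

```python
def coupling_coefficient(pressures_Pa, voltages_V):
    """Streaming potential coupling coefficient C = dV/dP (V/Pa) from an
    ordinary least-squares slope over the (P, V) pairs recorded as the
    pressure ramps up or down during a transient."""
    n = len(pressures_Pa)
    pbar = sum(pressures_Pa) / n
    vbar = sum(voltages_V) / n
    return (sum((p - pbar) * (v - vbar)
                for p, v in zip(pressures_Pa, voltages_V))
            / sum((p - pbar) ** 2 for p in pressures_Pa))
```

Because a single transient sweeps through many pressures, one regression over thousands of such pairs replaces several separate steady-state runs, which is where the quoted factor-of-5 speedup comes from.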
Use Conditions and Efficiency Measurements of DC Power Optimizers for Photovoltaic Systems: Preprint
Deline, C.; MacAlpine, S.
2013-10-01
No consensus standard exists for estimating the annual conversion efficiency of DC-DC converters or power optimizers in photovoltaic (PV) applications. The performance benefits of PV power electronics, including per-panel DC-DC converters, depend in large part on the operating conditions of the PV system, along with the performance characteristics of the power optimizer itself. This work presents a case study of three system configurations that take advantage of the capabilities of DC power optimizers. Measured conversion efficiencies of DC-DC converters are applied to these scenarios to determine the annual weighted operating efficiency. A simplified general method of reporting weighted efficiency is given, based on the California Energy Commission's CEC efficiency rating and several input/output voltage ratios. Efficiency measurements of commercial power optimizer products are presented using the new performance metric, along with a description of the limitations of the approach.
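The CEC rating referred to above weights efficiencies measured at six fractions of rated load with fixed coefficients that sum to one. A sketch using the commonly cited CEC weights for the 10/20/30/50/75/100 % load points (worth verifying against the current CEC test procedure before relying on them):

```python
# Commonly cited CEC load points and weights (sum to 1.0); treat as
# illustrative rather than normative.
CEC_WEIGHTS = {0.10: 0.04, 0.20: 0.05, 0.30: 0.12,
               0.50: 0.21, 0.75: 0.53, 1.00: 0.05}

def cec_weighted_efficiency(eff_at_load):
    """Weighted conversion efficiency from a {load_fraction: efficiency}
    map measured at the six CEC load points."""
    return sum(w * eff_at_load[load] for load, w in CEC_WEIGHTS.items())
```

The heavy 0.53 weight at 75 % load reflects where a converter is assumed to spend most of its energy throughput over a year.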
The flow curvature method applied to canard explosion
NASA Astrophysics Data System (ADS)
Ginoux, Jean-Marc; Llibre, Jaume
2011-11-01
The aim of this work is to establish that the bifurcation parameter value leading to a canard explosion in dimension 2 obtained by the so-called geometric singular perturbation method can be found according to the flow curvature method. This result will be then exemplified with the classical Van der Pol oscillator.
Methods applied in studies of benthic marine debris.
Spengler, Angela; Costa, Monica F
2008-02-01
The ocean floor is one of the main accumulation sites of marine debris. The study of this kind of debris still lags behind that of shorelines. It is necessary to identify the methods used to evaluate this debris and how the results are presented and interpreted. From the available literature on benthic marine debris (26 studies), six sampling methods were registered: bottom trawl net, sonar, submersible, snorkeling, scuba diving and manta tow. The most frequent method used was bottom trawl net, followed by the three methods of diving. The majority of the debris was classified according to their former use and the results usually expressed as items per unity of area. To facilitate comparisons of the contamination levels among sites and regions some standardization requirements are suggested.
Cutting Force Control Applying Sensorless Cutting Force Monitoring Method
NASA Astrophysics Data System (ADS)
Kurihara, Daisuke; Kakinuma, Yasuhiro; Katsura, Seiichiro
Intelligent machine tools require the functions of highly accurate process monitoring and adaptive control to fit the optimum process conditions for each workpiece. To realize these functions, various techniques to monitor the cutting process and control it using additional sensors have been proposed and widely studied. The authors propose a sensorless cutting force control method using a parallel disturbance observer. The performance of the proposed method is evaluated through simulation and experiments using a linear-motor-driven table.
Spectral methods applied to fluidized bed combustors. Final report
Brown, R.C.; Christofides, N.J.; Junk, K.W.; Raines, T.S.; Thiede, T.D.
1996-08-01
The objective of this project was to develop methods for characterizing fuels and sorbents from time-series data obtained during transient operation of fluidized bed boilers. These methods aimed at determining time constants for devolatilization and char burnout using carbon dioxide (CO2) profiles, and time constants for the calcination and sulfation processes using CO2 and sulfur dioxide (SO2) profiles.
QSAGE iterative method applied to fuzzy parabolic equation
NASA Astrophysics Data System (ADS)
Dahalan, A. A.; Muthuvalu, M. S.; Sulaiman, J.
2016-02-01
The aim of this paper is to examine the effectiveness of the Quarter-Sweep Alternating Group Explicit (QSAGE) iterative method by solving linear system generated from the discretization of one-dimensional fuzzy diffusion problems. In addition, the formulation and implementation of the proposed method are also presented. The results obtained are then compared with Full-Sweep Gauss-Seidel (FSGS), Full-Sweep AGE (FSAGE) and Half-Sweep AGE (HSAGE) to illustrate their feasibility.
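The Full-Sweep Gauss-Seidel baseline that the paper compares against can be sketched on a crisp version of the problem. The example below is the author's illustration, not the QSAGE code: it applies FSGS to a one-dimensional Poisson (steady diffusion) problem with a known solution, so the iteration error can be checked directly.

```python
import math

def fsgs_poisson(n=32, sweeps=5000):
    """Solve -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0 on n
    interior points using the standard 3-point stencil and full-sweep
    Gauss-Seidel. The exact solution is u(x) = sin(pi x); the returned
    value is the maximum nodal error."""
    h = 1.0 / (n + 1)
    f = [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    u = [0.0] * (n + 2)                      # includes the boundary zeros
    for _ in range(sweeps):
        # in-place sweep: the freshly updated left neighbour is used
        # immediately, which is what makes this Gauss-Seidel
        for i in range(1, n + 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i - 1])
    return max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n + 2))
```

Half-sweep and quarter-sweep variants such as HSAGE and QSAGE gain their speed by updating only a subset of the grid points per sweep; the fuzzy formulation in the paper additionally carries interval-valued unknowns.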
Newton-like minimal residual methods applied to transonic flow calculations
NASA Technical Reports Server (NTRS)
Wong, Y. S.
1984-01-01
A computational technique for the solution of the full potential equation is presented. The method consists of outer and inner iterations. The outer iteration is based on a Newton-like algorithm, and a preconditioned minimal residual method is used to seek an approximate solution of the system of linear equations arising at each inner iteration. The present iterative scheme is formulated so that the uncertainties and difficulties associated with many iterative techniques, namely the requirements for acceleration parameters and the treatment of additional boundary conditions for the intermediate variables, are eliminated. Numerical experiments based on the new method for transonic potential flows around the NACA 0012 airfoil at different Mach numbers and angles of attack are presented, and these results are compared with those obtained by the approximate factorization technique. Extension to three-dimensional flow calculations and application of the present method in finite element methods for fluid dynamics problems are also discussed. The inexact Newton-like method produces a smoother reduction in the residual norm, and the number of supersonic points and the circulation are rapidly established as the number of iterations increases.
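The outer/inner structure described above can be sketched on a toy problem. The example below is the author's illustration, not the paper's full-potential solver: a few minimal-residual (MR) iterations serve as the approximate inner linear solve inside a Newton outer loop, for a small nonlinear system with the known root (1, 1).

```python
def mr_solve(A, b, iters=25):
    """Approximate solution of the 2x2 system A d = b by minimal-residual
    iteration: d += alpha * r, with alpha chosen to minimize the new
    residual norm at each step (valid for positive-definite A)."""
    d = [0.0, 0.0]
    for _ in range(iters):
        Ad = [A[0][0] * d[0] + A[0][1] * d[1], A[1][0] * d[0] + A[1][1] * d[1]]
        r = [b[0] - Ad[0], b[1] - Ad[1]]
        Ar = [A[0][0] * r[0] + A[0][1] * r[1], A[1][0] * r[0] + A[1][1] * r[1]]
        denom = Ar[0] * Ar[0] + Ar[1] * Ar[1]
        if denom == 0.0:          # residual is zero: inner solve is exact
            break
        alpha = (r[0] * Ar[0] + r[1] * Ar[1]) / denom
        d = [d[0] + alpha * r[0], d[1] + alpha * r[1]]
    return d

def F(x):
    """Toy nonlinear system with root (1, 1)."""
    return [x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 3 - 2.0]

def J(x):
    """Its (symmetric) Jacobian."""
    return [[2.0 * x[0], 1.0], [1.0, 3.0 * x[1] ** 2]]

x = [2.0, 2.0]
for _ in range(30):                       # outer Newton-like iterations
    d = mr_solve(J(x), [-f for f in F(x)])  # inexact inner linear solve
    x = [x[0] + d[0], x[1] + d[1]]
```

Because the inner solve is only approximate, this is an inexact Newton method in the sense the abstract uses the term: each outer step reduces the residual without requiring the linear system to be solved to machine precision.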
The colour analysis method applied to homogeneous rocks
NASA Astrophysics Data System (ADS)
Halász, Amadé; Halmai, Ákos
2015-12-01
Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation, the formation considered most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here can be used to differentiate similar colours and to identify gradual transitions between them; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
DAKOTA reliability methods applied to RAVEN/RELAP-7.
Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea
2013-09-01
This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms that transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
Hopf Method Applied to Low and High Dimensional Dynamical Systems
NASA Astrophysics Data System (ADS)
Ma, Seungwook; Marston, Brad
2004-03-01
With an eye towards the goal of directly extracting statistical information from general circulation models (GCMs) of climate, thereby avoiding lengthy time integrations, we investigate the use of the Hopf functional method (Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov, Cambridge University Press, 1995, chapter 9.5). We use the method to calculate statistics over low-dimensional attractors, and for fluid flow on a rotating sphere. For the cases of the 3-dimensional Lorenz attractor, and a 5-dimensional nonlinear system introduced by Orszag as a toy model of turbulence (Steven Orszag, in Fluid Dynamics: Les Houches, 1977), a comparison of results obtained by low-order truncations of the cumulant expansion against statistics calculated by direct numerical integration forward in time shows surprisingly good agreement. The extension of the Hopf method to a high-dimensional barotropic model of inviscid fluid flow on a rotating sphere, which employs Arakawa's method to conserve energy and enstrophy (Akio Arakawa, J. Comp. Phys. 1, 119 (1966)), is discussed.
Ideal and computer mathematics applied to meshfree methods
NASA Astrophysics Data System (ADS)
Kansa, E.
2016-10-01
Early numerical methods for solving ordinary and partial differential equations relied upon human computers using mechanical devices. The algorithms changed little over the evolution of electronic computers and achieved only low-order convergence rates. A meshfree scheme that converges exponentially was developed for such problems using the latest computational science toolkit.
Method for applying pyrolytic carbon coatings to small particles
Beatty, Ronald L.; Kiplinger, Dale V.; Chilcoat, Bill R.
1977-01-01
A method for coating small diameter, low density particles with pyrolytic carbon is provided by fluidizing a bed of particles wherein at least 50 per cent of the particles have a density and diameter of at least two times the remainder of the particles and thereafter recovering the small diameter and coated particles.
System Identification and POD Method Applied to Unsteady Aerodynamics
NASA Technical Reports Server (NTRS)
Tang, Deman; Kholodar, Denis; Juang, Jer-Nan; Dowell, Earl H.
2001-01-01
The representation of unsteady aerodynamic flow fields in terms of global aerodynamic modes has proven to be a useful method for reducing the size of the aerodynamic model over those representations that use local variables at discrete grid points in the flow field. Eigenmodes and Proper Orthogonal Decomposition (POD) modes have been used for this purpose with good effect. This suggests that system identification models may also be used to represent the aerodynamic flow field. Implicit in the use of a systems identification technique is the notion that a relative small state space model can be useful in describing a dynamical system. The POD model is first used to show that indeed a reduced order model can be obtained from a much larger numerical aerodynamical model (the vortex lattice method is used for illustrative purposes) and the results from the POD and the system identification methods are then compared. For the example considered, the two methods are shown to give comparable results in terms of accuracy and reduced model size. The advantages and limitations of each approach are briefly discussed. Both appear promising and complementary in their characteristics.
GENERAL CONSIDERATIONS FOR GEOPHYSICAL METHODS APPLIED TO AGRICULTURE
Technology Transfer Automated Retrieval System (TEKTRAN)
Geophysics is the application of physical quantity measurement techniques to provide information on conditions or features beneath the earth’s surface. With the exception of borehole geophysical methods and soil probes like a cone penetrometer, these techniques are generally noninvasive with physica...
Mesoscopic electronics beyond DC transport
NASA Astrophysics Data System (ADS)
di Carlo, Leonardo
Since the inception of mesoscopic electronics in the 1980s, direct current (dc) measurements have underpinned experiments in quantum transport. Novel techniques complementing dc transport are becoming paramount to new developments in mesoscopic electronics, particularly as the road is paved toward quantum information processing. This thesis describes seven experiments on GaAs/AlGaAs and graphene nanostructures unified by experimental techniques going beyond traditional dc transport. Firstly, dc current induced by microwave radiation applied to an open chaotic quantum dot is investigated. Asymmetry of mesoscopic fluctuations of induced current in perpendicular magnetic field is established as a tool for separating the quantum photovoltaic effect from classical rectification. A differential charge sensing technique is next developed using integrated quantum point contacts to resolve the spatial distribution of charge inside a double quantum dot. An accurate method for determining interdot tunnel coupling and electron temperature using charge sensing is demonstrated. A two-channel system for detecting current noise in mesoscopic conductors is developed, enabling four experiments where shot noise probes transmission properties not available in dc transport and Johnson noise serves as an electron thermometer. Suppressed shot noise is observed in quantum point contacts at zero parallel magnetic field, associated with the 0.7 structure in conductance. This suppression evolves with increasing field into the shot-noise signature of spin-lifted mode degeneracy. Quantitative agreement is found with a phenomenological model for density-dependent mode splitting. Shot noise measurements of multi-lead quantum-dot structures in the Coulomb blockade regime distill the mechanisms by which Coulomb interaction and quantum indistinguishability correlate electron flow. Gate-controlled sign reversal of noise cross correlation in two capacitively-coupled dots is observed, and shown to
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Paraxial WKB Method Applied to the Lower Hybrid Wave Propagation
Bertelli, N; Poli, E; Harvey, R; Wright, J C; Bonoli, P T; Phillips, C K; Simov, A P; Valeo, E
2012-07-12
The paraxial WKB (pWKB) approximation, also called the beam tracing method, has been employed to study the propagation of lower hybrid (LH) waves in a tokamak plasma. Analogous to the well-known ray tracing method, this approach reduces Maxwell's equations to a set of ordinary differential equations while, in addition, retaining the effects of the finite beam cross-section and, thus, of diffraction. A new code, LHBEAM (Lower Hybrid BEAM tracing), is presented, which solves the pWKB equations in tokamak geometry for arbitrary launching conditions and for analytic and experimental plasma equilibria. In addition, LHBEAM includes linear electron Landau damping for the evaluation of the absorbed power density and the reconstruction of the wave electric field in both physical and Fourier space. Illustrative LHBEAM calculations are presented along with a comparison with the ray tracing code GENRAY and the full wave solver TORIC-LH.
Finite Element Method Applied to Fuse Protection Design
NASA Astrophysics Data System (ADS)
Li, Sen; Song, Zhiquan; Zhang, Ming; Xu, Liuwei; Li, Jinchao; Fu, Peng; Wang, Min; Dong, Lin
2014-03-01
In a poloidal field (PF) converter module, fuse protection is of great importance to ensure the safety of the thyristors. The fuse is pre-selected in a traditional way and then verified by finite element analysis. A 3D physical model is built by ANSYS software to solve the thermal-electric coupled problem of transient process in case of external fault. The result shows that this method is feasible.
Differential correction method applied to measurement of the FAST reflector
NASA Astrophysics Data System (ADS)
Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng
2016-08-01
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector which is composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid and directed toward a source. To achieve accurate control of the reflector shape, positions of 2226 nodes distributed around the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total stations measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show there is an improvement of 70%-80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
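The multi-benchmark correction can be illustrated with a toy scheme. The sketch below is the author's simplification; the benchmark-selection and weight-assignment rules in the paper are more involved. It removes a common atmospheric bias from a raw total-station measurement using the residuals observed at reference benchmarks whose true positions are known, weighted here (as one plausible choice) by inverse distance to the target.

```python
def differential_correction(raw, benchmarks):
    """raw: raw measured distance to the target node.
    benchmarks: list of (measured, known, distance_to_target) tuples for
    reference points with known true positions.
    Returns the corrected measurement: raw minus the inverse-distance
    weighted mean of the benchmark residuals (measured - known)."""
    weights = [1.0 / d for _, _, d in benchmarks]
    residuals = [m - k for m, k, _ in benchmarks]
    bias = sum(w * r for w, r in zip(weights, residuals)) / sum(weights)
    return raw - bias

# Synthetic check: a uniform atmospheric bias of +8 mm on every line of sight.
true_dist = 150.000
atm_bias = 0.008
bm = [(100.000 + atm_bias, 100.000, 40.0),
      (180.000 + atm_bias, 180.000, 55.0)]
corrected = differential_correction(true_dist + atm_bias, bm)
```

When the bias is common to all lines of sight, any positive weighting recovers the true distance exactly; the weighting matters when the atmospheric error varies across the 500 m aperture.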
Matrix methods applied to acoustic waves in multilayers
NASA Astrophysics Data System (ADS)
Adler, Eric L.
1990-11-01
Matrix methods for analyzing the electroacoustic characteristics of anisotropic piezoelectric multilayers are described. The conceptual usefulness of the methods is demonstrated in a tutorial fashion by examples showing how formal statements of propagation, transduction, and boundary-value problems in complicated acoustic layered geometries such as those which occur in surface acoustic wave (SAW) devices, in multicomponent laminates, and in bulk-wave composite transducers are simplified. The formulation given reduces the electroacoustic equations to a set of first-order matrix differential equations, one for each layer, in the variables that must be continuous across interfaces. The solution to these equations is a transfer matrix that maps the variables from one layer face to the other. Interface boundary conditions for a planar multilayer are automatically satisfied by multiplying the individual transfer matrices in the appropriate order, thus reducing the problem to just having to impose boundary conditions appropriate to the remaining two surfaces. The computational advantages of the matrix method result from the fact that the problem rank is independent of the number of layers, and from the availability of personal computer software that makes interactive numerical experimentation with complex layered structures practical.
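The transfer-matrix bookkeeping described here reduces, in the simplest case, to 2×2 products. The sketch below is a scalar, normal-incidence fluid-layer illustration, far simpler than the paper's anisotropic piezoelectric formulation: it propagates the continuous variables (pressure, velocity) across layers and shows the two properties the abstract emphasizes, namely that a stack is handled by multiplying the individual layer matrices, and that the problem rank stays 2×2 no matter how many layers are added.

```python
import cmath

def layer_matrix(k, Z, d):
    """2x2 transfer matrix mapping (pressure, velocity) across a fluid
    layer of thickness d, wavenumber k and characteristic impedance Z
    (normal incidence)."""
    c, s = cmath.cos(k * d), cmath.sin(k * d)
    return [[c, 1j * Z * s],
            [1j * s / Z, c]]

def matmul(A, B):
    """Plain 2x2 complex matrix product."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

# Composition rule: two half-thickness layers multiply to one full layer,
# and each lossless layer matrix is unimodular (determinant 1).
k, Z, d = 2000.0, 1.48e6, 1e-3          # illustrative water-like values
full = layer_matrix(k, Z, d)
halves = matmul(layer_matrix(k, Z, d / 2), layer_matrix(k, Z, d / 2))
det = full[0][0] * full[1][1] - full[0][1] * full[1][0]
```

Interface conditions are satisfied automatically because the matrices act on exactly the variables that are continuous across each interface, which is the point the abstract makes for the general anisotropic case.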
System and method of applying energetic ions for sterilization
Schmidt, John A.
2003-12-23
A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.
System And Method Of Applying Energetic Ions For Sterilization
Schmidt, John A.
2002-06-11
A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.
Algebraic multigrid methods applied to problems in computational structural mechanics
NASA Technical Reports Server (NTRS)
McCormick, Steve; Ruge, John
1989-01-01
The development of algebraic multigrid (AMG) methods and their application to certain problems in structural mechanics are described with emphasis on two- and three-dimensional linear elasticity equations and the 'jacket problems' (three-dimensional beam structures). Various possible extensions of AMG are also described. The basic idea of AMG is to develop the discretization sequence based on the target matrix and not the differential equation. Therefore, the matrix is analyzed for certain dependencies that permit the proper construction of coarser matrices and attendant transfer operators. In this manner, AMG appears to be adaptable to structural analysis applications.
Applied high resolution geophysical methods: Offshore geoengineering hazards
Trabant, P.K.
1984-01-01
This book is an examination of the purpose, methodology, equipment, and data interpretation of high-resolution geophysical methods, which are used to assess geological and manmade engineering hazards at offshore construction locations. It is a state-of-the-art review. Contents: 1. Introduction. 2. Marine geophysics, an overview. 3. Marine geotechnique, an overview. 4. Echo sounders. 5. Side scan sonar. 6. Subbottom profilers. 7. Seismic sources. 8. Single-channel seismic reflection systems. 9. Multifold acquisition and digital processing. 10. Marine magnetometers. 11. Marine geoengineering hazards. 12. Survey organization, navigation, and future developments. Appendix. Glossary. References. Index.
[Dichotomizing method applied to calculating equilibrium constant of dimerization system].
Cheng, Guo-zhong; Ye, Zhi-xiang
2002-06-01
The arbitrary trivariate algebraic equations are formed based on the combination principle. The univariate algebraic equation of the equilibrium constant kappa for a dimerization system is obtained through a series of algebraic transformations, and whether the equation is solvable depends on the properties of monotonic functions. If the equation is solvable, the equilibrium constant of the dimerization system is obtained by dichotomizing (bisection), and the final equilibrium constant is determined according to the principle of the error of fitting. The equilibrium constants of trisulfophthalocyanine and biosulfophthalocyanine obtained with this method are 47,973.4 and 30,271.8, respectively. The results are much better than those reported previously.
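The dichotomizing (bisection) step relies only on monotonicity. As a hedged illustration, not the paper's exact working equation, the sketch below applies bisection to the mass balance of a dimerization equilibrium 2M ⇌ D: with K = [D]/[M]² and total monomer C_T = m + 2Km², the free-monomer concentration m is the root of f(m) = 2Km² + m − C_T, which is monotonically increasing for m ≥ 0.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Root of a monotonic function f on [lo, hi] by repeated halving."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0.0:
            lo = mid      # f(mid) has the same sign as f(lo): root is above
        else:
            hi = mid      # sign change between lo and mid: root is below
    return 0.5 * (lo + hi)

# Dimerization 2M <=> D with K = [D]/[M]^2; mass balance C_T = m + 2*K*m^2.
# K and C_T are illustrative values, not the paper's data.
K, C_T = 3.0e4, 1.0e-3
m = bisect(lambda m: 2.0 * K * m * m + m - C_T, 0.0, C_T)
```

For this quadratic the closed-form root m = (−1 + √(1 + 8KC_T))/(4K) is available as a check; the value of bisection is that it applies unchanged to the higher-degree univariate equations the paper derives.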
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.
Ramírez, C L; Martí, M A; Roitberg, A E
2016-01-01
One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments, such as an enzyme or aqueous solution, and to determine the corresponding free energy profile, which provides direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows one to obtain the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may incur increased computational cost if the steering speed and number of trajectories are chosen inappropriately. In this small review, using the extensively studied chorismate-to-prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and, in turn, to the accuracy of the free energy obtained with JR. Second, in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows more accurate free energy profiles to be obtained using faster pulling speeds and fewer trajectories, and thus at smaller computational cost.
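The exponential work average at the core of the multiple steered MD approach is compact enough to sketch. Below, synthetic Gaussian work values stand in for real steered-MD trajectories: Jarzynski's relationship ΔF = −kT ln⟨exp(−W/kT)⟩ is applied, and for Gaussian work with mean μ and variance σ² the relationship gives exactly ΔF = μ − σ²/(2kT), which the estimator should approach as trajectories accumulate.

```python
import math
import random

def jarzynski_free_energy(work, kT=1.0):
    """Free-energy difference from nonequilibrium work samples via
    Jarzynski's relationship: dF = -kT * ln < exp(-W / kT) >."""
    # Shift by the minimum work before exponentiating so the largest
    # exponent is 0 (numerical stability of the exponential average).
    w_min = min(work)
    avg = sum(math.exp(-(w - w_min) / kT) for w in work) / len(work)
    return w_min - kT * math.log(avg)

rng = random.Random(42)
mu, sigma, kT = 2.0, 0.5, 1.0
work = [rng.gauss(mu, sigma) for _ in range(200_000)]
dF = jarzynski_free_energy(work, kT)
# Analytic value for Gaussian work: mu - sigma**2 / (2*kT) = 1.875
```

The review's practical warning shows up here too: as σ grows relative to kT, the exponential average is dominated by rare low-work samples and far more trajectories are needed for the same accuracy.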
Applied methods to verify LP turbine performance after retrofit
Overby, R.; Lindberg, G.
1996-12-31
With increasing operational hours of power plants, many utilities may find it necessary to replace turbine components, i.e., low pressure turbines. In order to decide between different technical and economic solutions, the utility often takes the opportunity to choose between an OEM or non-OEM supplier. This paper will deal with the retrofitting of LP turbines. Depending on the scope of supply, the contract must define the amount of improvement and specifically how to verify this improvement. Unfortunately, today's test codes, such as ASME PTC 6 and 6.1, do not satisfactorily cover these cases. The methods used by Florida Power and Light (FP&L) and its supplier to verify the improvement of the low pressure turbine retrofits at the Martin No. 1 and Sanford No. 4 units will be discussed, and the experience gained will be presented. In particular, the influence of the thermal cycle on the applicability of the available methods will be analyzed and recommendations given.
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
NASA Astrophysics Data System (ADS)
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method to detect atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the remaining descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
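The statistical descriptors named above are simple to compute over a window of RR intervals. The sketch below gives plain implementations of TPR, RMSSD, and a histogram-based Shannon entropy; these are simplified forms for illustration, and the thresholds, binning, and window handling in the paper may differ.

```python
import math

def tpr(rr):
    """Turning Point Ratio: fraction of interior points that are local
    extrema of the RR-interval series (high for the irregular rhythm of AF)."""
    turns = sum(1 for i in range(1, len(rr) - 1)
                if (rr[i] - rr[i - 1]) * (rr[i + 1] - rr[i]) < 0)
    return turns / (len(rr) - 2)

def rmssd(rr):
    """Root Mean Square of Successive Differences of the RR intervals."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(rr, bins=8):
    """Shannon entropy (bits) of the histogram of RR intervals."""
    lo, hi = min(rr), max(rr)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for v in rr:
        counts[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    probs = [c / len(rr) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)
```

Sliding these functions over consecutive windows of an RR series yields the per-window feature vectors that the classifier consumes.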
Microcanonical ensemble simulation method applied to discrete potential fluids
NASA Astrophysics Data System (ADS)
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition probabilities between macroscopic states; its advantage over conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems based on generalizations of the square-well and square-shoulder fluids.
Atomistic Method Applied to Computational Modeling of Surface Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Abel, Phillip B.
2000-01-01
The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces, J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz the 1990 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
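The f(4) metric defined above, a conditional expectation over the lower tail of the return distribution, can be sketched as follows. The 5 percent partitioning point is an illustrative assumption; in the PMRM the partitioning is chosen by the analyst.

```python
import numpy as np

def f4_extreme_risk(returns, alpha=0.05):
    """Conditional expectation of portfolio returns in the lower tail:
    the mean return given that the return falls at or below the
    alpha-quantile.  alpha is an illustrative assumption; the PMRM
    leaves the choice of partitioning point to the analyst."""
    returns = np.asarray(returns, dtype=float)
    cutoff = np.quantile(returns, alpha)
    return returns[returns <= cutoff].mean()
```

Optimizing expected return against f(4) then becomes a two-objective problem, in contrast to the single volatility number of the mean-variance model.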
NASA Astrophysics Data System (ADS)
Yang, G. Y.; Lian, G. D.; Dickey, E. C.; Randall, C. A.; Barber, D. E.; Pinceloup, P.; Henderson, M. A.; Hill, R. A.; Beeson, J. J.; Skamser, D. J.
2004-12-01
The microchemical and microstructural origins of insulation-resistance degradation in BaTiO3-based capacitors are studied by complementary impedance spectroscopy and analytical transmission electron microscopy. The degradation under dc-field bias involves electromigration and accumulation of oxygen vacancies at interfaces. The nonstoichiometric BaTiO3-δ becomes locally more conducting through increased oxygen vacancy concentration and Ti ion reduction. The symmetry across the dielectric layer and locally across each grain is broken during the degradation process. Locally, the nonstoichiometry becomes so severe that metastable lattice structures are formed. The degradation in insulation resistance at the grain boundaries and electrode interfaces is associated with the double Schottky-barrier potential lowering and narrowing. This may correlate with an effective decrease in net acceptor charge density at the grain boundaries.
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Applying dynamic methods in off-line signature recognition
NASA Astrophysics Data System (ADS)
Igarza, Juan Jose; Hernaez, Inmaculada; Goirizelaia, Inaki; Espinosa, Koldo
2004-08-01
In this paper we present work developed on off-line signature verification using Hidden Markov Models (HMMs). The HMM is a well-known technique used with other biometric modalities, for instance in speaker recognition and dynamic or on-line signature verification. Our goal here is to extend the Left-to-Right (LR) HMM to the field of static or off-line signature processing using results provided by image connectivity analysis. The chain encoding of perimeter points for each blob obtained by this analysis is an ordered set of points in space, traversed clockwise around the perimeter of the blob. We discuss two different ways of generating the models depending on the way the blobs obtained from the connectivity analysis are ordered. In the first proposed method, blobs are ordered according to their perimeter length. In the second proposal, blobs are ordered in their natural reading order, i.e., from top to bottom and left to right. Finally, two LR-HMM models are trained using the parameters obtained by the mentioned techniques. Verification results of the two techniques are compared and some improvements are proposed.
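The two blob orderings described above can be sketched over bounding boxes. Representing each blob as an (x, y, w, h) tuple and using the box perimeter as a proxy for the chain-coded perimeter are assumptions for illustration; the paper's actual chain-code features are not reproduced here.

```python
def perimeter_order(blobs):
    """First proposal: order blobs by decreasing bounding-box perimeter.
    Blobs are (x, y, w, h) tuples; the bounding-box perimeter stands in
    for the chain-coded perimeter length (an illustrative assumption)."""
    return sorted(blobs, key=lambda b: 2 * (b[2] + b[3]), reverse=True)

def reading_order(blobs, row_tol=20):
    """Second proposal: natural reading order, top to bottom then left
    to right.  row_tol groups blobs into rows of similar height and is
    an illustrative parameter, not a value from the paper."""
    return sorted(blobs, key=lambda b: (b[1] // row_tol, b[0]))
```

Either ordering yields a sequence of per-blob feature vectors suitable for training a left-to-right HMM.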
Taguchi methods applied to oxygen-enriched diesel engine experiments
Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J.; Longman, D.E.
1992-01-01
This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NOx due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NOx emissions for the six-cylinder engine than it was for the single-cylinder engine.
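A Taguchi main-effects analysis for a design like the one above (four factors at three levels) reduces to averaging the response at each factor level over an orthogonal array. The L9 array and the response used below are hypothetical stand-ins, not the paper's engine data.

```python
import numpy as np

# Hypothetical L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels
# (levels coded 0-2).  Illustrative only, not the paper's test matrix.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def main_effects(array, response):
    """Mean response at each level of each factor: main effects only,
    with no interaction analysis, matching the approach in the abstract."""
    response = np.asarray(response, dtype=float)
    return np.array([
        [response[array[:, f] == lvl].mean() for lvl in range(3)]
        for f in range(array.shape[1])
    ])
```

Because each level of every factor appears equally often against all levels of the others, these per-level means isolate each factor's main effect.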
What health care managers do: applying Mintzberg's structured observation method.
Arman, Rebecka; Dellve, Lotta; Wikström, Ewa; Törnström, Linda
2009-09-01
Aim: The aim of the present study was to explore and describe what characterizes first- and second-line health care managers' use of time. Background: Many Swedish health care managers experience difficulties managing their time. Methods: Structured and unstructured observations were used. Ten first- and second-line managers in different health care settings were each studied in detail for 3.5 to 4 days. Duration and frequency of different types of work activities were analysed. Results: The individual variation was considerable. The managers' days consisted to a large degree of short activities (<9 minutes). On average, nearly half of the managers' time was spent in meetings. Most of the managers' time was spent with subordinates and <1% was spent alone with their superiors. Sixteen per cent of their time was spent on administration and only a small fraction on explicit strategic work. Conclusions: The individual variations in time use patterns suggest the possibility of interventions to support changes in time use patterns. Implications for nursing management: A reliable description of what managers do paves the way for analyses of what they should do to be effective.
NASA Astrophysics Data System (ADS)
Urabe, Keiichiro; Shirai, Naoki; Tomita, Kentaro; Akiyama, Tsuyoshi; Murakami, Tomoyuki
2016-08-01
The density and temperature of electrons and key heavy particles were measured in an atmospheric-pressure pulsed-dc helium discharge plasma with a nitrogen molecular impurity, generated using a system with a liquid or metal anode and a metal cathode. To obtain these parameters, we conducted experiments using several laser-aided methods: Thomson scattering spectroscopy to obtain the spatial profiles of electron density and temperature, Raman scattering spectroscopy to obtain the neutral molecular nitrogen rotational temperature, phase-modulated dispersion interferometry to determine the temporal variation of the electron density, and time-resolved laser absorption spectroscopy to analyze the temporal variation of the helium metastable atom density. The electron density and temperature measured by Thomson scattering varied from 2.4 × 1014 cm-3 and 1.8 eV at the center of the discharge to 0.8 × 1014 cm-3 and 1.5 eV near the outer edge of the plasma in the case of the metal anode, respectively. The electron density obtained with the liquid anode was approximately 20% smaller than that obtained with the metal anode, while the electron temperature was not significantly affected by the anode material. The molecular nitrogen rotational temperatures were 1200 K with the metal anode and 1650 K with the liquid anode at the outer edge of the plasma column. The density of helium metastable atoms decreased by a factor of two when using the liquid anode.
Complexity methods applied to turbulence in plasma astrophysics
NASA Astrophysics Data System (ADS)
Vlahos, L.; Isliker, H.
2016-09-01
In this review many of the well known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well documented observations are not easy to interpret with the use of Magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We review briefly the work published in the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Nonlinear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the redistribution of the magnetic field locally, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
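The SOC redistribution rule invoked in step (c), local relaxation whenever a critical threshold is exceeded, can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile automaton. This is a generic sketch of the SOC idea, not the authors' solar cellular automaton.

```python
import numpy as np

THRESHOLD = 4  # critical value triggering local redistribution

def relax(grid):
    """Topple every cell at or above the critical threshold, moving one
    grain to each of its four neighbours (grains are lost at the open
    boundaries).  Generic Bak-Tang-Wiesenfeld sandpile rule, shown only
    to illustrate the avalanche/redistribution idea of step (c)."""
    grid = grid.copy()
    topples = 0
    while (grid >= THRESHOLD).any():
        for y, x in zip(*np.where(grid >= THRESHOLD)):
            grid[y, x] -= THRESHOLD
            topples += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1]:
                    grid[ny, nx] += 1
    return grid, topples
```

Driving such a lattice slowly and recording the avalanche sizes yields the power-law statistics characteristic of SOC models of flaring.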
Power Network impedance effects on noise emission of DC-DC converters
NASA Astrophysics Data System (ADS)
Esteban, M. C.; Arteche, F.; Iglesias, M.; Gimeno, A.; Arcega, F. J.; Johnson, M.; Cooper, W. E.
2012-01-01
The characterization of electromagnetic noise emissions of DC-DC converters is a critical issue that has been analyzed during the design phase of the CMS tracker upgrade. Previous simulation studies showed important variations in the level of conducted emissions when DC-DC converters are loaded/driven by different impedances and power network topologies. Several tests have been performed on real DC-DC converters to validate the Pspice model and simulation results. This paper presents these test results. Conducted noise emissions at the input and at the output terminals of DC-DC converters have been measured for different types of power and FEE impedances. Special attention has been paid to the influence on the common-mode emissions of the carbon fiber material used to build the mechanical structure of the central detector. These results provide important recommendations and criteria to be applied in order to decrease the system noise level when integrating the DC-DC converters.
Radiation-Tolerant DC-DC Converters
NASA Technical Reports Server (NTRS)
Skutt, Glenn; Sable, Dan; Leslie, Leonard; Graham, Shawn
2012-01-01
A document discusses power converters suitable for space use that meet the DSCC MIL-PRF-38534 Appendix G radiation hardness level P classification. A method for qualifying commercially produced electronic parts for DC-DC converters per the Defense Supply Center Columbus (DSCC) radiation hardness assurance requirements was developed. Development and compliance testing of standard hybrid converters suitable for space use were completed for missions with total dose radiation requirements of up to 30 kRad. This innovation provides the same overall performance as standard hybrid converters, but includes assurance of radiation-tolerant design through component and design compliance testing. The availability of design-certified radiation-tolerant converters can significantly reduce total cost and delivery time for power converters for space applications that fit the appropriate DSCC classification (30 kRad).
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 6 2012-01-01 2012-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply the planned land uses and conservation treatment specified in the contract by one or more of...
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 6 2014-01-01 2014-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply the planned land uses and conservation treatment specified in the contract by one or more of...
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 6 2011-01-01 2011-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply the planned land uses and conservation treatment specified in the contract by one or more of...
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 6 2013-01-01 2013-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply the planned land uses and conservation treatment specified in the contract by one or more of...
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Methods of applying planned land use and treatment... Qualifications § 632.16 Methods of applying planned land use and treatment. (a) Land users may arrange to apply the planned land uses and conservation treatment specified in the contract by one or more of...
Near-infrared radiation curable multilayer coating systems and methods for applying same
Bowman, Mark P; Verdun, Shelley D; Post, Gordon L
2015-04-28
Multilayer coating systems, methods of applying them, and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber, and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating, and curing the coating with near-infrared radiation.
NASA Technical Reports Server (NTRS)
Hamilton, H. B.; Strangas, E.
1980-01-01
The time-dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of electrical machines. The time-dependent finite element method was used in combination with a likewise time-dependent construction of a grid for the air gap region. The Maxwell stress tensor was used to calculate the airgap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper-controlled DC series motor used for electric vehicle drive, and to a salient-pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.
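The airgap-torque step named above has the textbook Maxwell stress tensor form T = (r^2 L / mu0) times the integral of B_r * B_t around the airgap. A numerical sketch of that integration is below; it illustrates the formula only and is not the paper's finite element implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def airgap_torque(theta, b_r, b_t, radius, length):
    """Torque from the Maxwell stress tensor over a cylindrical airgap
    surface: T = (r^2 * L / mu0) * integral of B_r(theta)*B_t(theta)
    d(theta), evaluated with the trapezoidal rule.  b_r and b_t are the
    radial and tangential flux density samples [T] at angles theta."""
    f = b_r * b_t
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))
    return radius**2 * length / MU0 * integral
```

In a finite element setting, B_r and B_t would come from the curl of the computed magnetic vector potential at each time step.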
NASA Astrophysics Data System (ADS)
Kimura, Akira
In inverter-converter driving systems for AC electric cars, the DC input voltage of an inverter contains a ripple component at twice the line voltage frequency, because of the single-phase converter. The ripple component of the inverter input voltage causes pulsations in the torques and currents of the driving motors. To decrease these pulsations, a beat-less control method, which modifies the slip frequency depending on the ripple component, is applied to the inverter control. In the present paper, the beat-less control method was analyzed in the frequency domain. In the first step of the analysis, transfer functions, which reveal the relationship among the ripple component of the inverter input voltage, the slip frequency, the motor torque pulsation and the current pulsation, were derived with a synchronous rotating model of induction motors. An analysis model of the beat-less control method was then constructed using the transfer functions. The optimal setting of the control method was obtained according to the analysis model. The transfer functions and the analysis model were verified through simulations.
ERIC Educational Resources Information Center
Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti
2010-01-01
In their seminal paper, Edwards and Parry (1993) presented polynomial regression as a better alternative to applying difference scores in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
Eom, Ji Mi; Oh, Hyun Gon; Cho, Il Hwan; Kwon, Sang Jik; Cho, Eou Sik
2013-11-01
Niobium oxide (Nb2O5) films were deposited on p-type Si wafers and soda-lime glasses at room temperature using an in-line pulsed-DC magnetron sputtering system with various duty ratios. The different duty ratios were obtained by varying the reverse voltage time of the pulsed DC power from 0.5 to 2.0 μs at a fixed frequency of 200 kHz. From the structural and optical characteristics of the sputtered NbOx films, it was possible to obtain more uniform and coherent NbOx films at longer reverse voltage times, as a result of the cleaning effect on the Nb2O5 target surface. The electrical characteristics of metal-insulator-semiconductor (MIS) structures fabricated with the NbOx films show that the leakage currents are influenced by the reverse voltage time, and that the devices exhibit Schottky barrier diode characteristics.
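The link between reverse voltage time and duty ratio quoted above can be made explicit. Taking the duty ratio as simply the fraction of each period spent at the sputtering voltage is an assumption for illustration; the paper may define it differently.

```python
def pulse_duty_ratio(reverse_time_us, frequency_khz=200.0):
    """Duty ratio of a pulsed-DC sputtering supply, taken here as the
    fraction of each period spent sputtering: D = (T - t_reverse) / T,
    with T = 1/f.  At 200 kHz the period T is 5 us, so reverse times of
    0.5-2.0 us correspond to duty ratios of 0.9 down to 0.6."""
    period_us = 1e3 / frequency_khz
    return (period_us - reverse_time_us) / period_us
```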
High-mobility ZrInO thin-film transistor prepared by an all-DC-sputtering method at room temperature
Xiao, Peng; Dong, Ting; Lan, Linfeng; Lin, Zhenguo; Song, Wei; Luo, Dongxiang; Xu, Miao; Peng, Junbiao
2016-01-01
Thin-film transistors (TFTs) with zirconium-doped indium oxide (ZrInO) semiconductor were successfully fabricated by an all-DC-sputtering method at room temperature. The ZrInO TFT without any intentional annealing steps exhibited a high saturation mobility of 25.1 cm2V−1s−1. The threshold voltage shift was only 0.35 V for the ZrInO TFT under positive gate bias stress for 1 hour. Detailed studies showed that the room-temperature ZrInO thin film was in the amorphous state with low carrier density because of the strong bonding strength of Zr-O. The room-temperature process is attractive for its compatibility with almost all kinds of the flexible substrates, and the DC sputtering process is good for the production efficiency improvement and the fabrication cost reduction. PMID:27118177
High-mobility ZrInO thin-film transistor prepared by an all-DC-sputtering method at room temperature
NASA Astrophysics Data System (ADS)
Xiao, Peng; Dong, Ting; Lan, Linfeng; Lin, Zhenguo; Song, Wei; Luo, Dongxiang; Xu, Miao; Peng, Junbiao
2016-04-01
Thin-film transistors (TFTs) with zirconium-doped indium oxide (ZrInO) semiconductor were successfully fabricated by an all-DC-sputtering method at room temperature. The ZrInO TFT without any intentional annealing steps exhibited a high saturation mobility of 25.1 cm2V−1s−1. The threshold voltage shift was only 0.35 V for the ZrInO TFT under positive gate bias stress for 1 hour. Detailed studies showed that the room-temperature ZrInO thin film was in the amorphous state with low carrier density because of the strong bonding strength of Zr-O. The room-temperature process is attractive for its compatibility with almost all kinds of the flexible substrates, and the DC sputtering process is good for the production efficiency improvement and the fabrication cost reduction.
Rajeeva, M. P. Jayanna, H. S. Ashok, R. L.; Naveen, C. S.; Bothla, V. Prasad
2014-04-24
Nanocrystalline tin oxide material with different grain sizes was synthesized using a gel combustion method by varying the fuel (citric acid, C6H8O7) to oxidizer (HNO3) molar ratio while keeping the amount of fuel constant. The prepared samples were characterized using X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectroscopy (EDAX). The effect of the fuel to oxidizer molar ratio in the gel combustion method was investigated by inspecting the grain size of the nano SnO2 powder. The grain size was found to be reduced as the amount of oxidizer increased from 0 to 6 moles in steps of 2. The X-ray diffraction patterns of the calcined product showed the formation of high-purity tetragonal tin (IV) oxide with grain sizes in the range of 12 to 31 nm, as calculated by Scherrer's formula. The molar ratio and temperature dependence of the DC electrical conductivity of the SnO2 nanomaterial was studied using a Keithley source meter. The DC electrical conductivity of the SnO2 nanomaterial increases with temperature from 80 K to 300 K. From the study it was observed that the DC electrical conductivity of the SnO2 nanomaterial decreases with grain size at constant temperature.
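The Scherrer estimate used above to obtain the 12-31 nm grain sizes has the textbook form D = K*lambda / (beta * cos(theta)). A sketch follows; the Cu K-alpha wavelength and shape factor K = 0.9 are conventional assumptions, not instrument parameters stated in the abstract.

```python
import numpy as np

def scherrer_grain_size(fwhm_deg, two_theta_deg,
                        wavelength_nm=0.15406, shape_factor=0.9):
    """Scherrer estimate of crystallite size from XRD peak broadening:
    D = K * lambda / (beta * cos(theta)), with beta the peak FWHM in
    radians and theta the Bragg angle.  The Cu K-alpha wavelength and
    K = 0.9 are conventional assumed defaults."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return shape_factor * wavelength_nm / (beta * np.cos(theta))
```

For example, a 0.5 degree FWHM on a reflection near 2-theta = 26.6 degrees (the SnO2 (110) region) gives roughly 16 nm, inside the reported 12-31 nm range.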
Efficient Design in a DC to DC Converter Unit
NASA Technical Reports Server (NTRS)
Bruemmer, Joel E.; Williams, Fitch R.; Schmitz, Gregory V.
2002-01-01
Space flight hardware requires high power conversion efficiencies due to limited power availability and the weight penalties of cooling systems. The International Space Station (ISS) Electric Power System (EPS) DC-DC Converter Unit (DDCU) power converter is no exception. This paper explores the design methods and tradeoffs that were utilized to accomplish high efficiency in the DDCU. An isolating DC to DC converter was selected for the ISS power system because of requirements for separate primary and secondary grounds and for a well-regulated secondary output voltage derived from a widely varying input voltage. A flyback-current-fed push-pull topology, or improved Weinberg circuit, was chosen for this converter because of its potential for high efficiency and reliability. To enhance efficiency, a non-dissipative snubber circuit for the very-low-Rds-on Field Effect Transistors (FETs) was utilized, redistributing the energy that could be wasted during the switching cycle of the power FETs. A unique, low-impedance connection system was utilized to improve contact resistance over a bolted connection. For improved consistency in performance and to lower internal wiring inductance and losses, a planar bus system is employed. All of these choices contributed to the design of a 6.25 kW regulated dc to dc converter that is 95 percent efficient. The methodology used in the design of this DC to DC Converter Unit may be directly applicable to other systems that require a conservative approach to efficient power conversion and distribution.
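The quoted figures fix the loss budget directly: at 6.25 kW output and 95 percent efficiency, the unit draws about 6.58 kW and must reject roughly 330 W as heat. A back-of-envelope sketch:

```python
def input_power_w(p_out_w, efficiency):
    """Input power implied by a given output power and efficiency."""
    return p_out_w / efficiency

def dissipated_power_w(p_out_w, efficiency):
    """Heat the converter must reject: P_in - P_out."""
    return input_power_w(p_out_w, efficiency) - p_out_w
```

This dissipation figure is what drives the weight penalty of the cooling system mentioned above, and hence the emphasis on efficiency.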
Early Oscillation Detection Technique for Hybrid DC/DC Converters
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2011-01-01
normal operation. This technique eliminates the probing problem of the gain/phase margin method by connecting the power input to a spectrum analyzer. It is therefore able to evaluate stability for all kinds of hybrid DC/DC converters, with or without remote-sense pins, and is suitable for real-time and in-circuit testing. This frequency-domain technique is more sensitive in detecting oscillation at an early stage than the time-domain method using an oscilloscope.
High-Efficiency dc/dc Converter
NASA Technical Reports Server (NTRS)
Sturman, J.
1982-01-01
A high-efficiency dc/dc converter has been developed that provides the commonly used voltages of plus or minus 12 volts from an unregulated dc source of 14 to 40 volts. Unique features of the converter are its high efficiency at low power levels and its ability to provide an output either larger or smaller than the input voltage.
Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M A
2014-01-01
This paper presents an evaluation of an optimal DC bus voltage regulation strategy for a grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates BES power charge/discharge to compensate for DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of the DC bus voltage of the PV/BES system can be enhanced compared to conventional regulation based solely on the voltage-sourced converter (VSC). For the grid-side VSC (G-VSC), two control methods, namely voltage-mode and current-mode control, are applied. For control parameter optimization, the simplex optimization technique is applied to the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters is obtained for each of the power converters for comparison purposes. PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods.
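Stripped of the PSCAD/EMTDC detail, the regulation idea above reduces to a feedback loop commanding BES charge/discharge current to hold the bus capacitor at its reference voltage. The sketch below uses a discrete-time PI controller; all constants (capacitance, gains, disturbance) are illustrative assumptions, not the paper's values:

```python
# Minimal sketch: PI control of a DC bus capacitor via BES current.
# C * dV/dt = i_bes - i_disturb, where i_disturb models a PV/load imbalance.
C_BUS = 0.01      # bus capacitance, F (illustrative)
V_REF = 400.0     # bus voltage reference, V
KP, KI = 0.5, 50.0
DT = 1e-4         # simulation step, s

def simulate(steps=20000, i_disturb=10.0):
    v, integ = V_REF, 0.0
    for _ in range(steps):
        err = V_REF - v
        integ += err * DT
        i_bes = KP * err + KI * integ        # BES charge/discharge command
        v += (i_bes - i_disturb) * DT / C_BUS
    return v
```

The integral term ends up supplying exactly the disturbance current, so the bus voltage settles back to its reference.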
DC-Compensated Current Transformer.
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-20
Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component.
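The compensation principle above can be caricatured as a slow integral feedback loop that accumulates the detected DC level and subtracts it, keeping the core out of saturation. This is an illustrative sketch of the idea only, not the authors' fluxgate implementation:

```python
# Integral feedback nulling a DC offset in a sampled flux-proportional signal.
def compensate_dc(samples, gain=0.2):
    """samples: measured signal containing a DC offset to be nulled.
    Returns the accumulated compensation value and the residual signal."""
    comp = 0.0
    out = []
    for s in samples:
        residual = s - comp        # DC component still seen by the core
        comp += gain * residual    # integral feedback tracks the DC level
        out.append(residual)
    return comp, out
```

A side benefit, as in the abstract, is that the converged compensation value is itself a measurement of the DC current component.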
Czosnek, Cezary; Bućko, Mirosław M.; Janik, Jerzy F.; Olejniczak, Zbigniew; Bystrzejewski, Michał; Łabędź, Olga; Huczko, Andrzej
2015-03-15
Highlights: • Make-up of the SiC-based nanopowders is a function of the C:Si:O ratio in the precursor. • Two-stage aerosol-assisted synthesis offers conditions close to equilibrium. • DC thermal plasma synthesis yields kinetically controlled SiC products. - Abstract: Nanosized SiC-based powders were prepared from selected liquid-phase organosilicon precursors by aerosol-assisted synthesis, DC thermal plasma synthesis, and a combination of the two methods. The two-stage aerosol-assisted synthesis method ultimately provides conditions close to thermodynamic equilibrium. The single-stage thermal plasma method is characterized by short particle residence times in the reaction zone, which can lead to kinetically controlled products. The by-products and final nanopowders were characterized by powder XRD, FT-IR infrared spectroscopy, SEM scanning electron microscopy, and ²⁹Si MAS NMR spectroscopy. BET specific surface areas of the products were determined by standard physical adsorption of nitrogen at 77 K. The major component from all synthesis routes was found to be cubic silicon carbide β-SiC with average crystallite sizes ranging from a few to tens of nanometers. In some cases, it was accompanied by free carbon, elemental silicon, or silica nanoparticles. The final mesoporous β-SiC-based nanopowders have potential as affordable catalyst supports.
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
ERIC Educational Resources Information Center
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
Diagnostic of water trees using DC and AC voltages
Romero, P.; Puerta, J.
1996-12-31
Electrical tools for non-destructive water tree diagnostics in XLPE medium-voltage cables, by means of DC and AC voltages, are presented. The DC method is based on the determination of a non-linear dependence of the polarization current on the applied DC step voltage, in contrast to the linear dependence found in cables without water tree damage. In both cases the current follows the Curie-von Schweidler empirical law, I(t) = I₀t⁻ⁿ. The AC method is based on the measurement of the dispersion relation of both the loss factor and the capacitance in the low and very low frequency ranges by means of Fourier transform techniques. The measuring instrumentation devised is presented.
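The Curie-von Schweidler law named above is linear in log-log space, so its parameters can be recovered from measured polarization currents by ordinary least squares. A minimal sketch (not the authors' instrumentation code):

```python
import math

def fit_curie_von_schweidler(times, currents):
    """Fit I(t) = I0 * t**(-n) by linear least squares on
    log I = log I0 - n * log t. Returns (I0, n)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(i) for i in currents]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    intercept = (sy - slope * sx) / m
    return math.exp(intercept), -slope
```

For water tree diagnosis per the abstract, one would additionally check whether the fitted parameters change non-linearly with the applied DC step voltage.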
Petigara, Bhakti R; Scher, Alan L
2007-01-01
A reversed-phase liquid chromatographic method was developed to determine parts-per-million and higher levels of Sudan I, 1-(phenylazo)-2-naphthalenol, in the disulfo monoazo color additive FD&C Yellow No. 6 and in a related monosulfo monoazo color additive, D&C Orange No. 4. Sudan I, the corresponding unsulfonated monoazo dye, is a known impurity in these color additives. The color additives are dissolved in water and methanol, and the filtered solutions are directly chromatographed, without extraction or concentration, using gradient elution at 0.25 mL/min. Calibrations from peak areas at 485 nm were linear. At a 99% confidence level, the limits of determination were 0.008 microg Sudan I/mL (0.4 ppm) in FD&C Yellow No. 6 and 0.011 microg Sudan I/mL (0.00011%) in D&C Orange No. 4. The confidence intervals were 0.202 +/- 0.002 microg Sudan I/mL (10.1 +/- 0.1 ppm) near the specification level for Sudan I in FD&C Yellow No. 6 and 20.0 +/- 0.2 microg Sudan I/mL (0.200 +/- 0.002%) near the highest concentration of Sudan I found in D&C Orange No. 4. A survey was conducted to determine Sudan I in 28 samples of FD&C Yellow No. 6 from 17 international manufacturers over 3 years, and in a pharmacology-tested sample. These samples were found to contain undetected levels (16 samples), 0.5-9.7 ppm Sudan I (0.01-0.194 microg Sudan I/mL in analyzed solutions; 11 samples, including the pharmacology sample), and > or =10 ppm Sudan I (> or =0.2 microg Sudan I/mL; 2 samples). Analyses of 21 samples of D&C Orange No. 4 from 8 international manufacturers over 4 years found Sudan I at undetected levels (8 samples), 0.0005 to <0.005% Sudan I (0.05 to <0.5 microg Sudan I/mL in analyzed solutions; 3 samples, including a pharmacology batch), 0.005 to <0.05% Sudan I (0.5 to <5 microg Sudan I/mL; 9 samples), and 0.18% Sudan I (18 microg Sudan I/mL; 1 sample).
An Empirical Study of Applying Associative Method in College English Vocabulary Learning
ERIC Educational Resources Information Center
Zhang, Min
2014-01-01
Vocabulary is the basis of any language learning. To many Chinese non-English majors it is difficult to memorize English words. This paper applied associative method in presenting new words to them. It is found that associative method did receive a better result both in short-term and long-term retention of English words. Compared with the…
The bi-potential method applied to the modeling of dynamic problems with friction
NASA Astrophysics Data System (ADS)
Feng, Z.-Q.; Joli, P.; Cros, J.-M.; Magnain, B.
2005-10-01
The bi-potential method has been successfully applied to the modeling of frictional contact problems in static cases. This paper presents an extension of this method for dynamic analysis of impact problems with deformable bodies. A first order algorithm is applied to the numerical integration of the time-discretized equation of motion. Using the Object-Oriented Programming (OOP) techniques in C++ and OpenGL graphical support, a finite element code including pre/postprocessor FER/Impact is developed. The numerical results show that, at the present stage of development, this approach is robust and efficient in terms of numerical stability and precision compared with the penalty method.
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
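One of the MPPT method families such simulations typically benchmark is perturb-and-observe (P&O). The sketch below illustrates P&O against a parabolic stand-in for a real panel's power-voltage curve; the curve and all constants are toy assumptions, not the paper's datasheet-based model:

```python
# Toy panel curve: peak of 180 W at 30 V (illustrative stand-in only).
def panel_power(v):
    return max(0.0, -0.5 * (v - 30.0) ** 2 + 180.0)

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    """Classic P&O: keep perturbing the operating voltage in the direction
    that increased power; reverse when power falls."""
    v, p_prev, direction = v0, panel_power(v0), 1.0
    for _ in range(iters):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

As the abstract's comparisons would reveal, P&O settles into a small oscillation around the maximum power point rather than resting exactly on it.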
Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods
ERIC Educational Resources Information Center
Maase, Eric L.; High, Karen A.
2008-01-01
"Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…
NASA Astrophysics Data System (ADS)
Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto
In this research, we proposed and evaluated a management method for college mechatronics education, applying project management techniques. We applied our management method to the seminar "Microcomputer Seminar" for 3rd-grade students in the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in managing the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.
The application of standardized control and interface circuits to three dc to dc power converters.
NASA Technical Reports Server (NTRS)
Yu, Y.; Biess, J. J.; Schoenfeld, A. D.; Lalli, V. R.
1973-01-01
Standardized control and interface circuits were applied to the three most commonly used dc to dc converters: the buck-boost converter, the series-switching buck regulator, and the pulse-modulated parallel inverter. The two-loop ASDTIC regulation control concept was implemented by using a common analog control signal processor and a novel digital control signal processor. This resulted in control circuit standardization and superior static and dynamic performance of the three dc-to-dc converters. Power component stress control, through active peak current limiting and recovery of switching losses, was applied to enhance reliability and converter efficiency.
NASA Technical Reports Server (NTRS)
Atkins, H. L.; Shu, Chi-Wang
2001-01-01
The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator decreases dramatically as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with the order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time steps.
An uncertainty analysis of the PVT gauging method applied to sub-critical cryogenic propellant tanks
NASA Astrophysics Data System (ADS)
Van Dresar, Neil T.
2004-06-01
The PVT (pressure, volume, temperature) method of liquid quantity gauging in low-gravity is based on gas law calculations assuming conservation of pressurant gas within the propellant tank and the pressurant supply bottle. There is interest in applying this method to cryogenic propellant tanks since the method requires minimal additional hardware or instrumentation. To use PVT with cryogenic fluids, a non-condensable pressurant gas (helium) is required. With cryogens, there will be a significant amount of propellant vapor mixed with the pressurant gas in the tank ullage. This condition, along with the high sensitivity of propellant vapor pressure to temperature, makes the PVT method susceptible to substantially greater measurement uncertainty than is the case with less volatile propellants. A conventional uncertainty analysis is applied to example cases of liquid hydrogen and liquid oxygen tanks. It appears that the PVT method may be feasible for liquid oxygen. Acceptable accuracy will be more difficult to obtain with liquid hydrogen.
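The gas-law calculation behind PVT gauging can be sketched directly: helium is conserved, so the moles expelled from the supply bottle must occupy the tank ullage, whose helium partial pressure is the tank pressure minus the propellant vapor pressure. The code below assumes ideal-gas behavior and a single uniform ullage temperature, which are simplifications; variable names are ours:

```python
R = 8.314  # universal gas constant, J/(mol K)

def liquid_volume(v_tank, v_bottle, p_bottle_0, p_bottle, t_bottle,
                  p_tank, p_vapor, t_ullage):
    """Liquid volume from PVT measurements (SI units throughout)."""
    # moles of helium expelled from the supply bottle
    n_he = (p_bottle_0 - p_bottle) * v_bottle / (R * t_bottle)
    # helium partial pressure in the ullage (total minus propellant vapor)
    p_he = p_tank - p_vapor
    v_ullage = n_he * R * t_ullage / p_he
    return v_tank - v_ullage
```

The sensitivity issue the abstract raises is visible here: for a volatile cryogen, p_vapor is large and steep in temperature, so small temperature errors swing p_he, and hence the inferred liquid volume, substantially.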
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
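The Monte Carlo side of the comparison above is straightforward to sketch: simulate the launch sequence, replacing a booster whenever it is lost or hits the consecutive-use limit. The attrition rate and seed below are illustrative, not the paper's figures:

```python
import random

def boosters_needed(n_launches=440, p_loss=0.02, max_uses=20, seed=1):
    """One Monte Carlo trial of booster expenditure over a launch program."""
    rng = random.Random(seed)
    boosters, uses = 1, 0
    for _ in range(n_launches):
        uses += 1
        if rng.random() < p_loss or uses >= max_uses:
            boosters += 1          # booster lost or retired; bring a new one
            uses = 0
    return boosters
```

Averaging many trials with different seeds estimates the expected fleet size, which the state-probability method obtains analytically and, per the abstract, more cheaply.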
Small, Joshua; Fruehling, Adam; Garg, Anurag; Liu, Xiaoguang; Peroulis, Dimitrios
2014-01-01
Mechanically underdamped electrostatic fringing-field MEMS actuators are well known for their fast switching operation in response to a unit step input bias voltage. However, the tradeoff for the improved switching performance is a relatively long settling time to reach each gap height in response to various applied voltages. Transient applied bias waveforms are employed to facilitate reduced switching times for electrostatic fringing-field MEMS actuators with high mechanical quality factors. Removing the underlying substrate of the fringing-field actuator creates the low mechanical damping environment necessary to effectively test the concept. The removal of the underlying substrate also substantially improves the reliability of the device with regard to failure due to stiction. Although DC-dynamic biasing is useful in improving settling time, the required slew rates for typical MEMS devices may place aggressive requirements on the charge pumps for fully integrated on-chip designs. Additionally, there may be challenges integrating the substrate removal step into the back-end-of-line commercial CMOS processing steps. Experimental validation of fabricated actuators demonstrates a 50x improvement in switching time compared to conventional step biasing results. The experimental results are in good agreement with theoretical calculations. PMID:25145811
Simultaneous distribution of AC and DC power
Polese, Luigi Gentile
2015-09-15
A system and method for the transport and distribution of both AC (alternating current) power and DC (direct current) power over wiring infrastructure normally used for distributing AC power only, for example, residential and/or commercial buildings' electrical wires is disclosed and taught. The system and method permits the combining of AC and DC power sources and the simultaneous distribution of the resulting power over the same wiring. At the utilization site a complementary device permits the separation of the DC power from the AC power and their reconstruction, for use in conventional AC-only and DC-only devices.
Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
Rajabi, A; Dabiri, A
2012-01-01
Background Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used to calculate the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services calculated by the tariff method does not agree with that from the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
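The allocation step described above, spreading administrative activity-center costs over operational centers in proportion to a cost driver, then dividing by service volume, can be sketched in a few lines. All figures and center names below are hypothetical, for illustration only:

```python
# Driver-proportional allocation of an administrative cost pool.
def allocate(admin_cost, drivers):
    total = sum(drivers.values())
    return {c: admin_cost * d / total for c, d in drivers.items()}

direct = {"radiology": 60000.0, "surgery": 140000.0}   # direct costs
admin_share = allocate(50000.0, {"radiology": 2000, "surgery": 3000})
services = {"radiology": 5000, "surgery": 700}          # service volumes

# ABC unit cost: (direct + allocated indirect) per service delivered
unit_cost = {c: (direct[c] + admin_share[c]) / services[c] for c in direct}
```

A fixed tariff, by contrast, would ignore both the driver proportions and the volumes, which is the divergence the abstract reports.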
Michałowska-Kaczmarczyk, Anna Maria; Asuero, Agustin G; Martin, Julia; Alonso, Esteban; Jurado, Jose Marcos; Michałowski, Tadeusz
2014-12-01
Rational functions of the Padé type are used for the purposes of the calibration curve method (CCM) and the standard addition method (SAM). In this paper, the related functions were applied to results obtained from the analyses of (a) nickel using the FAAS method, (b) potassium using the FAES method, and (c) salicylic acid using the HPLC-MS/MS method. A uniform, integral criterion of nonlinearity of the curves obtained according to CCM and SAM is suggested. This uniformity is based on normalization of the approximating functions within the frame of a unit area.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1982-01-01
A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.
Applying Item Response Theory Methods to Design a Learning Progression-Based Science Assessment
ERIC Educational Resources Information Center
Chen, Jing
2012-01-01
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…
NASA Astrophysics Data System (ADS)
Ling, J. F.; Docobo, J. A.; Abad, A. J.
1995-08-01
This article discusses the stellar three-body problem using an approximation in which the outer orbit is assumed to be Keplerian. The equations of motion are integrated by the stroboscopic method, i.e., basically at successive periods of a rapidly changing variable (the eccentric anomaly of the inner orbit). The theory is applied to the triple-star system ξ Ursae Majoris.
A Method of Measuring the Costs and Benefits of Applied Research.
ERIC Educational Resources Information Center
Sprague, John W.
The Bureau of Mines studied the application of the concepts and methods of cost-benefit analysis to the problem of ranking alternative applied research projects. Procedures for measuring the different classes of project costs and benefits, both private and public, are outlined, and cost-benefit calculations are presented, based on the criteria of…
Campbell, Jeremy B; Newson, Steve
2013-02-26
Embodiments of DC source assemblies of power inverter systems of the type suitable for deployment in a vehicle having an electrically grounded chassis are provided. An embodiment of a DC source assembly comprises a housing, a DC source disposed within the housing, a first terminal, and a second terminal. The DC source also comprises a first capacitor having a first electrode electrically coupled to the housing, and a second electrode electrically coupled to the first terminal. The DC source assembly further comprises a second capacitor having a first electrode electrically coupled to the housing, and a second electrode electrically coupled to the second terminal.
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
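The least-squares idea behind the abstract can be illustrated on the simplest case: estimating the two parameters of a first-order discrete-time model y[k+1] = a·y[k] + b·u[k] from input/output records, using the closed-form normal equations. This is a sketch of the technique, not the paper's continuous-time multivariable rig:

```python
def fit_first_order(u, y):
    """Least-squares estimate of (a, b) in y[k+1] = a*y[k] + b*u[k]."""
    s_yy = sum(yi * yi for yi in y[:-1])
    s_uu = sum(ui * ui for ui in u[:len(y) - 1])
    s_yu = sum(yi * ui for yi, ui in zip(y[:-1], u))
    s_y1y = sum(y1 * yi for y1, yi in zip(y[1:], y[:-1]))
    s_y1u = sum(y1 * ui for y1, ui in zip(y[1:], u))
    # Cramer's rule on the 2x2 normal equations
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_y1y * s_uu - s_y1u * s_yu) / det
    b = (s_yy * s_y1u - s_yu * s_y1y) / det
    return a, b
```

With noise-free data the estimates are exact; with measurement noise this becomes the recursive/batch least-squares tradeoff the paper exercises in real time.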
An applied study using systems engineering methods to prioritize green systems options
Lee, Sonya M; Macdonald, John M
2009-01-01
For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective on how to select green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
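The AHP step named above can be sketched concretely: pairwise judgments between options go into a reciprocal matrix, and priority weights come out, here via the common row geometric-mean approximation. The options and judgment values below are hypothetical:

```python
import math

def ahp_weights(matrix):
    """Priority weights from an AHP pairwise comparison matrix,
    using the row geometric-mean approximation."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgments for solar vs. insulation vs. low-flow fixtures
# (1 = equal importance, 3 = moderate, 5 = strong preference).
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)   # sums to 1; ranks the three options
```

A full AHP exercise would also check the consistency ratio of the judgment matrix before trusting the ranking.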
Method of applying a cerium diffusion coating to a metallic alloy
Jablonski, Paul D.; Alman, David E.
2009-06-30
A method of applying a cerium diffusion coating to a preferred nickel base alloy substrate has been discovered. A cerium oxide paste containing a halide activator is applied to the polished substrate and then dried. The workpiece is heated in a non-oxidizing atmosphere to diffuse cerium into the substrate. After cooling, any remaining cerium oxide is removed. The resulting cerium diffusion coating on the nickel base substrate demonstrates improved resistance to oxidation. Cerium coated alloys are particularly useful as components in a solid oxide fuel cell (SOFC).
NASA Astrophysics Data System (ADS)
Watanabe, T.; Seto, K.; Toyoda, H.; Takano, T.
2016-09-01
Connected Control Method (CCM) is a well-known mechanism in the field of civil structural vibration control that utilizes the mutual reaction forces between plural buildings connected by dampers as damping forces. However, the fact that CCM requires at least two buildings to obtain a reaction force has prevented its further development. In this paper, a novel idea to apply CCM to a single building by splitting the building into four substructures is presented. An experimental model structure split into four is built, and CCM is applied using four magnetic dampers. Experimental analysis is carried out, and the basic performance and effectiveness of the presented idea are confirmed.
Smart, JC
2016-01-01
Background The National HIV/AIDS Strategy calls for active surveillance programs for human immunodeficiency virus (HIV) to more accurately measure access to and retention in care across the HIV care continuum for persons living with HIV within their jurisdictions and to identify persons who may need public health services. However, traditional public health surveillance methods face substantial technological and privacy-related barriers to data sharing. Objective This study developed a novel data-sharing approach to improve the timeliness and quality of HIV surveillance data in three jurisdictions where persons may often travel across the borders of the District of Columbia, Maryland, and Virginia. Methods A deterministic algorithm of approximately 1000 lines was developed, including a person-matching system with Enhanced HIV/AIDS Reporting System (eHARS) variables. Person matching was defined in categories (from strongest to weakest): exact, very high, high, medium high, medium, medium low, low, and very low. The algorithm was verified using conventional component testing methods, manual code inspection, and comprehensive output file examination. Results were validated by jurisdictions using internal review processes. Results Of 161,343 uploaded eHARS records from District of Columbia (N=49,326), Maryland (N=66,200), and Virginia (N=45,817), a total of 21,472 persons were matched across jurisdictions over various strengths in a matching process totaling 21 minutes and 58 seconds in the privacy device, leaving 139,871 uniquely identified with only one jurisdiction. No records matched as medium low or low. Over 80% of the matches were identified as either exact or very high matches. Three separate validation methods were conducted for this study, and they all found ≥90% accuracy between records matched by this novel method and traditional matching methods. Conclusions This study illustrated a novel data-sharing approach that may facilitate timelier and better
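The deterministic algorithm itself is not reproduced in the abstract; the flavor of tiered person matching can be sketched as follows, with hypothetical field names and tier rules standing in for the actual ~1000-line eHARS algorithm:

```python
def match_strength(a, b):
    """Classify a candidate record pair into a deterministic match
    tier, strongest first. Field names and rules are hypothetical."""
    if all(a[k] == b[k] for k in ("last", "first", "dob", "ssn4")):
        return "exact"
    if a["ssn4"] == b["ssn4"] and a["dob"] == b["dob"]:
        return "very high"
    if a["last"] == b["last"] and a["dob"] == b["dob"]:
        return "high"
    return "no match"

# Same person reported in two jurisdictions with a name variant.
rec_dc = {"last": "Doe", "first": "Jan", "dob": "1980-01-02", "ssn4": "1234"}
rec_md = {"last": "Doe", "first": "J.",  "dob": "1980-01-02", "ssn4": "1234"}
```

A real implementation would add normalization (soundex, date transpositions) before comparison and run inside the privacy device described above.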
Marine organism repellent covering for protection of underwater objects and method of applying same
Fischer, K.J.
1993-07-13
A method is described of protecting the surface of underwater objects from fouling by growth of marine organisms thereon comprising the steps of: (A) applying a layer of waterproof adhesive to the surface to be protected; (B) applying to the waterproof adhesive layer, a deposit of cayenne pepper material; (C) applying a permeable layer of copper containing material to the adhesive layer in such a configuration as to leave certain areas of the outer surface of the adhesive layer exposed, through open portions of the permeable layer, to the ambient environment of the surface to be protected when such surface is submerged in water; (D) the permeable layer having the property of being a repellent to marine organisms.
Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma
Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.
2015-01-15
An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% compared with the exact solution, whereas the computation time is about 10 orders of magnitude less. This method is complemented by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error lower than 10% compared with the reference method and a gain in computation time of about 20 orders of magnitude.
Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem
Alchalabi, R.M.; Turinsky, P.J.
1996-12-31
The work presented in this paper is concerned with the development of an efficient MG algorithm for the solution of an elliptic, generalized eigenvalue problem. The method is specifically applied to the multigroup neutron diffusion equation, which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the Outer-Inner Method. The inner iterations are completed using Multi-color Line SOR, and the outer iterations are accelerated using the Chebyshev Semi-iterative Method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production-type benchmark problems.
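The outer (Power Method) iteration named above solves the generalized eigenvalue problem M*phi = (1/k)*F*phi for the dominant eigenpair. A generic sketch with small dense matrices standing in for the NEM-discretized operators (illustrative values, not a NESTLE benchmark; a dense solve stands in for the inner iterations):

```python
import numpy as np

def power_method(M, F, tol=1e-12, max_outer=1000):
    """Outer (power) iteration for the k-eigenvalue problem
    M*phi = (1/k)*F*phi, i.e. the dominant eigenpair of inv(M)*F."""
    phi = np.ones(M.shape[0])
    phi /= np.linalg.norm(phi)
    k = 0.0
    for _ in range(max_outer):
        psi = np.linalg.solve(M, F @ phi)   # "inner" solve: M*psi = F*phi
        k_new = np.linalg.norm(psi)         # eigenvalue (k_eff) estimate
        phi = psi / k_new
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k_new, phi

# Toy system: M plays the role of the loss operator, F the fission source.
M = np.array([[2.0, -0.5], [-0.5, 1.5]])
F = np.array([[0.9, 0.4], [0.3, 0.6]])
k_eff, phi = power_method(M, F)
```

In a production code the inner solve would itself be iterative (e.g. line SOR), and the outer iteration Chebyshev-accelerated, as the abstract describes.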
NASA Astrophysics Data System (ADS)
Langer, Stefan
2014-11-01
For unstructured finite volume methods an agglomeration multigrid with an implicit multistage Runge-Kutta method as a smoother is developed for solving the compressible Reynolds averaged Navier-Stokes (RANS) equations. The implicit Runge-Kutta method is interpreted as a preconditioned explicit Runge-Kutta method. The construction of the preconditioner is based on an approximate derivative. The linear systems are solved approximately with a symmetric Gauss-Seidel method. To significantly improve this solution method grid anisotropy is treated within the Gauss-Seidel iteration in such a way that the strong couplings in the linear system are resolved by tridiagonal systems constructed along these directions of strong coupling. The agglomeration strategy is adapted to this procedure by taking into account exactly these anisotropies in such a way that a directional coarsening is applied along these directions of strong coupling. Turbulence effects are included by a Spalart-Allmaras model, and the additional transport-type equation is approximately solved in a loosely coupled manner with the same method. For two-dimensional and three-dimensional numerical examples and a variety of differently generated meshes we show the wide range of applicability of the solution method. Finally, we exploit the GMRES method to determine approximate spectral information of the linearized RANS equations. This approximate spectral information is used to discuss and compare characteristics of multistage Runge-Kutta methods.
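The line-implicit treatment described above reduces to solving tridiagonal systems assembled along directions of strong coupling. A generic Thomas-algorithm sketch for one such line system (not the authors' actual solver):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm).
    a[0] and c[-1] are unused."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A 1D Laplacian-like line system with the [-1, 2, -1] stencil.
x = thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1])
```

Each line solve costs O(n), which is why resolving the strong couplings this way is cheap inside a Gauss-Seidel sweep.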
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139
The Role of Applied Epidemiology Methods in the Disaster Management Cycle
Malilay, Josephine; Heumann, Michael; Perrotta, Dennis; Wolkin, Amy F.; Schnall, Amy H.; Podgornik, Michelle N.; Cruz, Miguel A.; Horney, Jennifer A.; Zane, David; Roisman, Rachel; Greenspan, Joel R.; Thoroughman, Doug; Anderson, Henry A.; Wells, Eden V.; Simms, Erin F.
2015-01-01
Disaster epidemiology (i.e., applied epidemiology in disaster settings) presents a source of reliable and actionable information for decision-makers and stakeholders in the disaster management cycle. However, epidemiological methods have yet to be routinely integrated into disaster response and fully communicated to response leaders. We present a framework consisting of rapid needs assessments, health surveillance, tracking and registries, and epidemiological investigations, including risk factor and health outcome studies and evaluation of interventions, which can be practiced throughout the cycle. Applying each method can result in actionable information for planners and decision-makers responsible for preparedness, response, and recovery. Disaster epidemiology, once integrated into the disaster management cycle, can provide the evidence base to inform and enhance response capability within the public health infrastructure. PMID:25211748
REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.
UMEDA, T.; MATSUFURU, H.
2005-07-25
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should carry out to ensure the reliability of the results, and also apply them to mock and lattice QCD data.
Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.
Lestelle, Lawrence C.; Mobrand, Lars E.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.
Experiential learning in gerontology: methods for applying concepts and theories to practice.
Wilber, K H; Shoecraft, C
1989-01-01
Theories of developmental aging are a crucial component of professional training in gerontology and methods for applying these theories are equally important. Current models are presented which include a variety of opportunities to test theories and develop skills in the gerontology classroom. In addition, the methodology of field studies and consultation are discussed. Various practica models and the instructor's role in organizing and directing them are also presented.
Critical conditions of saddle-node bifurcations in switching DC-DC converters
NASA Astrophysics Data System (ADS)
Fang, Chung-Chieh
2013-08-01
Although the existence of multiple periodic orbits in some DC-DC converters has been known for decades, linking these multiple periodic orbits with the saddle-node bifurcation (SNB) is rarely reported. The SNB occurs in popular DC-DC converters, but it is generally reported as a strange instability. Design-oriented critical conditions for instability have recently attracted great interest. In this article, average, sampled-data, and harmonic balance analyses are applied, and they lead to equivalent results. Many new critical conditions are derived. They facilitate future research on the instability associated with multiple periodic orbits, sudden voltage jumps, or disappearances of periodic orbits observed in DC-DC converters. The effects of various converter parameters on the instability can be readily seen from the derived critical conditions. New Nyquist-like plots are also proposed to predict or prevent the occurrence of the instability.
Suter, Glenn W; Cormier, Susan M
2013-02-01
Causal relationships derived from field data are potentially confounded by variables that are correlated with both the cause and its effect. The present study describes a method for assessing the potential for confounding and applies it to the relationship between ionic strength and impairment of benthic invertebrate assemblages in central Appalachian streams. The method weighs all available evidence for and against confounding by each potential confounder. It identifies 10 types of evidence for confounding, presents a qualitative scoring system, and provides rules for applying the scores. Twelve potential confounders were evaluated: habitat, organic enrichment, nutrients, deposited sediments, pH, selenium, temperature, lack of headwaters, catchment area, settling ponds, dissolved oxygen, and metals. One potential confounder, low pH, was found to be biologically significant and was eliminated by removing sites with pH < 6. The other potential confounders were eliminated based on the weight of evidence. The method was found to be useful and defensible, and could be applied to other environmental assessments that use field data to develop causal relationships, including contaminated site remediation or management of natural resources.
Investigation on reconstruction methods applied to 3D terahertz computed tomography.
Recur, B; Younus, A; Salort, S; Mounaix, P; Chassagne, B; Desbarats, P; Caumes, J-P; Abraham, E
2011-03-14
3D terahertz computed tomography has been performed using a monochromatic millimeter wave imaging system coupled with an infrared temperature sensor. Three different reconstruction methods (a standard back-projection algorithm and two iterative analyses) have been compared in order to reconstruct large 3D objects. The quality (intensity, contrast, and geometric preservation) of the reconstructed cross-sectional images is discussed together with the optimization of the number of projections. A final demonstration on real-life 3D objects illustrates the potential of the reconstruction methods for applied terahertz tomography.
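Iterative reconstructions of the kind compared here solve the projection system A*x = b by repeated correction. A toy SIRT-style sketch on a 2x2 image with four ray sums (illustrative only, not the authors' algorithms or millimeter-wave data):

```python
import numpy as np

def sirt(A, b, iters=200):
    """Simultaneous iterative reconstruction technique:
    x += C * A^T * R * (b - A x), with inverse row-sum (R)
    and column-sum (C) normalizations."""
    R = 1.0 / A.sum(axis=1)
    C = 1.0 / A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

# 2x2 "image" [x0 x1; x2 x3]: two row sums and two column sums.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
truth = np.array([1.0, 2.0, 3.0, 4.0])
x = sirt(A, A @ truth)   # reconstruct from the simulated projections
```

With only two view angles the system is underdetermined, so the iterate matches the projections exactly without necessarily matching the true image, which is why the number of projections is worth optimizing, as the abstract notes.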
Radiation effects on DC-DC Converters
NASA Technical Reports Server (NTRS)
Zhang, Dexin; Attia, John O.; Kankam, Mark D. (Technical Monitor)
2000-01-01
DC-DC switching converters are circuits that convert a DC voltage of one value to another by switching action. They are increasingly being used in space systems. Most popular DC-DC switching converters utilize power MOSFETs; however, power MOSFETs, when subjected to radiation, are susceptible to degradation of device characteristics or catastrophic failure. This work focuses on the effects of total ionizing dose on converter performance. Four fundamental switching converters (buck, buck-boost, Cuk, and flyback) were built using Harris IRF250 power MOSFETs. These converters were designed to convert an input of 60 volts to an output of about 12 volts with a switching frequency of 100 kHz. The four converters were irradiated with a Co-60 gamma source at a dose rate of 217 rad/min, and their performance was examined during exposure to the radiation. The experimental results show that the output voltage of the converters increases as total dose increases. However, the increases in output voltage differed among the four converters, with the buck and Cuk converters the highest and the flyback converter the lowest. We observed significant increases in output voltage for the Cuk converter at a total dose of 24 krad(Si).
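For the buck converter in this setup, the ideal lossless steady-state relation V_out = D * V_in fixes the nominal duty cycle, so the reported 60 V to 12 V operating point implies D = 0.2. A quick check:

```python
def buck_duty_cycle(v_in, v_out):
    """Ideal continuous-conduction-mode buck converter: V_out = D * V_in."""
    return v_out / v_in

D = buck_duty_cycle(60.0, 12.0)   # nominal 60 V -> 12 V conversion
t_on = D / 100e3                  # on-time per 100 kHz switching period (s)
```

The radiation-induced output-voltage drift reported above corresponds to this operating point shifting as the MOSFET characteristics degrade.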
Applying simulation model to uniform field space charge distribution measurements by the PEA method
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method, because surface charge has a bandwidth extending from zero to infinity, whereas the bandwidth of the charge distribution must be much narrower than that of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is proposed. This paper presents the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limit, the charge distribution generated by the simulation model is compared only to that obtained by the direct method for a set of simulated signals.
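The deconvolution that the direct method eliminates recovers the charge profile from the measured signal s = h * rho, given the system response h. A minimal regularized frequency-domain sketch with synthetic data (illustrative only, not real PEA signals):

```python
import numpy as np

def deconvolve(signal, h, eps=1e-3):
    """Regularized FFT deconvolution:
    rho ~ IFFT( S * conj(H) / (|H|^2 + eps) ),
    where eps suppresses division by near-zero spectral values."""
    S = np.fft.fft(signal)
    H = np.fft.fft(h)
    return np.real(np.fft.ifft(S * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Synthetic test: a two-peak "charge profile" blurred by a Gaussian
# system response (values invented for illustration).
n = 256
grid = np.arange(n)
rho = (np.exp(-(grid - 80.0) ** 2 / 18.0)
       + 0.5 * np.exp(-(grid - 170.0) ** 2 / 18.0))
h = np.roll(np.exp(-(grid - n // 2) ** 2 / 16.0), n // 2)  # zero-phase kernel
signal = np.real(np.fft.ifft(np.fft.fft(rho) * np.fft.fft(h)))
rho_rec = deconvolve(signal, h)
```

The regularization constant plays the role of the bandwidth limitation discussed above: surface-charge-like features beyond the system bandwidth cannot be recovered no matter how eps is chosen.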
A review of studies applying environmental impact assessment methods on fruit production systems.
Cerutti, Alessandro K; Bruun, Sander; Beccaro, Gabriele L; Bounous, Giancarlo
2011-10-01
Although many aspects of environmental accounting methodologies in food production have already been investigated, the application of environmental indicators in the fruit sector is still rare and no consensus can be found on the preferred method. On the contrary, widely diverging approaches have been taken to several aspects of the analyses, such as data collection, handling of scaling issues, and goal and scope definition. This paper reviews studies assessing the sustainability or environmental impacts of fruit production under different conditions and identifies aspects of fruit production that are of environmental importance. Four environmental assessment methods which may be applied to assess fruit production systems are evaluated, namely Life Cycle Assessment, Ecological Footprint Analysis, Emergy Analysis and Energy Balance. In the 22 peer-reviewed journal articles and two conference articles applying one of these methods in the fruit sector that were included in this review, a total of 26 applications of environmental impact assessment methods are described. These applications differ concerning e.g. overall objective, set of environmental issues considered, definition of system boundaries and calculation algorithms. Due to the relatively high variability in study cases and approaches, it was not possible to identify any one method as being better than the others. However, remarks on methodologies and suggestions for standardisation are given and the environmental burdens of fruit systems are highlighted.
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.
Lessons learned applying CASE methods/tools to Ada software development projects
NASA Technical Reports Server (NTRS)
Blumberg, Maurice H.; Randall, Richard L.
1993-01-01
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
Xie, Zhaoheng; Li, Suying; Yang, Kun; Xu, Baixuan; Ren, Qiushi
2016-01-01
In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors using information from related images. We build an automated device that realizes the wobbling correction for small-animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method is applied to various constellations of defective pixels. The corrected images are compared with the results of a conventional interpolation method, and the correction effectiveness is evaluated quantitatively using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides better image quality when correcting defective pixels and could be used for all pixelated detectors for molecular imaging. PMID:27240368
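PSNR, one of the two quality factors used, is a simple function of the mean-squared error between a reference and a test image. A generic reference implementation (not tied to the SPECT data):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# An 8x8 flat image with a single corrected-pixel error of 64 gray levels.
ref = np.full((8, 8), 100.0)
noisy = ref.copy()
noisy[0, 0] += 64.0
```

Higher PSNR means the corrected image is closer to the reference; SSIM complements it by comparing local structure rather than pointwise error.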
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline. PMID:23920682
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
NASA Technical Reports Server (NTRS)
Cuk, Slobodan M. (Inventor); Middlebrook, Robert D. (Inventor)
1980-01-01
A dc-to-dc converter having nonpulsating input and output current uses two inductances, one in series with the input source, the other in series with the output load. An electrical energy transferring device with storage, namely a storage capacitance, is used with suitable switching means between the inductances to effect DC level conversion. For isolation between the source and load, the capacitance may be divided into two capacitors coupled by a transformer, and for reducing ripple, the inductances may be coupled. With proper design of the coupling between the inductances, the current ripple can be reduced to zero at either the input or the output, or the reduction achievable in that way may be divided between the input and output.
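For this topology (the Cuk converter), volt-second balance on the two inductors gives the ideal steady-state conversion ratio |V_out/V_in| = D/(1 - D), so the converter steps down for D < 0.5 and up for D > 0.5. A small sketch of that relation:

```python
def cuk_gain(duty):
    """Ideal Cuk converter voltage-gain magnitude: D / (1 - D).
    (The actual output polarity is inverted in the basic topology.)"""
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return duty / (1.0 - duty)

# Step-down, unity, and step-up operating points.
gains = {d: cuk_gain(d) for d in (0.25, 0.5, 0.75)}
```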
The method of averaging applied to pharmacokinetic/pharmacodynamic indirect response models.
Dunne, Adrian; de Winter, Willem; Hsu, Chyi-Hung; Mariam, Shiferaw; Neyens, Martine; Pinheiro, José; Woot de Trixhe, Xavier
2015-08-01
The computational effort required to fit the pharmacodynamic (PD) part of a pharmacokinetic/pharmacodynamic (PK/PD) model can be considerable if the differential equations describing the model are solved numerically. This burden can be greatly reduced by applying the method of averaging (MAv) in the appropriate circumstances. The MAv gives an approximate solution, which is expected to be a good approximation when the PK profile is periodic (i.e. repeats its values in regular intervals) and the rate of change of the PD response is such that it is approximately constant over a single period of the PK profile. This paper explains the basis of the MAv by means of a simple mathematical derivation. The NONMEM® implementation of the MAv using the abbreviated FORTRAN function FUNCA is described and explained. The application of the MAv is illustrated by means of an example involving changes in glycated hemoglobin (HbA1c%) following administration of canagliflozin, a selective sodium glucose co-transporter 2 inhibitor. The PK/PD model applied to these data is fitted with NONMEM® using both the MAv and the standard method using a numerical differential equation solver (NDES). Both methods give virtually identical results but the NDES method takes almost 8 h to run both the estimation and covariance steps, whilst the MAv produces the same results in less than 30 s. An outline of the NONMEM® control stream and the FORTRAN code for the FUNCA function is provided in the appendices. PMID:26142076
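The essence of the MAv can be illustrated on a toy indirect-response model dR/dt = k_in*(1 - I(t)) - k_out*R with periodic PK-driven inhibition I(t): replace I(t) by its average over one dosing interval and solve the resulting constant-coefficient ODE in closed form. A hand-rolled sketch with invented parameters (not the NONMEM(R)/FUNCA implementation or the canagliflozin model):

```python
import numpy as np

K_IN, K_OUT = 0.2, 0.01   # illustrative turnover rates (1/h); the PD time
TAU = 24.0                # constant is much longer than the 24 h interval

def inhibition(t):
    """Periodic PK-driven inhibition repeating every TAU hours
    (mono-exponential PK, Imax model with IC50 = 0.5)."""
    c = np.exp(-0.3 * (t % TAU))
    return c / (c + 0.5)

def response_mav(t, r0=K_IN / K_OUT):
    """Method of averaging: use the mean inhibition over one period,
    then the ODE has a closed-form exponential solution."""
    grid = np.linspace(0.0, TAU, 10000, endpoint=False)
    ibar = inhibition(grid).mean()
    rss = K_IN * (1.0 - ibar) / K_OUT       # averaged steady state
    return rss + (r0 - rss) * np.exp(-K_OUT * t)

def response_numeric(t_end, r0=K_IN / K_OUT, dt=0.01):
    """Reference: explicit-Euler solution of the full periodic ODE."""
    r, t = r0, 0.0
    while t < t_end:
        r += dt * (K_IN * (1.0 - inhibition(t)) - K_OUT * r)
        t += dt
    return r

t_end = 20 * TAU
approx, exact = response_mav(t_end), response_numeric(t_end)
```

The closed-form MAv solution is evaluated instantly, while the reference requires stepping through every dosing interval; this is the source of the hours-to-seconds speedup reported above.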
Radiation Effects on DC-DC Converters
NASA Technical Reports Server (NTRS)
Zhang, De-Xin; AbdulMazid, M. D.; Attia, John O.; Kankam, Mark D. (Technical Monitor)
2001-01-01
In this work, several DC-DC converters were designed and built. The converters are buck, buck-boost, Cuk, flyback, and full-bridge zero-voltage-switched (FB-ZVS). The total ionizing dose (TID) radiation and single event effects on the converters were investigated. The experimental results for the TID effects tests show that the output voltages of the buck, buck-boost, Cuk, and flyback converters increase as total dose increases when using the power MOSFET IRF250 as the switching transistor. The change in output voltage with total dose is highest for the buck converter and lowest for the flyback converter. The trend of increasing output voltage with total dose in the present work agrees with the literature, and the trends of the experimental results also agree with those obtained from PSPICE simulation. For the FB-ZVS converter with the IRF250 power MOSFET, no significant change of output voltage with total dose was observed. Likewise, for the FB-ZVS converter with the FSF254R4 radiation-hardened power MOSFET, the output voltage did not change significantly with total dose. The experimental results were confirmed by PSPICE simulation, which showed that the FB-ZVS converter with IRF250 power MOSFETs was not affected by the increase in total ionizing dose. Single Event Effects (SEE) radiation tests were also performed on FB-ZVS converters. When the FB-ZVS converter with the IRF250 power MOSFET was irradiated with krypton ions of 150 MeV energy and an LET of 41.3 MeV-square cm/mg, the output voltage increased with increasing fluence. However, for krypton ions of 600 MeV energy and an LET of 33.65 MeV-square cm/mg, two out of four transistors of the converter were permanently damaged. The converter with FSF254R4 radiation-hardened power MOSFETs did not show significant change in output voltage with fluence while being irradiated by krypton with an ion energy of 1.20 GeV and an LET of 25
Sennhenn, B; Giese, K; Plamann, K; Harendt, N; Kölmel, K
1993-01-01
Spectroscopic techniques are reported which allow in vivo study of the penetration behaviour of topically applied light-absorbing drugs into human skin. Remittance spectroscopy, a purely optical method, provides a good tool in terms of both adaptation to the skin, by use of a remote viewing head coupled to the spectrometer via optical fibres, and adequate sensitivity for the detection of small amounts of the applied drugs. The measuring depth in the skin is determined by the wavelength-dependent optical penetration depth, which itself depends on light absorption and light scattering. In the UV spectral region the optical penetration depth is of the order of the thickness of the stratum corneum (UV-A) or of only a superficial part of it (UV-B, UV-C). Fluorescence spectroscopy, another optical method, offers two kinds of drug detection: a direct one in the case of self-fluorescent drugs, and an indirect one based on the light absorption of the drug, which may screen the self-fluorescence of the skin itself or of an applied marker. The measuring depth is comparable to that achieved with remittance spectroscopy. A third method is photothermal spectroscopy, which is governed by the thermal properties of the skin in addition to its optical properties. Photothermal spectroscopy is unique in that it allows depth profiles of drug concentration to be measured non-invasively, as the photothermal measuring depth can be changed by varying the modulation frequency of the intensity-modulated incident light. Results of measurements demonstrating the potential of these spectroscopic methods are presented.
Non-destructive research methods applied on materials for the new generation of nuclear reactors
NASA Astrophysics Data System (ADS)
Bartošová, I.; Slugeň, V.; Veterníková, J.; Sojak, S.; Petriska, M.; Bouhaddane, A.
2014-06-01
The paper is aimed at non-destructive experimental techniques applied to materials for the new generation of nuclear reactors (GEN IV). Along with the development of these reactors, materials must also be developed in order to guarantee the high-standard properties needed for construction: high temperature resistance, radiation resistance and resistance to other negative effects, while changes in their mechanical properties should be only minimal. Materials that fulfil these requirements are analysed in this work. Ferritic-martensitic (FM) steels and ODS steels are studied in detail. Microstructural defects, which can occur in structural materials and can accumulate during irradiation due to neutron flux or alpha, beta and gamma radiation, were analysed using different spectroscopic methods, namely positron annihilation spectroscopy and Barkhausen noise, which were applied for measurements of three different FM steels (T91, P91 and E97) as well as one ODS steel (ODS Eurofer).
Independent component analysis of noninvasively recorded cortical magnetic DC-fields in humans.
Wübbeler, G; Ziehe, A; Mackert, B M; Müller, K R; Trahms, L; Curio, G
2000-05-01
We apply a recently developed multivariate statistical data analysis technique, so-called blind source separation (BSS) by independent component analysis, to process magnetoencephalogram (MEG) recordings of near-DC fields. The extraction of near-DC fields from MEG recordings has great relevance for medical applications, since slowly varying DC phenomena have been found, e.g., in cerebral anoxia and spreading depression in animals. Comparing several BSS approaches, it turns out that an algorithm based on temporal decorrelation successfully extracted a DC component induced in the auditory cortex by the presentation of music. The task is challenging because of the limited amount of available data and the corruption by outliers, which makes it an interesting real-world testbed for studying the robustness of ICA methods.
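The temporal-decorrelation idea behind this family of BSS algorithms can be sketched in a few lines of NumPy. The following AMUSE-style sketch separates two synthetic channels; the signals and the mixing matrix are invented for illustration, and this is not the authors' MEG pipeline:

```python
import numpy as np

def amuse(X, lag=25):
    """Blind source separation by temporal decorrelation (AMUSE-style sketch).
    X: (n_channels, n_samples) array of mixed, zero-mean signals."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: decorrelate channels and scale them to unit variance.
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W_white @ X
    # The symmetrized time-lagged covariance carries the temporal structure;
    # its eigenvectors give the remaining rotation that unmixes the sources.
    C1 = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    C1 = (C1 + C1.T) / 2.0
    _, V = np.linalg.eigh(C1)
    return V.T @ W_white        # unmixing matrix

t = np.linspace(0.0, 10.0, 2000)
S = np.vstack([np.sin(2.0 * np.pi * 1.0 * t),            # smooth source
               np.sign(np.sin(2.0 * np.pi * 0.3 * t))])  # square-wave source
A = np.array([[1.0, 0.7], [0.5, 1.0]])                   # unknown mixing matrix
X = A @ S
Y = amuse(X) @ X                                          # recovered sources
```

Separation works here because the two sources have clearly different autocorrelations at the chosen lag; recovery is up to permutation and sign, as is inherent to BSS.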
NASA Astrophysics Data System (ADS)
Eppeldauer, George P.; Yoon, Howard W.; Jarrett, Dean G.; Larason, Thomas C.
2013-10-01
For photocurrent measurements with low uncertainties, wide-dynamic-range reference current-to-voltage converters and a new converter calibration method have been developed at the National Institute of Standards and Technology (NIST). The high-value feedback resistors of a reference converter were calibrated in situ on a high-resistivity printed circuit board placed in an electrically shielded box and isolated from the operational amplifier using jumpers. The feedback resistors, prior to their installation, were characterized, selected and heat treated. The circuit board was cleaned with solvents, and the in situ resistors were calibrated using measurement systems for 10 kΩ to 10 GΩ standard resistors. We demonstrate that DC currents from 1 nA to 100 µA can be measured with uncertainties of 55 × 10⁻⁶ (k = 2) or lower, which are 10 to 30 times lower than those of any commercial device at the same current setting. The internal (NIST) validations of the reference converter are described.
Problems in rigid body dynamics and in applied gyroscope theory: Analytical methods
NASA Astrophysics Data System (ADS)
Koshliakov, V. N.
Analytical methods are presented for solving certain problems in rigid body mechanics and applied gyroscope theory. In particular, consideration is given to classical problems in the dynamics of a heavy rigid body rotating about a fixed point. The methods proposed here are illustrated by using the Kowalevski and Goriachev-Chaplygin spinning tops as an example. A class of exact solutions to equations of motion for a gyrocompass is obtained, and the effect of resistance forces on the motion of a vertical gyroscope is analyzed; the deviations of a vertical gyroscope under nonstationary rotation of the gyroscope rotor are investigated. Special attention is given to methods for solving the problem of stability of gyrocompasses and adjustable gyrohorizon-compasses on a maneuvering ship.
Finite volume and finite element methods applied to 3D laminar and turbulent channel flows
Louda, Petr; Příhoda, Jaromír; Sváček, Petr; Kozel, Karel
2014-12-10
The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to the development of various secondary flow patterns, whose simulation accuracy is influenced by the numerical viscosity of the scheme and by turbulence modeling. In this work some developments of a stabilized finite element method are presented, and its results are compared with those of an implicit finite volume method, also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of the same magnitude as the differences between turbulence models. The finite volume method is also applied to 3D turbulent flow around a backward-facing step, and good agreement with 3D experimental results is obtained.
A note on the accuracy of spectral method applied to nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Wong, Peter S.
1994-01-01
The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.
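The contrast between poor point-wise accuracy and accurate moments (the linear-equation baseline the note starts from) can be checked numerically. The sketch below, our illustration rather than the authors' case study, truncates the Fourier series of the step function sign(x):

```python
import numpy as np

# Truncated Fourier series of sign(x) on [-pi, pi]:
#   sign(x) ~ sum over odd k of (4 / (pi k)) * sin(k x)
N = 63                                    # highest retained (odd) mode
x = np.linspace(-np.pi, np.pi, 4001)
S = np.zeros_like(x)
for k in range(1, N + 1, 2):
    S += 4.0 / (np.pi * k) * np.sin(k * x)

# Point-wise accuracy is poor near the jump (Gibbs oscillations do not decay)...
pointwise_err = np.max(np.abs(S - np.sign(x)))

# ...yet the moment against the analytic function sin(x) is essentially exact:
# the true integral of sign(x)*sin(x) over [-pi, pi] is 4, and only the k = 1
# mode of the truncated series contributes to it.
dx = x[1] - x[0]
moment = np.sum((S * np.sin(x))[:-1]) * dx   # rectangle rule, exact for trig polynomials
```

The point-wise error stays O(1) near the discontinuity no matter how many modes are kept, while the moment is exact by orthogonality; it is this hidden accuracy that Gegenbauer post-processing recovers.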
Relativistic convergent close-coupling method applied to electron scattering from mercury
Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor
2010-08-15
We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with the experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds for the (6s6p) {sup 3}P{sub 0,1,2} states. These cross sections are associated with the formation of negative-ion (Hg{sup -}) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with the experiment and other relativistic theories.
Nonlinear Phenomena and Resonant Parametric Perturbation Control in QR-ZCS Buck DC-DC Converters
NASA Astrophysics Data System (ADS)
Hsieh, Fei-Hu; Liu, Feng-Shao; Hsieh, Hui-Chang
The purpose of this study is to investigate chaotic phenomena in current-mode controlled quasi-resonant zero-current-switching (QR-ZCS) DC-DC buck converters and to control them by the resonant parametric perturbation method. First, MATLAB/SIMULINK is used to derive a mathematical model of the QR-ZCS DC-DC buck converter and to simulate it, observing the waveforms of the output voltage and inductor current and the phase-plane portraits from the period-doubling bifurcation to chaos as the load resistance is varied. Second, applying resonant parametric perturbation control to the QR-ZCS DC-DC buck converter, the simulation results show that the converter is brought from the chaotic state into the stable period-1 state and that the ripple amplitude of the converter under chaos is improved, verifying the validity of the proposed method.
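The period-doubling route to chaos that the converter exhibits as the load resistance varies can be illustrated with the simplest textbook iterated map. The logistic map below is a generic stand-in for that route, not the converter model derived in the paper:

```python
import numpy as np

def orbit(r, x0=0.5, transient=500, keep=64):
    """Iterate the logistic map x -> r*x*(1 - x) and sample its attractor."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    pts = []
    for _ in range(keep):               # sample the settled behavior
        x = r * x * (1.0 - x)
        pts.append(x)
    return np.array(pts)

def num_periodic_points(r, tol=1e-6):
    """Count distinct attractor values: 1 = period-1, 2 = period-2, many = chaos."""
    pts = np.sort(orbit(r))
    return 1 + int(np.sum(np.diff(pts) > tol))

# Sweeping the parameter reproduces the period-doubling cascade:
# r = 2.8 gives a stable period-1 orbit, r = 3.2 a period-2 orbit,
# and r = 3.9 a chaotic orbit with many distinct visited values.
```

In the converter, the load resistance plays the role of the bifurcation parameter r, and the perturbation controller's job is to push the system back to the period-1 branch.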
NASA Astrophysics Data System (ADS)
Bailey, Teresa S.
In this dissertation we discuss the development, implementation, analysis and testing of the Piecewise Linear Discontinuous Finite Element Method (PWLD) applied to the particle transport equation in two-dimensional cylindrical (RZ) and three-dimensional Cartesian (XYZ) geometries. We have designed this method to be applicable to radiative-transfer problems in radiation-hydrodynamics systems for arbitrary polygonal and polyhedral meshes. For RZ geometry, we have implemented this method in the Capsaicin radiative-transfer code being developed at Los Alamos National Laboratory. In XYZ geometry, we have implemented the method in the Parallel Deterministic Transport code being developed at Texas A&M University. We discuss the importance of the thick diffusion limit for radiative-transfer problems, and perform a thick diffusion-limit analysis on our discretized system for both geometries. This analysis predicts that the PWLD method will perform well in this limit for many problems of physical interest with arbitrary polygonal and polyhedral cells. Finally, we test our method on a variety of problems to determine some of its useful properties, verify the results of our thick diffusion limit analysis, and show that it compares favorably to existing methods, performing well in the thick diffusion limit as predicted by our analysis. Based on PWLD's solid finite-element foundation, the desirable properties it shows under analysis, and the excellent performance it demonstrates on test problems even with highly distorted spatial grids, we conclude that it is an excellent candidate for radiative-transfer problems that need a robust method that performs well in thick diffusive problems or on distorted grids.
Regulation of a lightweight high efficiency capacitor diode voltage multiplier dc-dc converter
NASA Technical Reports Server (NTRS)
Harrigill, W. T., Jr.; Myers, I. T.
1976-01-01
A method for the regulation of a capacitor diode voltage multiplier dc-dc converter has been developed which has only minor penalties in weight and efficiency. An auxiliary inductor is used, which only handles a fraction of the total power, to control the output voltage through a pulse width modulation method in a buck boost circuit.
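The regulation scheme rests on the standard buck-boost voltage relation, which fixes the duty cycle needed for a given output. A minimal sketch of that calculation (an idealized, lossless continuous-conduction textbook model, not the authors' circuit design):

```python
def buck_boost_duty(v_in, v_out):
    """Ideal continuous-conduction buck-boost relation |Vout| = Vin * D / (1 - D),
    solved for the PWM duty cycle: D = Vout / (Vin + Vout)."""
    return v_out / (v_in + v_out)

# D = 0.5 passes the input voltage through unchanged; D > 0.5 boosts it.
d_equal = buck_boost_duty(12.0, 12.0)   # regulate 12 V in to 12 V out
d_boost = buck_boost_duty(12.0, 36.0)   # regulate 12 V in to 36 V out
```

Because the auxiliary inductor handles only a fraction of the total power, modulating D in this cell trims the multiplier's output with little weight or efficiency penalty, which is the point of the reported method.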
The Fractional Step Method Applied to Simulations of Natural Convective Flows
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)
2002-01-01
This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing a non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. It also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the simulations predict the existence of 'channels' within the mushy zone during processing and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The
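The essence of the method, computing velocity without coupling to pressure and then projecting onto divergence-free fields, can be sketched on a periodic grid with a spectral Poisson solve. This is our simplification for illustration; the paper itself uses finite elements with iterative solvers:

```python
import numpy as np

# One pressure-projection step of the Fractional Step Method on a 2D periodic
# grid: (1) take an intermediate velocity (us, vs) computed without regard to
# incompressibility, (2) solve the pressure Poisson equation lap(p) = div(u*)/dt,
# (3) correct u = u* - dt*grad(p); the corrected field is divergence-free.
N, dt = 64, 0.01
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers, 2*pi-periodic box
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                              # guard the mean mode against division by 0

x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
us = np.sin(X) * np.cos(Y) + 0.3 * np.cos(2.0 * Y)   # intermediate velocity u*
vs = np.cos(X) * np.sin(Y)                           # (deliberately not divergence-free)

us_h, vs_h = np.fft.fft2(us), np.fft.fft2(vs)
div_h = 1j * kx * us_h + 1j * ky * vs_h     # div(u*) in spectral space
p_h = div_h / (-k2 * dt)                    # solve lap(p) = div(u*)/dt mode by mode
p_h[0, 0] = 0.0                             # pressure is defined up to a constant
u_h = us_h - dt * 1j * kx * p_h             # projection / correction step
v_h = vs_h - dt * 1j * ky * p_h
div_after = np.fft.ifft2(1j * kx * u_h + 1j * ky * v_h).real
```

The key property, and the source of the speed advantage, is that the pressure solve and the velocity update are decoupled: each can use an iterative solver, and the velocity components can be handled in parallel.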
Research methods used in developing and applying quality indicators in primary care
Campbell, S; Braspenning, J; Hutchinson, A; Marshall, M
2002-01-01
Quality indicators have been developed throughout Europe primarily for use in hospitals, but increasingly for primary care as well. Both development and application are important, but there has been less research on the application of indicators. Three issues are important when developing or applying indicators: (1) which stakeholder perspective(s) the indicators are intended to reflect; (2) what aspects of health care are being measured; and (3) what evidence is available. The information required to develop quality indicators can be derived using systematic or non-systematic methods. Non-systematic methods such as case studies play an important role, but they do not tap into available evidence. Systematic methods can be based directly on scientific evidence by combining available evidence with expert opinion, or they can be based on clinical guidelines. While it may never be possible to produce an error-free measure of quality, measures should adhere, as far as possible, to some fundamental a priori characteristics (acceptability, feasibility, reliability, sensitivity to change, and validity). Adherence to these characteristics will help maximise the effectiveness of quality indicators in quality improvement strategies. It is also necessary to consider what the results of applying indicators tell us about quality of care. PMID:12468698
Homotopic approach and pseudospectral method applied jointly to low thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Guo, Tieding; Jiang, Fanghua; Li, Junfeng
2012-02-01
The homotopic approach and the pseudospectral method are two popular techniques for low thrust trajectory optimization. A hybrid scheme is proposed in this paper combining the two in order to cope with the difficulties encountered when they are applied separately. Explicitly, a smooth energy-optimal problem is first discretized by the pseudospectral method, leading to a nonlinear programming problem (NLP). Costates, especially their initial values, are then estimated from the Karush-Kuhn-Tucker (KKT) multipliers of this NLP. Based upon these estimated initial costates, homotopic procedures are initiated efficiently, and the desired non-smooth fuel-optimal results are finally obtained by continuing the smooth energy-optimal results through a homotopic algorithm. Two main difficulties, one due to the absence of reasonable initial costates when the homotopic procedures are initiated and the other due to discontinuous bang-bang controls when the pseudospectral method is applied to the fuel-optimal problem, are both resolved successfully. Numerical results for two scenarios are presented in the end, demonstrating the feasibility and good performance of this hybrid technique.
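The continuation idea at the heart of the homotopic approach, solving an easy problem first and dragging its solution toward the hard problem, can be sketched on a scalar root-finding example. This is a generic illustration, not the paper's optimal-control homotopy:

```python
# Homotopy continuation for a root of f: deform from the trivial problem
# g(x) = x - x0 to f along H(x, t) = (1 - t)*g(x) + t*f(x), tracking the
# root with Newton's method as t goes from 0 to 1.
def newton(h, dh, x, iters=20):
    for _ in range(iters):
        x = x - h(x) / dh(x)
    return x

def continuation(f, df, x0=0.0, steps=10):
    x = x0                                # root of the trivial t = 0 problem
    for i in range(1, steps + 1):
        t = i / steps
        h = lambda z, t=t: (1.0 - t) * (z - x0) + t * f(z)
        dh = lambda z, t=t: (1.0 - t) + t * df(z)
        x = newton(h, dh, x)              # previous root seeds the next subproblem
    return x

# f(z) = z^3 + z - 2 has the root z = 1; continuation finds it from x0 = 0.
root = continuation(lambda z: z**3 + z - 2.0, lambda z: 3.0 * z**2 + 1.0)
```

In the paper the role of the easy problem is played by the smooth energy-optimal formulation and the hard endpoint by the bang-bang fuel-optimal one, with the pseudospectral KKT multipliers supplying the starting point that plain continuation lacks.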
Methods for evaluating the biological impact of potentially toxic waste applied to soils
Neuhauser, E.F.; Loehr, R.C.; Malecki, M.R.
1985-12-01
The study was designed to evaluate two methods that can be used to estimate the biological impact of organics and inorganics that may be in wastes applied to land for treatment and disposal: the contact test and the artificial soil test. The contact test is a 48 hr test using an adult worm, a small glass vial, and filter paper to which the test chemical or waste is applied. The test is designed to provide close contact between the worm and a chemical, similar to the situation in soils, and provides a rapid estimate of the relative toxicity of chemicals and industrial wastes. The artificial soil test uses a mixture of sand, kaolin, peat, and calcium carbonate as a representative soil. Different concentrations of the test material are added to the artificial soil, adult worms are added, and worm survival is evaluated after two weeks. These studies have shown that earthworms can distinguish between a wide variety of chemicals with a high degree of accuracy.
High sensitivity ancilla assisted nanoscale DC magnetometry
NASA Astrophysics Data System (ADS)
Liu, Yixiang; Ajoy, Ashok; Marseglia, Luca; Saha, Kasturi; Cappellaro, Paola
2016-05-01
Sensing slowly varying magnetic fields is particularly relevant to many real-world scenarios, where the signals of interest are DC or close to static. Nitrogen-vacancy (NV) centers in diamond are a versatile platform for such DC magnetometry on nanometer length scales. Using NV centers, the standard technique for measuring DC magnetic fields is the Ramsey protocol, whose sensitivity can approach better than 1 μT/√Hz but is limited by the sensor's fast dephasing time T2*. In this work we instead present a method of sensing DC magnetic fields that is intrinsically limited by the much longer T2 coherence time. The method exploits a strongly coupled ancillary nuclear spin to achieve high DC field sensitivities, potentially exceeding that of the Ramsey method. In addition, through this method we sense the perpendicular component of the DC magnetic field, which in conjunction with the parallel component sensed by the Ramsey method provides a valuable tool for vector DC magnetometry at the nanoscale.
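For scale, the ideal shot-noise-limited Ramsey sensitivity goes as 1/(γ√T2*), which is why extending the coherence window from T2* to T2 pays off. The numbers below are typical textbook values, not the authors' measured figures, and the ideal limit ignores readout contrast and photon collection, which cost real sensors orders of magnitude:

```python
import math

# Ideal shot-noise-limited Ramsey DC sensitivity scale: eta ~ 1 / (gamma * sqrt(T2*)).
gamma_e = 2.0 * math.pi * 28.0e9        # NV electron gyromagnetic ratio, rad s^-1 T^-1
T2_star = 1.0e-6                        # assumed typical dephasing time, s
eta = 1.0 / (gamma_e * math.sqrt(T2_star))   # sensitivity in T / sqrt(Hz)
# Interrogating for T2 >> T2* instead improves eta by a factor sqrt(T2 / T2*),
# which is the leverage the ancilla-assisted protocol aims to exploit.
```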
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on an equation solution but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relation, the characteristic lines can be made to coincide by a series of rotations and translations, and the transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot via calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
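The geometric alternative rests on composing a series of rotations and translations into a single transformation matrix. A minimal homogeneous-coordinates sketch of that composition (a generic illustration; the paper's characteristic-line construction itself is not reproduced):

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans(tx, ty, tz):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# A sequence of rotations and translations composes by matrix multiplication;
# the product is the coordinate transformation matrix (rightmost applied first).
M = trans(1.0, 0.0, 0.0) @ rot_z(np.pi / 2.0)
p = M @ np.array([1.0, 0.0, 0.0, 1.0])      # transform the point (1, 0, 0)
```

Because the matrix is built by composing elementary motions rather than by inverting a point-cloud system, no ill-conditioned solve is involved, which is the robustness argument made above.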
Cork-resin ablative insulation for complex surfaces and method for applying the same
NASA Technical Reports Server (NTRS)
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Method for applying a photoresist layer to a substrate having a preexisting topology
Morales, Alfredo M.; Gonzales, Marcela
2004-01-20
The present invention describes a method for preventing a photoresist layer from delaminating, or peeling away, from the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent media used to formulate the photoresist layer above the surface of the coated substrate as the applied photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer. By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.
Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca
2016-01-01
Background: The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods: Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results: 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion: Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency
The effects of subsampling gene trees on coalescent methods applied to ancient divergences.
Simmons, Mark P; Sloan, Daniel B; Gatesy, John
2016-04-01
Gene-tree-estimation error is a major concern for coalescent methods of phylogenetic inference. We sampled eight empirical studies of ancient lineages with diverse numbers of taxa and genes for which the original authors applied one or more coalescent methods. We found that the average pairwise congruence among gene trees varied greatly both between studies and also often within a study. We recommend that presenting plots of pairwise congruence among gene trees in a dataset be treated as a standard practice for empirical coalescent studies so that readers can readily assess the extent and distribution of incongruence among gene trees. ASTRAL-based coalescent analyses generally outperformed MP-EST and STAR with respect to both internal consistency (congruence between analyses of subsamples of genes with the complete dataset of all genes) and congruence with the concatenation-based topology. We evaluated the approach of subsampling gene trees that are, on average, more congruent with other gene trees as a method to reduce artifacts caused by gene-tree-estimation errors on coalescent analyses. We suggest that this method is well suited to testing whether gene-tree-estimation error is a primary cause of incongruence between concatenation- and coalescent-based results, to reconciling conflicting phylogenetic results based on different coalescent methods, and to identifying genes affected by artifacts that may then be targeted for reciprocal illumination. We provide scripts that automate the process of calculating pairwise gene-tree incongruence and subsampling trees while accounting for differential taxon sampling among genes. Finally, we assert that multiple tree-search replicates should be implemented as a standard practice for empirical coalescent studies that apply MP-EST. PMID:26768112
Liu, Fan; Huang, Guanxing; Sun, Jichao; Jing, Jihong; Zhang, Ying
2016-02-01
Groundwater quality assessment is essential for the safety of drinking water. In this paper, a new evaluation method called toxicity-combined fuzzy evaluation (TCFE) is put forward, based on the fuzzy synthetic evaluation (FSE) method and toxicity data from the Agency for Toxic Substances and Disease Registry. A comparison of TCFE and FSE in the groundwater quality assessment of the Guangzhou region has also been carried out. The assessment results are divided into 5 water quality levels; level I is the best, while level V is the worst. Results indicate that levels I, II, and III together accounted for 69.33% of samples under the FSE method; this proportion rose to 81.33% under the TCFE method. In addition, 66.7% of level IV samples in the FSE method became level I (50%), level II (25%), and level III (25%) in the TCFE method, and 29.41% of level V samples became level I (50%) and level III (50%). This trend was caused by the weight change after the incorporation of the toxicity index. By analyzing the changes of different indicators' weights, it could be concluded that the samples whose rating improved mainly exceeded the corresponding standards of regular indicators, while the samples whose rating deteriorated mainly exceeded the corresponding standards of toxic indicators. The comparison between the two results revealed that the TCFE method represents the health implications of toxic indicators reasonably. As a result, the TCFE method is more scientifically sound from the viewpoint of drinking water safety. PMID:26803098
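The fuzzy-synthetic-evaluation step underlying both FSE and TCFE reduces to weighting a membership matrix; TCFE differs by folding toxicity into the weights. A sketch with invented numbers (the memberships and weights below are hypothetical, not the paper's data):

```python
import numpy as np

# Hypothetical membership matrix R: each row is one indicator's membership
# degrees in water-quality levels I..V; w holds the indicator weights
# (in TCFE these weights would be adjusted by the toxicity index).
R = np.array([
    [0.6, 0.3, 0.1, 0.0, 0.0],   # e.g. a regular indicator, mostly level I
    [0.0, 0.2, 0.5, 0.3, 0.0],   # e.g. a toxic indicator, mostly level III
    [0.1, 0.6, 0.3, 0.0, 0.0],
])
w = np.array([0.2, 0.5, 0.3])    # weights sum to 1; toxic indicator weighted most

b = w @ R                        # composite membership in each quality level
level = int(np.argmax(b)) + 1    # assessed level by maximum membership (1 = level I)
```

Shifting weight toward toxic indicators is exactly what moves borderline samples between levels, which is the mechanism behind the level migrations reported above.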
The generalized method of moments as applied to the generalized gamma distribution
NASA Astrophysics Data System (ADS)
Ashkar, F.; Bobée, B.; Leroux, D.; Morisette, D.
1988-09-01
The generalized gamma (GG) distribution has a density function that can take on many of the forms commonly encountered in hydrologic applications. This fact has led many authors to study the properties of the distribution and to propose various estimation techniques (method of moments, mixed moments, maximum likelihood, etc.). We discuss some of the most important properties of this flexible distribution and present a flexible method of parameter estimation, called the "generalized method of moments" (GMM), which combines any three moments of the GG distribution. The main advantage of this general method is that it includes many of the previously proposed methods of estimation as special cases. We also give a general formula for the variance of the T-year event X_T obtained by the GMM, along with a general formula for the parameter estimates and for the covariances and correlation coefficients between any pair of such estimates. By applying the GMM and carefully choosing the order of the moments used in the estimation, one can significantly reduce the variance of T-year events for the range of return periods that are of interest.
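Any moment-based estimator for this distribution needs the raw moments of the GG in closed form. One common parameterization gives E[X^r] = scale^r · Γ(a + r/c)/Γ(a); this is a standard textbook formula, and the paper's parameterization may differ:

```python
from math import gamma

def gg_moment(r, a, c, scale=1.0):
    """r-th raw moment of the generalized gamma distribution with density
    f(x) proportional to x**(a*c - 1) * exp(-(x/scale)**c):
        E[X**r] = scale**r * Gamma(a + r/c) / Gamma(a)."""
    return scale**r * gamma(a + r / c) / gamma(a)

# Sanity check: with c = 1 this reduces to the ordinary gamma distribution,
# whose mean and variance (unit scale, shape a) are both equal to a.
mean = gg_moment(1, a=3.0, c=1.0)
var = gg_moment(2, a=3.0, c=1.0) - mean**2
```

A GMM fit would equate three such theoretical moments (of the chosen, possibly non-integer, orders) to their sample counterparts and solve for a, c and the scale.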
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
NASA Astrophysics Data System (ADS)
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (the Forward Method and the Emission Reciprocity Method) employed to solve the radiative transfer equation (RTE) are compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. The results obtained using two different RTE solvers (the Reciprocity Monte Carlo method and the Discrete Ordinate Method), applied to a three-dimensional flame holder set-up with a correlated-k distribution model describing the spectral radiative properties of the real gas medium, are then compared not only in terms of the physical behavior of the flame but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
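The forward Monte Carlo formulation can be illustrated in miniature: for a purely absorbing 1D slab, tracking photon free paths recovers the analytic Beer-Lambert transmission. This is a toy sketch of the sampling idea, not the coupled LES solver described here:

```python
import math
import random

def transmission(tau, n_photons=200_000, seed=1):
    """Forward Monte Carlo estimate of direct transmission through a purely
    absorbing 1D slab of optical thickness tau."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_photons):
        # Optical path to the absorption event is exponentially distributed.
        path = -math.log(1.0 - rng.random())
        if path > tau:              # photon crosses the slab before absorption
            survived += 1
    return survived / n_photons

# The estimate converges to the Beer-Lambert law exp(-tau) at the usual
# 1/sqrt(n_photons) Monte Carlo rate.
```

Emission, scattering and spectral (correlated-k) treatment add sampling dimensions but not new ideas; the reciprocity formulations compared in the paper reweight the same random walks to reduce variance at receiver points.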
Castello, Charles C; New, Joshua Ryan
2012-01-01
Autonomous detection and correction of potentially missing or corrupt sensor data is an essential concern in building technologies, since data availability and correctness are necessary to develop accurate software models for instrumented experiments. This paper therefore addresses the problem with statistical processing methods, including: (1) least squares; (2) maximum likelihood estimation; (3) segmentation averaging; and (4) threshold-based techniques. These validation schemes are applied to a subset of data collected from Oak Ridge National Laboratory's (ORNL) ZEBRAlliance research project, which comprises four single-family homes in Oak Ridge, TN outfitted with a total of 1,218 sensors. The focus of this paper is on three different types of sensor data: (1) temperature; (2) humidity; and (3) energy consumption. Simulations illustrate that the threshold-based statistical processing method performed best in predicting temperature, humidity, and energy data.
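A threshold-based scheme of the kind listed as item (4) can be sketched as follows: flag readings outside a plausible physical range as corrupt and fill them by interpolation from surrounding good samples. The specific limits and linear-interpolation fill are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def validate_threshold(series, lo, hi):
    """Flag readings outside [lo, hi] (or non-finite) as corrupt and fill
    them by linear interpolation from the surrounding good samples."""
    s = np.asarray(series, dtype=float)
    bad = (s < lo) | (s > hi) | ~np.isfinite(s)
    idx = np.arange(len(s))
    filled = s.copy()
    filled[bad] = np.interp(idx[bad], idx[~bad], s[~bad])
    return filled, bad
```

For temperature data, for example, `lo`/`hi` would be chosen from the sensor's rated operating range.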
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Deyst, J. J.; Crawford, B. S.
1975-01-01
The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (BH) (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the BH system for jump isolation and estimate compensation.
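The WSSR test can be sketched directly: sum the covariance-normalized squared innovations over a sliding window and raise an alarm when the sum exceeds a chi-square threshold. The window length and significance level below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import chi2

def wssr_alarm(innovations, S, window, alpha=0.01):
    """Weighted sum-squared residual (WSSR) failure detection: windowed sum
    of covariance-normalized squared innovations vs. a chi-square threshold."""
    r = np.asarray(innovations, dtype=float)  # shape (T, m)
    Sinv = np.linalg.inv(S)
    q = np.einsum('ti,ij,tj->t', r, Sinv, r)  # per-step normalized residual
    wssr = np.convolve(q, np.ones(window), mode='valid')
    thresh = chi2.ppf(1.0 - alpha, df=window * r.shape[1])
    return wssr, wssr > thresh
```

Under no-failure conditions the windowed sum is chi-square with `window * m` degrees of freedom, which is what justifies the threshold.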
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to occur at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
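Observed convergence rates of the kind reported here (2p+1 for the oscillation period) are typically extracted from errors on successively refined grids. A minimal sketch of that bookkeeping:

```python
import numpy as np

def observed_order(errors, refinement_ratio=2.0):
    """Observed convergence rate from errors on successively refined grids:
    p_obs = log(e_h / e_{h/r}) / log(r)."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(refinement_ratio)
```

For a p = 1 DG basis, super-convergent quantities such as the period error would shrink by a factor of 2^(2p+1) = 8 per uniform refinement.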
NASA Astrophysics Data System (ADS)
Balasoiu, Maria; Kuklin, Alexander
2012-03-01
The separate determination of the nuclear and magnetic contributions to the scattering intensity by means of a contrast variation method, applied in small-angle neutron scattering experiments on ferrofluids with nonpolarized neutrons in the early 1990s at the MURN instrument, is reviewed. The nuclear scattering contribution gives the features of the colloidal particle dimensions, the surfactant shell structure, and the degree of solvent penetration into the macromolecular layer. The magnetic scattering part is consistent with models in which the particle surface is assumed to have a nonmagnetic layer. Details of the experimental "Grabcev method" for obtaining separate nuclear and magnetic contributions to the small-angle neutron scattering intensity of unpolarized neutrons are given for the case of a high-quality, ultrastable benzene-based ferrofluid with magnetite nanoparticles.
Using computer simulations to teach electrical and electromagnetic methods in applied geophysics
NASA Astrophysics Data System (ADS)
Butler, S. L.; Fowlie, C.; Merriam, J.
2008-12-01
When teaching geophysics, it is useful to have models that students can investigate in order to develop intuition concerning the physical systems they are learning about. In recent years, numerical modeling packages have emerged that are sufficiently easy to use in undergraduate classroom settings. In this submission, I will describe some examples of the use of computer simulations to enhance student learning in a course on applied geophysics. In some cases, the results of the simulations are compared with the results of analog experiments. Examples of the numerical calculations include simulations of the resistivity method and of the frequency-domain and time-domain electromagnetic methods. I will describe the numerical models used, their results, and their effectiveness as a teaching aid.
Life table methods applied to use of medical care and of prescription drugs in early childhood.
Rasmussen, F; Smedby, B
1989-06-01
Life table methods were applied to analyses of longitudinal data on the use of medical care during the first 5 years of life among all 1701 children born in a Swedish semirural municipality. Cumulative proportions of the children who had used particular types of medical care or prescription drugs at least once by certain ages were estimated. By the fifth birthday, 98% had made at least one visit to any physician and 82% at least one visit to a paediatrician. By the fifth birthday at least one prescription for antibiotics had been purchased at a pharmacy by 82%; and 33% had been admitted to inpatient hospital care at least once (excluding immediate postnatal care). Acute conditions and more chronic diseases were also studied using these methods. At least one visit to a physician at a primary health care centre had been made for acute otitis media in 65% of 5 year olds and for atopic dermatitis in 8%.
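The life-table quantity reported here (the cumulative proportion of children with at least one use by a given age, allowing for censoring) can be sketched with a Kaplan-Meier-style product-limit estimate. Tie handling is simplified here relative to textbook life-table conventions.

```python
import numpy as np

def cumulative_use_by_age(ages, observed, grid):
    """Product-limit estimate of the cumulative proportion with at least one
    use by each age in `grid`. `ages` holds age at first use, or age at
    censoring when the matching `observed` entry is False."""
    t = np.asarray(ages, dtype=float)
    d = np.asarray(observed, dtype=bool)
    order = np.argsort(t, kind='stable')
    t, d = t[order], d[order]
    n = len(t)
    s, j, out = 1.0, 0, []
    for g in grid:
        while j < n and t[j] <= g:
            if d[j]:
                s *= 1.0 - 1.0 / (n - j)  # n - j children still at risk
            j += 1
        out.append(1.0 - s)  # cumulative proportion having used by age g
    return np.array(out)
```

With no censoring this reduces to the empirical cumulative distribution of age at first use.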
Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However, its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
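The core computation in SCM is a constrained least-squares fit: choose nonnegative donor weights summing to one that best reproduce the treated unit's pre-intervention outcomes. A minimal sketch (outcome matching only, without the covariate weighting matrix used in full SCM implementations):

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(y_treated_pre, Y_donors_pre):
    """Nonnegative donor weights summing to one that best reproduce the
    treated unit's pre-intervention outcome path."""
    Y = np.asarray(Y_donors_pre, dtype=float)   # shape (T_pre, J)
    y = np.asarray(y_treated_pre, dtype=float)  # shape (T_pre,)
    J = Y.shape[1]
    res = minimize(lambda w: np.sum((y - Y @ w) ** 2),
                   x0=np.full(J, 1.0 / J),
                   method='SLSQP',
                   bounds=[(0.0, 1.0)] * J,
                   constraints=[{'type': 'eq',
                                 'fun': lambda w: np.sum(w) - 1.0}])
    return res.x
```

The counterfactual path is then `Y_donors_post @ w`, and the post-intervention gap between the treated unit and this synthetic control is the estimated effect.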
Variational method applied to two-component Ginzburg-Landau theory
NASA Astrophysics Data System (ADS)
Romaguera, Antonio R. de C.; Silva, K. J. S.
2013-09-01
In this paper, we apply a variational method to two-component superconductors, as in the MgB2 materials, using the two-component Ginzburg-Landau (GL) theory. We expand the order parameter in a series of eigenfunctions containing one or two terms in each component. We also assume azimuthal symmetry for the set of eigenfunctions used in the mathematical procedure. The extension of the GL theory to two components leads to the quantization of the magnetic flux in fractions of ϕ0. We consider two kinds of component interaction potentials: Γ_1|Ψ_I|^2|Ψ_II|^2 and Γ_2(Ψ_I^*Ψ_II + Ψ_IΨ_II^*). The simplicity of the method allows one to implement it in a broad range of physical systems, such as hybrid magnetic-superconducting mesoscopic systems, texturized thin films, metallic hydrogen superfluid, and mesoscopic superconductors near inhomogeneous magnetic fields, simply by replacing the vector potential by its corresponding expression. As an example, we apply our results to a disk of radius R and thickness t.
NASA Astrophysics Data System (ADS)
Hayashi, K.
2014-12-01
Engineers need more quantitative information. In order to apply geophysical methods to engineering design work, quantitative interpretation is very important. The presentation introduces several case studies from different countries around the world (Fig. 2) from integrated and quantitative points of view.
Analysis of a class of pulse modulated dc-to-dc power converters
NASA Technical Reports Server (NTRS)
Burger, P.
1975-01-01
The basic operational characteristics of dc-to-dc converters are analyzed, and the basic physical characteristics of power converters are identified. A simple class of dc-to-dc power converters is chosen that could satisfy any set of operating requirements. Three different controlling methods in this class are described in detail. Necessary conditions for the stability of these converters are determined through analog computer simulation. These curves are related to other operational characteristics, such as ripple and regulation. Finally, further research is suggested toward the physical design of absolutely stable, reliable, and efficient power converters of this class.
Das, B; Meirovitch, H; Navon, I M
2003-07-30
Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Comput Optim Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem.
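The limited-memory BFGS component of the comparison is widely available; a minimal usage sketch on a stand-in test surface (the Rosenbrock function, an assumption here; the paper minimizes AMBER force-field energies, which are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Stand-in rugged energy surface for demonstrating the minimizer."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

# L-BFGS with a small history of correction pairs (maxcor), the defining
# feature of the limited-memory method of Liu and Nocedal.
res = minimize(rosenbrock, np.zeros(10), method='L-BFGS-B',
               options={'maxcor': 5, 'maxiter': 5000})
```

In a hybrid scheme of the Morales-Nocedal kind, iterations of such an L-BFGS solver would be interlaced with Hessian-free (truncated) Newton steps; that interlacing is not shown here.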
A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.
Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi
2016-03-01
Diabetic retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease and, by damaging the retina, finally results in blindness. Since microaneurysms (MAs) appear as the first sign of DR in the retina, early detection of this lesion is an essential step in automatic detection of DR. In this paper, a new MA detection method is presented. The proposed approach consists of two main steps. In the first step, MA candidates are detected based on local application of a Markov random field model (MRF). In the second step, these candidate regions are classified to identify the correct MAs, using 23 features based on the shape, intensity, and Gaussian distribution of MA intensity. The proposed method is evaluated on DIARETDB1, a standard and publicly available database in this field. Evaluation on this database resulted in an average sensitivity of 0.82 for a confidence level of 75 as ground truth. The results show that our method is able to detect low-contrast MAs against the background while its performance remains comparable to other state-of-the-art approaches.
Weis, Arthur E; Kossler, Tanya M
2004-06-01
It has been argued from first principles that plants mate assortatively by flowering time. However, there have been very few studies of phenological assortative mating, perhaps because current methods to infer paternal phenotype are difficult to apply to natural populations. Two methods are presented to estimate the phenotypic correlation between mates (the quantitative genetic metric for assortative mating) for phenological traits. The first method uses individual flowering schedules to estimate mating probabilities for every potential pairing in a sample. These probabilities are then incorporated into a weighted phenotypic correlation between all potential mates, yielding a prospective estimate based on mating opportunities. The correlation between mates can also be estimated retrospectively by comparing the regression of offspring phenotype on one parent, which is inflated by assortative mating, with the regression on mid-parent, which is not. In a demonstration experiment with Brassica rapa, the prospective correlation between the flowering times (days from germination to anthesis) of pollen recipients and their potential donors was 0.58. The retrospective estimate of this correlation strongly agreed with the prospective estimate. The prospective method is easily employed in field studies that explore the effect of phenological assortative mating on selection response and population differentiation.
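The prospective estimate described above is a weighted correlation over all potential pairs, with each pair weighted by its estimated mating probability. A minimal sketch, assuming a single phenotype vector shared by recipients and donors:

```python
import numpy as np

def mating_correlation(phenotype, W):
    """Weighted correlation between mates over all potential pairs, with
    W[i, j] the mating probability of recipient i with donor j
    (weights need not be pre-normalized)."""
    z = np.asarray(phenotype, dtype=float)
    w = np.asarray(W, dtype=float)
    w = w / w.sum()
    x = np.broadcast_to(z[:, None], w.shape)  # recipient phenotype per pair
    y = np.broadcast_to(z[None, :], w.shape)  # donor phenotype per pair
    mx, my = (w * x).sum(), (w * y).sum()
    cov = (w * (x - mx) * (y - my)).sum()
    sx = np.sqrt((w * (x - mx) ** 2).sum())
    sy = np.sqrt((w * (y - my) ** 2).sum())
    return cov / (sx * sy)
```

Perfectly assortative weights (each plant mating only with its own phenotype) give a correlation of 1, while uniform weights give 0.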
NASA Astrophysics Data System (ADS)
van Andel, Marijke
Methods used to quantify ammonia (NH3) volatilization from land-applied manures or nitrogen fertilizers are either expensive, inefficient, or of questionable accuracy. This thesis investigates an alternative method which utilizes a Gastec passive dosimeter tube (dositube) and a semi-open static chamber, which may provide an economical and simple way of measuring total NH3 loss. Field experiments indicated that a medium-perforated chamber provided a good compromise between reproducibility and minimizing contamination from nearby NH3 sources. A calibration of the dositube method against the wind tunnel method found that losses could be accurately estimated by: Estimated Total Loss (kg N ha^-1) = 0.217Dw - 0.034D + 0.71. This calibration is based on the dositube reading (D) being taken every 24 h, with the tube placed horizontally at a height of 0.15 m in the dositube chamber, and wind speed (w, m s^-1) measured at a height of 0.3 m and averaged over the coinciding time period.
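The calibration above is a simple closed-form expression, easy to check with a worked example (input values here are illustrative, not from the thesis):

```python
def estimated_total_loss(D, w):
    """Dositube calibration from the abstract: estimated total NH3 loss
    (kg N ha^-1) from the dositube reading D and wind speed w (m s^-1)."""
    return 0.217 * D * w - 0.034 * D + 0.71
```

For example, a reading of D = 10 with average wind speed w = 2 m s^-1 gives 0.217(10)(2) - 0.034(10) + 0.71 = 4.71 kg N ha^-1.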
Martin, Jennifer A.; Smith, Joshua E.; Warren, Mercedes; Chávez, Jorge L.; Hagen, Joshua A.; Kelley-Loughnane, Nancy
2015-01-01
Small molecules provide rich targets for biosensing applications due to their physiological implications as biomarkers of various aspects of human health and performance. Nucleic acid aptamers have been increasingly applied as recognition elements on biosensor platforms, but selecting aptamers toward small molecule targets requires special design considerations. This work describes modification and critical steps of a method designed to select structure-switching aptamers to small molecule targets. Binding sequences from a DNA library hybridized to complementary DNA capture probes on magnetic beads are separated from nonbinders via a target-induced change in conformation. This method is advantageous because sequences binding the support matrix (beads) will not be further amplified, and it does not require immobilization of the target molecule. However, the melting temperature of the capture probe and library is kept at or slightly above RT, such that sequences that dehybridize based on thermodynamics will also be present in the supernatant solution. This effectively limits the partitioning efficiency (ability to separate target binding sequences from nonbinders), and therefore many selection rounds will be required to remove background sequences. The reported method differs from previous structure-switching aptamer selections due to implementation of negative selection steps, simplified enrichment monitoring, and extension of the length of the capture probe following selection enrichment to provide enhanced stringency. The selected structure-switching aptamers are advantageous in a gold nanoparticle assay platform that reports the presence of a target molecule by the conformational change of the aptamer. The gold nanoparticle assay was applied because it provides a simple, rapid colorimetric readout that is beneficial in a clinical or deployed environment. Design and optimization considerations are presented for the assay as proof-of-principle work in buffer to
NASA Astrophysics Data System (ADS)
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
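The TFM-SVD idea can be sketched as: stack statistical moments of the time series and of its magnitude spectrum into a small fixed-structure matrix, then use that matrix's singular values as features. The particular moment set below (mean, standard deviation, skewness, kurtosis) is an assumption for illustration; the paper defines its own fixed structure.

```python
import numpy as np

def tfm_svd(signal):
    """Return singular values of a 2 x 4 matrix of time- and
    frequency-domain statistical moments of the signal."""
    x = np.asarray(signal, dtype=float)
    X = np.abs(np.fft.rfft(x))  # magnitude spectrum

    def moments(v):
        m, s = v.mean(), v.std() + 1e-12
        z = (v - m) / s
        return [m, s, (z ** 3).mean(), (z ** 4).mean()]

    M = np.array([moments(x), moments(X)])  # fixed-structure feature matrix
    return np.linalg.svd(M, compute_uv=False)
```

Unlike the single SV of an n-by-1 array, this yields a small feature vector summarizing both domains, suitable as input to a clustering or classification stage.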
NASA Astrophysics Data System (ADS)
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the frame of research conducted to develop efficient strategies for the investigation of rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provided improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities were characterized as a source of seismic signal and used in our research as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face excavation in order to obtain the best signal-to-noise ratio for use in the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wave field generated by tunnelling (the tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one per receiver in the borehole, from the transmitted seismic events, and the data were processed as a reflection seismic survey. The pseudo-reflective wave field was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission
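The cross-correlation step that turns passive recordings into virtual shot records can be sketched in one line per trace: correlate each receiver trace with the pilot signal, so that the correlation lags play the role of travel times in a conventional shot gather. This is only the core operation, without the preprocessing and stacking a real interferometric workflow requires.

```python
import numpy as np

def virtual_shot_gather(traces, pilot):
    """Cross-correlate each receiver trace with a pilot trace recorded near
    the source (the tunnel face) to form a virtual shot gather."""
    return np.array([np.correlate(tr, pilot, mode='full') for tr in traces])
```

The zero-lag sample sits at index `len(pilot) - 1` of each correlated trace; energy at later lags corresponds to waves arriving after the pilot.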
NASA Astrophysics Data System (ADS)
Lukemire, Alan T.
1993-03-01
A pulse-width modulated DC-to-DC power converter including a first inductor, i.e. a transformer or an equivalent fixed inductor equal to the inductance of the secondary winding of the transformer, coupled across a source of DC input voltage via a transistor switch which is rendered alternately conductive (ON) and nonconductive (OFF) in accordance with a signal from a feedback control circuit is described. A first capacitor capacitively couples one side of the first inductor to a second inductor which is connected to a second capacitor which is coupled to the other side of the first inductor. A circuit load shunts the second capacitor. A semiconductor diode is additionally coupled from a common circuit connection between the first capacitor and the second inductor to the other side of the first inductor. A current sense transformer generating a current feedback signal for the switch control circuit is directly coupled in series with the other side of the first inductor so that the first capacitor, the second inductor and the current sense transformer are connected in series through the first inductor. The inductance values of the first and second inductors, moreover, are made identical. Such a converter topology results in a simultaneous voltsecond balance in the first inductance and ampere-second balance in the current sense transformer.
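The two-inductor, coupling-capacitor, diode topology described above resembles a Cuk-type converter cell; under that interpretation (an assumption on our part, not stated in the abstract), ideal volt-second balance on the inductors fixes the conversion ratio:

```python
def ideal_voltage_ratio(duty):
    """Ideal steady-state conversion ratio from volt-second balance,
    assuming the described topology behaves as an ideal Cuk-type cell:
    |Vout / Vin| = D / (1 - D) for switch duty cycle D."""
    return duty / (1.0 - duty)
```

At D = 0.5 the output magnitude equals the input; D above 0.5 steps up, below 0.5 steps down.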
A new mechanical characterization method for microactuators applied to shape memory films
Ackler, H D; Krulevitch, P; Ramsey, P B; Seward, K P
1999-03-01
We present a new technique for the mechanical characterization of microactuators and apply it to shape memory alloy (SMA) thin films. A test instrument was designed which utilizes a spring-loaded transducer to measure displacements with a resolution of 1.5 µm and forces with a resolution of 0.2 mN. Employing an out-of-plane loading method for SMA thin films, strain resolution of 30 µε and stress resolution of 2.5 MPa were achieved. Four-mm-long, 2 µm thick NiTiCu ligaments suspended across open windows were bulk micromachined for use in the out-of-plane stress and strain measurements. Static analysis showed that 63% of the applied strain was recovered while ligaments were subjected to tensile stresses of 870 MPa. This corresponds to 280 µm of actual displacement against a load of 52 mN. Fatigue analysis of the ligaments showed 33% degradation in recoverable strain (from 0.3% to 0.2%) over 2 × 10^4 cycles for an initial strain of 2.8%.
Lopes, Fernanda Cristina Rezende; Tannous, Katia; Rueda-Ordóñez, Yesid Javier
2016-11-01
This work studies the decomposition kinetics of guarana seed residue using a thermogravimetric analyzer under a synthetic air atmosphere, applying heating rates of 5, 10, and 15°C/min from room temperature to 900°C. Three thermal decomposition stages were identified: dehydration (25.1-160°C), oxidative pyrolysis (240-370°C), and combustion (350-650°C). The activation energies, reaction model, and pre-exponential factor were determined through four isoconversional methods, master plots, and linearization of the conversion rate equation, respectively. A scheme of two consecutive reactions was applied, validating the kinetic parameters of the first-order reaction model for the oxidative pyrolysis stage (149.57 kJ/mol, 6.97×10^10 1/s) and of the two-dimensional diffusion model for the combustion stage (77.98 kJ/mol, 98.61 1/s). The comparison between theoretical and experimental conversion and conversion rates showed good agreement, with average deviation lower than 2%, indicating that these results could be used for modeling of guarana seed residue. PMID:27513645
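The first-order model validated for the oxidative-pyrolysis stage can be sketched by integrating the non-isothermal rate law da/dT = (A/beta) exp(-E/(RT)) (1 - a) over temperature. The explicit Euler sweep and temperature window below are illustrative numerical choices, not the paper's procedure.

```python
import numpy as np

def first_order_conversion(T, A, E, beta):
    """Integrate da/dT = (A/beta) exp(-E/(R T)) (1 - a) by an explicit
    Euler sweep over temperatures T [K]; beta is the heating rate in K/s,
    A the pre-exponential factor in 1/s, E the activation energy in J/mol."""
    R = 8.314  # gas constant, J/(mol K)
    a = np.zeros_like(T, dtype=float)
    for i in range(1, len(T)):
        k = (A / beta) * np.exp(-E / (R * T[i - 1]))
        a[i] = a[i - 1] + (T[i] - T[i - 1]) * k * (1.0 - a[i - 1])
    return np.clip(a, 0.0, 1.0)
```

With the reported oxidative-pyrolysis parameters (E = 149.57 kJ/mol, A = 6.97×10^10 1/s) and a 10°C/min heating rate, the predicted conversion stays near zero through the low-temperature range and rises steeply in a narrow temperature window, the usual signature of Arrhenius kinetics.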
A useful method to overcome the difficulties of applying silicone gel sheet on irregular surfaces.
Grella, Roberto; Nicoletti, Gianfranco; D'Ari, Antonio; Romanucci, Vincenza; Santoro, Mariangela; D'Andrea, Francesco
2015-04-01
To date, silicone gel and silicone occlusive plates are the most useful and effective treatment options for hypertrophic scars (surgical and traumatic). Use of silicone sheeting has also been demonstrated to be effective in the treatment of minor keloids in association with corticosteroid intralesional infiltration. In our practice, we encountered four problems: maceration, rashes, pruritus and infection. Not all patients are able to tolerate the cushion, especially children, and certain anatomical regions such as the face and the upper chest are not easy to dress for obvious social, psychological and aesthetic reasons. In other anatomical regions, it is also difficult to obtain adequate compression and occlusion of the scar. To overcome such problems of applying silicone gel sheeting, we tested the use of liquid silicone gel (LSG) in the treatment of 18 linear hypertrophic scars (HS group) and 12 minor keloids (KS group) as an alternative to silicone gel sheeting or cushion. Objective parameters (volume, thickness and colour) and subjective symptoms such as pain and pruritus were examined. Evaluations were made when the therapy started and after 30, 90 and 180 days of follow-up. After 90 days of treatment with silicone gel alone (two applications daily), the HS group showed a significant improvement in terms of volume decrease, reduced inflammation and redness, and improved elasticity. In conclusion, on the basis of our clinical data, we find LSG to be a useful method to overcome the difficulties of applying silicone gel sheeting on irregular surfaces.
Photonic simulation method applied to the study of structural color in Myxomycetes.
Dolinko, Andrés; Skigin, Diana; Inchaussandague, Marina; Carmaran, Cecilia
2012-07-01
We present a novel simulation method to investigate the multicolored effect of Diachea leucopoda (order Physarales, class Myxomycetes), a microorganism with a characteristic pointillistic iridescent appearance. It was shown that this appearance is of structural origin and is produced within the peridium (the protective layer that encloses the mass of spores), which is basically a corrugated sheet of a transparent material. The main characteristics of the observed color were explained in terms of interference effects using a simple homogeneous planar slab model. In this paper we apply a novel simulation method to investigate the electromagnetic response of such a structure in more detail, i.e., taking into account the inhomogeneities of the biological material within the peridium and its curvature. We show that both features, which could not be considered within the simplified model, affect the observed color. The proposed method has great potential for the study of biological structures, which present a high degree of complexity in the geometrical shapes as well as in the materials involved. PMID:22772212
Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui
2011-01-01
Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences on legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called ‘gene patents’ by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing. PMID:21811306
Resampling method for applying density-dependent habitat selection theory to wildlife surveys.
Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel
2015-01-01
Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is repeated 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
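The random block placement described above can be sketched as follows; the survey grid of animal counts, the Poisson intensity, and the block size are hypothetical, invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical wildlife survey grid: each cell holds an animal count
counts = rng.poisson(lam=2.0, size=(100, 100))

def sample_block_pair(counts, half=10):
    """Place one random block on the grid and divide it into two
    adjacent, equal-sized sub-blocks; return abundance in each."""
    n_rows, n_cols = counts.shape
    r = rng.integers(0, n_rows - half + 1)
    c = rng.integers(0, n_cols - 2 * half + 1)
    left = counts[r:r + half, c:c + half].sum()
    right = counts[r:r + half, c + half:c + 2 * half].sum()
    return left, right

# Repeat the placement 100 times, as in the described method; each
# abundance pair is raw material for fitting an isodar regression
pairs = np.array([sample_block_pair(counts) for _ in range(100)])
```

Differences in habitat covariates between the two sub-blocks would be computed the same way and regressed against the abundance pairs.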
Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system
NASA Astrophysics Data System (ADS)
Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew
2016-05-01
Many mechanical and electrical systems utilize the proportional-integral-derivative (PID) control strategy. PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing significant growth in popularity, and because of these advantages, UAVs commonly implement PID controllers for improved stability and performance. An important consideration for such systems is the selection of PID gain values in order to achieve a safe flight and a successful mission. A number of different algorithms can be used for real-time tuning of gains. This paper presents two gain-tuning algorithms, based respectively on the method of steepest descent and on Newton's method for minimizing an objective function, and compares the results of applying them in conjunction with a PD controller on a quadrotor system.
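As an illustration of the steepest-descent variant, the sketch below tunes PD gains for a double-integrator plant (a crude stand-in for a single quadrotor axis) by following a finite-difference gradient of an integrated squared tracking error. The plant model, cost function, step size, and iteration count are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def tracking_cost(kp, kd, dt=0.01, steps=500, target=1.0):
    """Integrated squared error of a PD-controlled double
    integrator (x'' = u) tracking a step reference."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kd * v   # PD control law
        v += u * dt                      # semi-implicit Euler step
        x += v * dt
        cost += (target - x) ** 2 * dt
    return cost

# Steepest descent on (kp, kd) using finite-difference gradients
kp, kd = 1.0, 1.0
alpha, eps = 0.2, 1e-4
for _ in range(50):
    g_kp = (tracking_cost(kp + eps, kd) - tracking_cost(kp - eps, kd)) / (2 * eps)
    g_kd = (tracking_cost(kp, kd + eps) - tracking_cost(kp, kd - eps)) / (2 * eps)
    kp -= alpha * g_kp
    kd -= alpha * g_kd

print(f"tuned gains: kp={kp:.2f}, kd={kd:.2f}")
```

Newton's variant would replace the gradient step with a Hessian-scaled step; in either case the step size must be kept small enough that the closed loop stays stable throughout tuning.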
A Field Method for Backscatter Calibration Applied to NOAA's Reson 7125 Multibeam Echo-Sounders
NASA Astrophysics Data System (ADS)
Welton, Briana
Acoustic seafloor backscatter measurements made by multiple Reson multibeam echo-sounders (MBES) used for hydrographic survey are observed to be inconsistent, affecting the quality of data products and impeding large-scale processing efforts. A method to conduct relative inter- and intra-sonar calibration in the field using the dual-frequency Reson 7125 MBES has been developed, tested, and evaluated to improve the consistency of backscatter measurements made from multiple MBES systems. The approach is unique in that it determines a set of corrections for power, gain, pulse length, and an angle-dependent calibration term relative to a single Reson 7125 MBES calibrated in an acoustic test tank. These corrections for each MBES can then be applied during processing for any acquisition setting combination. This approach seeks to reduce the need for subjective and inefficient manual data or data-product manipulation during post-processing, providing a foundation for improved automated seafloor characterization using data from more than one MBES system.
IBA-Europhysics Prize in Applied Nuclear Science and Nuclear Methods in Medicine
NASA Astrophysics Data System (ADS)
MacGregor, I. J. Douglas
2014-03-01
The Nuclear Physics Board of the European Physical Society is pleased to announce that the 2013 IBA-Europhysics Prize in Applied Nuclear Science and Nuclear Methods in Medicine is awarded to Prof. Marco Durante, Director of the Biophysics Department at GSI Helmholtz Center (Darmstadt, Germany); Professor at the Technical University of Darmstadt (Germany) and Adjunct Professor at the Temple University, Philadelphia, USA. The prize was presented in the closing Session of the INPC 2013 conference by Mr. Thomas Servais, R&D Manager for Accelerator Development at the IBA group, who sponsor the IBA Europhysics Prize. The Prize Diploma was presented by Dr. I J Douglas MacGregor, Chair-elect of the EPS Nuclear Physics Division and Chair of the IBA Prize committee.
Investigation of the equatorial orographic-dynamic mechanism applying the bounded derivative method
NASA Technical Reports Server (NTRS)
Semazzi, F. H. M.
1984-01-01
A system of equations which describes the motion of a barotropic fluid in the presence of bottom topography is presented. The mathematical expression for orography is developed and the bounded derivative initialization method is applied to suppress gravitational oscillations. A stationary orographic trough is simulated. The geopotential and zonal motion have maximum deviation from the mean state at the top of the mountain. Regarding meridional speed, outflow occurs on the windward slope and inflow on the leeward slope. Divergence of order 10⁻⁶ s⁻¹ is found on the windward slope, while convergence of the same order of magnitude resides on the leeward slope. This outcome may have interesting implications regarding the real climatology occurring over the equatorial regions of continental land masses.
Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N
2012-01-01
The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. PMID:22647689
MITOM: a new unfolding code based on a spectra model method applied to neutron spectrometry.
Tomás, M; Fernández, F; Bakali, M; Muller, H
2004-01-01
The MITOM code was developed at UAB (Universitat Autònoma de Barcelona) for unfolding neutron spectrometric measurements with a Bonner spheres system (BSS). One of the main characteristics of this code is that an initial parameterisation of the neutron energy components (thermal, intermediate and fast) is needed. This code uses the Monte Carlo method and the Bayesian theorem to obtain a set of solutions achieving different criteria and conditions between calculated and measured count rates. The final solution is an average of the acceptable solutions. The MITOM code was tested for ISO sources and a good agreement was observed between the reference values and the unfolded ones for global magnitudes. The code was applied recently to characterise both thermal SIGMA and CANEL/T400 sources of the IRSN facilities. The results of these applications were very satisfactory as well.
A confocal laser scanning microscope segmentation method applied to magnetic resonance images.
Anderson, Jeffrey R; Barrett, Steven F
2008-01-01
Segmentation is the process of defining distinct objects in an image. A semi-automatic segmentation method has been developed for biological objects that have been recorded with a confocal laser scanning microscope (CLSM). The CLSM produces a sequence of thinly "sliced" images that represent cross-sectional views of the sample containing the object of interest. A cross-sectional representation, or "seed," of the object of interest is created within a single slice of the image stack. The segmentation method uses this "seed" to segment the same object in the adjacent image slice. The new "seed" is used for the next image slice, and so on, until the object of interest is segmented in all images of the data set. The segmentation method is based on the idea that the object of interest does not change significantly from one image slice to the next. The segmented information is then used to create 3D renderings of the object. These renderings can be studied and analyzed on the computer screen. Previous work has demonstrated the usefulness of the algorithm as applied to CLSM images. This paper explores the application of the segmentation method to a standard sequence of magnetic resonance imaging (MRI) images. Typical MRI machines can produce impressive images of the human body. The resulting data set is often a sequence, or "stack," of cross-sectional slice images of a particular region of the body. The goal, then, is to use the previously described segmentation method on a standard sequence of MRI images. This process will expose limitations of the segmentation method and areas where further work can be directed. This paper illustrates and discusses some of the differences between the data sets that make the current segmentation method inadequate for segmentation of MRI data sets. Some of the differences can be corrected with modification of the segmentation algorithm, but other differences are beyond the capabilities of the segmentation method, and can possibly be
NASA Astrophysics Data System (ADS)
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. Several uncertainty analysis and/or prediction methods are available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis on which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e., past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable methodological characteristic, i.e., applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
Benavidez, A.
1986-01-01
The pattern recognition method is applied to the Andean seismic region that extends from latitude 2°S to 27°S on the South American continent, to set a criterion for predicting the potential sites of strong earthquake epicenters in the zone. Two hypotheses are assumed to hold. First, strong earthquake epicenters typically cluster around the intersections of morphostructural lineaments. Second, the rules of recognition obtained for neighboring zones which exhibit distinctive neotectonic evolution, state of stress, spatial earthquake distribution and geological development may be different, despite the fact that the morphostructural zoning does not reflect a separation between them. Hence, the region is divided into two broad-scale tectonic segments located above slabs of similar scale in the Nazca plate, in which subduction takes place almost subhorizontally (dipping at an angle of about 10°) between latitudes 2°S and 15°S, and at a steeper angle (approximately 30°) within latitudes 15°S to 27°S. The morphostructural zoning is carried out for both zones with the determination of the lineaments and the corresponding disjunctive knots, which are defined as the objects of recognition when applying the pattern recognition method. The Cora-3 algorithm is used as the computational procedure for the search for the rule of recognition of dangerous and non-dangerous sites for each zone. The set criteria contain in each case several characteristic features that represent the topography, geology and tectonics of each region. It is also shown that they have a physical meaning that mostly reflects the style of tectonic deformation in the related regions.
Puricelli, Edela; Fonseca, Jun Sérgio Ono; de Paris, Marcel Fasolo; Sant'Anna, Hervandil
2007-01-01
Background Surgical orthopedic treatment of the mandible depends on the development of techniques resulting in adequate healing processes. In a new technical and conceptual alternative recently introduced by Puricelli, osteotomy is performed in a more distal region, next to the mental foramen. The method results in an increased area of bone contact, resulting in larger sliding rates among bone segments. This work aimed to investigate the mechanical stability of the Puricelli osteotomy design. Methods Laboratory tests complied with an Applied Mechanics protocol, in which results from the Control group (without osteotomy) were compared with those from Test I (Obwegeser-Dal Pont osteotomy) and Test II (Puricelli osteotomy) groups. Mandible edentulous prototypes were scanned using computerized tomography, and digitalized images were used to build voxel-based finite element models. A new code was developed for solving the voxel-based finite elements equations, using a reconditioned conjugate gradients iterative solver. The Magnitude of Displacement and von Mises equivalent stress fields were compared among the three groups. Results In Test Group I, maximum stress was seen in the region of the rigid internal fixation plate, with value greater than those of Test II and Control groups. In Test Group II, maximum stress was in the same region as in Control group, but was lower. The results of this comparative study using the Finite Element Analysis suggest that Puricelli osteotomy presents better mechanical stability than the original Obwegeser-Dal Pont technique. The increased area of the proximal segment and consequent decrease of the size of lever arm applied to the mandible in the modified technique yielded lower stress values, and consequently greater stability of the bone segments. Conclusion This work showed that Puricelli osteotomy of the mandible results in greater mechanical stability when compared to the original technique introduced by Obwegeser-Dal Pont. The
Analysis of dolines using multiple methods applied to airborne laser scanning data
NASA Astrophysics Data System (ADS)
Bauer, Christian
2015-12-01
Delineating dolines is not a straightforward process, especially in densely vegetated areas. This paper deals quantitatively with the surface karst morphology of a Miocene limestone occurrence in the Styrian Basin, Austria. The study area is an isolated karst mountain with a smooth morphology (a former planation surface of Pliocene age), densely vegetated (mixed forest), and with a surface area of 1.3 km². It is located near the city of Wildon and is named "Wildoner Buchkogel". The aim of this study was to test three different approaches to automatically delineate dolines. The data basis was a high-resolution digital terrain model (DTM) derived from airborne laser scanning (ALS), with a raster resolution of 1 × 1 m. The three methods for doline boundary delineation are: (a) the "traditional" method based on the outermost closed contour line; (b) boundary extraction based on a drainage correction algorithm (filling up pits); and (c) boundary extraction based on hydrologic modelling (watersheds). Extracted features are integrated into a GIS environment and analysed statistically regarding spatial distribution, shape geometry, elongation direction and volume. The three methods lead to different doline boundaries, and therefore the investigated parameters show significant variations. The applied methods have been compared with respect to their application purpose. Depending on the delineation process, between 118 and 189 dolines could be defined. The high density of surface karst features demonstrates that solutional processes are major factors in the landscape development of the Wildoner Buchkogel. Furthermore, the correlation to the landscape evolution of the Grazer Bergland is discussed.
The generalized cross-validation method applied to geophysical linear traveltime tomography
NASA Astrophysics Data System (ADS)
Bassrei, A.; Oliveira, N. P.
2009-12-01
The oil industry is the major user of Applied Geophysics methods for subsurface imaging. Among the different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, a kinematic approach, and those that use the wave amplitude itself, a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. A crucial problem in regularization is the selection of the regularization parameter lambda. We use generalized cross validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is applied here to traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, such as a fault and a reservoir. The results using GCV are very good, including those contaminated with noise, and also using different regularization orders, attesting to the feasibility of this technique.
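For a generic Tikhonov-regularized linear inverse problem, the GCV function and its minimizer can be sketched as below. The toy ill-conditioned matrix, noise level, and lambda grid are invented for illustration and are not a traveltime tomography dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Toy ill-conditioned system A m = d, a stand-in for the
# linearized traveltime operator
U, _, Vt = np.linalg.svd(rng.normal(size=(n, n)))
A = (U * np.logspace(0, -6, n)) @ Vt  # singular values from 1 to 1e-6
m_true = np.ones(n)
d = A @ m_true + 1e-4 * rng.normal(size=n)

def gcv(lam):
    """GCV(lam) = ||(I - H) d||^2 / trace(I - H)^2, where
    H = A (A^T A + lam I)^-1 A^T is the influence matrix of
    zeroth-order Tikhonov regularization."""
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
    resid = d - H @ d
    return float(resid @ resid) / np.trace(np.eye(n) - H) ** 2

# Grid search for the minimizer of the GCV function
lambdas = np.logspace(-10, 2, 25)
scores = [gcv(lam) for lam in lambdas]
lam_best = lambdas[int(np.argmin(scores))]

# Regularized solution with the GCV-selected parameter
m_est = np.linalg.solve(A.T @ A + lam_best * np.eye(n), A.T @ d)
```

The paper's smoothing regularization would replace `lam * np.eye(n)` with `lam * D.T @ D` for a derivative matrix `D`; the GCV machinery is unchanged.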
Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method
Bhagwat, Nikhil V.
2005-01-01
In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Moreover, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of significance to industries in which the cost associated with creating a new station is not uniform; for such industries, the results obtained using the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of the two could effectively be used as alternative objective functions.
Technique for sensing inductor and dc output currents of PWM dc-dc converter
Ma, K.; Lee, Y. (Dept. of Electronic Engineering)
1994-05-01
The design, analysis and trade-offs of a novel method to sense the inductor and dc output currents of PWM converters are presented. By appropriately sensing and adding the currents in the transistor, rectifier and capacitors of a converter using current transformers, the waveforms of the inductor and dc output currents can be reconstructed accurately while maintaining isolation. This method offers high bandwidth, clean waveforms, practically zero power dissipation, and a simple circuit. The technique is applicable to all PWM converters in both continuous and discontinuous modes, and is most suitable for the implementation of current-mode control schemes such as hysteretic control, PWM conductance control, and output current feedforward. The approach has been experimentally verified over a wide range of current levels, duty cycles, and switching frequencies up to 1.4 MHz.
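The reconstruction idea can be illustrated numerically: in a buck-type converter in continuous conduction mode, the inductor current flows through the transistor during the on-time and through the rectifier during the off-time, so summing the two (CT-isolated) sensed currents recovers the inductor current sample by sample. The waveform values below are idealised, hypothetical samples.

```python
def reconstruct_inductor_current(i_transistor, i_rectifier):
    # In CCM the inductor current is carried by exactly one of the two
    # devices at any instant, so the sample-wise sum reconstructs i_L.
    return [q + d for q, d in zip(i_transistor, i_rectifier)]

# Idealised sensed waveforms over one switching period (amperes)
i_q = [2.0, 2.2, 2.4, 0.0, 0.0, 0.0]   # transistor conducts first
i_d = [0.0, 0.0, 0.0, 2.4, 2.2, 2.0]   # rectifier conducts after turn-off
print(reconstruct_inductor_current(i_q, i_d))
```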
DC-Compensated Current Transformer †
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-01
Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830
Intelligent dc-dc Converter Technology Developed and Tested
NASA Technical Reports Server (NTRS)
Button, Robert M.
2001-01-01
The NASA Glenn Research Center and the Cleveland State University have developed a digitally controlled dc-dc converter to research the benefits of flexible, digital control on power electronics and systems. Initial research and testing has shown that conventional dc-dc converters can benefit from improved performance by using digital-signal processors and nonlinear control algorithms.
Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.
1985-01-01
In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
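The role of the nonzero intercept can be shown with a single-wavelength toy calibration: fitting absorbance A = b0 + b1·c absorbs a baseline offset into b0, after which an unknown concentration is recovered by inverting the line. The concentrations and absorbances below are hypothetical, not from the paper.

```python
def fit_line(x, y):
    # Ordinary least squares with a nonzero intercept:
    # A = b0 + b1 * c, so a constant baseline offset lands in b0.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
         sum((xi - mx) ** 2 for xi in x)
    b0 = my - b1 * mx
    return b0, b1

# Calibration standards: known concentrations vs. measured absorbances
conc = [0.1, 0.2, 0.4, 0.8]
absb = [0.06, 0.11, 0.21, 0.41]          # includes a small baseline
b0, b1 = fit_line(conc, absb)

# Prediction phase: invert the calibration for an unknown sample
unknown_abs = 0.16
print(round((unknown_abs - b0) / b1, 3))
```

The full method generalises this to weighted multivariate regression over entire spectra; this sketch only isolates the intercept idea.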
Independent Identification Method applied to EDMOND and SonotaCo databases
NASA Astrophysics Data System (ADS)
Rudawska, R.; Matlovic, P.; Toth, J.; Kornos, L.; Hajdukova, M.
2015-10-01
In recent years, networks of low-light-level video cameras have contributed many new meteoroid orbits. As a result of cooperation and data sharing among national networks and the International Meteor Organization Video Meteor Database (IMO VMDB), the European Video Meteor Network Database (EDMOND; [2, 3]) has been created. Its current version contains 145 830 orbits collected from 2001 to 2014. Another productive camera network has been that of the Japanese SonotaCo consortium [5], which at present has made available 168 030 meteoroid orbits collected from 2007 to 2013. In our survey we used the EDMOND and SonotaCo databases together, in order to identify the meteor showers present in both (Figures 1 and 2). For this purpose we applied the recently introduced independent identification method [4]. In the first step of the survey we used a criterion based on the orbital parameters (e, q, i, ω, and Ω) to find groups around each meteor within the similarity threshold. Mean parameters of the groups were calculated using Welch's method [6], and compared using a new function based on geocentric parameters (the radiant coordinates and the geocentric velocity Vg). Similar groups were merged into final clusters (representing meteor showers), and compared with the IAU Meteor Data Center list of meteor showers [1]. This poster presents the results obtained by the proposed methodology.
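The grouping step can be sketched with a simplified orbital-dissimilarity function: orbits whose distance in (e, q, i) space falls below a threshold are collected around a reference meteor. This is only a reduced, illustrative form; the full Southworth-Hawkins-style criteria also involve the node and argument of perihelion, and the threshold value here is arbitrary.

```python
import math

def d_orbit(o1, o2):
    """Simplified orbital dissimilarity using eccentricity e,
    perihelion distance q (AU), and inclination i (radians)."""
    return math.sqrt((o1["e"] - o2["e"]) ** 2 +
                     (o1["q"] - o2["q"]) ** 2 +
                     (2 * math.sin(abs(o1["i"] - o2["i"]) / 2)) ** 2)

orbits = [
    {"e": 0.65, "q": 0.98, "i": 0.10},
    {"e": 0.66, "q": 0.97, "i": 0.11},   # similar to the first
    {"e": 0.20, "q": 0.30, "i": 0.90},   # sporadic background orbit
]
ref = orbits[0]
group = [o for o in orbits if d_orbit(ref, o) < 0.1]
print(len(group))
```

Repeating this around every meteor, averaging each group, and then merging similar groups via a geocentric-parameter function yields the final shower clusters.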
Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P
2012-10-01
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
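The feature extraction pipeline described above can be sketched in two steps: denoise the recording by convolution with a smoothing kernel, then assign one coefficient per local maximum from its amplitude and its distance to the dominant maximum. The signal below is a hypothetical toy trace, and a flat moving-average kernel stands in for whatever kernel the authors used.

```python
def smooth(signal, k=3):
    # Denoise by convolution with a flat (moving-average) kernel
    half = k // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def peak_features(signal):
    # One coefficient per local maximum: its amplitude, down-weighted
    # by its distance to the most important (global) maximum
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1]]
    top = max(peaks, key=lambda i: signal[i])
    return [signal[i] / (1 + abs(i - top)) for i in peaks]

raw = [0, 1, 3, 2, 1, 0, 2, 5, 9, 5, 2, 1, 0]
feats = peak_features(smooth(raw))
print(len(feats))
```

The resulting coefficient vector is what the gradient boosting classifier consumes; the exact weighting formula here is illustrative, not the authors'.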
Symmetry analysis for nonlinear time reversal methods applied to nonlinear acoustic imaging
NASA Astrophysics Data System (ADS)
Dos Santos, Serge; Chaline, Jennifer
2015-10-01
Using symmetry invariance, nonlinear Time Reversal (TR) and reciprocity properties, the classical NEWS methods are supplemented and improved by new excitations having the intrinsic property of enlarging the frequency analysis bandwidth and time domain scales, now with both medical acoustics and electromagnetic applications. The analysis of invariant quantities is a well-known tool which is often used in nonlinear acoustics in order to simplify complex equations. Based on a fundamental physical principle known as symmetry analysis, this approach consists in finding judicious variables, intrinsically scale-dependent, that are able to describe all stages of behaviour on the same theoretical foundation. Building on previously published results in nonlinear acoustics, some practical implementations are proposed as a new way to define TR-NEWS based methods applied to NDT and medical bubble-based non-destructive imaging. This paper shows how symmetry analysis can help us define new methodologies and new experimental set-ups involving modern signal processing tools. Some examples of practical realizations are proposed in the context of biomedical non-destructive imaging using Ultrasound Contrast Agents (UCAs), where symmetry and invariance properties allow us to define a microscopic scale-invariant experimental set-up describing the intrinsic symmetries of the microscopic complex system.
NASA Astrophysics Data System (ADS)
Cukier, Robert I.
2011-01-01
Leucine zippers consist of alpha helical monomers dimerized (or oligomerized) into alpha superhelical structures known as coiled coils. Forming the correct interface of a dimer from its monomers requires an exploration of configuration space focused on the side chains of one monomer that must interdigitate with sites on the other monomer. The aim of this work is to generate good interfaces in short simulations starting from separated monomers. Methods are developed to accomplish this goal based on an extension of a previously introduced [Su and Cukier, J. Phys. Chem. B 113, 9595, (2009)] Hamiltonian temperature replica exchange method (HTREM), which scales the Hamiltonian in both potential and kinetic energies that was used for the simulation of dimer melting curves. The new method, HTREM_MS (MS designates mean square), focused on interface formation, adds restraints to the Hamiltonians for all but the physical system, which is characterized by the normal molecular dynamics force field at the desired temperature. The restraints in the nonphysical systems serve to prevent the monomers from separating too far, and have the dual aims of enhancing the sampling of close in configurations and breaking unwanted correlations in the restrained systems. The method is applied to a 31-residue truncation of the 33-residue leucine zipper (GCN4-p1) of the yeast transcriptional activator GCN4. The monomers are initially separated by a distance that is beyond their capture length. HTREM simulations show that the monomers oscillate between dimerlike and monomerlike configurations, but do not form a stable interface. HTREM_MS simulations result in the dimer interface being faithfully reconstructed on a 2 ns time scale. A small number of systems (one physical and two restrained with modified potentials and higher effective temperatures) are sufficient. An in silico mutant that should not dimerize because it lacks charged residues that provide electrostatic stabilization of the dimer
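The exchange step underlying replica methods of the HTREM family can be sketched with the standard Metropolis swap criterion between two replicas at different effective inverse temperatures. This is a generic sketch of replica exchange, not the specific HTREM_MS Hamiltonian-scaling or restraint scheme of the paper.

```python
import math, random

def swap_accept(E_i, E_j, beta_i, beta_j, rng):
    """Metropolis criterion for exchanging configurations between two
    replicas at effective inverse temperatures beta_i and beta_j."""
    delta = (beta_i - beta_j) * (E_j - E_i)
    return delta <= 0 or rng.random() < math.exp(-delta)

rng = random.Random(0)
# A favourable swap: moves the lower-energy state to the colder replica
print(swap_accept(10.0, 5.0, beta_i=1.0, beta_j=0.5, rng=rng))
```

In HTREM_MS the nonphysical replicas additionally carry restraints (here absent) that keep the monomers from drifting apart, so that accepted swaps feed close-in configurations back to the physical system.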
Applying Sequential Analytic Methods to Self-Reported Information to Anticipate Care Needs
Bayliss, Elizabeth A.; Powers, J. David; Ellis, Jennifer L.; Barrow, Jennifer C.; Strobel, MaryJo; Beck, Arne
2016-01-01
Purpose: Identifying care needs for newly enrolled or newly insured individuals is important under the Affordable Care Act. Systematically collected patient-reported information can potentially identify subgroups with specific care needs prior to service use. Methods: We conducted a retrospective cohort investigation of 6,047 individuals who completed a 10-question needs assessment upon initial enrollment in Kaiser Permanente Colorado (KPCO), a not-for-profit integrated delivery system, through the Colorado State Individual Exchange. We used responses from the Brief Health Questionnaire (BHQ) to develop a predictive model for the probability of incurring costs in the top 25 percent, then applied cluster analytic techniques to identify different high-cost subpopulations. Per-member, per-month cost was measured from 6 to 12 months following BHQ response. Results: BHQ responses significantly predictive of high-cost care included self-reported health status, functional limitations, medication use, presence of 0-4 chronic conditions, self-reported emergency department (ED) use during the prior year, and lack of prior insurance. Age, gender, and deductible-based insurance product were also predictive. The largest possible range of predicted probabilities of being in the top 25 percent of cost was 3.5 percent to 96.4 percent. Within the top cost quartile, examples of potentially actionable clusters of patients included those with high morbidity, prior utilization, depression risk and financial constraints; those with high morbidity, previously uninsured individuals with few financial constraints; and relatively healthy, previously insured individuals with medication needs. Conclusions: Applying sequential predictive modeling and cluster analytic techniques to patient-reported information can identify subgroups of individuals within heterogeneous populations who may benefit from specific interventions to optimize initial care delivery. PMID:27563684
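The first stage, scoring questionnaire responses with a predictive model and flagging the top cost quartile, can be sketched with a toy logistic model. The two features, their coefficients, and the member responses below are all hypothetical illustrations, not the fitted BHQ model.

```python
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

def risk(health, chronic):
    """Toy scoring model: hypothetical coefficients for two BHQ-style
    responses (self-rated health 1-5, count of chronic conditions 0-4)."""
    return logistic(-3.0 + 0.5 * health + 0.8 * chronic)

members = [(1, 0), (2, 1), (5, 4), (4, 3), (3, 0), (5, 3), (2, 0), (4, 4)]
scored = sorted(members, key=lambda m: risk(*m), reverse=True)
top_quartile = scored[:len(scored) // 4]   # flagged for early outreach
print(top_quartile)
```

The paper's second stage then runs cluster analysis within this flagged quartile to separate subgroups (e.g. high morbidity with financial constraints vs. medication needs); that step is omitted here.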
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Shearer, Peter M.
2016-05-01
Understanding earthquake clustering in space and time is important but also challenging because of complexities in earthquake patterns and the large and diverse nature of earthquake catalogues. Swarms are of particular interest because they likely result from physical changes in the crust, such as slow slip or fluid flow. Both swarms and clusters resulting from aftershock sequences can span a wide range of spatial and temporal scales. Here we test and implement a new method to identify seismicity clusters of varying sizes and discriminate them from randomly occurring background seismicity. Our method searches for the closest neighbouring earthquakes in space and time and compares the number of neighbours to the background events in larger space/time windows. Applying our method to California's San Jacinto Fault Zone (SJFZ), we find a total of 89 swarm-like groups. These groups range in size from 0.14 to 7.23 km and last from 15 min to 22 d. The most striking spatial pattern is the larger fraction of swarms at the northern and southern ends of the SJFZ than its central segment, which may be related to more normal-faulting events at the two ends. In order to explore possible driving mechanisms, we study the spatial migration of events in swarms containing at least 20 events by fitting with both linear and diffusion migration models. Our results suggest that SJFZ swarms are better explained by fluid flow because their estimated linear migration velocities are far smaller than those of typical creep events while large values of best-fitting hydraulic diffusivity are found.
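The neighbour-counting test described above can be sketched as follows: count events in a small space-time window around a given event, compare with the count expected from the event density in a much larger window, and flag the event as cluster-like when the small-window count is anomalously high. The events, window sizes, and the factor of 3 are hypothetical choices, not the paper's calibrated values, and space is reduced to one dimension.

```python
def neighbours(events, i, dx, dt):
    # Count events within a space-time window around event i
    x0, t0 = events[i]
    return sum(1 for j, (x, t) in enumerate(events)
               if j != i and abs(x - x0) <= dx and abs(t - t0) <= dt)

def is_clustered(events, i, dx, dt, big_dx, big_dt):
    """Flag event i as cluster-like when its close-neighbour count
    exceeds the rate implied by the larger background window."""
    close = neighbours(events, i, dx, dt)
    background = neighbours(events, i, big_dx, big_dt)
    expected = background * (dx / big_dx) * (dt / big_dt)
    return close > 3 * expected

# (position km, time days): a tight burst plus scattered background
events = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.15, 0.3),
          (5.0, 10.0), (9.0, 40.0), (3.0, 25.0)]
print(is_clustered(events, 0, dx=0.5, dt=1.0, big_dx=10.0, big_dt=50.0))
```

Linking mutually flagged events then yields groups whose spatial extent and duration can be measured, as in the 89 swarm-like groups reported for the SJFZ.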
The expanding photosphere method applied to SN 1992am AT cz = 14 600 km/s
NASA Technical Reports Server (NTRS)
Schmidt, Brian P.; Kirshner, Robert P.; Eastman, Ronald G.; Hamuy, Mario; Phillips, Mark M.; Suntzeff, Nicholas B.; Maza, Jose; Filippenko, Alexei V.; Ho, Luis C.; Matheson, Thomas
1994-01-01
We present photometry and spectroscopy of Supernova (SN) 1992am for five months following its discovery by the Calan Cerro-Tololo Inter-American Observatory (CTIO) SN search. These data show SN 1992am to be type II-P, displaying hydrogen in its spectrum and the typical shoulder in its light curve. The photometric data and the distance from our own analysis are used to construct the supernova's bolometric light curve. Using the bolometric light curve, we estimate SN 1992am ejected approximately 0.30 solar mass of Ni-56, an amount four times larger than that of other well studied SNe II. SN 1992am's host galaxy lies at a redshift of cz = 14 600 km s(exp -1), making it one of the most distant SNe II discovered, and an important application of the Expanding Photosphere Method. Since z = 0.05 is large enough for redshift-dependent effects to matter, we develop the technique to derive luminosity distances with the Expanding Photosphere Method at any redshift, and apply this method to SN 1992am. The derived distance, D = 180(sub -25) (sup +30) Mpc, is independent of all other rungs in the extragalactic distance ladder. The redshift of SN 1992am's host galaxy is sufficiently large that uncertainties due to perturbations in the smooth Hubble flow should be smaller than 10%. The Hubble ratio derived from the distance and redshift of this single object is H(sub 0) = 81(sub -15) (sup +17) km s(exp -1) Mpc(exp -1). In the future, with more of these distant objects, we hope to establish an independent and statistically robust estimate of H(sub 0) based solely on type II supernovae.
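The single-object Hubble ratio quoted in the abstract follows directly from H0 = v / D with v = cz = 14 600 km/s and the EPM distance D = 180 Mpc (central values only; the quoted asymmetric errors are not propagated in this sketch):

```python
def hubble_ratio(v_km_s, d_mpc):
    # H0 = v / D, in km/s/Mpc
    return v_km_s / d_mpc

H0 = hubble_ratio(14600.0, 180.0)
print(round(H0))
```

The result, 81 km/s/Mpc, matches the abstract's central value; the quoted uncertainty range comes from the distance errors and Hubble-flow perturbations discussed above.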
NASA Astrophysics Data System (ADS)
Shin, Y.; Lee, E.
2015-12-01
Under the influence of recent climate change, abnormal weather conditions such as floods and droughts have been occurring frequently all over the world. Abnormal weather in major crop production areas leads to soaring world grain prices because it reduces crop yields. Developing crop yield estimation methods is therefore an important means of coping with the global food crises caused by abnormal weather. However, owing to problems with the reliability of seasonal climate predictions, applied research on agricultural productivity has not progressed much yet. The objective of this study is to develop a long-term crop yield estimation method for major crop-producing countries worldwide using the multi-model seasonal climate prediction data collected by the APEC Climate Center: 6-month lead seasonal predictions produced by six state-of-the-art global coupled ocean-atmosphere models (MSC_CANCM3, MSC_CANCM4, NASA, NCEP, PNU, POAMA). First, we produce customized climate data through temporal and spatial downscaling for use as climatic input to a global-scale crop model. Next, we evaluate the uncertainty of the climate predictions by applying the multi-model seasonal predictions in the crop model. Because rice is the most important staple food crop in the Asia-Pacific region, we assess the reliability of rice yields estimated with seasonal climate predictions for the main rice-producing countries. RMSE (Root Mean Square Error) and TCC (Temporal Correlation Coefficient) analyses are performed for the 14 major rice-producing countries of the Asia-Pacific region to evaluate the reliability of the rice yields obtained with each climate prediction model. We compare the rice yield data obtained from FAOSTAT with those estimated using the seasonal climate prediction data, and examine how the reliability of the seasonal predictions varies with the climate model across the Asia-Pacific countries where rice cultivation is carried out.
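The two skill scores named above are standard and easy to state explicitly: RMSE measures the typical error magnitude between observed and estimated yield series, and TCC is the Pearson correlation of the two series over time. The yield values below are hypothetical stand-ins for FAOSTAT observations and a crop-model hindcast.

```python
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def tcc(obs, pred):
    # Temporal correlation coefficient: Pearson r over the time series
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    return cov / math.sqrt(sum((o - mo) ** 2 for o in obs) *
                           sum((p - mp) ** 2 for p in pred))

# Hypothetical rice yields (t/ha): observations vs. model hindcast
obs  = [4.1, 4.3, 3.9, 4.6, 4.2]
pred = [4.0, 4.4, 3.8, 4.5, 4.3]
print(round(rmse(obs, pred), 3), round(tcc(obs, pred), 3))
```

Computing these two numbers per country and per climate model is what underlies the reliability comparison across the 14 rice-producing countries.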
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
NASA Astrophysics Data System (ADS)
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes from LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
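The effect of class weighting on an imbalanced depression dataset can be shown at the scale of a single tree node: when choosing a split, errors on the rare sinkhole class are charged more than errors on ordinary depressions. The feature (depression depth), labels, and weights below are hypothetical; a real random forest grows many such trees on bootstrapped samples of all 11 predictors.

```python
def weighted_error(labels, preds, w_pos, w_neg):
    # Class-weighted misclassification: mistakes on the rare sinkhole
    # class (label 1) cost w_pos, mistakes on non-sinkholes cost w_neg
    return sum((w_pos if y == 1 else w_neg)
               for y, p in zip(labels, preds) if y != p)

def best_stump(depth, labels, w_pos, w_neg):
    """Pick the depth threshold minimising the weighted error --
    one split decision inside one tree of the forest."""
    best = None
    for thr in sorted(set(depth)):
        preds = [1 if d >= thr else 0 for d in depth]
        err = weighted_error(labels, preds, w_pos, w_neg)
        if best is None or err < best[1]:
            best = (thr, err)
    return best[0]

depth  = [0.2, 0.3, 0.4, 2.1, 2.5, 0.5, 0.1, 3.0]   # metres
labels = [0,   0,   0,   1,   1,   0,   0,   1]      # mostly non-sinkholes
print(best_stump(depth, labels, w_pos=5.0, w_neg=1.0))
```

With w_pos > w_neg, thresholds that miss sinkholes are penalised heavily, which is the mechanism by which the weighted forest improves sinkhole recall on imbalanced training data.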
NASA Astrophysics Data System (ADS)
Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.
Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust methods in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. Not only is all of the available information from the image used, TBD avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo, et al. Compared to other TBD methods applied so far to SSA, such as the stacking method or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to apply a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4. Although the advent of fast-cadence scientific CMOS sensors have made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurements
Lesellier, E; Mith, D; Dubrulle, I
2015-12-01
necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess the method specificity, with regards to matrix interferences, and calibration curves were plotted to evaluate quantification. Besides, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. PMID:26553956
Gliding Box method applied to trace element distribution of a geochemical data set
NASA Astrophysics Data System (ADS)
Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana
2010-05-01
The application of fractal theory to process geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the western area of Coruña province (NW Spain). Major elements and trace elements were determined by standard analytical techniques. It is well known that specific elements, or arrays of elements, are associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone. Moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calc-alkaline granites, gneiss and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially from the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken on a rectilinear grid, with the sampling lines perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normality. Coefficients of variation ranked as follows: Sb < As < Au. Significant correlation coefficients between Au, Sb, and As were found, even if these were low. The so-called 'gliding box' algorithm (GB), proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the 'box-counting' method for implementing multifractal analysis. The partitioning method applied in the GB algorithm constructs samples by gliding a box of certain size (a) over the grid map in all
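The gliding-box construction can be sketched in one dimension: a box of size a is slid one step at a time (overlapping boxes, unlike box-counting), the mass M inside each box position is recorded, and moments of M give the statistic of interest, here the classical lacunarity Λ(a) = ⟨M²⟩/⟨M⟩². The concentration transect below is hypothetical.

```python
def gliding_box_lacunarity(series, a):
    """Glide a box of size a one step at a time and compute the
    lacunarity <M^2> / <M>^2 from the box masses M."""
    masses = [sum(series[i:i + a]) for i in range(len(series) - a + 1)]
    n = len(masses)
    m1 = sum(masses) / n
    m2 = sum(m * m for m in masses) / n
    return m2 / (m1 * m1)

# Hypothetical 1-D transect of As concentrations (ppm)
conc = [1, 0, 0, 8, 1, 0, 0, 9, 1, 0, 0, 7]
print(round(gliding_box_lacunarity(conc, 2), 3))
```

A uniform series gives Λ = 1; the clumped transect above gives Λ close to 2. The multifractal extension replaces this single second moment with the full spectrum of moments q over a range of box sizes.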
A novel wireless power and data transmission AC to DC converter for an implantable device.
Liu, Jhao-Yan; Tang, Kea-Tiong
2013-01-01
This article presents a novel AC to DC converter implemented in standard CMOS technology, applied to wireless power transmission. The circuit combines the functions of the rectifier and the DC to DC converter, rather than using a rectifier to convert AC to DC and then supplying the required voltage with a regulator as in the traditional method. This modification reduces the power consumption and the area of the circuit. The circuit also transfers the loading condition back to the external circuit by load shift keying (LSK), indicating whether the input power is insufficient or excessive, which increases the efficiency of the total system. The AC to DC converter is fabricated with the TSMC 90 nm CMOS process. The circuit area is 0.071 mm². The circuit can produce a 1 V DC voltage with a maximum output current of 10 mA from an AC input ranging from 1.5 V to 2 V, at 1 MHz to 10 MHz. PMID:24110077
Review of methods used by chiropractors to determine the site for applying manipulation
2013-01-01
Background With the development of increasing evidence for the use of manipulation in the management of musculoskeletal conditions, there is growing interest in identifying the appropriate indications for care. Recently, attempts have been made to develop clinical prediction rules, however the validity of these clinical prediction rules remains unclear and their impact on care delivery has yet to be established. The current study was designed to evaluate the literature on the validity and reliability of the more common methods used by doctors of chiropractic to inform the choice of the site at which to apply spinal manipulation. Methods Structured searches were conducted in Medline, PubMed, CINAHL and ICL, supported by hand searches of archives, to identify studies of the diagnostic reliability and validity of common methods used to identify the site of treatment application. To be included, studies were to present original data from studies of human subjects and be designed to address the region or location of care delivery. Only English language manuscripts from peer-reviewed journals were included. The quality of evidence was ranked using QUADAS for validity and QAREL for reliability, as appropriate. Data were extracted and synthesized, and were evaluated in terms of strength of evidence and the degree to which the evidence was favourable for clinical use of the method under investigation. Results A total of 2594 titles were screened from which 201 articles met all inclusion criteria. The spectrum of manuscript quality was quite broad, as was the degree to which the evidence favoured clinical application of the diagnostic methods reviewed. The most convincing favourable evidence was for methods which confirmed or provoked pain at a specific spinal segmental level or region. There was also high quality evidence supporting the use, with limitations, of static and motion palpation, and measures of leg length inequality. Evidence of mixed quality supported the use
Efficient Nonnegative Matrix Factorization by DC Programming and DCA.
Le Thi, Hoai An; Vo, Xuan Thanh; Dinh, Tao Pham
2016-06-01
In this letter, we consider the nonnegative matrix factorization (NMF) problem and several NMF variants. Two approaches based on DC (difference of convex functions) programming and DCA (DC algorithm) are developed. The first approach follows the alternating framework that requires solving, at each iteration, two nonnegativity-constrained least squares subproblems for which DCA-based schemes are investigated. The convergence property of the proposed algorithm is carefully studied. We show that with suitable DC decompositions, our algorithm generates most of the standard methods for the NMF problem. The second approach directly applies DCA on the whole NMF problem. Two algorithms-one computing all variables and one deploying a variable selection strategy-are proposed. The proposed methods are then adapted to solve various NMF variants, including the nonnegative factorization, the smooth regularization NMF, the sparse regularization NMF, the multilayer NMF, the convex/convex-hull NMF, and the symmetric NMF. We also show that our algorithms include several existing methods for these NMF variants as special versions. The efficiency of the proposed approaches is empirically demonstrated on both real-world and synthetic data sets. It turns out that our algorithms compete favorably with five state-of-the-art alternating nonnegative least squares algorithms. PMID:27136704
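To make the alternating framework concrete, a minimal NMF baseline can be sketched in Python. This uses the classical Lee-Seung multiplicative updates rather than the letter's DCA-based schemes, so it illustrates only the alternating structure that the proposed algorithms generalize; the function name and iteration count are illustrative:

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9, seed=0):
    """Baseline NMF V ~ W @ H via Lee-Seung multiplicative updates.

    A standard alternating method of the kind the DCA-based algorithms
    are compared against; W and H stay nonnegative by construction.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Each update multiplies by a nonnegative ratio, so no projection step is needed; the DCA approaches replace these heuristic updates with convergent solutions of the two nonnegativity-constrained least squares subproblems.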
Applying Schwarzschild's orbit superposition method to barred or non-barred disc galaxies
NASA Astrophysics Data System (ADS)
Vasiliev, Eugene; Athanassoula, E.
2015-07-01
We present an implementation of the Schwarzschild orbit superposition method, which can be used for constructing self-consistent equilibrium models of barred or non-barred disc galaxies, or of elliptical galaxies with figure rotation. This is a further development of the publicly available code SMILE; its main improvements include a new efficient representation of an arbitrary gravitational potential using two-dimensional spline interpolation of Fourier coefficients in the meridional plane, as well as the ability to deal with rotation of the density profile and with multicomponent mass models. We compare several published methods for constructing composite axisymmetric disc-bulge-halo models and demonstrate that our code produces the models that are closest to equilibrium. We also apply it to create models of triaxial elliptical galaxies with cuspy density profiles and figure rotation, and find that such models can be found and are stable over many dynamical times in a wide range of pattern speeds and angular momenta, covering both slow- and fast-rotator classes. We then attempt to create models of strongly barred disc galaxies, using an analytic three-component potential, and find that it is not possible to make a stable dynamically self-consistent model for this density profile. Finally, we take snapshots of two N-body simulations of barred disc galaxies embedded in nearly-spherical haloes, and construct equilibrium models using only information on the density profile of the snapshots. We demonstrate that such reconstructed models are in near-stationary state, in contrast with the original N-body simulations, one of which displayed significant secular evolution.
Applying Automated MR-Based Diagnostic Methods to the Memory Clinic: A Prospective Study.
Klöppel, Stefan; Peter, Jessica; Ludl, Anna; Pilatus, Anne; Maier, Sabrina; Mader, Irina; Heimbach, Bernhard; Frings, Lars; Egger, Karl; Dukart, Juergen; Schroeter, Matthias L; Perneczky, Robert; Häussermann, Peter; Vach, Werner; Urbach, Horst; Teipel, Stefan; Hüll, Michael; Abdulkadir, Ahmed
2015-01-01
Several studies have demonstrated that fully automated pattern recognition methods applied to structural magnetic resonance imaging (MRI) aid in the diagnosis of dementia, but these conclusions are based on highly preselected samples that significantly differ from that seen in a dementia clinic. At a single dementia clinic, we evaluated the ability of a linear support vector machine trained with completely unrelated data to differentiate between Alzheimer's disease (AD), frontotemporal dementia (FTD), Lewy body dementia, and healthy aging based on 3D-T1 weighted MRI data sets. Furthermore, we predicted progression to AD in subjects with mild cognitive impairment (MCI) at baseline and automatically quantified white matter hyperintensities from FLAIR-images. Separating additionally recruited healthy elderly from those with dementia was accurate with an area under the curve (AUC) of 0.97 (according to Fig. 4). Multi-class separation of patients with either AD or FTD from other included groups was good on the training set (AUC > 0.9) but substantially less accurate (AUC = 0.76 for AD, AUC = 0.78 for FTD) on 134 cases from the local clinic. Longitudinal data from 28 cases with MCI at baseline and appropriate follow-up data were available. The computer tool discriminated progressive from stable MCI with AUC = 0.73, compared to AUC = 0.80 for the training set. A relatively low accuracy by clinicians (AUC = 0.81) illustrates the difficulties of predicting conversion in this heterogeneous cohort. This first application of a MRI-based pattern recognition method to a routine sample demonstrates feasibility, but also illustrates that automated multi-class differential diagnoses have to be the focus of future methodological developments and application studies.
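Since every comparison in the study is reported as an area under the ROC curve, it may help to recall how AUC is computed from classifier scores. A small numpy sketch via the Mann-Whitney U statistic (the function name is illustrative and unrelated to the study's actual tooling):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive case is
    scored higher than a randomly chosen negative case (ties count half).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ranked pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why values like 0.73 for progressive vs. stable MCI indicate only moderate separability.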
An alternative method of gas boriding applied to the formation of borocarburized layer
Kulka, M.; Makuch, N.; Pertek, A.; Piasecki, A.
2012-10-15
The borocarburized layers were produced by tandem diffusion processes: carburizing followed by boriding. An alternative method of gas boriding was proposed. Two-stage gas boronizing in an N2-H2-BCl3 atmosphere was applied to the formation of iron borides on a carburized substrate. The process consisted of two stages, which were alternately repeated: saturation by boron and diffusion annealing. The microstructure and microhardness of the produced layer were compared to those obtained with the continuous gas boriding in an H2-BCl3 atmosphere used previously. The first objective of two-stage boronizing, the acceleration of boron diffusion, was achieved: despite the lower temperature and shorter duration of boronizing, an iron boride zone about 1.5 times larger was formed on the carburized steel. The second objective, the complete elimination of the brittle FeB phase, was not achieved; however, the amount of FeB phase was considerably reduced. Longer diffusion annealing should yield a boride layer with a single-phase microstructure, free of the FeB phase. Highlights: • An alternative method of gas boriding in an H2-N2-BCl3 atmosphere was proposed. • The process consisted of two stages: saturation by boron and diffusion annealing. • These short stages were alternately repeated. • The acceleration of boron diffusion was achieved. • The amount of FeB phase in the boride zone was limited.
NASA Astrophysics Data System (ADS)
Jaure, S.; Duchaine, F.; Staffelbach, G.; Gicquel, L. Y. M.
2013-01-01
Optimizing gas turbines is a complex multi-physical and multi-component problem that has long been based on expensive experiments. Today, computer simulation can reduce design process costs and is acknowledged as a promising path for optimization. However, performing such computations using high-fidelity methods such as a large eddy simulation (LES) on gas turbines is challenging. Nevertheless, such simulations become accessible for specific components of gas turbines. These stand-alone simulations face a new challenge: to improve the quality of the results, new physics must be introduced. Therefore, an efficient massively parallel coupling methodology is investigated. The flow solver modeling relies on the LES code AVBP which has already been ported on massively parallel architectures. The conduction solver is based on the same data structure and thus shares its scalability. Accurately coupling these solvers while maintaining their scalability is challenging and is the actual objective of this work. To obtain such goals, a methodology is proposed and different key issues to code the coupling are addressed: convergence, stability, parallel geometry mapping, transfers and interpolation. This methodology is then applied to a real burner configuration, hence demonstrating the possibilities and limitations of the solution.
An Online Gravity Modeling Method Applied for High Precision Free-INS.
Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao
2016-01-01
For the real-time solution of an inertial navigation system (INS), high-degree spherical harmonic gravity models (SHMs) are not applicable because of their time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM on an external computer. Then, the polynomial coefficients are obtained from these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously on that computer. Compared with the high-degree SHM, the polynomial model requires less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS. PMID:27669261
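The two-dimensional second-order polynomial fit described above reduces to an ordinary least-squares solve over the gridded DOV samples. A sketch under that reading of the method; the coordinate names and test grid are placeholders, and the real system fits one such surface per applicable region:

```python
import numpy as np

def fit_poly2(x, y, dov):
    """Fit f(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to gridded deflection-of-the-vertical (DOV) samples."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, dov, rcond=None)
    return coef

def eval_poly2(coef, x, y):
    """Evaluate the fitted polynomial online (cheap compared to a SHM)."""
    return (coef[0] + coef[1] * x + coef[2] * y +
            coef[3] * x**2 + coef[4] * x * y + coef[5] * y**2)
```

Only the six coefficients need to be stored and evaluated in the navigation loop, which is where the storage and runtime saving over a high-degree SHM comes from.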
Pillet, N.; Berger, J.-F.; Caurier, E.
2008-08-15
Applying a variational multiparticle-multihole configuration mixing method, whose purpose is to include correlations beyond the mean field in a unified way without particle-number and Pauli-principle violations, we investigate pairing-like correlations in the ground states of ¹¹⁶Sn, ¹⁰⁶Sn, and ¹⁰⁰Sn. The same effective nucleon-nucleon interaction, namely the D1S parametrization of the Gogny force, is used to derive both the mean-field and correlation components of the nuclear wave functions. Calculations are performed using an axially symmetric representation. The structure of the correlated wave functions, their convergence with respect to the number of particle-hole excitations, and the influence of correlations on single-particle level spectra and occupation probabilities are analyzed and compared with results obtained with the same two-body effective interaction from BCS, Hartree-Fock-Bogoliubov, and particle-number-projected-after-variation BCS approaches. Calculations of nuclear radii and the first theoretical excited 0⁺ states are compared with experimental data.
Conductivity of molten sodium chloride in an arbitrarily weak dc electric field.
Delhommelle, Jerome; Cummings, Peter T; Petravic, Janka
2005-09-15
We use nonequilibrium molecular-dynamics (NEMD) simulations to characterize the response of a fluid subjected to an electric field. We focus on the response for very weak fields. Fields accessible by conventional NEMD methods are typically of the order of 10⁹ V m⁻¹, i.e., several orders of magnitude larger than those typically used in experiments. Using the transient time-correlation function, we show how NEMD simulations can be extended to study systems subjected to a realistic dc electric field. We then apply this approach to study the response of molten sodium chloride for a wide range of dc electric fields.
Myers, S; Larsen, S; Wagoner, J; Henderer, B; McCallen, D; Trebes, J; Harben, P; Harris, D
2003-10-29
Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design, utilization of full three-dimensional (3D) finite difference modeling, as well as statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project, in support of LLNL's national-security mission, benefits the U.S. military and intelligence community. Fiscal year (FY) 2003 was the final year of this project. In the 2.5 years this project has been active, numerous and varied developments and milestones have been accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking and based on a field calibration to characterize geological heterogeneity was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A three-seismic-array vehicle tracking testbed was installed on site
Applying Terzaghi's method of slope characterization to the recognition of Holocene land slippage
NASA Astrophysics Data System (ADS)
Rogers, J. David; Chung, Jae-won
2016-07-01
-placed trenches across headscarp grabens can provide more detailed structure of old landslides and are usually a cost-effective approach. Additional subsurface exploration can often be employed to characterize landslides. Small diameter borings are usually employed for geotechnical investigations but can easily be applied to landslides, depending on the mean particle size diameter (D50). Downhole logging of large diameter holes is the best method to evaluate complex subsurface conditions, such as dormant bedrock landslides.
Babili, Fatiha El; Fouraste, I.; Rougaignon, C.; Moulis, C.; Chatelain, C.
2012-01-01
Aim and Background: A botanical study was conducted to provide a standard diagnostic tool. In order to improve the quality assurance of the secondary tuberized roots of Harpagophytum procumbens, its derived extract, and phytomedicine, a simple, rapid, and accurate high-performance liquid chromatography (HPLC) method was developed to assay the harpagoside. Material and Methods: This HPLC assay was performed on a reversed-phase C18 column with methanol and water (50/50, v/v) as the mobile phase at a flow rate of 1.5 mL/min, using a monitoring wavelength of 278 nm. Results and Conclusion: This method was successfully applied to quantify this bioactive iridoid in an aqueous extract of H. procumbens and in its related phytomedicine "harpagophyton". The results demonstrated the quantification of harpagoside, indicating that quality control of the bioactive ingredient in H. procumbens, its derived extract, and phytomedicine is critical to ensure its clinical benefits. PMID:22701294
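Quantification by HPLC of this kind typically rests on an external-standard calibration curve relating peak area to analyte concentration. A hedged numpy sketch of that step; the standard concentrations and peak areas below are made up for illustration and are not the paper's calibration data:

```python
import numpy as np

# Hypothetical calibration standards: harpagoside concentration (ug/mL)
# vs. integrated peak area at 278 nm (illustrative values only).
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
area = np.array([12.1, 24.3, 48.2, 96.5, 193.0])

# Linear calibration curve: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

def quantify(sample_area):
    """Back-calculate concentration from a measured peak area."""
    return (sample_area - intercept) / slope
```

Validation criteria in the ICH sense (linearity, accuracy, precision) are judgments about how well this line fits replicate standards across the working range.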
NASA Technical Reports Server (NTRS)
Dias, W. C.
1994-01-01
RISK D/C is a prototype program which attempts to do program risk modeling for the Space Exploration Initiative (SEI) architectures proposed in the Synthesis Group Report. Risk assessment is made with respect to risk events, their probabilities, and the severities of potential results. The program allows risk mitigation strategies to be proposed for an exploration program architecture and to be ranked with respect to their effectiveness. RISK D/C allows for the fact that risk assessment in early planning phases is subjective. Although specific to the SEI in its present form, RISK D/C can be used as a framework for developing a risk assessment program for other specific uses. RISK D/C is organized into files, or stacks, of information, including the architecture, the hazard, and the risk event stacks. Although predefined, all stacks can be upgraded by a user. The architecture stack contains information concerning the general program alternatives, which are subsequently broken down into waypoints, missions, and mission phases. The hazard stack includes any background condition which could result in a risk event. A risk event is anything unfavorable that could happen during the course of a specific point within an architecture, and the risk event stack provides the probabilities, consequences, severities, and any mitigation strategies which could be used to reduce the risk of the event, and how much the risk is reduced. RISK D/C was developed for Macintosh series computers. It requires HyperCard 2.0 or later, as well as 2Mb of RAM and System 6.0.8 or later. A Macintosh II series computer is recommended due to speed concerns. The standard distribution medium for this package is one 3.5 inch 800K Macintosh format diskette. RISK D/C was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Macintosh and HyperCard are trademarks of Apple Computer, Inc.
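RISK D/C's internal scoring is not specified in this summary; as an illustration of the kind of ranking it performs, mitigation strategies can be ordered by how much they reduce expected risk (probability times severity). All names and numbers here are hypothetical, and this is an illustrative model rather than RISK D/C's actual algorithm:

```python
def rank_mitigations(prob, severity, strategies):
    """Rank mitigation strategies by expected-risk reduction.

    Expected risk = probability * severity. 'strategies' maps a strategy
    name to the (probability, severity) pair it would leave behind.
    Returns (name, reduction) pairs, most effective first.
    """
    baseline = prob * severity
    reduction = {name: baseline - p * s for name, (p, s) in strategies.items()}
    return sorted(reduction.items(), key=lambda kv: kv[1], reverse=True)
```

Because early-phase probabilities and severities are subjective, the absolute scores matter less than the relative ordering such a model produces.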
Calatroni, S.; Descoeudres, A.; Levinsen, Y.; Taborelli, M.; Wuensch, W.
2009-01-22
In the context of the CLIC (Compact Linear Collider) project investigations of DC breakdown in ultra high vacuum are carried out in parallel with high power RF tests. From the point of view of saturation breakdown field the best material tested so far is stainless steel, followed by titanium. Copper shows a four times weaker breakdown field than stainless steel. The results indicate clearly that the breakdown events are initiated by field emission current and that the breakdown field is limited by the cathode. In analogy to RF, the breakdown probability has been measured in DC and the data show similar behaviour as a function of electric field.
Report on Non-Contact DC Electric Field Sensors
Miles, R; Bond, T; Meyer, G
2009-06-16
This document reports on methods used to measure DC electrostatic fields in the range of 100 to 4000 V/m using a non-contact method. The project for which this report is written requires this capability. Non-contact measurement of DC fields is complicated by the accumulation of random space charges near the sensors, which interfere with the measurement of the field of interest; consequently, many forms of field measurement are either limited to AC measurements or use oscillating devices to create pseudo-AC fields. The intent of this document is to report on methods discussed in the literature for non-contact measurement of DC fields. Electric field meters report either the electric field expressed in volts per distance or the voltage measured with respect to a ground reference. Common commercial applications for measuring static (DC) electric fields include measurement of surface charge on materials near electronic equipment to prevent arcing which can destroy sensitive electronic components, measurement of the potential for lightning to strike buildings or other exposed assets, measurement of the electric fields under power lines to investigate potential health risks from exposure to EM fields, and measurement of fields emanating from the brain for diagnostic purposes. Companies that make electric field sensors include Trek (Medina, NY), MKS Instruments, Boltek, Campbell Systems, Mission Instruments, Monroe Electronics, AlphaLab, Inc. and others. In addition to commercial vendors, research continues in the MEMS and optical arenas to make compact devices using the principles applied to the larger commercial sensors.
NASA Astrophysics Data System (ADS)
Zamanian, M.; Khadem, S. E.
2010-01-01
This paper studies the nonlinear vibration of a clamped-clamped microresonator under combined electric and piezoelectric actuations. The electric actuation is induced by applying an AC-DC voltage between the microbeam and the electrode plate that lies on the opposite side of the microbeam, and the piezoelectric actuation is induced by applying a DC voltage between the upper and lower sides of the piezoelectric layer deposited along the microbeam length. It is assumed that the neutral axis of bending is stretched when the microbeam is deflected. The equations of motion are derived using Newton's second law and are solved using the multiple-scales perturbation method. It is shown that, depending on the values of the DC electric and piezoelectric actuations, the geometry, and the bending stiffness of the system, a softening or hardening behavior may be realized. It is demonstrated that the nonlinear behavior of an electrically actuated microresonator may be tuned to a linear behavior by applying a suitable DC voltage to the piezoelectric layer, so that an undesirable shift of the resonance frequency may be removed. When the voltage applied to the piezoelectric layer is set to zero, this work reduces to tailoring the linear and nonlinear stiffness coefficients of two-layer electrically actuated microresonators without the assumption that the lengths of the two layers are equal.
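The softening/hardening distinction above can be made concrete with the textbook first-order multiple-scales result for a Duffing-type resonator x'' + omega0^2 x + alpha x^3 = 0, whose backbone curve is omega(a) = omega0 + 3*alpha*a^2/(8*omega0). This generic formula stands in for the paper's full two-layer model, in which the effective cubic coefficient alpha depends on the applied DC voltages:

```python
def backbone_frequency(omega0, alpha, a):
    """First-order multiple-scales backbone of x'' + omega0^2 x + alpha x^3 = 0.

    alpha > 0 bends the resonance peak upward (hardening); alpha < 0 bends
    it downward (softening); alpha == 0 recovers the linear resonance.
    """
    return omega0 + 3.0 * alpha * a**2 / (8.0 * omega0)
```

Tuning the piezoelectric DC voltage so that the effective alpha vanishes is what removes the amplitude-dependent frequency shift and restores linear behavior.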
Struble, Julie M.; Gill, Ryan T.
2009-01-01
Background The ability of bacteria to rapidly evolve resistance to antibiotics is a critical public health problem. Resistance leads to increased disease severity and death rates, and imposes pressure towards the discovery and development of new antibiotic therapies. Improving understanding of the evolution and genetic basis of resistance is a fundamental goal in the field of microbiology. Results We have applied a new genomic method, Scalar Analysis of Library Enrichments (SCALEs), to identify genomic regions that, given increased copy number, may lead to aminoglycoside resistance in Pseudomonas aeruginosa at the genome scale. We report the result of selections on highly representative genomic libraries for three different aminoglycoside antibiotics (amikacin, gentamicin, and tobramycin). At the genome scale, we show significant (p<0.05) overlap in genes identified for each aminoglycoside evaluated. Among the genomic segments identified, we confirmed increased resistance associated with an increased copy number of several genomic regions, including the ORF of PA5471, recently implicated in MexXY efflux pump related aminoglycoside resistance, PA4943-PA4946 (encoding a probable GTP-binding protein, a predicted host factor I protein, a δ2-isopentenylpyrophosphate transferase, and DNA mismatch repair protein MutL), PA0960-PA0963 (encoding hypothetical proteins, a probable cold shock protein, a probable DNA-binding stress protein, and aspartyl-tRNA synthetase), a segment of PA4967 (encoding a topoisomerase IV subunit B), as well as a chimeric clone containing two inserts including the ORFs PA0547 and PA2326 (encoding a probable transcriptional regulator and a probable hypothetical protein, respectively). Conclusions The studies reported here demonstrate the application of a new genomic method, SCALEs, which can be used to improve understanding of the evolution of antibiotic resistance in P. aeruginosa. In our demonstration studies, we identified a
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-05-01
Three different spectrophotometric methods were applied for the quantitative analysis of flucloxacillin and amoxicillin in their binary mixture, namely, ratio subtraction, absorbance subtraction and amplitude modulation. A comparative study was done listing the advantages and the disadvantages of each method. All the methods were validated according to the ICH guidelines and the obtained accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory prepared mixtures and assessed by applying the standard addition technique. So, they can be used for the routine analysis of flucloxacillin and amoxicillin in their binary mixtures.
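Of the three methods named above, ratio subtraction is easy to demonstrate numerically: the mixture spectrum is divided by a divisor spectrum of the more extended component, the constant plateau (where the other component does not absorb) is subtracted, and multiplying back by the divisor recovers the other component's spectrum. A sketch on synthetic Gaussian bands; the wavelengths, band shapes, and unit concentrations are invented for illustration and do not represent flucloxacillin or amoxicillin:

```python
import numpy as np

wl = np.linspace(220.0, 320.0, 501)                    # wavelength grid (nm)
gauss = lambda center, width: np.exp(-0.5 * ((wl - center) / width) ** 2)

spec_x = gauss(240.0, 8.0)        # component X: absorbs only at short wavelengths
spec_y = 0.8 * gauss(250.0, 25.0) # component Y: more extended spectrum (divisor)
mix = spec_x + spec_y             # mixture at unit concentrations

ratio = mix / spec_y              # ratio spectrum: spec_x/spec_y + constant
plateau = ratio[wl > 290.0].mean()  # constant region where X does not absorb
recovered_x = (ratio - plateau) * spec_y  # subtract constant, multiply back
```

Here `recovered_x` reproduces `spec_x`, which is the essence of the method; in practice the divisor is a measured standard spectrum and the plateau is read from the flat region of the ratio spectrum.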
The Dylos DC1100 air quality monitor measures particulate matter (PM) to provide a continuous assessment of indoor air quality. The unit counts particles in two size ranges: large and small. According to the manufacturer, large particles have diameters between 2.5 and 10 micromet...
Campiotti, Richard H.; Hopwood, James E.
1990-01-01
A system for starting an arc for welding uses three DC power supplies, a high voltage supply for initiating the arc, an intermediate voltage supply for sustaining the arc, and a low voltage welding supply directly connected across the gap after the high voltage supply is disconnected.
Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.; Dobbins, J. A.
2002-01-01
In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown, in an EFIE formulation applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 λ₀, a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required
DC-DC powering for the CMS pixel upgrade
NASA Astrophysics Data System (ADS)
Feld, Lutz; Fleck, Martin; Friedrichs, Marcel; Hensch, Richard; Karpinski, Waclaw; Klein, Katja; Rittich, David; Sammet, Jan; Wlochal, Michael
2013-12-01
The CMS experiment plans to replace its silicon pixel detector with a new one with improved rate capability and an additional detection layer at the end of 2016. In order to cope with the increased number of detector modules the new pixel detector will be powered via DC-DC converters close to the sensitive detector volume. This paper reviews the DC-DC powering scheme and reports on the ongoing R&D program to develop converters for the pixel upgrade. Design choices are discussed and results from the electrical and thermal characterisation of converter prototypes are shown. An emphasis is put on system tests with up to 24 converters. The performance of pixel modules powered by DC-DC converters is compared to conventional powering. The integration of the DC-DC powering scheme into the pixel detector is described and system design issues are reviewed.
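The abstract does not specify the converter topology or its operating equations; as generic background for the powering scheme, the ideal steady-state relations of a buck converter (the usual topology for stepping a higher bus voltage down near the detector) can be sketched. The buck assumption and all numbers below are illustrative, not CMS design values:

```python
def buck_steady_state(v_in, v_out, f_sw, inductance, i_out):
    """Ideal (lossless) buck converter in continuous conduction mode.

    Returns the duty cycle, the peak-to-peak inductor current ripple,
    and the input current implied by power balance.
    """
    duty = v_out / v_in                                   # D = Vout / Vin
    ripple = v_out * (1.0 - duty) / (inductance * f_sw)   # dI = Vout*(1-D)/(L*f)
    i_in = v_out * i_out / v_in                           # Pin = Pout (ideal)
    return duty, ripple, i_in
```

The input-current relation shows the system benefit: delivering module current at low voltage from a higher-voltage bus reduces the current, and hence the losses, in the long supply cables.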
The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake
NASA Astrophysics Data System (ADS)
Gomba, Giorgio; Eineder, Michael
2015-04-01
L-band remote sensing systems, like the future Tandem-L mission, are disrupted by the ionized upper part of the atmosphere called the ionosphere. The ionosphere is a region of the upper atmosphere composed of gases that are ionized by solar radiation. The extent of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave paths and on its spatial variations. The main effect of the ionosphere on microwaves is to cause an additional delay, which introduces a phase difference between SAR measurements, modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformations, vegetation structure, ice and glacier changes, and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles and anthropogenic factors demand deformation measurements, namely one-, two- or three-dimensional displacement maps with resolutions of a few hundreds of meters and accuracies at the centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] is presented and applied to an example dataset. The 2008 Kyrgyzstan earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows that the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band interferometric SAR data," Radar Conference, 2010 IEEE, pp. 1459-1463, 10-14 May 2010. [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
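The core of the range split-spectrum method is that the dispersive (ionospheric) phase scales as f0/f while the non-dispersive (geometric) phase scales as f/f0, so two unwrapped sub-band interferograms give a 2x2 linear system per pixel. A sketch under those standard assumptions; the frequencies and phases in the test are synthetic, not values from the Kyrgyzstan dataset:

```python
def split_spectrum(phi_low, phi_high, f_low, f_high, f0):
    """Separate dispersive (ionospheric) and non-dispersive phase from two
    unwrapped sub-band interferograms, using the model
        phi(f) = (f / f0) * phi_nondisp + (f0 / f) * phi_iono
    evaluated at the sub-band center frequencies f_low and f_high.
    """
    denom = f_high**2 - f_low**2
    phi_iono = f_low * f_high * (f_high * phi_low - f_low * phi_high) / (f0 * denom)
    phi_nondisp = f0 * (f_high * phi_high - f_low * phi_low) / denom
    return phi_iono, phi_nondisp
```

Because the sub-band separation is small compared to f0, the subtraction amplifies phase noise, which is why the estimated ionospheric phase is usually filtered spatially before compensation.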
Shear-Sensitive Liquid Crystal Coating Method Applied Through Transparent Test Surfaces
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Wilder, Michael C.
1999-01-01
Research conducted at NASA Ames Research Center has shown that the color-change response of a shear-sensitive liquid crystal coating (SSLCC) to aerodynamic shear depends on both the magnitude of the local shear vector and its direction relative to the observer's in-plane line of sight. In conventional applications, the surface of the SSLCC exposed to aerodynamic shear is illuminated with white light from the normal direction and observed from an oblique above-plane view angle of order 30 deg. In this top-light/top-view mode, shear vectors with components directed away from the observer cause the SSLCC to exhibit color-change responses. At any surface point, the maximum color change (measured from the no-shear red or orange color) always occurs when the local vector is aligned with, and directed away from, the observer. The magnitude of the color change at this vector-observer-aligned orientation scales directly with shear stress magnitude. Conversely, any surface point exposed to a shear vector with a component directed toward the observer exhibits a non-color-change response, always characterized by a rusty-red or brown color, independent of both shear magnitude and direction. These unique, highly directional color-change responses of SSLCCs to aerodynamic shear allow for the full-surface visualization and measurement of continuous shear stress vector distributions. The objective of the present research was to investigate application of the SSLCC method through a transparent test surface. In this new back-light/back-view mode, the exposed surface of the SSLCC would be subjected to aerodynamic shear stress while the contact surface between the SSLCC and the solid, transparent wall would be illuminated and viewed in the same geometrical arrangement as applied in conventional applications. It was unknown at the outset whether or not color-change responses would be observable from the contact surface of the SSLCC, and, if seen, how these color-change responses might
187Re - 187Os Nuclear Geochronometry: A New Dating Method Applied to Old Ores
NASA Astrophysics Data System (ADS)
Roller, Goetz
2015-04-01
187Re - 187Os nuclear geochronometry is a newly developed dating method especially (but not only) for PGE-hosting magmatic ore deposits. It combines ideas of nuclear astrophysics with geochronology. For this, the concept of sudden nucleosynthesis [1-3] is used to calculate so-called nucleogeochronometric Rhenium-Osmium two-point-isochrone (TPI) ages. Here, the method is applied to the Sudbury Igneous Complex (SIC) and the Stillwater Complex (SC), using a set of two nuclear geochronometers. They are named the BARBERTON (Re/Os = 0.849, 187Os/186Os = 10.04 ± 0.015 [4]) and the IVREA (Re/Os = 0.951, 187Os/186Os = 1.9360 ± 0.0015 [5]) nuclear geochronometers. Calculated TPI ages are consistent with results from Sm-Nd geochronology, a previously published Re-Os molybdenite age of 2740 ± 80 Ma for the G-chromitite of the SC [6] and a Re-Os isochrone age of 1689 ± 160 Ma for the Strathcona ores of the SIC [7]. This leads to an alternative explanation of the peculiar and enigmatic 187Os/186Osi isotopic signatures reported from both ore deposits. For example, for a TPI age of 2717 ± 100 Ma the Ultramafic Series of the SC contains both extremely low (subchondritic) 187Os/186Osi ratios (187Os/186Osi = 0.125 ± 0.067) and extremely radiogenic isotopic signatures (187Os/186Osi = 6.55 ± 1.7 [6]) in mineral separates (chromites) and whole rock samples, respectively. Within the Strathcona ores of the SIC, even more pronounced radiogenic 187Os/186Os initial ratios can be calculated for TPI ages between 1586 ± 63 Ma (187Os/186Osi = 8.998 ± 0.045) and 1733 ± 84 Ma (187Os/186Osi = 8.901 ± 0.059). These results are in line with the recalculated Re-Os isochrone age of 1689 ± 160 Ma (187Os/186Osi = 8.8 ± 2.3 [7]). In the light of nuclear geochronometry, the occurrence of such peculiar 187Os/186Osi isotopic signatures within one and the same lithological horizon is plausible if explained by mingling of the two nucleogeochronometric (BARBERTON and IVREA) reservoirs containing
ERIC Educational Resources Information Center
Storberg-Walker, Julia; Chermack, Thomas J.
2007-01-01
The purpose of this article is to describe four methods for completing the conceptual development phase of theory building research for single or multiparadigm research. The four methods selected for this review are (1) Weick's method of "theorizing as disciplined imagination" (1989); (2) Whetten's method of "modeling as theorizing" (2002); (3)…
[Protein assay by the modified Dumas method applied to preparations of plasma proteins].
Blondel, P; Vian, L
1993-01-01
Quantifying protein according to the Pharmacopoeia method, which is based on the Kjeldahl method, is time consuming. The development of an automaton using the modified Dumas method divides the analysis time by 15 (6 minutes versus over 90 minutes). The results show no statistically significant differences between the official method and this one. PMID:8154798
ERIC Educational Resources Information Center
Paatela-Nieminen, Martina
2008-01-01
In art education we need methods for studying works of art and visual culture interculturally because there are many multicultural art classes and little consensus as to how to interpret art in different cultures. In this article my central aim was to apply the intertextual method that I developed in my doctoral thesis for Western art education to…
Enhancement of Voltage Stability of DC Smart Grid During Islanded Mode by Load Shedding Scheme
NASA Astrophysics Data System (ADS)
Nassor, Thabit Salim; Senjyu, Tomonobu; Yona, Atsushi
2015-10-01
This paper presents the voltage stability of a DC smart grid based on renewable energy resources during grid connected and isolated modes. During the islanded mode the load shedding, based on the state of charge of the battery and distribution line voltage, was proposed for voltage stability and reservation of critical load power. The analyzed power system comprises a wind turbine, a photovoltaic generator, storage battery as controllable load, DC loads, and power converters. A fuzzy logic control strategy was applied for power consumption control of controllable loads and the grid-connected dual active bridge series resonant converters. The proposed DC Smart Grid operation has been verified by simulation using MATLAB® and PLECS® Blockset. The obtained results show the effectiveness of the proposed method.
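The load-shedding decision described in this entry, driven by battery state of charge and distribution line voltage, can be sketched as follows. The paper uses a fuzzy logic control strategy; this sketch replaces it with fixed thresholds for clarity, and all names, thresholds, and load values are hypothetical.

```python
# Hypothetical sketch: priority-based load shedding keyed on battery state of
# charge (SOC) and DC bus voltage during islanded operation. Critical loads
# are never shed, so their power is reserved, as in the proposed scheme.

def loads_to_shed(soc, bus_voltage, loads, soc_min=0.3, v_min=370.0):
    """Return the non-critical loads to disconnect, largest consumers first.

    loads: list of (name, power_kW, critical) tuples.
    """
    if soc >= soc_min and bus_voltage >= v_min:
        return []  # grid is healthy; shed nothing
    non_critical = [l for l in loads if not l[2]]
    return sorted(non_critical, key=lambda l: -l[1])

loads = [("lighting", 2.0, False), ("HVAC", 5.0, False), ("medical", 3.0, True)]
print(loads_to_shed(0.2, 365.0, loads))  # sheds HVAC then lighting
```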
Synthesis of Few-Layer Graphene Using DC PE-CVD
NASA Astrophysics Data System (ADS)
Kim, Jeong Hyuk; Castro, Edward Joseph D.; Hwang, Yong Gyoo; Lee, Choong Hun
2011-12-01
Few-layer graphene (FLG) was successfully grown on a large scale on polycrystalline Ni films or foils using DC Plasma Enhanced Chemical Vapor Deposition (DC PE-CVD), as confirmed by Raman spectra of the samples. The size of the graphene films is dependent on the area of the Ni film as well as on the DC PE-CVD chamber size. Synthesis time has an effect on the quality of the graphene produced; however, further analysis and experiments must be pursued to identify the optimum settings and conditions for producing better quality graphene. Applied plasma voltage, on the other hand, had an influence on the minimization of defects in the grown graphene. A method is also presented for producing a free-standing PMMA/graphene membrane on a FeCl3(aq) solution, which can then be transferred to a desired substrate.
Method of protecting the surface of a substrate. [by applying aluminide coating
NASA Technical Reports Server (NTRS)
Gedwill, M. A. (Inventor); Grisaffe, S. J.
1974-01-01
The surface of a metallic base system is initially coated with a metallic alloy layer that is ductile and oxidation resistant. An aluminide coating is then applied to the metallic alloy layer. The chemistry of the metallic alloy layer is such that the oxidation resistance of the subsequently aluminized outermost layer is not seriously degraded.
Methods in (Applied) Folk Linguistics: Getting into the Minds of the Folk
ERIC Educational Resources Information Center
Preston, Dennis R.
2011-01-01
This paper deals with data gathering and interpretation in folk linguistics, but, as the parenthetical title suggests, it is not limited to any prejudged notion of what approaches or techniques might be most relevant to the wide variety of concerns encompassed by applied linguistics. In this article, the author conceives of folk linguistics…
Applying Item Response Theory Methods to Examine the Impact of Different Response Formats
ERIC Educational Resources Information Center
Hohensinn, Christine; Kubinger, Klaus D.
2011-01-01
In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…
[Scaling methods applied to set priorities in training programs in organizations].
Sanduvete Chaves, Susana; Barbero García, María Isabel; Chacón Moscoso, Salvador; Pérez-Gil, J Antonio; Holgado Tello, F Pablo; Sánchez Martín, Milagrosa; Lozano Lozano, J Antonio
2009-11-01
Criteria for assessing needs in order to plan training programs are not usually defined explicitly in organizational contexts. We propose scaling methods as a feasible and useful procedure for setting priorities among training programs, and we identify the most suitable method for this intervention context. 404 employees from a public organization completed an ad hoc questionnaire to assess training needs in different areas from 2004 to 2006; in total, 117, 75 and 286 stimuli were scaled in each year, respectively. Then, four scaling methods were compared: Dunn-Rankin's method and three methods derived from Thurstone's Law of Categorical Judgment (ranking, successive intervals, and equal-appearing intervals). The feasibility and utility of these scaling methods for solving the problems described are shown. Among the most accurate of the compared methods, we propose ranking as the most parsimonious (with regard to procedural simplicity). Future research developments are described.
Applying Item Response Theory methods to design a learning progression-based science assessment
NASA Astrophysics Data System (ADS)
Chen, Jing
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test to give good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data used for this study were collected from 1500 elementary and secondary school students during 2009--2010. The written assessment was developed in several formats: Constructed Response (CR), Ordered Multiple Choice (OMC) and Multiple True or False (MTF) items. The following are the main findings of this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances. However, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each item format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study establishes some general steps that can be followed to design other learning progression-based tests as well. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all
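The boundary-setting step described in this entry (level boundaries on the IRT scale taken as means of item thresholds across good items) can be sketched in a few lines. The threshold values below are invented for illustration, not data from the study.

```python
# Sketch: each boundary between adjacent learning-progression levels is the
# mean of the corresponding item thresholds (in logits) across a set of
# well-functioning items. Threshold values here are hypothetical.

def level_boundaries(item_thresholds):
    """item_thresholds: per-item lists of thresholds, one per level
    transition (level 1->2, 2->3, ...), in logits."""
    n_transitions = len(item_thresholds[0])
    n_items = len(item_thresholds)
    return [sum(item[k] for item in item_thresholds) / n_items
            for k in range(n_transitions)]

# Thresholds for three hypothetical OMC items with three level transitions:
thresholds = [[-1.2, 0.1, 1.5],
              [-0.8, 0.3, 1.1],
              [-1.0, 0.2, 1.3]]
print(level_boundaries(thresholds))  # approximately [-1.0, 0.2, 1.3]
```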
Stromer, R
2000-01-01
Lattal and Perone's Handbook of methods used in human operant research on behavioral processes will be a valuable resource for researchers who want to bridge laboratory developments with applied study. As a supplemental resource, investigators are also encouraged to examine the series of papers in the Journal of Applied Behavior Analysis that discuss basic research and its potential for application. Increased knowledge of behavioral processes in laboratory research could lead to innovative solutions to practical problems addressed by applied behavior analysts in the home, classroom, clinic, and community. PMID:10738963
NASA Astrophysics Data System (ADS)
Dhakal, Tilak R.; Zhang, Duan Z.
2016-11-01
Using a simple one-dimensional shock problem as an example, the present paper investigates numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the dual domain material point (DDMP) method. For a weak isothermal shock of ideal gas, the MPM cannot be used with accuracy. With a small number of particles per cell, GIMP and CPDI produce reasonable results. However, as the number of particles increases the methods fail to converge and produce pressure spikes. The DDMP method behaves in an opposite way. With a small number of particles per cell, DDMP results are unsatisfactory. As the number of particles increases, the DDMP results converge to correct solutions, but the large number of particles needed for convergence makes the method very expensive to use in these types of shock wave problems in two- or three-dimensional cases. The cause for producing the unsatisfactory DDMP results is identified. A simple improvement to the method is introduced by using sub-points. With this improvement, the DDMP method produces high quality numerical solutions with a very small number of particles. Although in the present paper, the numerical examples are one-dimensional, all derivations are for multidimensional problems. With the technique of approximately tracking particle domains of CPDI, the extension of this sub-point method to multidimensional problems is straightforward. This new method preserves the conservation properties of the DDMP method, which conserves mass and momentum exactly and conserves energy to the second order in both spatial and temporal discretizations.
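The particle-to-grid projection that MPM, GIMP, CPDI and DDMP share (they differ in the shape functions used, and the paper's sub-point improvement is not reproduced here) can be sketched in one dimension with linear hat functions. Everything below is a minimal illustration, not the paper's implementation.

```python
import numpy as np

# Minimal 1-D particle-to-grid step: project particle mass and momentum onto
# grid nodes using linear (hat-function) weights on a uniform grid of
# spacing dx. This conserves total mass and momentum by construction.

def particles_to_grid(x_p, m_p, v_p, dx, n_nodes):
    """Return nodal mass and momentum from particle positions/masses/velocities."""
    mass = np.zeros(n_nodes)
    mom = np.zeros(n_nodes)
    for x, m, v in zip(x_p, m_p, v_p):
        i = int(x / dx)          # left node of the particle's cell
        w = x / dx - i           # fractional position within the cell
        mass[i] += (1 - w) * m
        mom[i] += (1 - w) * m * v
        mass[i + 1] += w * m
        mom[i + 1] += w * m * v
    return mass, mom

# Two particles on a 3-node grid (dx = 1):
mass, mom = particles_to_grid([0.25, 1.5], [1.0, 2.0], [1.0, -1.0], 1.0, 3)
print(mass)  # nodal masses [0.75, 1.25, 1.0]
```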
A new, safer method of applying antimetabolites during glaucoma filtering surgery.
Melo, António B; Spaeth, George L
2010-01-01
Blebs resulting from glaucoma filtration surgery tend to result in lower intraocular pressure and to be associated with fewer complications when they are diffuse and spread over the globe rather than localized to the area over the scleral flap. One way to achieve this type of bleb morphology is by applying the antimetabolite to a larger area than the one usually used in the past, in which the antimetabolite was placed only over the area of the scleral flap. In this article, the authors present a safe and inexpensive technique, which consists of using sponges with long, colored tails. This allows applying antimetabolite as far under the Tenon's capsule as desired without the risk of losing the sponges in the sub-Tenon's space. PMID:20507025
Learning and applying new quality improvement methods to the school health setting.
Elik, Laurel L
2013-11-01
A school health registered nurse identified medication administration documentation errors by unlicensed assistive personnel (UAP) in a system of school health clinics in an urban setting. This nurse applied the Lean Six Sigma Define, Measure, Analyze, Improve, Control process improvement methodology to effectively improve the process. The UAP medication administration documentation error rate improved from 68% to 35%. This methodology may be used by school nurses to collaboratively look at ways to improve processes at the point of care.
A study of the limitations of linear theory methods as applied to sonic boom calculations
NASA Technical Reports Server (NTRS)
Darden, Christine M.
1990-01-01
Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because it is necessary for the designer to meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes which were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in sonic boom disturbance, and the importance of three-dimensional effects which could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.
NASA Astrophysics Data System (ADS)
Hartikainen, Markus E.; Ojalehto, Vesa; Sahlstedt, Kristian
2015-03-01
Using an interactive multiobjective optimization method called NIMBUS and an approximation method called PAINT, preferable solutions to a five-objective problem of operating a wastewater treatment plant are found. The decision maker giving preference information is an expert in wastewater treatment plant design at the engineering company Pöyry Finland Ltd. The wastewater treatment problem is computationally expensive and requires running a simulator to evaluate the values of the objective functions. This often leads to problems with interactive methods as the decision maker may get frustrated while waiting for new solutions to be computed. Thus, a newly developed PAINT method is used to speed up the iterations of the NIMBUS method. The PAINT method interpolates between a given set of Pareto optimal outcomes and constructs a computationally inexpensive mixed integer linear surrogate problem for the original wastewater treatment problem. With the mixed integer surrogate problem, the time required from the decision maker is comparatively short. In addition, a new IND-NIMBUS® PAINT module is developed to allow the smooth interoperability of the NIMBUS method and the PAINT method.
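The core interpolation idea behind the PAINT step described in this entry can be sketched as a convex combination of known Pareto optimal outcomes, which yields candidate outcomes without any simulator runs. The real PAINT method constructs an inherently nondominated simplicial interpolant and a mixed integer linear surrogate; this sketch only shows the convex-combination step between two outcomes, with invented numbers.

```python
# Sketch: interpolate between two known Pareto optimal objective vectors.
# New candidates are convex combinations, so no expensive simulation is
# needed to evaluate them. Outcomes below are hypothetical.

def interpolate(outcome_a, outcome_b, t):
    """Convex combination of two objective vectors, 0 <= t <= 1."""
    return [(1 - t) * a + t * b for a, b in zip(outcome_a, outcome_b)]

# Two hypothetical Pareto outcomes of a bi-objective plant model
# (effluent quality index, operating cost):
p1 = [0.8, 120.0]
p2 = [0.4, 200.0]
print(interpolate(p1, p2, 0.5))  # midpoint outcome, approx [0.6, 160.0]
```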
Purists need not apply: the case for pragmatism in mixed methods research.
Florczak, Kristine L
2014-10-01
The purpose of this column is to describe several different ways of conducting mixed methods research. The paradigms that underpin both qualitative and quantitative research are also considered, along with a cursory review of classical pragmatism as it relates to conducting mixed methods studies. Finally, the idea of loosely coupled systems as a means to support mixed methods studies is proposed, along with several caveats for researchers who desire to use this new way of obtaining knowledge.
Maximum entropy method applied to deblurring images on a MasPar MP-1 computer
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Dorband, John; Busse, Tim
1991-01-01
A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
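The principle behind the restoration described in this entry can be sketched as maximizing image entropy subject to consistency with the measured data. The sketch below uses plain gradient ascent on a tiny 1-D example; the paper's actual algorithm and its MasPar MP-1 implementation are not reproduced, and all values are invented.

```python
import numpy as np

# Sketch of maximum-entropy restoration on a tiny 1-D signal: maximize
#   Q(f) = -sum f log f  -  (alpha/2) * ||H f - d||^2
# by gradient ascent, keeping intensities positive. Q is concave, so plain
# ascent with a small step converges.

def maxent_restore(d, H, alpha=100.0, lr=0.01, steps=2000):
    f = np.full(H.shape[1], d.mean())           # flat (maximum entropy) start
    for _ in range(steps):
        resid = H @ f - d
        grad = -(np.log(f) + 1.0) - alpha * (H.T @ resid)
        f = np.clip(f + lr * grad, 1e-8, None)  # enforce positivity
    return f

# Hypothetical 3-pixel scene blurred by a 2-measurement averaging operator:
H = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
truth = np.array([1.0, 3.0, 1.0])
d = H @ truth                                   # noiseless blurred data
f = maxent_restore(d, H)
print(np.round(H @ f, 2))                       # fits the measured data closely
```

The restored image is the smoothest (highest entropy) positive solution consistent with the data, which is the property the abstract emphasizes.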
Wang, Chao-Chun; Wuu, Dong-Sing; Lin, Yang-Shih; Lien, Shui-Yang; Huang, Yung-Chuan; Liu, Chueh-Yang; Chen, Chia-Fu; Nautiyal, Asheesh; Lee, Shuo-Jen
2011-11-15
In this article, Ga-doped Al-zinc-oxide (GAZO)/titanium-doped indium-tin-oxide (ITIO) bi-layer films were deposited onto glass substrates by direct current (dc) magnetron sputtering. The bottom ITIO film, with a thickness of 200 nm, was sputtered onto the glass substrate. The ITIO film was post-annealed at 350 °C for 10-120 min as a seed layer. The effect of post-annealing conditions on the morphologies, electrical, and optical properties of ITIO films was investigated. A GAZO layer with a thickness of 1200 nm was continuously sputtered onto the ITIO bottom layer. The results show that the properties of the GAZO/ITIO films were strongly dependent on the post-annealing conditions. The spectral haze (T_diffuse/T_total) of the GAZO/ITIO bi-layer films increases upon increasing the post-annealing time. The haze and resistivity of the GAZO/ITIO bi-layer films were improved by the post-annealing process. After optimizing the deposition and annealing parameters, the GAZO/ITIO bi-layer film has an average transmittance of 83.20% in the 400-800 nm wavelength range, a maximum haze of 16%, and a lowest resistivity of 1.04 × 10^-3 Ω cm. Finally, the GAZO/ITIO bi-layer films, as a front electrode for silicon-based thin film solar cells, obtained a maximum efficiency of 7.10%. These encouraging experimental results have potential applications in GAZO/ITIO bi-layer film deposition by in-line sputtering without the wet-etching process and enable the production of highly efficient, low-cost thin film solar cells.
The quasi-exactly solvable potentials method applied to the three-body problem
Chafa, F.; Chouchaoui, A.; Hachemane, M.; Ighezou, F.Z.
2007-05-15
The quasi-exactly solvable potentials method is used to determine the energies and the corresponding exact eigenfunctions for three families of potentials playing an important role in the description of interactions occurring between three particles of equal mass. The obtained results may also be used as a test in evaluating the performance of numerical methods.
A New Machine Classification Method Applied to Human Peripheral Blood Leukocytes.
ERIC Educational Resources Information Center
Rorvig, Mark E.; And Others
1993-01-01
Discusses pattern classification of images by computer and describes the Two Domain Method in which expert knowledge is acquired using multidimensional scaling of judgments of dissimilarities and linear mapping. An application of the Two Domain Method that tested its power to discriminate two patterns of human blood leukocyte distribution is…
ERIC Educational Resources Information Center
Weil, Joyce
2015-01-01
As Baby Boomers reach 65 years of age and methods of studying older populations are becoming increasingly varied (e.g., including mixed methods designs, on-line surveys, and video-based environments), there is renewed interest in evaluating methodologies used to collect data with older persons. The goal of this article is to examine…
NASA Astrophysics Data System (ADS)
Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.
2013-01-01
Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.
NASA Technical Reports Server (NTRS)
Bowes, M. A.
1978-01-01
Analytical methods were developed and/or adopted for calculating helicopter component noise, and these methods were incorporated into a unified total vehicle noise calculation model. Analytical methods were also developed for calculating the effects of noise reduction methodology on helicopter design, performance, and cost. These methods were used to calculate changes in noise, design, performance, and cost due to the incorporation of engine and main rotor noise reduction methods. All noise reduction techniques were evaluated in the context of an established mission performance criterion which included consideration of hovering ceiling, forward flight range/speed/payload, and rotor stall margin. The results indicate that small, but meaningful, reductions in helicopter noise can be obtained by treating the turbine engine exhaust duct. Furthermore, these reductions do not result in excessive life cycle cost penalties. Currently available main rotor noise reduction methodology, however, is shown to be inadequate and excessively costly.
21 CFR 74.1206 - D&C Green No. 6.
Code of Federal Regulations, 2012 CFR
2012-04-01
... consistent with current good manufacturing practice. (d) Labeling. The label of the color additive shall... ADDITIVES SUBJECT TO CERTIFICATION Drugs § 74.1206 D&C Green No. 6. (a) Identity. The color additive D&C... additive D&C Green No. 6 for use in coloring externally applied drugs shall conform to the...
21 CFR 74.1206 - D&C Green No. 6.
Code of Federal Regulations, 2013 CFR
2013-04-01
... consistent with current good manufacturing practice. (d) Labeling. The label of the color additive shall... ADDITIVES SUBJECT TO CERTIFICATION Drugs § 74.1206 D&C Green No. 6. (a) Identity. The color additive D&C... additive D&C Green No. 6 for use in coloring externally applied drugs shall conform to the...
NASA Astrophysics Data System (ADS)
Garvey, Michael T.
Computational methods are rapidly becoming a mainstay in the field of chemistry. Advances in computational methods (both theory and implementation), increasing availability of computational resources and the advancement of parallel computing are some of the major forces driving this trend. It is now possible to perform density functional theory (DFT) calculations with chemical accuracy for model systems that can be interrogated experimentally. This allows computational methods to supplement or complement experimental methods. There are even cases where DFT calculations can give insight into processes and interactions that cannot be interrogated directly by current experimental methods. This work presents several examples of the application of computational methods to the interpretation and analysis of experimentally obtained results. First, tribological systems were investigated primarily with full-potential linearized augmented plane wave (FLAPW) method DFT calculations. Second, small organic molecules adsorbed on Pd(111) were studied using projector-augmented wave (PAW) method DFT calculations and scanning tunneling microscopy (STM) image simulations to investigate molecular interactions involved in enantioselective heterogeneous catalysis. A method for calculating pressure-dependent shear properties of model boundary-layer lubricants is demonstrated, and the calculated values are compared with experimentally obtained results. For the case of methyl pyruvate adsorbed on Pd(111), DFT-calculated adsorption energies and structures are used along with STM simulations to identify species observed by STM imaging. A previously unobserved enol species is discovered to be present along with the expected keto species. The information about methyl pyruvate species on Pd(111) is combined with previously published studies of S-alpha-(1-naphthyl)-ethylamine (NEA) to understand the nature of their interaction upon coadsorption on Pd(111). DFT calculated structures and
Sliding-mode control of single input multiple output DC-DC converter
NASA Astrophysics Data System (ADS)
Zhang, Libo; Sun, Yihan; Luo, Tiejian; Wan, Qiyang
2016-10-01
Various voltage levels are required in a vehicle-mounted power system. A conventional solution is to utilize independent multiple-output DC-DC converters, whose cost is high and whose control scheme is complicated. In this paper, we design a novel SIMO DC-DC converter with a sliding mode controller. The proposed converter can boost the voltage of a low-voltage input power source to a controllable high-voltage DC bus and middle-voltage output terminals, which endows the converter with a simple structure, low cost, and convenient control. In addition, the sliding mode control (SMC) technique applied in our converter can enhance the performance of a given SIMO DC-DC converter topology. The high-voltage DC bus can be regarded as the main power source for the high-voltage facilities of the vehicle-mounted power system, and the middle-voltage output terminals can supply power to the low-voltage equipment of an automobile. With respect to the control algorithm, the SMC-PID (Proportional-Integral-Derivative) control algorithm is proposed for the first time, in which PID control is added to the conventional SMC algorithm. The PID control improves the dynamic ability of the SMC algorithm by establishing the corresponding sliding surface and introducing an attached integral of the voltage error, endowing the sliding-mode control system with excellent dynamic performance. Finally, we established a MATLAB/SIMULINK simulation model, tested the performance of the system, and built a hardware prototype based on a Digital Signal Processor (DSP). Results show that the sliding mode control is able to track a required trajectory and is robust against uncertainties and disturbances.
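The SMC-with-added-integral idea described in this entry can be sketched as a single control step: a conventional sliding surface on the voltage error is augmented with an integral term, and a boundary-layer switching action sets the duty cycle. Gains, thresholds, and the saturation form below are illustrative assumptions, not the paper's values.

```python
# Hypothetical sketch of an SMC control law with an attached integral of the
# voltage error. The sliding surface is s = de/dt + lam*e + ki*integral(e);
# a saturated (boundary-layer) switching term replaces sign(s) to reduce
# chattering, and the result is clamped to a valid duty cycle.

def smc_pid_duty(v_ref, v_out, v_prev, integ, dt,
                 lam=100.0, k_sw=0.4, ki=5.0, boundary=0.5):
    """One control step; returns (duty_cycle, updated_integral)."""
    e = v_ref - v_out
    de = -(v_out - v_prev) / dt              # d(error)/dt with constant v_ref
    integ += e * dt                          # attached integral of voltage error
    s = de + lam * e + ki * integ            # sliding surface with integral term
    sat = max(-1.0, min(1.0, s / boundary))  # boundary layer instead of sign()
    duty = 0.5 + k_sw * sat                  # nominal duty plus switching action
    return max(0.0, min(1.0, duty)), integ

duty, integ = smc_pid_duty(48.0, 40.0, 40.0, 0.0, 1e-4)
print(duty)  # output voltage below reference, so duty rises above nominal
```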
Karmaliani, Rozina; McFarlane, Judith; Asad, Nargis; Madhani, Farhana; Hirani, Saima; Shehzad, Shireen; Zaidi, Anita
2009-01-01
To achieve health for all, the development of partnerships between community residents and researchers is essential. Community-based participatory research (CBPR) engages community members, uses local knowledge in the understanding of health problems and the design of interventions, and invests community members in the processes and products of research. CBPR pivots on an iterative process of open communication, mutual respect, and power sharing to build community capacity to sustain effective health interventions. This article describes how the tenets of CBPR were applied by a multidisciplinary, international research team of maternal-child health specialists toward better health for women and children in multilingual, multiethnic, low socioeconomic communities in Karachi, Pakistan.
Cochard, E; Aubry, J F; Tanter, M; Prada, C
2011-08-01
An adaptive projection method for ultrasonic focusing through the rib cage, with minimal energy deposition on the ribs, was evaluated experimentally in 3D geometry. Adaptive projection is based on decomposition of the time-reversal operator (DORT method) and projection on the "noise" subspace. It is shown that 3D implementation of this method is straightforward, and not more time-consuming than 2D. Comparisons are made between adaptive projection, spherical focusing, and a previously proposed time-reversal focusing method, by measuring pressure fields in the focal plane and rib region using the three methods. The ratio of the specific absorption rate at the focus over the one at the ribs was found to be increased by a factor of up to eight, versus spherical emission. Beam steering out of geometric focus was also investigated. For all configurations projecting steered emissions were found to deposit less energy on the ribs than steering time-reversed emissions: thus the non-invasive method presented here is more efficient than state-of-the-art invasive techniques. In fact, this method could be used for real-time treatment, because a single acquisition of back-scattered echoes from the ribs is enough to treat a large volume around the focus, thanks to real time projection of the steered beams.
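The projection step underlying the DORT-based focusing in this entry can be sketched with an SVD: the strongest singular vectors of the array transfer matrix are associated with the bright scatterers (the ribs), and emissions are projected onto the orthogonal "noise" subspace so that little energy reaches them. The matrix below is random for illustration; in practice it is measured from back-scattered echoes, and the number of bright singular vectors must be chosen from the singular value spectrum.

```python
import numpy as np

# Sketch: decompose the array-to-array transfer matrix K, span the "noise"
# subspace with the weak right singular vectors, and project a steered
# emission onto it. The projected emission carries no energy on the bright
# (rib-associated) subspace.

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

U, svals, Vh = np.linalg.svd(K)
n_bright = 2                              # singular vectors tied to the ribs
noise_basis = Vh[n_bright:].conj().T      # columns span the noise subspace
P = noise_basis @ noise_basis.conj().T    # projector onto the noise subspace

e = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # steered emission
e_proj = P @ e

# Component of the projected emission on the bright subspace (should vanish):
print(np.abs(Vh[:n_bright] @ e_proj).max())  # ~0 (machine precision)
```

Because P depends only on the one-time echo acquisition, any steered beam can be projected in real time, which is the efficiency advantage the abstract describes.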
Applying the Taguchi method to river water pollution remediation strategy optimization.
Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju
2014-04-01
Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time-consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765
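The effect-ranking step the abstract describes (using the Taguchi method to sequence the effects of decision-variable variation) can be sketched in miniature. The L4 orthogonal array and the toy linear response below are illustrative assumptions, not the study's river-quality model:

```python
# Minimal sketch of Taguchi-style main-effect ranking (toy response, not the paper's model).
# An L4(2^3) orthogonal array screens three two-level factors in four runs; the mean
# response difference between a factor's two levels estimates that factor's effect.

L4 = [  # rows: experimental runs; columns: levels (0 or 1) of factors A, B, C
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def toy_response(a, b, c):
    # Hypothetical system: factor A dominates, C is weak.
    return 10.0 * a + 3.0 * b + 0.5 * c

def main_effects(array, response):
    """Mean response at level 1 minus mean response at level 0, per factor."""
    n_factors = len(array[0])
    results = [response(*run) for run in array]
    effects = []
    for f in range(n_factors):
        hi = [r for run, r in zip(array, results) if run[f] == 1]
        lo = [r for run, r in zip(array, results) if run[f] == 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

effects = main_effects(L4, toy_response)
ranking = sorted(range(3), key=lambda f: -abs(effects[f]))
print(effects)   # [10.0, 3.0, 0.5] for this toy response
print(ranking)   # factor A first: [0, 1, 2]
```

Because the array is orthogonal, four runs suffice to rank all three factors, which is the time-saving property the study exploits inside its search loop.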
Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations
McKinney, W.R.
1989-01-01
Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization, and they may be of arbitrarily high order. In particular, it is shown that the well-known order-reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow-motion manifold for the Cahn-Hilliard equation are also provided.
Improving the accuracy of multiple integral evaluation by applying Romberg's method
NASA Astrophysics Data System (ADS)
Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.
2009-02-01
Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals if they are evaluated using the product of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with other methods.
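As a reminder of the one-dimensional building block being extended, here is a minimal sketch of Romberg's method: composite trapezoid estimates refined by Richardson extrapolation. For a multiple integral evaluated with a product of composite quadrature formulas, the same extrapolation coefficients are applied to the sequence of product-rule estimates; the test integrand below is illustrative.

```python
import math

def romberg(f, a, b, levels=5):
    """Romberg table: trapezoid estimates refined by Richardson extrapolation."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # Composite trapezoid with 2**i panels: reuse the previous sum and
        # add only the new (odd-multiple) nodes.
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# Integral of exp(x) over [0, 1] equals e - 1.
print(romberg(math.exp, 0.0, 1.0))  # ≈ 1.7182818...
```

Each extrapolation column cancels the next even power of h in the trapezoid error expansion, which is why the coefficients 1/(4^j - 1) are independent of the integrand, and, as the abstract notes, can be independent of the integral's multiplicity.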
NASA Astrophysics Data System (ADS)
Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.
2009-05-01
Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of odour-sensitive instruments (electronic noses). These instruments require new approaches to data processing so that their information-rich signals can be fully exploited; here, we apply a novel time-series-based technique for processing such data. The dual-channel, large-array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, each made from a particular type of polymer. The first array is connected to the other two by a pair of retentive columns, one coated with Carbowax 20 M and the other with OV-1. This configuration partly mimics the nasal chromatography effect and partly augments it by utilizing not only polar (mucus-layer-like) but also non-polar (artificial) coatings. Such a device presents several challenges for multivariate data processing: a large, redundant dataset, spatio-temporal output, and a small sample space. By applying a novel convolution approach, it has been demonstrated that these problems can be overcome. The artificial mucosa signals were classified using a probabilistic neural network with an accuracy of 85%. Even better results should be possible through the selection of sensors with lower correlation.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Computational performance of Free Mesh Method applied to continuum mechanics problems
YAGAWA, Genki
2011-01-01
The free mesh method (FMM) is a meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation; it is a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions of fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), a new version of FMM. Applications to fluid mechanics include compressible flow and the sounding mechanism in air-reed instruments; applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block. PMID:21558753
A Generalized Weizsacker-Williams Method Applied to Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Ahern, Sean C.; Poyser, William J.; Norbury, John W.; Tripathi, R. K.
2002-01-01
A new "Generalized" Weizsacker-Williams method (GWWM) is used to calculate approximate cross sections for relativistic peripheral proton-proton collisions. Instead of a massless photon mediator, the method allows the mediator to have mass for short-range interactions. This method generalizes the Weizsacker-Williams method (WWM) from Coulomb interactions to GWWM for strong interactions. An elastic proton-proton cross section is calculated using GWWM with experimental data for the elastic p+p interaction, where the massive pion now serves as the mediator. The resulting calculated cross section is compared to existing data for the elastic proton-proton interaction, and a good approximate fit is found between the data and the calculation.
Li, Gang; Zhao, Longfei; Zhou, Mei; Wang, Mengjun; Lin, Ling
2013-11-20
This paper presents a method that exploits the principle of diffuse scattering and employs the least squares method (LSM) to apply and then remove a shaped-function signal for low-light-level image detection. With the help of a sawtooth-shaped light signal applied to an image sensor, the LSM is employed to remove the sawtooth signal from the captured images and restore the weak image signal. The experimental process and results verify that this method not only maintains the previous method's capability of upgrading the image sensor's sensitivity and signal-to-noise ratio, but also improves imaging speed at low light levels, decreases the computational cost of the extraction process, and eliminates the influence of environmental light, satisfying the requirements of long-distance detection.
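The remove-by-least-squares idea can be illustrated on a toy one-dimensional signal. The sawtooth shape, amplitude, and "image" ramp below are assumptions for illustration, not the authors' sensor data or processing pipeline:

```python
# Hedged sketch: fit and subtract a known-shape sawtooth from a measured signal
# by least squares, leaving the weak underlying signal. Toy data, two-parameter fit.

def lstsq_2x2(basis, y):
    """Solve min ||amp*s + off*o - y|| for two basis columns via normal equations."""
    s, o = basis  # sawtooth column, constant (offset) column
    a11 = sum(v * v for v in s)
    a12 = sum(v * w for v, w in zip(s, o))
    a22 = sum(w * w for w in o)
    b1 = sum(v * t for v, t in zip(s, y))
    b2 = sum(w * t for w, t in zip(o, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

n = 100
saw = [(i % 10) / 10.0 for i in range(n)]      # unit sawtooth, period 10 samples
image = [0.05 * (i / n) for i in range(n)]     # weak image signal (toy ramp)
measured = [3.0 * s + 0.2 + w for s, w in zip(saw, image)]

amp, off = lstsq_2x2((saw, [1.0] * n), measured)
restored = [m - amp * s - off for m, s in zip(measured, saw)]
print(amp, off)  # amp ≈ 3, off ≈ 0.22: the applied sawtooth is recovered and removed
```

Because the sawtooth shape is known in advance, only its amplitude and offset must be estimated, which keeps the extraction cheap, consistent with the computational-cost claim in the abstract.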
Discontinuous Galerkin finite element method applied to the 1-D spherical neutron transport equation
Machorro, Eric . E-mail: machorro@amath.washington.edu
2007-04-10
Discontinuous Galerkin finite element methods are used to estimate solutions to the non-scattering 1-D spherical neutron transport equation. Various trial and test spaces are compared in the context of a few sample problems whose exact solution is known. Certain trial spaces avoid unphysical behaviors that seem to plague other methods. Comparisons with diamond differencing and simple corner-balancing are presented to highlight these improvements.
A fast three-dimensional reconstruction method applied for the fabric defect detection
NASA Astrophysics Data System (ADS)
Song, Limei; Zhang, Chunbo; Xiong, Hui; Wei, Yiying; Chen, Huawei
2010-11-01
Fabric defect detection is very useful for improving product quality and is also important for increasing the reputation and economic benefit of a company. However, traditional manual detection methods have shortcomings such as low detection efficiency, operator fatigue, and detection inaccuracy. Existing 2D image-processing methods struggle to reject interference from non-defect features such as cloth folds, flying thick silk floss, and noise from background and ambient light. To solve these problems, the BCCSL (Binocular Camera Color Structure Light) method and the SFMS (Shape from Multi Shading) method are proposed in this paper. Together they obtain the three-dimensional color coordinates of the fabric quickly and with high precision, allowing the shape and location of defects to be judged. The BCCSL method collects the 3D skeleton information of a fabric image in real time through a binocular video capture device and a color structured-light projection device, and the detailed 3D coordinates of the fabric's outer strip structure are obtained through the proposed SFMS method. Interference such as cloth folds, flying thick silk floss, and noise from background and ambient light can be excluded by using three-dimensional defect identification. Moreover, according to the characteristics of its 3D structure, a defect can be identified and classified, and possible problems in the production line can be summarized.
A partly-contacted epitaxial lateral overgrowth method applied to GaN material
Xiao, Ming; Zhang, Jincheng; Duan, Xiaoling; Shan, Hengsheng; Yu, Ting; Ning, Jing; Hao, Yue
2016-01-01
We discuss a new crystal epitaxial lateral overgrowth (ELO) method, partly-contacted ELO (PC-ELO), in which the overgrowth layer partly contacts the underlying seed layer. We also illustrate special mask structures with and without lithography and provide three essential conditions for achieving PC-ELO. What is remarkable about PC-ELO is that the tilt angle of the overgrowth stripes can be eliminated by contact with the seed layer. Moreover, we report an improved lithography-free monolayer microsphere mask variant of PC-ELO, which was used to grow GaN. Results from scanning electron microscopy, cathodoluminescence, x-ray diffraction (XRD), transmission electron microscopy, and atomic force microscopy (AFM) show that the overgrowth layer has no tilt angle relative to the seed layer and a high-quality coalescence front (average linear dislocation density < 6.4 × 10³ cm⁻¹). Wing-stripe peak splitting of the XRD rocking curve due to tilt is no longer detectable. After coalescence, AFM surface steps show rare discontinuities owing to the low misorientation of the overgrowth regions. PMID:27033154
Economic valuation of informal care: the contingent valuation method applied to informal caregiving.
van den Berg, Bernard; Brouwer, Werner; van Exel, Job; Koopmanschap, Marc
2005-02-01
This paper reports the results of applying the contingent valuation method (CVM) to determine a monetary value of informal care. We discuss current practice in valuing informal care and a theoretical model of the costs and benefits related to the provision of informal care. In addition, we developed a survey in which informal caregivers' willingness to accept (WTA) payment to provide an additional hour of informal care was elicited. This method is better able than the normally recommended valuation methods to capture the heterogeneity and dynamics of informal care. Data were obtained from postal surveys. A total of 153 informal caregivers and 149 care recipients with rheumatoid arthritis returned a completed survey. Informal caregivers reported a mean WTA of 9.52 Euro (n=124) to provide a hypothetical additional hour of informal care. Many hypotheses derived from the theoretical model and the literature were supported by the data. CVM is a promising alternative to existing methods, such as the opportunity cost method and the proxy good method, for determining a monetary value of informal care that can be incorporated in the numerator of any economic evaluation.
ERIC Educational Resources Information Center
Ozturk, Mehmet; Duru, Mehmet Emin; Ozler, Mehmet Ali; Harmandar, Mansur
2007-01-01
The purpose of the present laboratory study is to make it possible to internalize the concepts, principles, theories and the laws of chemistry taught in the courses by observing the experiments, give information about the methods used and various techniques and tools applied and introduce some substances and their characteristics. The purpose of…
ERIC Educational Resources Information Center
Neokleous, Rania
2015-01-01
This study examined the effects of a music methods course offered at a Cypriot university on the singing skills of 33 female preservice kindergarten teachers. To systematically measure and analyze student progress, the research design was both experimental and descriptive. As an applied study which was carried out "in situ," the normal…
NASA Astrophysics Data System (ADS)
Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel
2014-05-01
A large variety of geodynamical problems can be viewed as solid/fluid interaction problems coupling two bodies with different physics. In particular, the lithosphere/asthenosphere mechanical interaction in subduction zones belongs to this kind of problem, where the solid lithosphere is embedded in the viscous asthenospheric fluid. In many fields (industry, civil engineering, etc.) in which the deformations of solid and fluid are small, numerical modelers discretize both domains exactly, fitting the shape of the interface between the two domains as well as possible and solving the discretized problems by the finite element method (FEM). In the context of subduction, however, the lithosphere undergoes large deformation and can evolve into a complex geometry, leading to important deformation of the surrounding asthenosphere. To avoid the precise meshing of complex geometries, numerical modelers have developed non-matching interface methods called fictitious domain methods (FDM). The main idea of these methods is to extend the initial problem to a bigger (and simpler) domain. In our version of FDM, we determine the forces at the immersed solid boundary required to minimize (in the least-squares sense) the difference between fluid and solid velocities at this interface. This method is first-order accurate, and its stability depends on the ratio between the fluid background mesh size and the interface discretization. We present the formulation and provide benchmarks and examples showing the potential of the method: (1) a comparison with an analytical solution of viscous flow around a rigid body; (2) an experiment of a rigid sphere sinking in a viscous fluid (in two and three dimensions); (3) a comparison with an analog subduction experiment. Another presentation describes the geodynamical application of this method to Andean subduction dynamics, studying cyclic slab folding on the 660 km discontinuity, and its relationship
Comparison of 15 evaporation methods applied to a small mountain lake in the northeastern USA
Rosenberry, D.O.; Winter, T.C.; Buso, D.C.; Likens, G.E.
2007-01-01
Few detailed evaporation studies exist for small lakes or reservoirs in mountainous settings. A detailed evaporation study was conducted at Mirror Lake, a 0.15 km² lake in New Hampshire, northeastern USA, as part of a long-term investigation of lake hydrology. Evaporation was determined using 14 alternate evaporation methods during six open-water seasons and compared with values from the Bowen-ratio energy-budget (BREB) method, considered the standard. Values from the Priestley-Taylor, deBruin-Keijman, and Penman methods compared most favorably with BREB-determined values. Differences from BREB values averaged 0.19, 0.27, and 0.20 mm d⁻¹, respectively, and results were within 20% of BREB values during more than 90% of the 37 monthly comparison periods. All three methods require measurement of net radiation, air temperature, change in heat stored in the lake, and vapor pressure, making them relatively data intensive. Several of the methods had substantial bias when compared with BREB values and were subsequently modified to eliminate bias. Methods that rely only on measurement of air temperature, or air temperature and solar radiation, were relatively cost-effective options for measuring evaporation at this small New England lake, outperforming some methods that require measurement of a greater number of variables. It is likely that the atmosphere above Mirror Lake was affected by occasional formation of separation eddies on the lee side of nearby high terrain, although those influences do not appear to be significant to measured evaporation from the lake when averaged over monthly periods.
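To illustrate why the Priestley-Taylor method needs net radiation, stored-heat change, and air temperature, here is a minimal sketch of the estimate. The constants (α = 1.26, the FAO-56 form of the saturation-vapour-pressure slope) and the sample inputs are textbook values, not the Mirror Lake data:

```python
import math

def priestley_taylor(available_energy, t_air, alpha=1.26, gamma=0.066):
    """Evaporation (mm/day) from available energy (net radiation minus stored-heat
    change, MJ m^-2 day^-1) and air temperature (deg C). Textbook constants."""
    # Saturation vapour pressure (kPa) and its slope (kPa/deg C), FAO-56 form.
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    lam = 2.45  # latent heat of vaporization, MJ/kg
    return alpha * (delta / (delta + gamma)) * available_energy / lam

print(priestley_taylor(available_energy=12.0, t_air=20.0))  # ≈ 4.24 mm/day
```

The formula needs no wind or humidity data beyond the temperature-dependent slope, which is why the abstract counts it among the relatively data-light yet accurate options.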
NASA Astrophysics Data System (ADS)
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with the advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI). Each of these parameters has a distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and estimate the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map that represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we apply NLDR methods to high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
NASA Astrophysics Data System (ADS)
Matsuo, Miyuki; Yokoyama, Misao; Umemura, Kenji; Gril, Joseph; Yano, Ken'ichiro; Kawai, Shuichi
2010-04-01
This paper deals with the kinetics of the color properties of hinoki (Chamaecyparis obtusa Endl.) wood. Specimens cut from the wood were heated at 90-180°C as an accelerated aging treatment. The specimens, completely dried and heated in the presence of oxygen, allowed us to evaluate the effects of thermal oxidation on wood color change. Color properties measured by a spectrophotometer showed similar behavior irrespective of the treatment temperature, with each time scale. Kinetic analysis using the time-temperature superposition principle, which uses the whole data set, was successfully applied to the color changes. The calculated values of the apparent activation energy in terms of L*, a*, b*, and ΔE*ab were 117, 95, 114, and 113 kJ/mol, respectively, which are similar to literature values obtained for other properties, such as the physical and mechanical properties of wood.
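The apparent activation energies quoted above come from Arrhenius-type kinetics underlying the time-temperature superposition. A minimal sketch of recovering Ea from rate constants at two treatment temperatures follows; the rates are constructed to be consistent with an Ea near the paper's ~113 kJ/mol, and are illustrative rather than the measured color data:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Arrhenius: k = A exp(-Ea/RT)  =>  Ea = R ln(k1/k2) / (1/T2 - 1/T1)."""
    return R * math.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)

# Toy rate constants consistent with Ea = 113 kJ/mol; the prefactor A is arbitrary.
Ea_true = 113e3
A = 1e12
k = lambda T: A * math.exp(-Ea_true / (R * T))

# Rates at 180 deg C (453.15 K) and 90 deg C (363.15 K), the treatment range above.
print(activation_energy(k(453.15), 453.15, k(363.15), 363.15) / 1e3)  # ≈ 113 kJ/mol
```

In the paper itself the shift factors of the superposed master curves play the role of the rate ratio k1/k2; the two-temperature form above is the simplest version of that fit.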
DETECTORS AND EXPERIMENTAL METHODS: Simulation of a modified neutron detector applied in CSNS
NASA Astrophysics Data System (ADS)
Ma, Zhong-Jian; Wang, Qing-Bin; Wu, Qing-Biao
2009-01-01
We simulate the response of a modified Anderson-Braun rem counter in the energy range from thermal energies to about 10 GeV using the FLUKA code. We also simulate the lethargy spectrum of CSNS outside the beam dump. The traditional BF3 tube is replaced by a 3He tube, a 0.6 cm layer of lead is added outside the boron-doped plastic attenuator, and a spherical configuration is adopted. The simulation shows that the counter's response fits H*(10) well for neutron energies between 10 keV and approximately 1 GeV, although it slightly underestimates H*(10) in the range from thermal energies to about 10 keV. According to the characteristics of the CSNS, this modified counter increases the neutron energy response by 30% compared with traditional monitors, and it can be applied in other kinds of stray fields rich in high-energy neutrons.
Advanced panel-type influence coefficient methods applied to subsonic flows
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Rubbert, P. E.
1975-01-01
An advanced technique for solving the linear integral equations of three-dimensional subsonic potential flows (steady, inviscid, irrotational and incompressible) about arbitrary configurations is presented. It involves assembling select, logically consistent networks whose construction comprises four tasks, which are described in detail: surface geometry definition; singularity strength definition; control point and boundary condition specification; and calculation of induced potential or velocity. The technique is applied to seven wing examples approached by four network types: source/analysis, doublet/analysis, source/design, and doublet/design. The results demonstrate the forgiveness of the model to irregular paneling and the practicality of combined analysis/design boundary conditions. The appearance of doublet strength mismatch is a valuable indicator of locally inadequate paneling.
Silva, V.C.; Marechal, Y.; Foggia, A.
1995-05-01
A three-dimensional finite-element analysis is employed to investigate the losses of hydrogenerator stator end regions by using surface-impedance boundary conditions. The three-dimensional complexity of the end-winding geometry is fully taken into account by the model. The purpose of this work is also to evaluate the eddy-current paths, allowing for slits in the stator teeth, and their effectiveness in reducing stator core-end losses. The methodology was applied to the stator core of a 52-pole 300-MVA hydrogenerator in two different cases: (1) with non-slit teeth; (2) with fully slit teeth. It may also be extended to the end regions of large turbogenerators.
Investigation to develop a method to apply diffusion barrier to high strength fibers
NASA Technical Reports Server (NTRS)
Veltri, R. D.; Paradis, R. D.; Douglas, F. C.
1975-01-01
A radio frequency powered ion plating process was used to apply the diffusion barriers of aluminum oxide, yttrium oxide, hafnium oxide and titanium carbide to a substrate tungsten fiber. Each of the coatings was examined as to its effect on both room temperature strength and tensile strength of the base tungsten fiber. The coated fibers were then overcoated with a nickel alloy to become single cell diffusion couples. These diffusion couples were exposed to 1093 C for 24 hours, cycled between room temperature and 1093 C, and given a thermal anneal for 100 hours at 1200 C. Tensile testing and metallographic examinations determined that the hafnium oxide coating produced the best high temperature diffusion barrier for tungsten of the four coatings.
Limitations of the method of characteristics when applied to axisymmetric hypersonic nozzle design
NASA Technical Reports Server (NTRS)
Edwards, Anne C.; Perkins, John N.; Benton, James R.
1990-01-01
A design study of axisymmetric hypersonic wind tunnel nozzles was initiated by NASA Langley Research Center with the objective of improving the flow quality of their ground test facilities. Nozzles for Mach 6 air, Mach 13.5 nitrogen, and Mach 17 nitrogen were designed using the Method of Characteristics/Boundary Layer (MOC/BL) approach and were analyzed with a Navier-Stokes solver. Results of the analysis agreed well with the design for the Mach 6 case, but revealed oblique shock waves of increasing strength originating from near the inflection point of the Mach 13.5 and Mach 17 nozzles. The findings indicate that the MOC/BL design method has a fundamental limitation that occurs at some Mach number between 6 and 13.5. In order to define the limitation more exactly and attempt to discover its cause, a parametric study of hypersonic ideal air nozzles designed with the current MOC/BL method was performed. Results of this study indicate that, while stagnation conditions have a moderate effect on the upper limit of the method, the method fails at Mach numbers above 8.0.
Boundary element method applied to a gas-fired pin-fin-enhanced heat pipe
Andraka, C.E.; Knorovsky, G.A.; Drewien, C.A.
1998-02-01
The thermal conduction of a portion of an enhanced-surface heat exchanger for a gas-fired heat pipe solar receiver was modeled using the boundary element and finite element methods (BEM and FEM) to determine the effect of weld fillet size on the performance of a stud-welded pin fin. A process that could be utilized by others for designing the surface mesh on an object of interest, converting the mesh into the input format utilized by the BEM code, obtaining output on the surface of the object, and displaying visual results was developed. It was determined that the weld fillet on the pin fin significantly enhanced the heat performance, improving the operating margin of the heat exchanger. The performance of the BEM program on the pin fin was measured (as computational time) and used as a performance comparison with the FEM model. Given similar surface element densities, the BEM method took longer to reach a solution than the FEM method. The FEM method creates a sparse matrix whose storage and computation scale as the number of nodes (N), whereas the BEM method scales as N² in storage and N³ in computation.
Comparative analysis of optimisation methods applied to thermal cycle of a coal fired power plant
NASA Astrophysics Data System (ADS)
Kowalczyk, Łukasz; Elsner, Witold
2013-12-01
The paper presents a thermodynamic optimization of a 900 MW power unit for ultra-supercritical parameters, modified according to the AD700 concept. The aim of the study was to verify two optimisation methods, i.e., finding the minimum of a constrained nonlinear multivariable function (fmincon) and the Nelder-Mead method, each with its own constraint functions. The analysis was carried out using IPSEpro software combined with MATLAB, with gross power generation efficiency chosen as the objective function. In comparison with the Nelder-Mead method, the fmincon function was shown to give reasonable results and a significant reduction in computational time. Unfortunately, as the number of decision parameters increases, the benefit measured by the increase in efficiency becomes smaller. An important drawback of the fmincon method is also a lack of repeatability when using different starting points. The obtained results led to the conclusion that the Nelder-Mead method is a better tool for optimisation of thermal cycles with a high degree of complexity, like the coal-fired power unit.
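For reference, the derivative-free simplex search compared here can be sketched in a few lines. This is a simplified Nelder-Mead (standard reflection, expansion, contraction, and shrink moves) applied to a toy quadratic standing in for the gross-efficiency objective, not the IPSEpro/MATLAB setup; fmincon is MATLAB's constrained gradient-based counterpart:

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Simplified Nelder-Mead simplex minimizer (unconstrained, fixed iteration count)."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * c - w for c, w in zip(centroid, worst)]     # reflect worst point
        if f(refl) < f(best):
            exp = [3 * c - 2 * w for c, w in zip(centroid, worst)]  # try expanding
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (c + w) for c, w in zip(centroid, worst)]  # contract
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex toward the best one
                simplex = [best] + [
                    [0.5 * (b + p) for b, p in zip(best, q)] for q in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Toy objective with its optimum at (3, -2), a stand-in for -efficiency.
obj = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
print(nelder_mead(obj, [0.0, 0.0]))  # ≈ [3.0, -2.0]
```

The method needs only objective evaluations, which explains both its robustness on complex cycle models and the longer run times the study reports relative to fmincon.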
A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data
Hira, Zena M.; Gillies, Duncan F.
2015-01-01
We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and are widely used. All of these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition, the complicated relations among the different genes make analysis more difficult, and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them, saving computational time and resources. PMID:26170834
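A minimal example of the filter-style feature selection this review surveys: rank features by a simple relevance score (here, variance) and keep the top k. The tiny matrix is illustrative; real microarray data would have thousands of gene-expression columns:

```python
# Sketch of filter-based feature selection: score each feature independently
# (variance here; other filters use correlation, mutual information, etc.)
# and keep the k highest-scoring features.

def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def top_k_features(X, k):
    """X: list of samples (rows) x features (columns); return indices of the
    k highest-variance features."""
    n_feat = len(X[0])
    cols = [[row[f] for row in X] for f in range(n_feat)]
    return sorted(range(n_feat), key=lambda f: -variance(cols[f]))[:k]

X = [
    [1.0, 5.0, 0.1],
    [2.0, 5.1, 0.1],
    [9.0, 4.9, 0.1],
]
print(top_k_features(X, 2))  # feature 2 is constant and gets dropped: [0, 1]
```

Filters like this are cheap because each feature is scored independently, in contrast to the wrapper and embedded methods the review also compares, which evaluate feature subsets through a classifier.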
Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.
Li, Zhilin; Song, Peng
2013-06-01
In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method.
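The refinement criterion |φ(x, y, t)| ≤ δ can be sketched in a few lines. This is only an illustration of flagging cells inside the narrow band around the zero level set, not the paper's AMR-IIM implementation; the circular interface, grid size, and band width are assumptions.

```python
import numpy as np

# Signed-distance level set for a circular interface of radius 0.5:
# phi(x, y) = sqrt(x^2 + y^2) - 0.5. Grid size and radius are illustrative.
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.hypot(X, Y) - 0.5

# Flag cells for refinement inside the narrow band |phi| <= delta
delta = 3 * (x[1] - x[0])        # band of ~3 coarse cells on each side
refine = np.abs(phi) <= delta

print(refine.sum(), "of", n * n, "cells flagged for refinement")
```

Only the flagged band receives the finer Cartesian mesh, so the cost of the fine grid scales with the interface length rather than the domain area.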
Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding
de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.
2013-01-01
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
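A minimal large-p, small-n sketch: ridge regression (which is equivalent, up to parameterization, to the GBLUP model common in this literature) shrinks all marker effects toward zero, making the regression estimable even with far more markers than individuals. The marker counts, effect sizes, and penalty below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 100, 2000                 # small-n, large-p: more markers than individuals
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:20] = rng.normal(0, 0.5, 20)                  # 20 causal markers
y = X @ beta + rng.normal(0, 1.0, n)                # phenotype = signal + noise

# Penalized whole-genome regression: all p effects estimated concurrently
model = Ridge(alpha=50.0).fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]
print(round(r, 2))
```

Bayesian WGR variants (BayesA/B, Bayesian LASSO) differ mainly in the prior placed on the marker effects, i.e., in how the shrinkage is distributed across markers.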
NASA Technical Reports Server (NTRS)
Pacey, P. D.; Polyani, J. C.
1971-01-01
The method of measured relaxation is described for the determination of the initial vibrational energy distribution in the products of an exothermic reaction. Hydrogen atoms coming from an orifice were diffused into flowing chlorine gas. Measurements were made of the resultant ir chemiluminescence at successive points along the line of flow. The concurrent processes of reaction, diffusion, flow, radiation, and deactivation were analyzed in some detail on a computer. A variety of relaxation models were used in an attempt to place limits on k(ν′), the rate constant for reaction to form HCl in specified vibrational energy levels: H + Cl₂ → HCl(ν′) + Cl. The set of k(ν′) obtained from this work is in satisfactory agreement with those obtained by another experimental method (the method of arrested relaxation described in Parts IV and V of the present series).
Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K
2015-12-01
There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies.
Prospects of Applying Enhanced Semi-Empirical QM Methods for Virtual Drug Design.
Yilmazer, Nusret Duygu; Korth, Martin
2016-01-01
The last five years have seen a renaissance of semiempirical quantum mechanical (SQM) methods in the field of virtual drug design, largely due to the increased accuracy of so-called enhanced SQM approaches. These methods make use of additional terms for treating dispersion (D) and hydrogen bond (H) interactions with an accuracy comparable to dispersion-corrected density functional theory (DFT-D). DFT-D in turn was shown to provide an accuracy comparable to the most sophisticated QM approaches when it comes to non-covalent intermolecular forces, which usually dominate the protein/ligand interactions that are central to virtual drug design. Enhanced SQM methods thus offer a very promising way to improve upon the current state of the art in the field of virtual drug design. PMID:27183985
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia
Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len
2016-01-01
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers’ estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements, since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method
Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.
Weirs, V. Gregory
2014-03-01
This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.
Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.
Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J
2005-01-01
A Monte Carlo study of the Feynman-alpha method, using a simple code simulating the multiplication chain and confined to the pertinent time-dependent phenomena, has been done. The significance of its key parameters (detector efficiency and dead time, k-source and spallation neutron multiplicities, required number of fissions, etc.) has been discussed. It has been demonstrated that this method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
Analysis of the discontinuous Galerkin method applied to the European option pricing problem
NASA Astrophysics Data System (ADS)
Hozman, J.
2013-12-01
In this paper we deal with a numerical solution of a one-dimensional Black-Scholes partial differential equation, an important scalar nonstationary linear convection-diffusion-reaction equation describing the pricing of European vanilla options. We present a derivation of the numerical scheme based on the space semidiscretization of the model problem by the discontinuous Galerkin method with nonsymmetric stabilization of diffusion terms and with the interior and boundary penalty. The main attention is paid to the investigation of a priori error estimates for the proposed scheme. The appended numerical experiments illustrate the theoretical results and, consequently, the potency of the method.
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.
Teaching to Think: Applying the Socratic Method outside the Law School Setting
ERIC Educational Resources Information Center
Peterson, Evan
2009-01-01
An active learning process has the potential to provide educational benefits above and beyond what students might receive from more traditional, passive approaches. The Socratic Method is a unique approach to active learning that facilitates critical thinking, open-mindedness, and teamwork. By posing a series of guided questions to students, an…
ERIC Educational Resources Information Center
Merson, Thomas B.
This report describes a 3-week research training institute supported by USOE funds, which was held at the University of California at Los Angeles, July 1967. It was designed to increase the competence of junior college research directors and staff. The method of recruiting and selecting the trainees is explained. Thirty-eight trainees from 12…
The Feasibility of Applying PBL Teaching Method to Surgery Teaching of Chinese Medicine
ERIC Educational Resources Information Center
Tang, Qianli; Yu, Yuan; Jiang, Qiuyan; Zhang, Li; Wang, Qingjian; Huang, Mingwei
2008-01-01
The traditional classroom teaching mode is based on the content of the subject, takes the teacher as the center and gives priority to classroom instruction. While PBL (Problem Based Learning) teaching method breaches the traditional mode, combining the basic science with clinical practice and covering the process from discussion to self-study to…
Automatic and efficient methods applied to the binarization of a subway map
NASA Astrophysics Data System (ADS)
Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan
2015-12-01
The purpose of this paper is the study of efficient methods for image binarization. The objective of the work is the binarization of metro maps; the goal is to binarize while preventing noise from disturbing the reading of the subway stations. Different methods have been tested. Among them, the method given by Otsu yields particularly interesting results. The difficulty of binarization lies in the choice of the threshold, in order to reconstruct an image as close as possible to reality. Vectorization is a step subsequent to binarization. It consists of retrieving the coordinates of the points containing information and storing them in two matrices, X and Y. Subsequently, these matrices can be exported to a 'CSV' (Comma Separated Value) file format, enabling us to process them in a variety of software packages, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested "for" loops; "for" loops are poorly supported by Matlab, especially when nested. This penalizes the computation time, but it seems to be the only method for doing this.
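Otsu's method picks the threshold that maximizes the between-class variance of the grayscale histogram. The sketch below is a compact NumPy implementation applied to a synthetic bimodal image standing in for a map scan; the image content and intensity values are invented for illustration.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # cumulative mean up to level t
    mu_t = mu[-1]                          # global mean
    # Between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

# Bimodal test image: dark "background" with a bright square of "stations"
rng = np.random.default_rng(1)
img = rng.normal(60, 10, size=(100, 100))
img[40:60, 40:60] = rng.normal(200, 10, size=(20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img > t
print(t)
```

The vectorized cumulative sums replace the nested loops the abstract mentions, which is the usual way to make this computation fast in array languages like Matlab or NumPy.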
ERIC Educational Resources Information Center
Grammatikopoulos, Vasilis; Zachopoulou, Evridiki; Tsangaridou, Niki; Liukkonen, Jarmo; Pickup, Ian
2008-01-01
The body of research relating to assessment in education suggests that professional developers and seminar administrators have generally paid little attention to evaluation procedures. Scholars have also been critical of evaluations which use a single data source and have favoured the use of a multiple method design to generate a complete picture…
3D magnetospheric parallel hybrid multi-grid method applied to planet-plasma interactions
NASA Astrophysics Data System (ADS)
Leclercq, L.; Modolo, R.; Leblanc, F.; Hess, S.; Mancini, M.
2016-03-01
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet-plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid subsequently entering a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfvén wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
Applying the Deming Method to Higher Education for More Effective Human Resource Management.
ERIC Educational Resources Information Center
Miller, Richard I., Ed.
This book discusses the application of the Deming Management Method to higher education in order to improve the management practices and operations of American colleges and universities. The contributing articles are as follows: (1) "The Parable of the Red Beads" (Joseph A. Burke); (2) Constancy of Purpose for the Improvement of Product and…
ERIC Educational Resources Information Center
Cruzeiro, Vinícius Wilian D.; Roitberg, Adrian; Polfer, Nicolas C.
2016-01-01
In this work we are going to present how an interactive platform can be used as a powerful tool to allow students to better explore a foundational problem in quantum chemistry: the application of the variational method to the dihydrogen molecule using simple Gaussian trial functions. The theoretical approach for the hydrogen atom is quite…
ERIC Educational Resources Information Center
Toland, John; Boyle, Christopher
2008-01-01
This study involves the use of methods derived from cognitive behavioral therapy (CBT) to change the attributions for success and failure of school children with regard to learning. Children with learning difficulties and/or motivational and self-esteem difficulties (n = 29) were identified by their schools. The children then took part in twelve…
ERIC Educational Resources Information Center
Goodson-Espy, Tracy; Cifarelli, Victor V.; Pugalee, David; Lynch-Davis, Kathleen; Morge, Shelby; Salinas, Tracie
2014-01-01
This study explored how mathematics content and methods courses for preservice elementary and middle school teachers could be improved through the integration of a set of instructional materials based on the National Assessment of Educational Progress (NAEP). A set of eight instructional modules was developed and tested. The study involved 7…
Applying Qualitative Methods in Organizations: A Note for Industrial/Organizational Psychologists
ERIC Educational Resources Information Center
Ehigie, Benjamin Osayawe; Ehigie, Rebecca Ibhaguelo
2005-01-01
Early approach to research in industrial and organizational (I/O) psychology was oriented towards quantitative techniques as a result of influences from the social sciences. As the focus of I/O psychology expands from psychological test development to other personnel functions, there has been an inclusion of qualitative methods in I/O psychology…
NASA Astrophysics Data System (ADS)
Michalak, Anna M.; Kitanidis, Peter K.
2004-08-01
As the incidence of groundwater contamination continues to grow, a number of inverse modeling methods have been developed to address forensic groundwater problems. In this work the geostatistical approach to inverse modeling is extended to allow for the recovery of the antecedent distribution of a contaminant at a given point back in time, which is critical to the assessment of historical exposure to contamination. Such problems are typically strongly underdetermined, with a large number of points at which the distribution is to be estimated. To address this challenge, the computational efficiency of the new method is increased through the application of the adjoint state method. In addition, the adjoint problem is presented in a format that allows for the reuse of existing groundwater flow and transport codes as modules in the inverse modeling algorithm. As demonstrated in the presented applications, the geostatistical approach combined with the adjoint state method allow for a historical multidimensional contaminant distribution to be recovered even in heterogeneous media, where a numerical solution is required for the forward problem.
Nanoemulsions prepared by a low-energy emulsification method applied to edible films
Technology Transfer Automated Retrieval System (TEKTRAN)
Catastrophic phase inversion (CPI) was used as a low-energy emulsification method to prepare oil-in-water (O/W) nanoemulsions in a lipid (Acetem)/water/nonionic surfactant (Tween 60) system. CPIs in which water-in-oil emulsions (W/O) are transformed into oil-in-water emulsions (O/W) were induced by ...
Transfer matrix method applied to the parallel assembly of sound absorbing materials.
Verdière, Kévin; Panneton, Raymond; Elkoun, Saïd; Dupont, Thomas; Leclaire, Philippe
2013-12-01
The transfer matrix method (TMM) is used conventionally to predict the acoustic properties of laterally infinite homogeneous layers assembled in series to form a multilayer. In this work, a parallel assembly process of transfer matrices is used to model heterogeneous materials such as patchworks, acoustic mosaics, or a collection of acoustic elements in parallel. In this method, it is assumed that each parallel element can be modeled by a 2 × 2 transfer matrix, and no diffusion exists between elements. The resulting transfer matrix of the parallel assembly is also a 2 × 2 matrix that can be assembled in series with the classical TMM. The method is validated by comparison with finite element (FE) simulations and acoustical tube measurements on different parallel/series configurations at normal and oblique incidence. The comparisons are in terms of sound absorption coefficient and transmission loss on experimental, simulated, and published data, notably data on a parallel array of resonators. From these comparisons, the limitations of the method are discussed. Finally, applications to three-dimensional geometries are studied, where the geometries are discretized as in a FE concept. Compared to FE simulations, the extended TMM yields similar results with a trivial computation time.
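One common way to combine two-ports in parallel, consistent with the equal-pressure, summed-flow assumption described above, is to convert each 2 × 2 transfer matrix to an admittance matrix, take a surface-ratio-weighted sum, and convert back. The sketch below is a hedged illustration of that idea, not the paper's code; the fluid-layer parameters are arbitrary.

```python
import numpy as np

def tm_to_admittance(T):
    """Convert [[p1],[v1]] = T [[p2],[v2]] into [[v1],[v2]] = Y [[p1],[p2]]."""
    (T11, T12), (T21, T22) = T
    return np.array([[T22 / T12, T21 - T11 * T22 / T12],
                     [1.0 / T12, -T11 / T12]], dtype=complex)

def admittance_to_tm(Y):
    """Inverse conversion, back to the transfer-matrix form."""
    (Y11, Y12), (Y21, Y22) = Y
    return np.array([[-Y22 / Y21, 1.0 / Y21],
                     [Y12 - Y11 * Y22 / Y21, Y11 / Y21]], dtype=complex)

def parallel_assembly(Ts, ratios):
    """Surface-ratio-weighted admittance sum: equal pressure on each face,
    volume flows add in proportion to the surface ratios."""
    Y = sum(r * tm_to_admittance(T) for T, r in zip(Ts, ratios))
    return admittance_to_tm(Y)

def fluid_layer(k, Z, d):
    """Classical transfer matrix of a fluid layer (wavenumber k, impedance Z,
    thickness d), usable in series multiplication as in the standard TMM."""
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

T = fluid_layer(k=10.0, Z=415.0, d=0.05)
# Sanity check: two identical elements in parallel reproduce the single element
T_par = parallel_assembly([T, T], [0.5, 0.5])
print(np.allclose(T_par, T))  # True
```

The resulting matrix is again 2 × 2, so, as the abstract notes, a parallel patch can be chained with further layers by ordinary series matrix multiplication.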
ASIC and FPGA based DPWM architectures for single-phase and single-output DC-DC converter: a review
NASA Astrophysics Data System (ADS)
Chander, Subhash; Agarwal, Pramod; Gupta, Indra
2013-12-01
Pulse width modulation (PWM) has been widely used in power converter control. This paper presents a review of architectures of Digital Pulse Width Modulators (DPWM) targeting digital control of switching DC-DC converters. An attempt is made to review the reported architectures with emphasis on ASIC and FPGA implementations in single-phase and single-output DC-DC converters. Recent architectures using advanced FPGA resources to achieve resolution higher than classical methods have also been discussed. The merits and demerits of different architectures, and their relative comparative performance, are also presented. The authors' intention is to uncover the groundwork and the related references through this review for the benefit of readers and researchers targeting different DPWM architectures for DC-DC converters.
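The classical baseline the review refers to is the counter-comparator DPWM: an n-bit counter is compared against a duty register, and the resolution is fixed at 1/2ⁿ of the switching period. A minimal behavioral model (illustrative, not any specific reviewed architecture):

```python
# Counter-comparator DPWM: an n-bit free-running counter is compared against
# a duty-cycle register; the output is high while counter < duty_code.
# Resolution is one counter tick, i.e. 1/2**n_bits of the period.
def dpwm_waveform(duty_code, n_bits=8):
    period = 1 << n_bits
    return [1 if count < duty_code else 0 for count in range(period)]

wave = dpwm_waveform(duty_code=64, n_bits=8)
print(sum(wave) / len(wave))  # effective duty cycle: 64/256 = 0.25
```

Because the counter must run at 2ⁿ times the switching frequency, each extra bit of resolution doubles the clock requirement; this is the pressure that motivates the hybrid delay-line and FPGA-resource-based architectures the review discusses.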
Advanced signal processing methods applied to guided waves for wire rope defect detection
NASA Astrophysics Data System (ADS)
Tse, Peter W.; Rostami, Javad
2016-02-01
Steel wire ropes, which are usually composed of a polymer core enclosed by twisted wires, are used to hoist heavy loads. These loads are carried by different structures such as clamshells, draglines, elevators, etc. Since the loading of these structures is dynamic, the ropes work under fluctuating forces in a corrosive environment. This consequently leads to progressive loss of the metallic cross-section due to abrasion and corrosion. These defects can be seen in the form of roughened and pitted rope surfaces, reduction in diameter, and broken wires. Therefore, their deterioration must be monitored so that any unexpected damage or corrosion can be detected before it causes a fatal accident. This is of vital importance in the case of passenger transportation, particularly in elevators, in which any failure may cause a catastrophic disaster. At present, the widely used methods for thorough inspection of wire ropes include visual inspection and magnetic flux leakage (MFL). The reliability of the first method is questionable, since it depends only on the operators' eyes, which cannot determine the integrity of internal wires. The latter method has the drawback of being a point-by-point and time-consuming inspection method. Ultrasonic guided wave (UGW) based inspection, which has proved its capability in inspecting plate-like structures as well as tubes and pipes, can monitor the cross-section of wire ropes along their entire length from a single point. However, UGW have drawn less attention for defect detection in wire ropes. This paper reports the condition monitoring of a steel wire rope from a hoisting elevator with broken wires resulting from a corrosive environment and fatigue. Experiments were conducted to investigate the efficiency of using magnetostrictive-based UGW for rope defect detection. The obtained signals were analyzed by two time-frequency representation (TFR) methods, namely the Short Time Fourier Transform (STFT) and the Wavelet analysis. The location of
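A time-frequency representation such as the STFT localizes a guided-wave reflection in both time (hence distance along the rope) and frequency. The sketch below applies SciPy's STFT to a synthetic tone burst standing in for a defect echo; the sampling rate, burst frequency, and arrival time are invented for illustration.

```python
import numpy as np
from scipy.signal import stft

fs = 50_000                      # Hz, illustrative sampling rate
t = np.arange(0, 0.02, 1 / fs)
# Toy "reflection": a Gaussian-windowed 5 kHz tone burst arriving at t = 10 ms
burst = np.sin(2 * np.pi * 5000 * t) * np.exp(-((t - 0.01) / 0.001) ** 2)

f, tau, Z = stft(burst, fs=fs, nperseg=128)
# The TFR magnitude should peak near (5 kHz, 10 ms)
i, j = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)
print(f[i], tau[j])
```

With a known group velocity for the excited wave mode, the peak's arrival time maps directly to the defect's position along the rope, which is what makes single-point inspection of the whole length possible.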
Auxiliary resonant DC tank converter
Peng, Fang Z.
2000-01-01
An auxiliary resonant dc tank (ARDCT) converter is provided for achieving soft-switching in a power converter. An ARDCT circuit is coupled directly across a dc bus to the inverter to generate a resonant dc bus voltage, including upper and lower resonant capacitors connected in series as a resonant leg, first and second dc tank capacitors connected in series as a tank leg, and an auxiliary resonant circuit comprising a series combination of a resonant inductor and a pair of auxiliary switching devices. The ARDCT circuit further includes first clamping means for holding the resonant dc bus voltage to the dc tank voltage of the tank leg, and second clamping means for clamping the resonant dc bus voltage to zero during a resonant period. The ARDCT circuit resonantly brings the dc bus voltage to zero in order to provide a zero-voltage switching opportunity for the inverter, then quickly rebounds the dc bus voltage back to the dc tank voltage after the inverter changes state. The auxiliary switching devices are turned on and off under zero-current conditions. The ARDCT circuit only absorbs ripples of the inverter dc bus current, thus having less current stress. In addition, since the ARDCT circuit is coupled in parallel with the dc power supply and the inverter for merely assisting soft-switching of the inverter without participating in real dc power transmission and power conversion, malfunction and failure of the tank circuit will not affect the functional operation of the inverter; thus a highly reliable converter system is expected.
A quality function deployment method applied to highly reusable space transportation
Zapata, E.
1997-01-01
This paper will describe a Quality Function Deployment (QFD) currently in work, the goal of which is to add definition and insight to the development of long-term Highly Reusable Space Transportation (HRST). The objective here is twofold: first, to describe the process, the actual QFD experience as it applies to the HRST study; second, to describe the preliminary results of this process, in particular the assessment of possible directions for future pursuit, such as promising candidate technologies or approaches that may finally open the space frontier. The iterative and synergistic nature of QFD provides opportunities in the process for the discovery of what is key in so far as it is useful, what is not, and what is merely true. Key observations on the QFD process will be presented. The importance of a customer definition, as well as the similarity of the process of developing a technology portfolio to product development, will be shown. Also, the relation of identified cost and operating drivers to future space vehicle designs that are robust to an uncertain future will be discussed. The results of this HRST evaluation in particular will be preliminary, given the somewhat long-term (or perhaps not?) nature of the task being considered. © 1997 American Institute of Physics.
An ISO-surface folding analysis method applied to premature neonatal brain development
NASA Astrophysics Data System (ADS)
Rodriguez-Carranza, Claudia E.; Rousseau, Francois; Iordanova, Bistra; Glenn, Orit; Vigneron, Daniel; Barkovich, James; Studholme, Colin
2006-03-01
In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.
Improved Methods for Identifying, Applying, and Verifying Industrial Energy Efficiency Measures
NASA Astrophysics Data System (ADS)
Harding, Andrew Chase
Energy efficiency is the least expensive source of additional energy capacity for today's global energy expansion. Energy efficiency offers additional benefits of cost savings for consumers, reduced environmental impacts, and enhanced energy security. The challenges of energy efficiency include identifying potential efficiency measures, quantifying savings, determining cost effectiveness, and verifying savings of installed measures. This thesis presents three separate chapters which address these challenges. The first is a paper presented at the 2014 industrial energy technology conference (IETC) that details a compressed air system project using the systems approach to identify cost effective measures, energy intensity to project savings, and proper measurement and verification (M&V) practices to prove that the savings were achieved. The second is a discussion of proper M&V techniques, how these apply to international M&V protocols, and how M&V professionals can improve the accuracy and efficacy of their M&V activities. The third is an energy intensity analysis of a poultry processing facility at a unit operations level, which details the M&V practices used to determine the intensities at each unit operation and compares these to previous works.
NASA Astrophysics Data System (ADS)
Jacka, Lukas; Pavlasek, Jirka; Pech, Pavel
2016-04-01
In order to obtain infiltration parameters and analytical expressions of the cumulative infiltration and infiltration rate, raw infiltration data are often evaluated using various infiltration equations. Knowledge about the evaluation variability of these equations in the specific case of extremely heterogeneous soils provides important information for many hydrological and engineering applications. This contribution presents an evaluation of measured data using five well-established physically-based equations and empirical equations, and makes a comparison of these procedures. Evaluation procedures were applied to datasets measured on three different sites of hydrologically important mountain podzols. A total of 47 single ring infiltration experiments were evaluated using these procedures. From the quality-of-fit perspective, all of the tested equations characterized most of the raw datasets properly. In a few cases, some of the physically-based equations led to poor fits of the datasets measured on the most heterogeneous site (characterized by the lowest depth of the organic horizon, and more bleached eluvial horizon than on the other tested sites). For the parameters evaluated on this site, the sorptivity estimates and the saturated hydraulic conductivity (Ks) estimates were distinctly different between the tested procedures.
The Enzyme Portal: a case study in applying user-centred design methods in bioinformatics.
de Matos, Paula; Cham, Jennifer A; Cao, Hong; Alcántara, Rafael; Rowland, Francis; Lopez, Rodrigo; Steinbeck, Christoph
2013-03-20
User-centred design (UCD) is a type of user interface design in which the needs and desires of users are taken into account at each stage of the design process for a service or product, often for software applications and websites. Its goal is to facilitate the design of software that is both useful and easy to use. To achieve this, you must characterise users' requirements, design suitable interactions to meet their needs, and test your designs using prototypes and real-life scenarios. For bioinformatics, there is little practical information available regarding how to carry out UCD in practice. To address this we describe a complete, multi-stage UCD process used for creating a new bioinformatics resource for integrating enzyme information, called the Enzyme Portal (http://www.ebi.ac.uk/enzymeportal). This freely-available service mines and displays data about proteins with enzymatic activity from public repositories via a single search, and includes biochemical reactions, biological pathways, small molecule chemistry, disease information, 3D protein structures and relevant scientific literature. We employed several UCD techniques, including: persona development, interviews, 'canvas sort' card sorting, user workflows, usability testing and others. Our hope is that this case study will motivate the reader to apply similar UCD approaches to their own software design for bioinformatics. Indeed, we found the benefits included more effective decision-making for design ideas and technologies; enhanced team-working and communication; cost effectiveness; and ultimately a service that more closely meets the needs of our target audience.
[Paternity study applying DNA polymorphism: evaluation of methods traditionally used in Chile].
Armanet, L; Aguirre, R; Vargas, J; Llop, E; Castillo, S; Cifuentes, L
1995-05-01
Simultaneous detection of several VNTR loci using a single DNA probe is the basis of the technique called "DNA fingerprint" (DNAfp), of increasing application in parenthood identification. According to data gathered by different laboratories worldwide, father exclusion can be made in a larger number of cases than with the customary tests based on erythrocyte antigens. The question is then whether DNAfp will completely replace erythrocyte antigen tests. We report here our experience in applying DNAfp to 92 samples corresponding to 34 paternity cases, comparing the results with those obtained with the antigens of the ABO, Rh, MNSs, Duffy and Kidd systems. Most of the HaeIII-digested DNA samples produced 13 to 16 bands larger than 4.3 kb (average 14.0761 +/- 2.205). Average band sharing between pairs of unrelated individuals was 1.9107 +/- 1.083. Two cases presenting an a posteriori probability of being the father of 80.7% and 76.5% by erythrocyte antigens were clearly excluded by DNAfp. All exclusions made by antigens were confirmed by DNAfp. In the 28 cases reported as father "rather probable" by DNAfp, the alleged fathers shared 6.7407 +/- 1.7 bands with the child on average. Because of time, cost and simplicity we favor a procedure starting with the antigen tests and continuing with DNAfp only when an exclusion is not possible. Economy will increase as the number of exclusions increases.
Error-free pathology: applying lean production methods to anatomic pathology.
Condel, Jennifer L; Sharbaugh, David T; Raab, Stephen S
2004-12-01
The current state of our health care system calls for dramatic changes. In their pathology department, the authors believe these changes may be accomplished by accepting the long-term commitment of applying a lean production system. The ideal state of zero pathology errors is one that should be pursued by consistently asking, "Why can't we?" The philosophy of lean production systems began in the manufacturing industry: "All we are doing is looking at the time from the moment the customer gives us an order to the point when we collect the cash. And we are reducing that time line by removing non-value added wastes". The ultimate goals in pathology and overall health care are not so different. The authors' intention is to provide the patient (customer) with the most accurate diagnostic information in a timely and efficient manner. Their lead histotechnologist recently summarized this philosophy: she indicated that she felt she could sleep better at night knowing she truly did the best job she could. Her chances of making an error (in cutting or labeling) were dramatically decreased in the one-by-one continuous flow work process compared with previous practices. By designing a system that enables employees to be successful in meeting customer demand, and by empowering the frontline staff in the development and problem solving processes, one can meet the challenges of eliminating waste and build an improved, efficient system. PMID:15555747
A quality function deployment method applied to highly reusable space transportation
NASA Astrophysics Data System (ADS)
Zapata, Edgar
1997-01-01
This paper describes a Quality Function Deployment (QFD) exercise currently in work, the goal of which is to add definition and insight to the development of long-term Highly Reusable Space Transportation (HRST). The objective here is twofold: first, to describe the process, the actual QFD experience as applied to the HRST study; second, to describe the preliminary results of this process, in particular the assessment of possible directions for future pursuit, such as promising candidate technologies or approaches that may finally open the space frontier. The iterative and synergistic nature of QFD provides opportunities in the process for discovering what is key in so far as it is useful, what is not, and what is merely true. Key observations on the QFD process will be presented. The importance of a customer definition, as well as the similarity of developing a technology portfolio to product development, will be shown. Also, the relation of identified cost and operating drivers to future space vehicle designs that are robust to an uncertain future will be discussed. The results of this HRST evaluation are preliminary, given the somewhat long-term (or perhaps not?) nature of the task being considered.
An automatic image-based modelling method applied to forensic infography.
Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David
2015-01-01
This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3d model. PMID:25793628
General algebraic method applied to control analysis of complex engine types
NASA Technical Reports Server (NTRS)
Boksenbom, Aaron S; Hood, Richard
1950-01-01
A general algebraic method of attack on the problem of controlling gas-turbine engines having any number of independent variables was utilized employing operational functions to describe the assumed linear characteristics for the engine, the control, and the other units in the system. Matrices were used to describe the various units of the system, to form a combined system showing all effects, and to form a single condensed matrix showing the principal effects. This method directly led to the conditions on the control system for noninteraction so that any setting disturbance would affect only its corresponding controlled variable. The response-action characteristics were expressed in terms of the control system and the engine characteristics. The ideal control-system characteristics were explicitly determined in terms of any desired response action.
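The report's noninteraction condition amounts to requiring the combined engine-times-control matrix to be diagonal. As a toy static-gain illustration (the report works with operational functions, i.e., transfer functions, and the gain matrix below is hypothetical), choosing the control matrix as the engine inverse times the desired diagonal response enforces decoupling:

```python
# Static 2x2 sketch of the noninteraction idea: pick C = inv(E) * D so that
# the combined matrix E*C is diagonal, meaning each setting disturbance
# affects only its own controlled variable. E is a made-up engine gain matrix.

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E = [[2.0, 0.5],    # engine: settings -> controlled variables (coupled)
     [0.3, 1.0]]
D = [[1.0, 0.0],    # desired diagonal (noninteracting) response
     [0.0, 1.0]]

C = matmul(inv2(E), D)    # control matrix that decouples the loops
closed = matmul(E, C)     # combined system; should equal D (diagonal)
print(closed)
```

The same algebra carries over when the entries are operational functions rather than constants, which is essentially what the matrix formulation in the report exploits.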
Electrochemical noise methods applied to the study of organic coatings and pretreatments
Bierwagen, G.P.; Tallman, D.E.; Touzain, S.; Smith, A.; Twite, R.; Balbyshev, V.; Pae, Y.
1998-12-31
The use of electrochemical noise methods (ENM) to examine organic coatings was first performed in 1986 by Skerry and Eden. The technique uses the spontaneous voltage and current noise that occurs between two identical coated electrodes in electrolyte immersion to determine the resistance properties of the coating, as well as low-frequency noise impedance data for the system. It is a non-perturbing measurement, and one that allows judgment and ranking of coating system performance. This paper summarizes work in the authors' laboratory over the past five years on the use of ENM for examining the properties of organic coatings and pretreatments on metals. They have studied marine coatings, pipeline coatings, coil coatings, electrodeposited organic coatings (e-coats), and aircraft coatings by this method and found it to be useful, especially when used in conjunction with impedance and other electrochemical techniques.
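A common figure of merit extracted from ENM records is the noise resistance, the ratio of the standard deviations of the voltage and current noise. A minimal sketch with synthetic noise series (placeholders, not measured data):

```python
import math

# Noise resistance from electrochemical noise data: Rn = std(V) / std(I),
# computed from the spontaneous fluctuations measured between two nominally
# identical coated electrodes. The series below are synthetic placeholders.

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

v_noise = [1.0e-3, -0.5e-3, 0.8e-3, -1.1e-3, 0.2e-3]   # volts
i_noise = [2.0e-9, -1.0e-9, 1.6e-9, -2.2e-9, 0.4e-9]   # amperes

Rn = std(v_noise) / std(i_noise)   # ohms; a higher Rn suggests a more protective coating
print(f"{Rn:.3g} ohm")
```

In practice Rn is tracked over immersion time and compared with low-frequency impedance, which is the kind of cross-check the abstract describes.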
Paraxial Wentzel-Kramers-Brillouin method applied to the lower hybrid wave propagation
NASA Astrophysics Data System (ADS)
Bertelli, N.; Maj, O.; Poli, E.; Harvey, R.; Wright, J. C.; Bonoli, P. T.; Phillips, C. K.; Smirnov, A. P.; Valeo, E.; Wilson, J. R.
2012-08-01
The paraxial Wentzel-Kramers-Brillouin (pWKB) approximation, also called the beam tracing method, has been employed in order to study the propagation of lower hybrid waves in a tokamak plasma. Analogous to the well-known ray tracing method, this approach reduces Maxwell's equations to a set of ordinary differential equations while, in addition, retaining the effects of the finite beam cross-section and, thus, the effects of diffraction. A new code, LHBEAM (lower hybrid BEAM tracing), is presented, which solves the pWKB equations in tokamak geometry for arbitrary launching conditions and for analytic and experimental plasma equilibria. In addition, LHBEAM includes linear electron Landau damping for the evaluation of the absorbed power density and the reconstruction of the wave electric field in both physical and Fourier space. Illustrative LHBEAM calculations are presented along with a comparison with the ray tracing code GENRAY and the full wave solver TORIC-LH.
Spherical Elementary Current Systems Method Applied to Geomagnetic Field Modeling for the Adriatic
NASA Astrophysics Data System (ADS)
Vujić, Eugen; Brkić, Mario
2016-08-01
The aim of this work was to derive an accurate regional model of geomagnetic components on the Adriatic. Data of north, east and vertical geomagnetic components at repeat stations and ground survey sites enclosing the Adriatic Sea were used to obtain a geomagnetic model at 2010.5 epoch. The core field was estimated by use of the global Enhanced Magnetic Model, while the crustal field by a mathematical technique for expanding vector systems on a sphere into basis functions, known as spherical elementary current systems method. The results of this method were presented and compared to the crustal field estimations by the Enhanced Magnetic Model. The maps of isolines of the regional model are also presented.
The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2012-01-01
The high-resolution wave-propagation method for computing the nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as the linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated and, 2) the inclusion of source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
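Constant-coefficient 1D acoustics gives a compact illustration of the flux-based wave decomposition described above: the flux difference across an interface is expanded directly in the right eigenvectors of the hyperbolic system, and the Godunov flux picks up only the left-going part. This is a generic sketch with made-up states and material constants, not the atmospheric solver in the paper:

```python
import math

# f-wave decomposition for 1D acoustics, q = (p, u), flux f = (K0*u, p/rho0).
# The Jacobian has eigenvalues -c and +c with right eigenvectors
# r- = (-Z, 1) and r+ = (Z, 1), where Z = rho0*c is the impedance.

K0, rho0 = 4.0, 1.0            # bulk modulus, density (hypothetical)
c = math.sqrt(K0 / rho0)       # sound speed
Z = rho0 * c                   # impedance

def flux(q):
    p, u = q
    return (K0 * u, p / rho0)

def godunov_flux(ql, qr):
    fl, fr = flux(ql), flux(qr)
    df = (fr[0] - fl[0], fr[1] - fl[1])          # flux difference
    # Solve df = beta_minus*r- + beta_plus*r+ (a 2x2 linear system).
    beta_plus = (df[0] + Z * df[1]) / (2 * Z)    # wave with speed +c
    beta_minus = df[1] - beta_plus               # wave with speed -c
    # Only the left-going wave contributes on top of f(ql).
    return (fl[0] + beta_minus * (-Z), fl[1] + beta_minus * 1.0)

F = godunov_flux((1.0, 0.0), (0.0, 0.0))   # unit pressure jump, fluid at rest
print(F)
```

For this Riemann problem the interface flux matches the exact acoustic solution (interface pressure 0.5, velocity 0.25, hence flux (1.0, 0.5)), and no Roe matrix ever had to be formed explicitly, which is one of the two advantages named above.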
Method of preparing and applying single stranded DNA probes to double stranded target DNAs in situ
Gray, J.W.; Pinkel, D.
1991-07-02
A method is provided for producing single stranded non-self-complementary nucleic acid probes, and for treating target DNA for use therewith. The probe is constructed by treating DNA with a restriction enzyme and an exonuclease to form template/primers for a DNA polymerase. The digested strand is resynthesized in the presence of labeled nucleoside triphosphate precursor. Labeled single stranded fragments are separated from the resynthesized fragments to form the probe. Target DNA is treated with the same restriction enzyme used to construct the probe, and is treated with an exonuclease before application of the probe. The method significantly increases the efficiency and specificity of hybridization mixtures by increasing effective probe concentration by eliminating self-hybridization between both probe and target DNAs, and by reducing the amount of target DNA available for mismatched hybridizations.
Method of preparing and applying single stranded DNA probes to double stranded target DNAs in situ
Gray, Joe W.; Pinkel, Daniel
1991-01-01
A method is provided for producing single stranded non-self-complementary nucleic acid probes, and for treating target DNA for use therewith. Probe is constructed by treating DNA with a restriction enzyme and an exonuclease to form template/primers for a DNA polymerase. The digested strand is resynthesized in the presence of labeled nucleoside triphosphate precursor. Labeled single stranded fragments are separated from the resynthesized fragments to form the probe. Target DNA is treated with the same restriction enzyme used to construct the probe, and is treated with an exonuclease before application of the probe. The method significantly increases the efficiency and specificity of hybridization mixtures by increasing effective probe concentration by eliminating self-hybridization between both probe and target DNAs, and by reducing the amount of target DNA available for mismatched hybridizations.
Autonomous Correction of Sensor Data Applied to Building Technologies Using Filtering Methods
Castello, Charles C; New, Joshua Ryan; Smith, Matt K
2013-01-01
Sensor data validity is extremely important in a number of applications, particularly building technologies, where collected data are used to determine performance. An example of this is Oak Ridge National Laboratory's ZEBRAlliance research project, which consists of four single-family homes located in Oak Ridge, TN. The homes are outfitted with a total of 1,218 sensors to determine the performance of a variety of different technologies integrated within each home. Issues arise with such a large number of sensors, such as missing or corrupt data. This paper aims to eliminate these problems using (1) Kalman filtering and (2) linear prediction filtering techniques. Five types of data are the focus of this paper: (1) temperature; (2) humidity; (3) energy consumption; (4) pressure; and (5) airflow. Simulations show the Kalman filtering method performed best in predicting temperature, humidity, pressure, and airflow data, while the linear prediction filtering method performed best with energy consumption data.
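A scalar Kalman filter with a random-walk state model is the simplest version of the first technique: predict, then update only when a reading exists, so missing samples are filled with the latest state estimate. A minimal sketch (the noise parameters and temperature series are made up, not the ZEBRAlliance configuration):

```python
# Scalar Kalman filter for smoothing/gap-filling a noisy sensor series.
# State model: random walk (x stays put, variance grows by q each step).
# Measurements that are None (missing/corrupt) are skipped, so the filter
# carries the last estimate across gaps.

def kalman_fill(measurements, q=0.01, r=1.0, x0=0.0, p0=100.0):
    """measurements: list of floats or None. Returns one estimate per sample."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                   # predict: variance grows
        if z is not None:           # update only when a reading exists
            k = p / (p + r)         # Kalman gain
            x = x + k * (z - x)
            p = (1 - k) * p
        out.append(x)
    return out

series = [20.1, 20.3, None, None, 20.8, 21.0]   # two corrupt samples dropped
est = kalman_fill(series)
print([round(v, 2) for v in est])
```

With a large initial variance the filter snaps to the first reading quickly, then tracks the series; during the gap the estimate is simply held, which is the desired behavior for short dropouts.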
Hasz, Wayne Charles; Sangeeta, D
2006-04-18
A method for applying a bond coat on a metal-based substrate is described. A slurry which contains braze material and a volatile component is deposited on the substrate. The slurry can also include bond coat material. Alternatively, the bond coat material can be applied afterward, in solid form or in the form of a second slurry. The slurry and bond coat are then dried and fused to the substrate. A repair technique using this slurry is also described, along with related compositions and articles.
Hasz, Wayne Charles; Sangeeta, D
2002-01-01
A method for applying a bond coat on a metal-based substrate is described. A slurry which contains braze material and a volatile component is deposited on the substrate. The slurry can also include bond coat material. Alternatively, the bond coat material can be applied afterward, in solid form or in the form of a second slurry. The slurry and bond coat are then dried and fused to the substrate. A repair technique using this slurry is also described, along with related compositions and articles.
Hasz, Wayne Charles; Borom, Marcus Preston
2002-01-01
A method for applying at least one bond coating on a surface of a metal-based substrate is described. A foil of the bond coating material is first attached to the substrate surface and then fused thereto, e.g., by brazing. The foil is often initially prepared by thermally spraying the bond coating material onto a removable support sheet, and then detaching the support sheet. Optionally, the foil may also include a thermal barrier coating applied over the bond coating. The substrate can be a turbine engine component.
Manufacturing development of DC-10 advanced rudder
NASA Technical Reports Server (NTRS)
Cominsky, A.
1979-01-01
The design, manufacture, and ground test activities during development of production methods for an advanced composite rudder for the DC-10 transport aircraft are described. The advanced composite aft rudder is satisfactory for airline service and a cost saving in a full production manufacturing mode is anticipated.
Kachenoura, Amar; Albera, Laurent; Senhadji, Lotfi
2007-01-01
Blind Source Separation (BSS) problems, under the assumption of static mixture, were extensively explored from the theoretical point of view. Powerful algorithms are now at hand to deal with many concrete BSS applications. Nevertheless, the performances of BSS methods, for a given biomedical application, are rarely investigated. The aim of this paper is to perform quantitative comparisons between various well-known BSS techniques. To do so, synthetic data, reproducing real polysomnographic recordings, are considered. PMID:18002843
New image fusion method applied in two-wavelength detection of biochip spots
NASA Astrophysics Data System (ADS)
Chang, Rang-Seng; Sheu, Jin-Yi; Lin, Ching-Huang
2001-09-01
In biological systems, genetic information is read, stored, modified, transcribed and translated using the rules of molecular recognition. Every nucleic acid strand carries the capacity to recognize complementary sequences through base pairing. Molecular biologists commonly use DNA probes of known sequence to identify unknown sequences through hybridization. There are many different methods for detecting the hybridization results on a genechip; fluorescent detection is the conventional one, with the unknown sequence labeled with a fluorescent material. Data analysis based on the fluorescent images, together with database establishment, is necessary to handle the large amount of data obtained from a genechip. Because the excitation and emission bands are not ideally narrow, the emission windows differ between microscopes, and the data read from the same spot therefore differ as well. We combine data from two narrow emission bands and treat them as two wavelengths from one fluorophore: after reading, under the corresponding UV excitation, the fluorescent intensity distributions at the two microscope wavelengths for the same hybridized DNA spot, we use image fusion technology to obtain the best result. We introduce a contrast and aberration correction image fusion method based on the discrete wavelet transform for two-wavelength identification of microarray biochips. The method includes two parts. First, the multiresolution analyses of the two input images are obtained by the discrete wavelet transform; from the ratio of the high frequencies to the low frequency on the corresponding spatial resolution level, the directive contrast can be estimated by selecting the suitable subband signals of each input image. The fused image is then reconstructed using the inverse wavelet transform.
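The transform-fuse-invert pattern can be shown in miniature with a one-level 1D Haar transform: average the approximation coefficients, keep the larger-magnitude detail coefficient (a crude stand-in for the paper's directive-contrast rule), then invert. The signals are toy spot profiles, not biochip data:

```python
import math

# One-level 1D Haar wavelet fusion sketch. Approximation coefficients are
# averaged; for each detail coefficient the larger-magnitude one is kept,
# so the sharper features of either input survive in the fused result.

S2 = math.sqrt(2.0)

def haar(x):                      # x must have even length
    approx = [(x[2 * i] + x[2 * i + 1]) / S2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / S2 for i in range(len(x) // 2)]
    return approx, detail

def ihaar(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / S2, (a - d) / S2])
    return out

def fuse(x, y):
    ax, dx = haar(x)
    ay, dy = haar(y)
    a = [(u + v) / 2 for u, v in zip(ax, ay)]                 # average approximations
    d = [u if abs(u) >= abs(v) else v for u, v in zip(dx, dy)]  # max-abs details
    return ihaar(a, d)

img1 = [1.0, 1.0, 4.0, 0.0]       # "wavelength 1" spot profile (toy)
img2 = [1.0, 1.0, 2.0, 2.0]       # "wavelength 2" spot profile (toy)
fused = fuse(img1, img2)
print(fused)
```

Here the second pair of samples has a strong edge in img1 and none in img2, so the fused signal keeps img1's edge; the paper's method applies the same idea per subband of a full 2D multiresolution decomposition.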
A study of domain decomposition methods applied to the discretized Helmholtz equation
NASA Astrophysics Data System (ADS)
Tramel, Robert Wallace
2001-09-01
In this work a domain decomposition based preconditioner of the additive Schwarz type is developed and tested on the linear systems which arise from the application of the Green's Function/Wave Expansion Discretization (GFD/WED) method to Helmholtz's equation. In order to develop the additive Schwarz preconditioner, use is made of a class of one-sided Artificial Radiation Boundary Conditions (ARBCs) developed during the course of this work. These ARBCs are computationally shown to be quite accurate when used on their own. The ARBCs are used to radiatively couple the various sub-domains which naturally arise in domain decomposition based methods, in such a manner as to ensure that the system matrix, when restricted to the sub-domains, is non-singular. In addition, the inter-domain ARBC is constructed such that the solution to the global linear system is unaffected by the presence of the artificial boundaries. The efficacy and efficiency of the method are demonstrated on one-, two-, and three-dimensional test cases.
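The overlapping-subdomain idea can be seen on the simplest possible model problem. The sketch below runs an additive Schwarz iteration (with averaging on the overlap) for the 1D Laplace equation u'' = 0, u(0) = 0, u(1) = 1, whose exact solution is u = x; for Laplace, each local subdomain solve is just linear interpolation between the subdomain's boundary values. This illustrates only the decomposition idea, not the paper's Helmholtz/GFD-WED setup, which requires the radiation coupling it describes:

```python
# Additive Schwarz sketch on u'' = 0, u(0)=0, u(1)=1 (exact: u = x).
# Two overlapping subdomains are solved locally from the current iterate,
# and corrections are averaged where the subdomains overlap.

N = 10                              # grid points 0..N; interior 1..N-1
u = [0.0] * (N + 1)
u[N] = 1.0
subs = [(1, 6), (4, 9)]             # interior index ranges, overlapping on 4..6

for _ in range(200):
    contrib = [[] for _ in range(N + 1)]
    for lo, hi in subs:
        a, b = u[lo - 1], u[hi + 1]          # boundary data from current iterate
        for j in range(lo, hi + 1):          # local harmonic solve = linear interp
            contrib[j].append(a + (b - a) * (j - lo + 1) / (hi - lo + 2))
    for j in range(1, N):                    # additive update, averaged on overlap
        if contrib[j]:
            u[j] = sum(contrib[j]) / len(contrib[j])

err = max(abs(u[j] - j / N) for j in range(N + 1))   # error vs exact u = x
print(err)
```

Each sweep shrinks the error by a fixed factor set by the overlap width, which is the geometric convergence that makes Schwarz methods attractive as preconditioners.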
Multisurface Method of Pattern Separation for Medical Diagnosis Applied to Breast Cytology
NASA Astrophysics Data System (ADS)
Wolberg, William H.; Mangasarian, Olvi L.
1990-12-01
Multisurface pattern separation is a mathematical method for distinguishing between elements of two pattern sets. Each element of the pattern sets is comprised of various scalar observations. In this paper, we use the diagnosis of breast cytology to demonstrate the applicability of this method to medical diagnosis and decision making. Each of 11 cytological characteristics of breast fine-needle aspirates reported to differ between benign and malignant samples was graded 1 to 10 at the time of sample collection. Nine characteristics were found to differ significantly between benign and malignant samples. Mathematically, these values for each sample were represented by a point in a nine-dimensional space of real variables. Benign points were separated from malignant ones by planes determined by linear programming. Correct separation was accomplished in 369 of 370 samples (201 benign and 169 malignant). In the one misclassified malignant case, the fine-needle aspirate cytology was so definitely benign and the cytology of the excised cancer so definitely malignant that we believe the tumor was missed on aspiration. Our mathematical method is applicable to other medical diagnostic and decision-making problems.
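The paper obtains its separating planes by linear programming over nine cytologic features. As a simplified stand-in, a basic perceptron finds a single separating line for toy 2D data (it is guaranteed to terminate here because the toy classes are linearly separable; the feature values are invented, not cytology grades):

```python
# Finding a separating plane for two linearly separable point sets with a
# perceptron. This is a simplification of the paper's linear-programming
# construction; the toy 2D data below are hypothetical.

def perceptron(points, labels, max_epochs=100):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(max_epochs):
        updated = False
        for (x1, x2), y in zip(points, labels):      # y is +1 or -1
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0: # misclassified (or on plane)
                w[0] += y * x1
                w[1] += y * x2
                b += y
                updated = True
        if not updated:                              # all points separated
            break
    return w, b

benign = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]       # toy "benign" features
malignant = [(6.0, 7.0), (7.0, 6.0), (8.0, 8.0)]    # toy "malignant" features
pts = benign + malignant
ys = [-1] * 3 + [1] * 3
w, b = perceptron(pts, ys)
ok = all(y * (w[0] * p[0] + w[1] * p[1] + b) > 0 for p, y in zip(pts, ys))
print(ok)
```

When the sets are not perfectly separable, as with one of the 370 samples in the paper, a single plane cannot classify every point, which is exactly the situation the multisurface (multiple-plane) construction addresses.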
Generalized ensemble method applied to study systems with strong first order transitions
Malolepsza, E.; Kim, J.; Keyes, T.
2015-09-28
At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub, where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. Lastly, the method is illustrated in a study of the very strong solid/liquid transition in water.
Quartic B-spline collocation method applied to Korteweg de Vries equation
NASA Astrophysics Data System (ADS)
Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md
2014-07-01
The Korteweg de Vries (KdV) equation is known as a mathematical model of shallow water waves. The general form of this equation is u_t + εuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, the one-soliton solution of the KdV equation has been obtained numerically using the quartic B-spline collocation method in the displacement x and a finite difference approach in the time t. Two test problems have been identified to be solved. Approximate solutions and errors for these two test problems were obtained for different values of t. In order to assess the accuracy of the method, the L2-norm and L∞-norm have been calculated. The mass, energy and momentum of the KdV equation have also been calculated. The results obtained show that the present method can approximate the solution very well, but as time increases, the L2-norm and L∞-norm also increase.
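The exact one-soliton profile used as a reference in such tests, and the discrete L2/L∞ error norms, can be sketched directly. Here the "numerical" solution is just the exact profile plus a tiny synthetic perturbation, since the point is only to show how the norms are formed (this is not the B-spline scheme itself):

```python
import math

# One-soliton solution of u_t + eps*u*u_x + mu*u_xxx = 0:
#   u(x,t) = (3c/eps) * sech^2( 0.5*sqrt(c/mu) * (x - c*t - x0) )
# (with eps=6, mu=1 this is the familiar (c/2)*sech^2(sqrt(c)/2*(x-ct)) form),
# followed by the discrete L2- and Linf-norms of an error vector.

def soliton(x, t, c=1.0, eps=6.0, mu=1.0, x0=0.0):
    arg = 0.5 * math.sqrt(c / mu) * (x - c * t - x0)
    return (3.0 * c / eps) / math.cosh(arg) ** 2

xs = [0.5 * i - 10.0 for i in range(41)]                  # grid on [-10, 10]
exact = [soliton(x, t=1.0) for x in xs]
numer = [u + 1e-4 * math.sin(i) for i, u in enumerate(exact)]  # fake scheme output

err = [a - b for a, b in zip(numer, exact)]
L2 = math.sqrt(sum(e * e for e in err) / len(err))        # root-mean-square norm
Linf = max(abs(e) for e in err)                           # maximum norm
print(L2, Linf)
```

By construction L2 ≤ L∞ for any error vector, and tracking both against t is exactly how the growth of error over time is reported above.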
Applying formal methods and object-oriented analysis to existing flight software
NASA Technical Reports Server (NTRS)
Cheng, Betty H. C.; Auernheimer, Brent
1993-01-01
Correctness is paramount for safety-critical software control systems. Critical software failures in medical radiation treatment, communications, and defense are familiar to the public. The significant quantity of software malfunctions regularly reported to the software engineering community, the laws concerning liability, and a recent NRC Aeronautics and Space Engineering Board report additionally motivate the use of error-reducing and defect detection software development techniques. The benefits of formal methods in requirements driven software development ('forward engineering') is well documented. One advantage of rigorously engineering software is that formal notations are precise, verifiable, and facilitate automated processing. This paper describes the application of formal methods to reverse engineering, where formal specifications are developed for a portion of the shuttle on-orbit digital autopilot (DAP). Three objectives of the project were to: demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.
Overview of electromagnetic methods applied in active volcanic areas of western United States
NASA Astrophysics Data System (ADS)
Skokan, Catherine K.
1993-06-01
A better understanding of active volcanic areas in the United States through electromagnetic geophysical studies received its foundation from the many surveys done for geothermal exploration in the 1970's. Investigations by governmental, industrial, and academic agencies include (but are not limited to) mapping of the Cascades, the Long Valley/Mono area, the Jemez volcanic field, Yellowstone Park, and an area in Colorado. At Mt. Konocti in the Mayacamas Mountains, California, for example, gravity, magnetic, and seismic methods, as well as electromagnetic methods, have all been used in an attempt to gain a better understanding of the subsurface structure. In each of these volcanic regions, anomalous zones were mapped. When conductive, these anomalies were interpreted as correlated with hydrothermal activity rather than as representing a magma chamber. Electrical and electromagnetic geophysical methods can offer valuable information for the understanding of volcanoes: being the methods most sensitive to changes in temperature, they can best map the heat budget and hydrological character to aid in the prediction of eruptions.
Harden, A.; Garcia, J.; Oliver, S.; Rees, R.; Shepherd, J.; Brunton, G.; Oakley, A.
2004-01-01
Methods for systematic reviews are well developed for trials, but not for non-experimental or qualitative research. This paper describes the methods developed for reviewing research on people's perspectives and experiences ("views" studies) alongside trials within a series of reviews on young people's mental health, physical activity, and healthy eating. Reports of views studies were difficult to locate; could not easily be classified as "qualitative" or "quantitative"; and often failed to meet seven basic methodological reporting standards used in a newly developed quality assessment tool. Synthesising views studies required the adaptation of qualitative analysis techniques. The benefits of bringing together views studies in a systematic way included gaining a greater breadth of perspectives and a deeper understanding of public health issues from the point of view of those targeted by interventions. A systematic approach also aided reflection on study methods that may distort, misrepresent, or fail to pick up people's views. This methodology is likely to create greater opportunities for people's own perspectives and experiences to inform policies to promote their health. PMID:15310807
NASA Astrophysics Data System (ADS)
Zhang, Xueyong; Ma, Jianguo
2006-11-01
A new method for simultaneously measuring the applanation force and area, and a device based on this method, are presented for intraocular pressure measurement. A photoelectric probe transducer acting as the applanation area detector converted the diminished quantity of light returned from the applanation surface of the cone prism into one electronic signal, and a micro strain gauge acting as the applanation force detector converted the changing load related to the resilient force of the eye into another electronic signal. A 16-bit single-chip microprocessor with E2PROM served as the core of the electronic circuit, storing the program instructions and the related data. Laboratory experiments were carried out on a simulated cornea clamped in a Perspex chamber connected to a hydraulic manometer to obtain intraocular pressure at different levels. Preliminary trials were carried out comparing the values obtained with those of the Goldmann tonometer. The diminished quantity of light is directly proportional to the applanation area of the cornea, and the changing load detected by the strain gauge is equated to the resilient force of the eye. A new kind of tonometer can be constructed based on this principle. Experimental results on a simulated eyeball showed that the present tonometer's readings agree well with those of the Goldmann tonometer. Further study, including clinical trials and application, is required to evaluate the accuracy and usefulness of this method.
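The force/area principle the probe exploits is the Imbert-Fick approximation, pressure = applanating force / applanated area; the tip geometry and unit conversion below are standard Goldmann values, assumed for illustration rather than taken from this paper:

```python
import math

def intraocular_pressure_mmHg(force_mN, area_mm2):
    """Imbert-Fick approximation: IOP = applanating force / applanated area.
    1 mN/mm^2 = 1 kPa, and 1 kPa = 7.50062 mmHg."""
    return (force_mN / area_mm2) * 7.50062

# Goldmann geometry: a 3.06 mm diameter applanated circle, chosen so that
# about 1 gram-force of load corresponds to 10 mmHg of pressure
area_mm2 = math.pi * (3.06 / 2.0) ** 2                   # ~7.35 mm^2
iop = intraocular_pressure_mmHg(2 * 9.80665, area_mm2)   # 2 gf of load
```

Measuring force and area simultaneously, as the abstract describes, lets the quotient be formed directly instead of assuming a fixed applanated area.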
Generalized ensemble method applied to study systems with strong first order transitions
NASA Astrophysics Data System (ADS)
Małolepsza, E.; Kim, J.; Keyes, T.
2015-09-01
At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.
Non-standard numerical methods applied to subsurface biobarrier formation models in porous media.
Chen, B M; Kojouharov, H V
1999-07-01
Biofilm forming microbes have complex effects on the flow properties of natural porous media. Subsurface biofilms have the potential for the formation of biobarriers to inhibit contaminant migration in groundwater. Another example of beneficial microbial effects is the biotransformation of organic contaminants to less harmful forms, thereby providing an in situ method for treatment of contaminated groundwater supplies. Mathematical models that describe contaminant transport with biodegradation involve a set of coupled convection-dispersion equations with non-linear reactions. The reactive solute transport equation is one for which numerical solution procedures continue to exhibit significant limitations for certain problems of interest in groundwater hydrology. Accurate numerical simulations are crucial to the development of contaminant remediation strategies. A new numerical method is developed for simulation of reactive bacterial transport in porous media. The non-standard numerical approach is based on the ideas of the 'exact' time-stepping scheme. It leads to solutions free from the numerical instabilities that arise from incorrect modeling of derivatives and reaction terms. Applications to different biofilm models are examined and numerical results are presented to demonstrate the performance of the proposed new method.
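The 'exact' (non-standard) time-stepping idea is easiest to see on a scalar decay reaction du/dt = -λu, where the update u_{n+1} = u_n·exp(-λΔt) reproduces the true solution for any step size; this is a toy illustration of the scheme's motivation, not the paper's biofilm model:

```python
import math

def euler_decay(u0, lam, dt, steps):
    """Standard forward Euler for du/dt = -lam*u; unstable when lam*dt > 2."""
    u = u0
    for _ in range(steps):
        u += dt * (-lam * u)
    return u

def exact_scheme_decay(u0, lam, dt, steps):
    """'Exact' time-stepping u_{n+1} = u_n * exp(-lam*dt): stable and
    positivity-preserving for any step size."""
    factor = math.exp(-lam * dt)
    u = u0
    for _ in range(steps):
        u *= factor
    return u

lam, dt, steps = 10.0, 0.3, 20   # lam*dt = 3: far outside Euler's stability limit
u_euler = euler_decay(1.0, lam, dt, steps)          # oscillates and blows up
u_exact = exact_scheme_decay(1.0, lam, dt, steps)   # matches exp(-lam*t)
true_value = math.exp(-lam * dt * steps)
```

The forward Euler iterate alternates sign and grows without bound here, while the non-standard update stays positive and agrees with the analytic solution to rounding error, which is exactly the kind of instability-free behavior the abstract claims for reaction terms.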
High performance dc-dc conversion with voltage multipliers
NASA Technical Reports Server (NTRS)
Harrigill, W. T.; Myers, I. T.
1974-01-01
The voltage multipliers using capacitors and diodes first developed by Cockcroft and Walton in 1932 were reexamined in terms of state of the art fast switching transistors and diodes, and high energy density capacitors. Because of component improvements, the voltage multiplier, used without a transformer, now appears superior in weight to systems now in use for dc-dc conversion. An experimental 100-watt 1000-volt dc-dc converter operating at 100 kHz was built, with a component weight of about 1 kg/kW. Calculated and measured values of output voltage and efficiency agreed within experimental error.
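The weight advantage argued for here follows from the textbook half-wave Cockcroft-Walton relations: ideal no-load output grows linearly with the stage count, while droop under load grows roughly with its cube, which is what fast switching (high f) and high-energy-density capacitors (high C) counteract. A sketch with illustrative values, not the paper's actual component figures:

```python
def cw_output(v_peak, n_stages):
    """Ideal no-load output of an n-stage half-wave Cockcroft-Walton
    multiplier: each stage adds 2*v_peak."""
    return 2.0 * n_stages * v_peak

def cw_droop(i_load, f, c, n):
    """Classic textbook voltage droop under load for the half-wave ladder;
    the n**3 term dominates, so raising f or C directly buys regulation."""
    return (i_load / (f * c)) * (2.0 * n**3 / 3.0 + n**2 / 2.0 - n / 6.0)

# Illustrative numbers near the abstract's figures: 100 W at 1000 V, 100 kHz
v_out = cw_output(v_peak=100.0, n_stages=5)         # 1000 V ideal
droop = cw_droop(i_load=0.1, f=100e3, c=1e-6, n=5)  # volts lost at full load
```

At 100 kHz the droop stays a small fraction of the output even with microfarad-class capacitors, which is the regime where the transformerless multiplier becomes weight-competitive.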
M.L. Neubauer, K.B. Beard, R. Sah, C. Hernandez-Garcia, G. Neil
2009-05-01
Many user facilities such as synchrotron light sources and free electron lasers require accelerating structures that support electric fields of 10-100 MV/m, especially at the start of the accelerator chain where ceramic insulators are used for very high gradient DC guns. These insulators are difficult to manufacture, require long commissioning times, and have poor reliability, in part because energetic electrons bury themselves in the ceramic, creating a buildup of charge and causing eventual puncture. A novel ceramic manufacturing process is proposed. It will incorporate bulk resistivity in the region where it is needed to bleed off accumulated charge caused by highly energetic electrons. This process will be optimized to provide an appropriate gradient in bulk resistivity from the vacuum side to the air side of the HV standoff ceramic cylinder. A computer model will be used to determine the optimum cylinder dimensions and required resistivity gradient for an example RF gun application. A ceramic material example with resistivity gradient appropriate for use as a DC gun insulator will be fabricated by glazing using doping compounds and tested.
Effect of DC Offset on the T-Wave Residuum Parameter
NASA Technical Reports Server (NTRS)
Scott, N.; Greco, E. C.; Schlegel, Todd T.
2006-01-01
The T-wave residuum (TWR) is a relatively new 12-lead ECG parameter that may reflect cardiac repolarization heterogeneity. TWR shows clinical promise and may become an important diagnostic tool if accurate, consistent, and convenient methods for its calculation can be developed. However, there are discrepancies between the methods that various investigators have used to calculate TWR, as well as some questions about basic methodology and assumptions that require resolution. The presence of a DC offset or a very low frequency AC component in the ECG is often observed. Many researchers have attempted to compensate for these with high-pass filters and median beat techniques. These techniques may help minimize the contribution of a low frequency AC component to the TWR, but they will not eliminate a DC offset inherent in the instrumentation. The present study examined the presence of DC offsets in the ECG record and their effect on TWR. Specifically, in healthy individuals, a DC offset was added to all 8 channels collectively or to each channel selectively. Even with offsets that were relatively small compared to T-wave amplitude, the addition of either collectively or individually applied offsets produced very significant changes in the TWR, affecting its value by as much as an order of magnitude. These DC offsets may arise from at least two sources: a transient artifact from EMG or electrode movement, resulting in a transient baseline offset in one or more channels (since high-pass filters have a settling time of several seconds, these artifacts contribute to a transitory baseline offset lasting 10-20 cycles); and an offset introduced by the machine hardware itself. Regardless of the cause or source of a DC offset, this study demonstrates that offsets have a very significant impact on TWR, and that future studies must not ignore their presence but rather compensate for them appropriately.
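The reported sensitivity can be reproduced with the SVD-based definition of TWR commonly used in the literature (the energy outside the rank-3 dipolar subspace of the 8-channel T-wave matrix); the synthetic leads below are hypothetical, and the baseline-correction step shows one way to cancel a pure DC offset, though not a slow AC drift:

```python
import numpy as np

def twr(leads):
    """T-wave residuum as commonly defined: energy outside the rank-3
    dipolar subspace, i.e. the sum of squared singular values beyond the
    first three of the (samples x leads) T-wave matrix."""
    s = np.linalg.svd(leads, compute_uv=False)
    return float(np.sum(s[3:] ** 2))

def twr_baseline_corrected(leads):
    """Subtracting each lead's mean before the SVD cancels a constant
    (pure DC) offset; it does not help with low-frequency AC drift."""
    return twr(leads - leads.mean(axis=0))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# synthetic purely dipolar (rank-3) 8-lead T wave: its TWR is ~0
basis = np.stack([np.sin(np.pi * t), np.sin(2 * np.pi * t), t * (1 - t)], axis=1)
leads = basis @ rng.normal(size=(3, 8))
offset_leads = leads.copy()
offset_leads[:, 0] += 0.05                    # small DC offset on one channel
raised = twr(offset_leads)                    # jumps by orders of magnitude
fixed = twr_baseline_corrected(offset_leads)  # back to ~0
```

A constant column is generally outside the dipolar subspace, so even a small single-channel offset inflates the residuum, mirroring the order-of-magnitude effect the study reports.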
Guddorf, Vanessa; Kummerfeld, Norbert; Mischke, Reinhard
2014-01-01
The aim of this study was to examine the suitability of commercially available reagents for measurements of coagulation parameters in citrated plasma from birds. Plasma samples of 17 healthy donor birds of different species were used to determine prothrombin time (PT), activated partial thromboplastin time (aPTT) and thrombin time (TT), applying various commercial reagents which are routinely used in coagulation diagnostics in humans and mammals. A PT reagent based on human placental thromboplastin not only yielded shorter clotting times than a reagent containing recombinant human tissue factor (median 49 vs. 84 s), but also showed a smaller range of values (43-55 s vs. 30-147 s, minimum-maximum, n = 5 turkeys). An aPTT reagent containing kaolin and phospholipids of animal origin delivered the shortest clotting times and the lowest range of variation in comparison with three other reagents of different composition. However, even when this reagent was used, aPTTs were in some cases extremely long (> 200 s). Thrombin time was 38 s (28-57 s, n = 5 chickens) when measured with bovine thrombin at a final concentration of 2 IU thrombin/ml. Coefficients of variation for within-run precision analysis (20 repetitions) were 8.0% for PT and 4.7% for aPTT measurements using selected reagents of mammalian origin. In conclusion, of the commercially available reagents tested, a PT reagent based on human placental thromboplastin and an aPTT reagent including rabbit brain phospholipid and kaolin show some promise for potential use in birds.
Costs of Rabies Control: An Economic Calculation Method Applied to Flores Island
Wera, Ewaldus; Velthuis, Annet G. J.; Geong, Maria; Hogeveen, Henk
2013-01-01
Background Rabies is a zoonotic disease that, in most human cases, is fatal once clinical signs appear. The disease transmits to humans through an animal bite. Dogs are the main vector of rabies in humans on Flores Island, Indonesia, resulting in about 19 human deaths each year. Currently, rabies control measures on Flores Island include mass vaccination and culling of dogs, laboratory diagnostics of suspected rabid dogs, putting imported dogs in quarantine, and pre- and post-exposure treatment (PET) of humans. The objective of this study was to estimate the costs of the applied rabies control measures on Flores Island. Methodology/principal findings A deterministic economic model was developed to calculate the costs of the rabies control measures and their individual cost components from 2000 to 2011. The inputs for the economic model were obtained from (i) relevant literature, (ii) available data on Flores Island, and (iii) experts such as responsible policy makers and veterinarians involved in rabies control measures in the past. As a result, the total costs of rabies control measures were estimated to be US$1.12 million (range: US$0.60–1.47 million) per year. The costs of culling roaming dogs were the highest portion, about 39 percent of the total costs, followed by PET (35 percent), mass vaccination (24 percent), pre-exposure treatment (1.4 percent), and others (1.3 percent) (dog-bite investigation, diagnostic of suspected rabid dogs, trace-back investigation of human contact with rabid dogs, and quarantine of imported dogs). Conclusions/significance This study demonstrates that rabies has a large economic impact on the government and dog owners. Control of rabies by culling dogs is relatively costly for the dog owners in comparison with other measures. Providing PET for humans is an effective way to prevent rabies, but is costly for government and does not provide a permanent solution to rabies in the future. PMID:24386244
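At its core the deterministic model sums yearly measure-level cost components and reports their shares; a toy version with hypothetical figures shaped like, but not equal to, the abstract's breakdown:

```python
def cost_shares(components):
    """Deterministic yearly cost model: the total is the sum of the
    measure-level components; shares show where the money goes."""
    total = sum(components.values())
    return total, {k: v / total for k, v in components.items()}

# hypothetical US$/year, roughly mirroring the abstract's percentages
costs = {"culling": 437_000, "PET": 392_000, "mass_vaccination": 269_000,
         "pre_exposure_treatment": 15_700, "other": 14_600}
total, shares = cost_shares(costs)
```

Laying the components out this way makes the paper's headline observation immediate: culling and post-exposure treatment together absorb most of the budget.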
NASA Astrophysics Data System (ADS)
Maynes, Jonathan R.
Syneresis is an integral part of cheese manufacture. The rate and extent of syneresis affect the properties of cheese. There are many factors that affect syneresis, but measured results vary because of inaccuracies in measuring techniques. To better control syneresis, an accurate mathematical description must be developed. Current mathematical models describing syneresis are limited because of inherent error in measuring techniques used to develop them. Developing an accurate model requires an accurate way to measure syneresis. The curd becomes a particle in a whey suspension when the coagulum is cut. The most effective technique to measure particle size, without interference, is with light. Approximations to rigorous Maxwellian theory render useable results for a variety of particle sizes. Assumptions of Fraunhofer diffraction theory relate absorption to the cross sectional area of a particle that is much larger than the wavelength of light being used. By applying diffraction theory to the curd-whey system, this researcher designed a new apparatus to permit measurement of large particle systems. The apparatus was tested, and calibrated, with polyacrylic beads. Then the syneresis of curd was measured with this apparatus. The apparatus was designed to measure particles in suspension. Until some syneresis takes place, curd does not satisfy this condition. Theoretical assumptions require a monolayer of scattering centers. The sample container must be thin enough to preclude stacking of the particles. This presents a unique problem with curd. If the coagulum is cut in the sample cell, it adheres to the front and back surfaces and does not synerese. The curd must be coagulated and cut externally and transferred to the sample cell with a large amount of whey. This measurement technique has other limitations that may be overcome with commercially available accessories.
The Enzyme Portal: a case study in applying user-centred design methods in bioinformatics.
de Matos, Paula; Cham, Jennifer A; Cao, Hong; Alcántara, Rafael; Rowland, Francis; Lopez, Rodrigo; Steinbeck, Christoph
2013-01-01
User-centred design (UCD) is a type of user interface design in which the needs and desires of users are taken into account at each stage of the design process for a service or product; often for software applications and websites. Its goal is to facilitate the design of software that is both useful and easy to use. To achieve this, you must characterise users' requirements, design suitable interactions to meet their needs, and test your designs using prototypes and real life scenarios. For bioinformatics, there is little practical information available regarding how to carry out UCD in practice. To address this we describe a complete, multi-stage UCD process used for creating a new bioinformatics resource for integrating enzyme information, called the Enzyme Portal (http://www.ebi.ac.uk/enzymeportal). This freely-available service mines and displays data about proteins with enzymatic activity from public repositories via a single search, and includes biochemical reactions, biological pathways, small molecule chemistry, disease information, 3D protein structures and relevant scientific literature. We employed several UCD techniques, including: persona development, interviews, 'canvas sort' card sorting, user workflows, usability testing and others. Our hope is that this case study will motivate the reader to apply similar UCD approaches to their own software design for bioinformatics. Indeed, we found the benefits included more effective decision-making for design ideas and technologies; enhanced team-working and communication; cost effectiveness; and ultimately a service that more closely meets the needs of our target audience. PMID:23514033
Methods for applying accurate digital PCR analysis on low copy DNA samples.
Whale, Alexandra S; Cowen, Simon; Foy, Carole A; Huggett, Jim F
2013-01-01
Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different templates types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA.
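dPCR quantification rests on Poisson partition statistics, which is also where its precision limits at low copy number come from; a minimal sketch (the droplet volume is a typical value assumed for illustration, not taken from this study):

```python
import math

def copies_per_partition(n_positive, n_total):
    """Poisson estimate of mean template copies per partition from the
    fraction p of positive partitions: lambda = -ln(1 - p)."""
    p = n_positive / n_total
    return -math.log(1.0 - p)

def concentration(n_positive, n_total, partition_volume_nl=0.85):
    """Copies per microlitre of reaction mix (1000 nL = 1 uL); 0.85 nL is
    a typical droplet volume, assumed here for illustration."""
    lam = copies_per_partition(n_positive, n_total)
    return lam * 1000.0 / partition_volume_nl

lam = copies_per_partition(5000, 20000)   # 25% positive partitions
conc = concentration(5000, 20000)         # copies per microlitre
```

Because lambda is inferred rather than counted directly, anything that perturbs the positive fraction (such as the pre-amplification bias the authors describe) propagates straight into the concentration estimate.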
A new multiresolution method applied to the 3D reconstruction of small bodies
NASA Astrophysics Data System (ADS)
Capanna, C.; Jorda, L.; Lamy, P. L.; Gesquiere, G.
2012-12-01
The knowledge of the three-dimensional (3D) shape of small solar system bodies, such as asteroids and comets, is essential in determining their global physical properties (volume, density, rotational parameters). It also allows performing geomorphological studies of their surface through the characterization of topographic features, such as craters, faults, landslides, grooves, and hills. In the case of small bodies, the shape is often only constrained by images obtained by interplanetary spacecraft. Several techniques are available to retrieve 3D global shapes from these images. Stereography, which relies on control points, has been extensively used in the past, most recently to reconstruct the nucleus of comet 9P/Tempel 1 [Thomas (2007)]. The most accurate methods are however photogrammetry and photoclinometry, often used in conjunction with stereography. Stereophotogrammetry (SPG) has been used to reconstruct the shapes of the nucleus of comet 19P/Borrelly [Oberst (2004)] and of the asteroid (21) Lutetia [Preusker (2012)]. Stereophotoclinometry (SPC) has allowed retrieving an accurate shape of the asteroids (25143) Itokawa [Gaskell (2008)] and (2867) Steins [Jorda (2012)]. We present a new photoclinometry method based on the deformation of a 3D triangular mesh [Capanna (2012)] using a multi-resolution scheme which starts from a sphere of 300 facets and yields a shape model with 100,000 facets. Our strategy is inspired by the "Full Multigrid" method [Botsch (2007)] and consists in alternating between two resolutions in order to obtain an optimized shape model at a given resolution before going to the higher resolution. In order to improve the robustness of our method, we use a set of control points obtained by stereography. Our method has been tested on images acquired by the OSIRIS visible camera, aboard the Rosetta spacecraft of the European Space Agency, during the fly-by of asteroid (21) Lutetia in July 2010. We present the corresponding 3D shape
Novel methods for physical mapping of the human genome applied to the long arm of chromosome 5
McClelland, M.
1991-12-01
The object of our current grant is to develop novel methods for mapping of the human genome. The techniques to be assessed were: (1) three methods for the production of unique sequence clones from the region of interest; (2) novel methods for the production and separation of multi-megabase DNA fragments; (3) methods for the production of "physical linking clones" that contain rare restriction sites; (4) application of these methods and available resources to map the region of interest. Progress includes: In the first two years methods were developed for physical mapping and the production of arrayed clones. We have concentrated on developing rare-cleavage tools based on restriction endonucleases and methylases; we studied the effect of methylation on enzymes used for PFE mapping of the human genome; we characterized two new isoschizomers of rare-cutting endonucleases; we developed a reliable way to produce partial digests of DNA in agarose plugs and applied it to the human genome; and we applied a method to double the apparent specificity of the "rare-cutter" endonucleases.
Method for applying a diffusion barrier interlayer for high temperature components
Wei, Ronghua; Cheruvu, Narayana S.
2016-03-08
A coated substrate and a method of forming a diffusion barrier coating system between a substrate and an MCrAl coating, including a diffusion barrier coating deposited onto at least a portion of a substrate surface, wherein the diffusion barrier coating comprises a nitride, oxide, or carbide of one or more transition metals and/or metalloids, and an MCrAl coating, wherein M includes a transition metal or a metalloid, deposited on at least a portion of the diffusion barrier coating, wherein the diffusion barrier coating restricts the inward diffusion of aluminum from the MCrAl coating into the substrate.
Applying a Mixed-Methods Evaluation to Healthy Kids, Healthy Communities
Brownson, Ross C.; Kemner, Allison L.; Brennan, Laura K.
2016-01-01
From 2008 to 2014, the Healthy Kids, Healthy Communities (HKHC) national program funded 49 communities across the United States and Puerto Rico to implement healthy eating and active living policy, system, and environmental changes to support healthier communities for children and families, with special emphasis on reaching children at highest risk for obesity on the basis of race, ethnicity, income, or geographic location. Evaluators designed a mixed-methods evaluation to capture the complexity of the HKHC projects, understand implementation, and document perceived and actual impacts of these efforts. PMID:25828217
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements compared to generating and running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.