Triple voltage dc-to-dc converter and method
Su, Gui-Jia
2008-08-05
A circuit and method of providing three dc voltage buses and transforming power between a low voltage dc converter and a high voltage dc converter by coupling a primary dc power circuit and a secondary dc power circuit through an isolation transformer, and providing gating signals to power semiconductor switches in the primary and secondary circuits to control power flow between them by controlling the phase shift between the primary voltage and the secondary voltage. The primary dc power circuit and the secondary dc power circuit each further comprise at least two tank capacitances arranged in series as a tank leg; at least two resonant switching devices arranged in series with each other and in parallel with the tank leg; and at least one voltage source arranged in parallel with the tank leg and the resonant switching devices, said resonant switching devices including power semiconductor switches operated by gating signals. Additional embodiments, having a center-tapped battery on the low voltage side and a plurality of modules on both the low voltage and high voltage sides, are also disclosed for reducing ripple current and the size of the components.
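The phase-shift power-flow principle described above can be sketched with the classic dual-active-bridge relation, in which the transferred power is set by the phase angle between the primary and secondary bridge voltages. The formula, the function name, and all numeric parameters below are illustrative assumptions, not values from the patent.

```python
import math

def dab_power(v1, v2, n_turns, f_sw, l_leak, phi):
    """Classic phase-shift (dual-active-bridge) power-flow relation for
    an isolated dc-dc stage: P = V1 * V2' * phi * (1 - |phi|/pi) / (2*pi*f*L).
    Illustrative only -- not the patent's exact control law.
    phi is in radians; positive phi means the primary leads, so power
    flows from the primary to the secondary side."""
    v2_ref = v2 / n_turns  # secondary bus voltage reflected to the primary
    return (v1 * v2_ref * phi * (1 - abs(phi) / math.pi)
            / (2 * math.pi * f_sw * l_leak))
```

Power is zero at zero phase shift, peaks at ±π/2, and reverses sign with the phase shift, which is how a single control variable moves power in either direction between the buses.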
Center for Applied Linguistics, Washington DC, USA
ERIC Educational Resources Information Center
Sugarman, Julie; Fee, Molly; Donovan, Anne
2015-01-01
The Center for Applied Linguistics (CAL) is a private, nonprofit organization with over 50 years' experience in the application of research on language and culture to educational and societal concerns. CAL carries out its mission to improve communication through better understanding of language and culture by engaging in a variety of projects in…
Evaporation of nanofluid droplets with applied DC potential.
Orejon, Daniel; Sefiane, Khellil; Shanahan, Martin E R
2013-10-01
A considerable growth of interest in electrowetting (EW) has stemmed from the potential exploitation of this technique in numerous industrial and biological applications, such as microfluidics, lab-on-a-chip, electronic paper, and bioanalytical techniques. The application of EW to droplets of liquids containing nanoparticles (nanofluids) is a new area of interest. Understanding the effects of electrowetting at the fundamental level and being able to manipulate deposits from nanofluid droplets represent huge potential. In this work, we study the complete evaporation of nanofluid droplets under DC conditions. Different evolutions of contact angle and contact radius, as well as deposit patterns, are revealed. When a DC potential is applied, continuous and smoother receding of the contact line during the drying out of TiO2 nanofluids and more uniform patterning of the deposit are observed, in contrast to the typical "stick-slip" behavior and ring stains. Furthermore, the mechanisms for nanoparticle interactions with the applied DC potential differ from those proposed for the EW of droplets under AC conditions. The more uniform patterns of particle deposits resulting from a DC potential are a consequence of a shorter timescale for electrophoretic mobility than for advection transport driven by evaporation.
NASA Technical Reports Server (NTRS)
Schoenfeld, A. D.; Yu, Y.
1973-01-01
Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc-to-dc power converter configurations: (1) the series switching buck regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.
Refining Diagnoses: Applying the DC-LD to an Irish Population with Intellectual Disability
ERIC Educational Resources Information Center
Felstrom, A.; Mulryan, N.; Reidy, J.; Staines, M.; Hillery, J.
2005-01-01
Background: The diagnostic criteria for psychiatric disorders for use with adults with learning disabilities/mental retardation (DC-LD) is a diagnostic tool developed in 2001 to improve upon existing classification systems for adults with learning disability. The aim of this study was to apply the classification system described by the DC-LD to a…
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.; Wilson, H. B.
1991-01-01
The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.
George Goodheart, Jr., D.C., and a history of applied kinesiology.
Gin, R H; Green, B N
1997-06-01
Applied Kinesiology (AK), founded by Michigan chiropractor George J. Goodheart, Jr., is a popular diagnostic and therapeutic system used by many health care practitioners. Many of the components in this method were discovered by serendipity and observation. In 1964, Goodheart claimed to have corrected a patient's chronic winged scapula by pressing on nodules found near the origin and insertion of the involved serratus anterior muscle. This finding led to the origin and insertion treatment, the first method developed in AK. Successive diagnostic and therapeutic procedures were developed for neurolymphatic reflexes, neurovascular reflexes and cerebrospinal fluid flow from ideas originally described by Frank Chapman, D.O., Terrence J. Bennett, D.C., and William G. Sutherland, D.O., respectively. Later, influenced by the writings of Felix Mann, M.D., Goodheart incorporated acupuncture meridian therapy into the AK system. Additionally, the vertebral challenge method and therapy localization technique, both based on phenomena proposed by L. L. Truscott, D.C., were added to the AK system. Scholarship has also evolved regarding AK and research on the topic is in its infancy. This paper documents some of the history of AK.
Long-lasting increase in axonal excitability after epidurally applied DC.
Jankowska, Elzbieta; Kaczmarek, Dominik; Bolzoni, Francesco; Hammar, Ingela
2017-08-01
Effects of direct current (DC) on nerve fibers have primarily been investigated during or just after DC application. However, locally applied cathodal DC was recently demonstrated to increase the excitability of intraspinal preterminal axonal branches for >1 h. The aim of this study was therefore to investigate whether DC evokes a similarly long-lasting increase in the excitability of myelinated axons within the dorsal columns. The excitability of dorsal column fibers stimulated epidurally was monitored by recording compound action potentials in peripheral nerves in acute experiments in deeply anesthetized rats. The results show that 1) cathodal polarization (0.8-1.0 µA) results in a severalfold increase in the number of epidurally activated fibers and 2) the increase in the excitability appears within seconds, 3) lasts for >1 h, and 4) is activity independent, as it does not require fiber stimulation during the polarization. These features demonstrate an unexplored form of plasticity of myelinated fibers and indicate the conditions under which it develops. They also suggest that therapeutic effects of epidural stimulation may be significantly enhanced if it is combined with DC polarization. In particular, by using DC to increase the number of fibers activated by low-intensity epidural stimuli, the low clinical tolerance to higher stimulus intensities might be overcome. The activity independence of long-lasting DC effects would also allow the use of only brief periods of DC polarization preceding epidural stimulation to increase the effect.NEW & NOTEWORTHY The study indicates a new form of plasticity of myelinated fibers. The differences in time course of DC-evoked increases in the excitability of myelinated nerve fibers in the dorsal columns and in preterminal axonal branches suggest that distinct mechanisms are involved in them. The results show that combining epidural stimulation and transspinal DC polarization may dramatically improve their outcome and result
NASA Astrophysics Data System (ADS)
Sato, K.; Sato, S.; Ichikawa, K.; Watanabe, M.; Honma, T.; Tanaka, Y.; Oikawa, S.; Saito, A.; Ohshima, S.
2014-05-01
We investigated the dc magnetic field and temperature dependences of the microwave surface resistance (Rs) of high-temperature superconductor (HTS) films. Previously, we reported that the surface resistance Rs(n) under a dc magnetic field applied normally to the substrate increased with increasing applied magnetic field. For NMR applications, we have to examine Rs(p) under a dc magnetic field parallel to the substrate. We measured the Rs(p) of YBCO and DyBCO thin films with a thickness of 500 nm deposited on MgO (100) substrates using the dielectric resonator method at 21.8 GHz and a dc magnetic field of up to 5 T. In zero magnetic field, the values of Rs(n) and Rs(p) were 0.35 mΩ at 20 K. Under the dc magnetic field, both Rs(n) and Rs(p) increased with increasing magnetic field; however, Rs(p) had a weaker magnetic field dependence and its value was about 1/10 that of Rs(n). The Rs(p) at 16.4 T and 700 MHz could be estimated by the two-fluid model. The Rs(p) value was about 1/2600 of that of copper at 20 K. As a result, we clarified that 500 nm thick YBCO and DyBCO thin films could provide advantages for NMR applications.
Thermal conductivity measurement of thin films by a dc method.
Yang, Junyou; Zhang, Jiansheng; Zhang, Hui; Zhu, Yunfeng
2010-11-01
A dc method, which needs no complex numerical calculation or expensive hardware configuration, was developed to measure the cross-plane thermal conductivity of thin films. Two parallel metallic heaters, deposited on different parts of the sample, serve simultaneously as heaters and temperature sensors during the measurement. A direct current was passed through the two metallic strips to heat the thin-film sample. The heating power and the heaters' temperatures were obtained by a data acquisition device, and the thermal conductivity of the thin film was calculated. To verify the validity of the dc method, several SiO(2) films with different thicknesses were deposited on Si wafers, and their thermal conductivities were measured by both the dc method and the 3ω method. The results of the two methods are in good agreement within an acceptable error, though they are inconsistent with some previously published data.
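The cross-plane reduction behind such a dc measurement can be sketched as one-dimensional steady-state conduction, k = (P/A)·d/ΔT. This is a hypothetical simplification of the paper's analysis: it ignores lateral heat spreading and interface resistance, and every name and number below is assumed for illustration.

```python
def cross_plane_k(power_w, heater_area_m2, film_thickness_m, delta_t_k):
    """One-dimensional steady-state estimate of cross-plane thermal
    conductivity: k = (P/A) * d / dT.  Hypothetical reduction of a dc
    strip-heater measurement; the real analysis must also account for
    heat spreading and thermal interface resistance."""
    q = power_w / heater_area_m2            # heat flux through the film, W/m^2
    return q * film_thickness_m / delta_t_k  # W/(m*K)
```

For a 1 µm film under 1.4 W spread over 10 cm² of heater with a 1 mK drop, this returns roughly the bulk SiO2 value of 1.4 W/(m·K).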
DC to DC power converters and methods of controlling the same
Steigerwald, Robert Louis; Elasser, Ahmed; Sabate, Juan Antonio; Todorovic, Maja Harfman; Agamy, Mohammed
2012-12-11
A power generation system configured to provide direct current (DC) power to a DC link is described. The system includes a first power generation unit configured to output DC power. The system also includes a first DC to DC converter comprising an input section and an output section. The output section of the first DC to DC converter is coupled in series with the first power generation unit. The first DC to DC converter is configured to process a first portion of the DC power output by the first power generation unit and to provide an unprocessed second portion of the DC power output of the first power generation unit to the output section.
Design of piezoelectric transformer for DC/DC converter with stochastic optimization method
NASA Astrophysics Data System (ADS)
Vasic, Dejan; Vido, Lionel
2016-04-01
Piezoelectric transformers have been adopted in recent years due to their many inherent advantages, such as safety, no EMI problems, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistance. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, non-linear electronic circuits are connected before and after the transformer. Consequently, the output load is variable, and due to the output capacitance of the transformer the optimal working point changes. This paper starts by modeling a piezoelectric transformer connected to a full-wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different sizes of the transformer and their characteristics. In other words, this method looks for the best size of the transformer for the optimal efficiency condition that is suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum size of PT needed, illustrated with a given specification. The PT derived from the proposed design procedure can guarantee both good efficiency and sufficient range for load variation.
A method for simulating a flux-locked DC SQUID
NASA Technical Reports Server (NTRS)
Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.
1993-01-01
The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
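The abstract's core idea, replicating a measured V-Phi characteristic with a Fourier series rather than predicting it from a junction model, can be sketched as a least-squares fit. The function names and harmonic count below are assumptions; flux is normalized to one flux quantum, so the series has period 1.

```python
import numpy as np

def fit_v_phi(phi, v, n_harmonics=5):
    """Fit measured V-Phi data with a truncated Fourier series in the
    applied flux (phi normalized to one flux quantum).  Replicates a
    known characteristic for use in circuit simulation; it does not
    predict SQUID behavior from device physics."""
    cols = [np.ones_like(phi)]                   # DC offset term
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * phi))  # k-th cosine harmonic
        cols.append(np.sin(2 * np.pi * k * phi))  # k-th sine harmonic
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), v, rcond=None)
    return coef

def eval_v_phi(coef, phi):
    """Evaluate the fitted Fourier series at flux values phi."""
    n = (len(coef) - 1) // 2
    v = np.full_like(phi, coef[0])
    for k in range(1, n + 1):
        v = (v + coef[2 * k - 1] * np.cos(2 * np.pi * k * phi)
               + coef[2 * k] * np.sin(2 * np.pi * k * phi))
    return v
```

The fitted coefficients are portable: the same series can be re-evaluated inside any simulator, which matches the abstract's point about portability into programs such as SPICE.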
Construction and evaluation of rats' tolerogenic dendritic cells (DC) induced by NF-κB Decoy method.
Jiang, HongMei; Zhang, YaLi; Yin, XiangFei; Hu, HengGui; Hu, XiaoLei; Fei, Ying; Tu, Yanyang; Zhang, Yongsheng
2014-09-01
To construct and evaluate rats' tolerogenic dendritic cells (DC) induced by the NF-κB decoy method. GM-CSF and IL-4 were used to transform rat monocytes into DC, and the DC were stimulated with LPS or NF-κB decoy ODN and loaded with bovine type II collagen. The following methods were employed to phenotype the DC: 1) observation of cell morphology; 2) evaluation of cell viability using trypan blue staining; 3) determination of DC purity through detection of the specific marker OX-62; 4) evaluation of the maturation state of DC via the expression of CD80 and CD86; 5) determination of the capacity to stimulate lymphocyte proliferation and the secretion of IFN-γ and IL-10. The viability of the DC was more than 92%, and the expression of OX-62 was more than 70%. Most of the DC exhibited the CD80(+)/CD86(-) phenotype. Compared with the control group and the LPS-stimulation group, less mature adherent cells and hair-like DC were observed in the NF-κB decoy group. A significant reduction (p < 0.05) was observed in the positive expression and extent of CD80 and CD86 on the cell surface. After loading with calf type II collagen, the low expression of CD80 and CD86 persisted. The capacity of DC to stimulate lymphocytes in the NF-κB decoy group was lower than in the control group (p < 0.05) and the LPS stimulation group (p < 0.05). The NF-κB decoy ODN method can be successfully applied to construct rat tolerogenic dendritic cells (DC) with stable morphology and phenotype. The tolerogenic DC exhibited an immature immune phenotype and a low capacity to stimulate lymphocytes.
Directing migration of endothelial progenitor cells with applied DC electric fields.
Zhao, Zhiqiang; Qin, Lu; Reid, Brian; Pu, Jin; Hara, Takahiko; Zhao, Min
2012-01-01
Naturally occurring, endogenous electric fields (EFs) have been detected at skin wounds, damaged tissue sites, and vasculature. Applied EFs guide the migration of many types of cells, including endothelial cells, which migrate directionally. Homing of endothelial progenitor cells (EPCs) to an injury site is important for repair of the vasculature and also for angiogenesis. However, whether EPCs respond to applied EFs has not been reported. Aiming to explore the possibility of using electric stimulation to regulate progenitor cells and angiogenesis, we tested the effects of direct-current (DC) EFs on EPCs. We first used immunofluorescence to confirm the expression of endothelial progenitor markers in three lines of EPCs. We then cultured the progenitor cells in EFs. Using time-lapse video microscopy, we demonstrated that an applied DC EF directs migration of the EPCs toward the cathode. The progenitor cells also align and elongate in an EF. Inhibition of vascular endothelial growth factor (VEGF) receptor signaling completely abolished the EF-induced directional migration of the progenitor cells. We conclude that EFs are an effective signal that guides EPC migration through VEGF receptor signaling in vitro. Applied EFs may be used to control the behavior of EPCs in tissue engineering and in homing of EPCs to wounds and injury sites in the vasculature.
AC/DC Power Flow Computation Based on Improved Levenberg-Marquardt Method
NASA Astrophysics Data System (ADS)
Cao, Jia; Yan, Zheng; Fan, Xiang; Xu, Xiaoyuan; Li, Jianhua; Cao, Lu
2015-02-01
This paper is concerned with AC/DC power flow calculation in the case of ill-conditioned systems. The improved Levenberg-Marquardt (ILM) method with adaptive damping factor selection is applied to solve the AC/DC power flow problem. The main purpose of this paper is twofold: first, to provide a comparison between the Newton method, the classical LM (CLM) method, and the ILM method on well-conditioned systems; second, to investigate the maximal load a power system can withstand in the ill-conditioned case. Finally, the methods are tested on a 22-bus system and the IEEE 118-bus AC/DC system, respectively. Numerical results indicate that the ILM method has the advantage of fast convergence. When loads are expanded beyond a certain extent, the ILM method can at least find least-squares solutions, whereas the Newton and CLM methods diverge; the convergence of the Newton method can be improved by taking measures based on the information of a least-squares solution obtained by the ILM method.
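A minimal sketch of a Levenberg-Marquardt iteration with adaptive damping, the general technique named in the abstract (not the paper's specific ILM selection rule): the damping factor is relaxed after a successful step and inflated otherwise, so the solver degrades gracefully toward gradient descent on ill-conditioned problems and still returns a least-squares solution. All names and constants are illustrative.

```python
import numpy as np

def lm_solve(residual, jacobian, x0, lam=1e-3, tol=1e-10, max_iter=100):
    """Levenberg-Marquardt with a simple adaptive damping heuristic:
    solve (J^T J + lam*I) step = -J^T r each iteration, shrink lam on
    a successful step, grow it on a rejected one.  For inconsistent
    systems this converges to a least-squares solution rather than
    diverging like an undamped Newton iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        j = jacobian(x)
        g = j.T @ r                      # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(j.T @ j + lam * np.eye(len(x)), -g)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.3  # accept: relax toward Gauss-Newton
        else:
            lam *= 2.0                    # reject: move toward gradient descent
    return x
```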
NASA Astrophysics Data System (ADS)
Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu
2017-03-01
To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
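The preconditioned conjugate-gradient skeleton the abstract builds on can be sketched as follows; a simple Jacobi (diagonal) preconditioner stands in for the V-cycle AGMG preconditioner so the example stays self-contained, and all names are illustrative.

```python
import numpy as np

def pcg(a, b, precond, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive
    definite system a @ x = b.  `precond` applies M^{-1} to a residual;
    in the paper's method this would be one AGMG V-cycle, here any
    callable (e.g. Jacobi) works."""
    x = np.zeros_like(b)
    r = b - a @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        ap = a @ p
        alpha = rz / (p @ ap)     # step length along search direction
        x += alpha * p
        r -= alpha * ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)            # preconditioned residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction update
        rz = rz_new
    return x
```

Swapping the Jacobi callable for a multigrid cycle changes only the `precond` argument, which is why the abstract can compare AGMG-CG directly against ILU- and SSOR-preconditioned Krylov solvers.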
Photovoltaic dependence of photorefractive grating on the externally applied dc electric field
NASA Astrophysics Data System (ADS)
Maurya, M. K.; Yadav, R. A.
2013-04-01
The photovoltaic dependence of the photorefractive grating (i.e., the space-charge field and the phase shift of the index grating) on an externally applied dc electric field in photovoltaic-photorefractive materials has been investigated. The influence of the photovoltaic field (EPhN), the diffusion field, and the carrier concentration ratio r (donor/acceptor impurity concentration ratio) on the space-charge field (SCF) and the phase shift of the index grating in the presence and absence of the externally applied dc electric field has also been studied in detail. Our results show that, for a given value of EPhN and r, the magnitude of the SCF and the phase shift of the index grating can be enhanced significantly by employing a lower dc electric field (EON<10) across the photovoltaic-photorefractive crystal and a higher value of the diffusion field (EDN>40). Such an enhancement in the magnitude of the SCF and the phase shift of the index grating is responsible for the strongest beam coupling in photovoltaic-photorefractive materials. This sufficiently strong beam coupling increases the two-beam coupling gain, which may exceed the absorption and reflection losses of the photovoltaic-photorefractive sample, so that optical amplification can occur. A higher value of optical amplification in photovoltaic-photorefractive samples is required for all applications of the photorefractive effect, so that technologies based on it, such as holographic storage devices, optical information processing, acousto-optic tunable filters, gyro-sensors, optical modulators, optical switches, photorefractive-photovoltaic solitons, biomedical applications, and frequency converters, can be improved.
Pohlmann, André; Hameyer, Kay
2012-01-01
Ventricular Assist Devices (VADs) are mechanical blood pumps that support the human heart in order to maintain sufficient perfusion of the human body and its organs. During VAD operation, blood damage caused by hemolysis, thrombogenicity, and denaturation has to be avoided. One key parameter causing the blood's denaturation is its temperature, which must not exceed 42 °C. As a temperature rise can be directly linked to the losses occurring in the drive system, this paper introduces an efficiency prediction chain for brushless DC (BLDC) drives, which are applied in various VAD systems. The presented chain is applied to various core materials and operation ranges, providing a general overview of the loss dependencies.
NASA Astrophysics Data System (ADS)
Hanlon, C. J.; Small, A.; Bose, S.; Young, G. S.; Verlinde, J.
2013-12-01
…undertaken by DC3 investigators, depending on the scoring method used. [Figure caption] Reliability diagram for the algorithmic system used to forecast isolated convective thunderstorms for the DC3 field campaign. The clustering of points around the 45-degree line indicates that the forecasting system is well calibrated, a critical requirement for an algorithmic flight decision recommendation system.
Modelling of stress fields during LFEM DC casting of aluminium billets by a meshless method
NASA Astrophysics Data System (ADS)
Mavrič, B.; Šarler, B.
2015-06-01
Direct Chill (DC) casting of aluminium alloys is a widely established technology for efficient production of aluminium billets and slabs. The procedure is being further improved by the application of Low Frequency Electromagnetic Field (LFEM) in the area of the mold. Novel LFEM DC processing technique affects many different phenomena which occur during solidification, one of them being the stresses and deformations present in the billet. These quantities can have a significant effect on the quality of the cast piece, since they impact porosity, hot-tearing and cold cracking. In this contribution a novel local radial basis function collocation method (LRBFCM) is successfully applied to the problem of stress field calculation during the stationary state of DC casting of aluminium alloys. The formulation of the method is presented in detail, followed by the presentation of the tackled physical problem. The model describes the deformations of linearly elastic, inhomogeneous isotropic solid with a given temperature field. The temperature profile is calculated using the in-house developed heat and mass transfer model. The effects of low frequency EM casting process parameters on the vertical, circumferential and radial stress and on the deformation of billet surface are presented. The application of the LFEM appears to decrease the amplitudes of the tensile stress occurring in the billet.
Circuit and Method for Communication Over DC Power Line
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2007-01-01
A circuit and method for transmitting and receiving on-off-keyed (OOK) signals with fractional signal-to-noise ratios uses available high-temperature silicon-on-insulator (SOI) components to move computational, sensing, and actuation abilities closer to high-temperature or high-ionizing-radiation environments such as vehicle engine compartments, deep-hole drilling environments, industrial control and monitoring of processes like smelting, and operations near nuclear reactors and in space. This device allows for the networking of multiple like nodes to each other and to a central processor. It can do this with nothing more than the already in-situ power wiring of the system. The device's microprocessor allows it to make intelligent decisions within the vehicle operational loop and to effect control outputs to its associated actuators. The figure illustrates how each node converts digital serial data to OOK at 18 kHz in transmit mode and vice versa in receive mode, though operation at lower frequencies, or up to a megahertz, is within reason using this method and these parts. This innovation's technique modulates a DC power bus with millivolt-level signals through a MOSFET (metal oxide semiconductor field effect transistor) and resistor by OOK. It receives and demodulates this signal from the DC power bus through capacitive coupling at high temperature and in high-ionizing-radiation environments. The demodulation of the OOK signal is accomplished using an asynchronous quadrature detection technique realized by a quasi-discrete Fourier transform, using the quadrature components (0° and 90° phases) of the carrier frequency as generated by the microcontroller as a function of the selected crystal frequency driving its oscillator. The detected signal is rectified using an absolute-value circuit containing no diodes (diodes being non-operational at high temperatures) and only operational amplifiers. The absolute values of the two phases of the received signal
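The asynchronous quadrature detection step can be sketched as a single-bin DFT: correlating the received samples with the 0° and 90° phases of the carrier and taking the magnitude removes any dependence on carrier phase. This is a sketch of the general technique, not the flight circuit; the sampling rate and function name are assumptions.

```python
import math

def ook_detect(samples, fs, f_carrier):
    """Asynchronous quadrature (single-bin DFT) detector for an OOK
    burst: correlate with both carrier phases so no phase lock is
    needed, then take the magnitude.  Returns roughly half the carrier
    amplitude when the tone is present, near zero when it is absent."""
    i_sum = q_sum = 0.0
    for n, s in enumerate(samples):
        w = 2 * math.pi * f_carrier * n / fs
        i_sum += s * math.cos(w)   # in-phase correlation (0 degrees)
        q_sum += s * math.sin(w)   # quadrature correlation (90 degrees)
    return math.hypot(i_sum, q_sum) / len(samples)
```

Thresholding this magnitude recovers the on/off keying regardless of the unknown phase offset between transmitter and receiver clocks.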
Su, Gui-Jia
2003-06-10
A multilevel DC link inverter and method for improving torque response and current regulation in permanent magnet motors and switched reluctance motors having a low inductance includes a plurality of voltage controlled cells connected in series for applying a resulting dc voltage comprised of one or more incremental dc voltages. The cells are provided with switches for increasing the resulting applied dc voltage as speed and back EMF increase, while limiting the voltage that is applied to the commutation switches to perform PWM or dc voltage stepping functions, so as to limit current ripple in the stator windings below an acceptable level, typically 5%. Several embodiments are disclosed including inverters using IGBT's, inverters using thyristors. All of the inverters are operable in both motoring and regenerating modes.
Nakamura, Keiji; Kohda, Tomoko; Seto, Yoshiyuki; Mukamoto, Masafumi; Kozaki, Shunji
2013-03-23
Clostridium botulinum type C and D strains produce serotype-specific or mosaic botulinum neurotoxin (BoNT). Botulinum C/D and D/C mosaic neurotoxins (BoNT/CD and /DC) are related to avian and bovine botulism, respectively. The two mosaic BoNTs cannot be differentiated from authentic type C and D BoNTs by the conventional serotyping method. In this study, we attempted to establish novel methods for the specific detection of BoNT/CD or /DC. Comparison of the nontoxic component genes in type C and D strains revealed that the nucleotide sequence of the ha70 gene is well conserved among both serotype-specific and mosaic BoNT-producing strains. A multiplex PCR method with primers for detection of the boNT light chain, ntnh, and ha70 genes was developed for typing of the boNT gene in type C and D strains. Upon applying this method, twenty-seven type C and D strains, including authentic strains and isolates from avian and bovine botulism, were successfully divided into type C, C/D mosaic, type D, and D/C mosaic BoNT-producing strains. We then prepared an immunochromatography kit with specific monoclonal antibodies showing high binding affinity to each mosaic BoNT. BoNT/CD and /DC in culture supernatant were detected with limits of detection of 2.5 and 10 LD(50), respectively. Furthermore, we confirmed the applicability of the kit for BoNT/DC using crude culture supernatant from a specimen from a bovine suspected of having botulism. These results indicate that the genetic and immunological detection methods are useful for the diagnosis of avian and bovine botulism.
NASA Astrophysics Data System (ADS)
Hou, Hao; Wang, Xuanze; Zhai, Zhongsheng
2016-01-01
A circuit processing method is presented to restrain DC drift, following an analysis of the traditional signal-processing method of interferometry for micro-vibration measurement. First, the circuit diagram is designed and its mathematical model is built; then the theoretical equations of the output signal are derived with practical parameters. Using SIMULINK simulation, the process for restraining DC drift is demonstrated under varying background intensity. The validity of the feedback circuit was verified by analyzing real experimental data. Theoretical predictions match simulation results, showing that this method effectively restrains DC drift in interferometry for micro-vibration measurement and greatly improves the system's stability.
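The abstract does not publish the actual circuit, but the feedback idea can be illustrated with a minimal sketch: a slow estimator tracks the drifting background (DC) level and subtracts it, leaving the vibration-induced AC interference term. The signal model and loop gain below are assumptions for illustration only.

```python
# Minimal sketch of feedback DC-drift suppression: a first-order loop tracks
# the slowly drifting background level of the detector signal and subtracts
# it, preserving the fast interference (vibration) component. The loop gain
# and signal model are illustrative assumptions, not the paper's circuit.
import math

def suppress_dc(signal, dt, loop_gain=50.0):
    """First-order feedback loop: estimate += g * (x - estimate) * dt."""
    estimate = 0.0
    out = []
    for x in signal:
        estimate += loop_gain * (x - estimate) * dt
        out.append(x - estimate)
    return out

dt = 1e-4
n = 20000
# 200 Hz interference term riding on a slowly drifting background intensity.
raw = [0.5 + 0.3 * math.sin(2 * math.pi * 0.2 * i * dt)   # drifting DC
       + 0.1 * math.sin(2 * math.pi * 200 * i * dt)        # vibration signal
       for i in range(n)]
clean = suppress_dc(raw, dt)

# After settling, the output mean (DC) is near zero while the 200 Hz
# component survives almost unattenuated.
tail = clean[n // 2:]
print(round(sum(tail) / len(tail), 3), round(max(tail), 3))
```

The loop cutoff (about 8 Hz here) sits between the drift rate and the vibration frequency, which is what lets the same loop reject drift without touching the measurement band.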
NASA Astrophysics Data System (ADS)
Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes
2014-10-01
Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension to each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that a strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10 and 57%, depending on the metrics used to score success, while improving portfolio balance.
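The decision logic described above, a utility function trading total data yield against balance across the three regions, evaluated by Monte Carlo, can be sketched as follows. The utility shape, success probabilities, and flight payoff are illustrative assumptions, not the campaign's actual numbers.

```python
# Hedged sketch of the DC3-style flight decision: score a data portfolio by
# total hours minus a penalty for imbalance across regions, then pick the
# region whose (Monte Carlo) expected utility of flying today is highest.
# All probabilities, hours, and weights are illustrative assumptions.
import random

def utility(portfolio, balance_weight=2.0):
    total = sum(portfolio.values())
    imbalance = max(portfolio.values()) - min(portfolio.values())
    return total - balance_weight * imbalance

def expected_utility(portfolio, region, p_success, hours=4.0,
                     trials=5000, rng=None):
    rng = rng or random.Random(0)
    acc = 0.0
    for _ in range(trials):
        new = dict(portfolio)
        if rng.random() < p_success[region]:   # did good convection occur?
            new[region] += hours
        acc += utility(new)
    return acc / trials

portfolio = {"CO": 12.0, "AL": 4.0, "OK-TX": 12.0}   # hours collected so far
p_success = {"CO": 0.5, "AL": 0.6, "OK-TX": 0.4}     # forecast probabilities
best = max(p_success, key=lambda r: expected_utility(portfolio, r, p_success))
print(best)
```

With these numbers the under-sampled Alabama region wins even though its payoff is uncertain, because success there raises the total and shrinks the imbalance penalty at the same time.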
Method to Eliminate Flux Linkage DC Component in Load Transformer for Static Transfer Switch
He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing
2014-01-01
Many industrial and commercial sensitive loads are subject to voltage sags and interruptions. The static transfer switch (STS), based on thyristors, is applied to improve power quality and reliability. However, the transfer will result in severe inrush current in the load transformer, because of the DC component in the magnetic flux generated during the transfer process. The inrush current, typically 2~30 p.u., can cause maloperation of relay protective devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes during the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, a method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiment results are presented to show the effectiveness of the transfer method. PMID:25133255
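The timing rule, close each phase when the alternate source's prospective flux linkage equals the transformer's residual flux linkage, can be sketched numerically. The waveform model and residual value below are illustrative assumptions, not the paper's measured data.

```python
# Sketch of the transfer-timing rule: close each phase of the alternate
# source at the instant its prospective (steady-state) flux linkage equals
# the residual flux linkage, so no DC flux offset is created. For
# v(t) = V*sin(w t + phi), the steady-state flux linkage is
# -(V/w)*cos(w t + phi). Values below are illustrative.
import math

def closing_time(lambda_res, V, freq, phi, t_start=0.0):
    """First time after t_start where the prospective flux linkage of the
    alternate source crosses the residual flux linkage."""
    w = 2 * math.pi * freq
    lam = lambda t: -(V / w) * math.cos(w * t + phi)
    dt = 1e-6
    t = t_start
    prev = lam(t) - lambda_res
    while t < t_start + 1.0 / freq:
        t += dt
        cur = lam(t) - lambda_res
        if prev == 0.0 or prev * cur <= 0.0:
            return t
        prev = cur
    raise ValueError("no crossing found within one period")

V, freq, phi = 311.0, 50.0, 0.0
lambda_res = 0.3            # residual flux linkage, Wb-turns (assumed)
t0 = closing_time(lambda_res, V, freq, phi)
w = 2 * math.pi * freq
# At t0 the prospective flux linkage matches the residual value.
print(round(t0 * 1e3, 3), round(-(V / w) * math.cos(w * t0 + phi), 3))
```
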
Three dimensional finite element methods: Their role in the design of DC accelerator systems
NASA Astrophysics Data System (ADS)
Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.
2013-04-01
High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm2. Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.
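As a toy companion to the electrostatic solver mentioned above, the finite element idea can be shown on the simplest possible case: the 1-D Laplace equation between two electrodes. This is a didactic sketch with assumed geometry, far simpler than the 3-D commercial solvers the abstract describes.

```python
# 1-D finite element solution of phi'' = 0 with phi(0)=v_left, phi(L)=v_right
# (potential between two electrodes). Linear elements give the classic
# tridiagonal stiffness system; solved here by plain Gaussian elimination.
# Geometry and voltages are illustrative.

def fem_laplace_1d(n_elems, length, v_left, v_right):
    n = n_elems + 1
    h = length / n_elems
    # Assemble stiffness matrix K (dense, for clarity) and load vector f.
    K = [[0.0] * n for _ in range(n)]
    f = [0.0] * n
    for e in range(n_elems):
        for a, b, val in ((e, e, 1), (e, e + 1, -1),
                          (e + 1, e, -1), (e + 1, e + 1, 1)):
            K[a][b] += val / h
    # Dirichlet boundary conditions via row replacement.
    for idx, val in ((0, v_left), (n - 1, v_right)):
        K[idx] = [1.0 if j == idx else 0.0 for j in range(n)]
        f[idx] = val
    # Forward elimination, then back substitution.
    for i in range(n):
        for j in range(i + 1, n):
            factor = K[j][i] / K[i][i]
            for k in range(i, n):
                K[j][k] -= factor * K[i][k]
            f[j] -= factor * f[i]
    phi = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = f[i] - sum(K[i][j] * phi[j] for j in range(i + 1, n))
        phi[i] = s / K[i][i]
    return phi

# 2 MV across a 1 m gap: the potential varies linearly, as expected.
phi = fem_laplace_1d(10, 1.0, 0.0, 2.0e6)
print(round(phi[5], 1))
```
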
DC Potentials Applied to an End-cap Electrode of a 3-D Ion Trap for Enhanced MSn Functionality
Prentice, Boone M.; Xu, Wei; Ouyang, Zheng; McLuckey, Scott A.
2010-01-01
The effects of the application of various DC magnitudes and polarities to an end-cap of a 3-D quadrupole ion trap throughout a mass spectrometry experiment were investigated. Application of a monopolar DC field was achieved by applying a DC potential to the exit end-cap electrode, while maintaining the entrance end-cap electrode at ground potential. Control over the monopolar DC magnitude and polarity during time periods associated with ion accumulation, mass analysis, ion isolation, ion/ion reaction, and ion activation can have various desirable effects. Included amongst these are increased ion capture efficiency, increased ion ejection efficiency during mass analysis, effective isolation of ions using lower AC resonance ejection amplitudes, improved temporal control of the overlap of oppositely charged ion populations, and the performance of “broad-band” collision induced dissociation (CID). These results suggest general means to improve the performance of the 3-D ion trap in a variety of mass spectrometry and tandem mass spectrometry experiments. PMID:21927573
Two-dimensional resistivity imaging in the Kestelek boron area by VLF and DC resistivity methods
NASA Astrophysics Data System (ADS)
Bayrak, Murat; Şenel, Leyla
2012-07-01
A VLF and DC resistivity investigation was conducted in the Kestelek area, western Turkey, to determine two-dimensional images of the boron deposits. The two-dimensional resistivity images were obtained by the inversion of tipper and resistivity data for the VLF and DC resistivity methods, respectively. The VLF tipper data were also enhanced by applying Fraser and Karous & Hjelt (K&H) filtering to delineate the boundaries of the subsurface boron deposits. The main findings are: (1) moderate (> 25 Ωm) and relatively high (> 40 Ωm) resistivity zones in the two-dimensional models, which are mostly supported by negative current-density peaks in the K&H real part of the tipper, may be interpreted as the middle level of potato-type colemanite and the lower level of crystal-type colemanite boron deposits inside the conductive units, respectively. (2) Transitions from positive peaks (conductive zones) to negative peaks (resistive zones) in the K&H real-part tipper current-density pseudosections may indicate the potential locations of the boron deposits. (3) Drilling-well results obtained around two profiles of the study area are consistent with the distribution of the resistive boron deposits in the two-dimensional resistivity models and the K&H real-part tipper filtering images.
[Montessori method applied to dementia - literature review].
Brandão, Daniela Filipa Soares; Martín, José Ignacio
2012-06-01
The Montessori method was initially applied to children, but now it has also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reduction of negative affect and passive engagement. Nevertheless, systematic reviews about this non-pharmacological intervention in dementia rate this method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may have, in fact, a small influence on dimensions such as behavioral problems, or because there is no research about this method with high levels of control, such as the presence of several control groups or a double-blind design.
System and Method for Determining Rate of Rotation Using Brushless DC Motor
NASA Technical Reports Server (NTRS)
Howard, David E. (Inventor); Smith, Dennis A. (Inventor)
2000-01-01
A system and method are provided for measuring rate of rotation. A brushless DC motor is rotated and produces a back electromagnetic force (emf) on each winding thereof. Each winding's back-emf is squared. The squared outputs associated with each winding are combined, with the square root being taken of such combination, to produce a DC output proportional only to the rate of rotation of the motor's shaft.
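The square-sum-root principle above can be checked in a few lines: for a three-phase machine with sinusoidal back-EMFs 120° apart, the sum of squares is constant in rotor angle, so its square root depends only on speed. The back-EMF constant and sinusoidal waveform are illustrative assumptions.

```python
# Sketch of the patent's measurement principle: with sinusoidal back-EMFs
# e_i = k*w*sin(theta + i*2*pi/3), the sum of squares equals 1.5*(k*w)^2 at
# every rotor angle, so sqrt(sum) is a DC value proportional only to speed w.
# The back-EMF constant is an assumed, illustrative value.
import math

K_E = 0.05  # back-EMF constant, V*s/rad (assumed)

def back_emfs(omega, theta):
    return [K_E * omega * math.sin(theta + i * 2 * math.pi / 3)
            for i in range(3)]

def rate_output(emfs):
    return math.sqrt(sum(e * e for e in emfs))

omega = 100.0  # rad/s
# Output is flat across rotor angle and scales linearly with speed.
outputs = [rate_output(back_emfs(omega, th)) for th in (0.0, 0.7, 2.1, 4.4)]
print([round(o, 4) for o in outputs])
```
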
Wind tunnel investigation of active controls technology applied to a DC-10 derivative
NASA Technical Reports Server (NTRS)
Winther, B. A.; Shirley, W. A.; Heimbaugh, R. M.
1980-01-01
Application of active controls technology to reduce aeroelastic response offers a potential for significant payoffs in terms of aerodynamic efficiency and structural weight. As part of the NASA Energy Efficient Transport program, the impact upon flutter and gust load characteristics has been investigated by means of analysis and low-speed wind tunnel tests of a semispan model. The model represents a DC-10 derivative with increased wing span and an active aileron surface, responding to vertical acceleration at the wing tip. A control law satisfying both flutter and gust load constraints is presented and evaluated. In general, the beneficial effects predicted by analysis are in good agreement with experimental data.
A new method for speed control of a DC motor using magnetorheological clutch
NASA Astrophysics Data System (ADS)
Nguyen, Quoc Hung; Choi, Seung-Bok
2014-03-01
In this research, a new method to control the speed of a DC motor using a magnetorheological (MR) clutch is proposed and realized. First, the strategy of DC motor speed control using an MR clutch is presented. The MR clutch configuration is then proposed and analyzed based on the Bingham-plastic rheological model of MR fluid. An optimal design of the MR clutch is then studied to find the geometric dimensions that can transmit the required torque with minimum mass. A prototype of the optimized MR clutch is then manufactured and its performance characteristics are experimentally investigated. A DC motor speed control system featuring the optimized MR clutch is designed and manufactured, and a PID controller is designed to control the output speed of the system. In order to evaluate the effectiveness of the proposed DC motor speed control system, experimental results, such as speed tracking performance, are obtained and presented with discussions.
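The Bingham-plastic torque analysis mentioned above can be sketched for an assumed disc-type clutch: shear stress is a field-dependent yield stress plus a viscous term, and integrating it over the disc face gives the transmissible torque. Geometry, fluid parameters, and the disc-type assumption are all illustrative; the paper's optimized design is not reproduced here.

```python
# Hedged sketch of disc-type MR clutch torque under the Bingham-plastic
# model: tau = tau_y + eta * gamma_dot with gamma_dot = r * slip / gap.
# Integrating 2*pi*r^2*tau over the annular face gives a yield term plus a
# viscous term. All parameter values are illustrative assumptions.
import math

def mr_clutch_torque(tau_y, eta, r_in, r_out, gap, slip, faces=2):
    """Transmissible torque of an MR disc clutch (Bingham-plastic model)."""
    yield_term = (2 * math.pi * tau_y / 3) * (r_out**3 - r_in**3)
    viscous_term = (math.pi * eta * slip / (2 * gap)) * (r_out**4 - r_in**4)
    return faces * (yield_term + viscous_term)

# tau_y rises with the applied magnetic field; compare off-state vs on-state.
t_off = mr_clutch_torque(tau_y=1e3,  eta=0.1, r_in=0.02, r_out=0.06,
                         gap=1e-3, slip=10.0)
t_on  = mr_clutch_torque(tau_y=40e3, eta=0.1, r_in=0.02, r_out=0.06,
                         gap=1e-3, slip=10.0)
print(round(t_off, 3), round(t_on, 3))
```

The large on/off torque ratio is what makes the clutch usable as a speed-control element: the motor runs at constant speed while the field current modulates the transmitted torque.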
Method of applying coatings to substrates
Hendricks, Charles D.
1991-01-01
A method for applying novel coatings to substrates is provided. The ends of a multiplicity of rods of different materials are melted by focused beams of laser light. Individual electric fields are applied to each of the molten rod ends, thereby ejecting charged particles that include droplets, atomic clusters, molecules, and atoms. The charged particles are separately transported, by the accelerations provided by electric potentials produced by an electrode structure, to substrates where they combine and form the coatings. Layered and thickness-graded coatings, comprised of hitherto unavailable compositions, are provided.
The averaging method in applied problems
NASA Astrophysics Data System (ADS)
Grebenikov, E. A.
1986-04-01
The totality of methods allowing the study of complicated nonlinear oscillating systems, known in the literature as the "averaging method", is presented. The author describes the constructive part of this method, that is, concrete forms and corresponding algorithms, on mathematical models that are sufficiently general but built from concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. For specialists in the area of applied mathematics and mechanics.
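A standard textbook illustration of the averaging method (not an example from this book) is the weakly damped oscillator: averaging over the fast oscillation reduces the full second-order equation to a slow amplitude equation, which a direct numerical integration confirms.

```python
# Averaging illustration for x'' + eps*x' + x = 0: writing x = a(t)*cos(t+psi)
# and averaging over the fast cycle gives the slow equation a' = -eps*a/2,
# i.e. a(t) = a0*exp(-eps*t/2). An RK4 integration checks the prediction.
import math

def rk4_oscillator(eps, x0, v0, t_end, dt=1e-3):
    def f(state):
        x, v = state
        return (v, -x - eps * v)
    x, v = x0, v0
    t = 0.0
    while t < t_end:
        k1 = f((x, v))
        k2 = f((x + dt / 2 * k1[0], v + dt / 2 * k1[1]))
        k3 = f((x + dt / 2 * k2[0], v + dt / 2 * k2[1]))
        k4 = f((x + dt * k3[0], v + dt * k3[1]))
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return x, v

eps, a0, t_end = 0.05, 1.0, 40.0
x, v = rk4_oscillator(eps, a0, 0.0, t_end)
numeric_amplitude = math.hypot(x, v)            # envelope estimate at t_end
averaged_amplitude = a0 * math.exp(-eps * t_end / 2)
print(round(numeric_amplitude, 3), round(averaged_amplitude, 3))
```

The agreement is to O(eps), which is exactly the accuracy the first-order averaging theorem guarantees.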
NASA Technical Reports Server (NTRS)
Mendrek, M. J.; Higgins, R. H.; Danford, M. D.
1988-01-01
To investigate metal surface corrosion and the breakdown of metal protective coatings, the ac impedance method is applied to six systems of primer-coated and primer-topcoated 4130 steel. Two primers were used: a zinc-rich epoxy primer and a red lead oxide epoxy primer. The epoxy-polyamine topcoat was used in four of the systems. The EG&G PARC Model 368 ac impedance measurement system, along with dc measurements with the same system using the polarization resistance method, was used to monitor the changing properties of coated 4130 steel disks immersed in 3.5 percent NaCl solutions buffered at pH 5.4 over periods of 40 to 60 days. The corrosion system can be represented by an electronic analog, called an equivalent circuit, consisting of resistors and capacitors in specific arrangements. This equivalent circuit parallels the impedance behavior of the corrosion system during a frequency scan. Values for the resistors and capacitors, which can be assigned in the equivalent circuit following a least-squares analysis of the data, describe changes that occur on the corroding metal surface and in the protective coatings. Two equivalent circuits have been determined that predict the correct Bode phase and magnitude of the experimental sample at different immersion times. The dc corrosion current density data are related to the equivalent circuit element parameters. Methods for determining corrosion rate with ac impedance parameters are verified by the dc method.
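The equivalent-circuit idea can be sketched with the simplest Randles-type cell: solution resistance in series with a polarization resistance in parallel with a coating/double-layer capacitance. The element values below are illustrative, not the paper's fitted parameters.

```python
# Randles-type equivalent circuit: Z(w) = Rs + Rp / (1 + j*w*Rp*C).
# Sweeping frequency gives the Bode magnitude and phase that such fits
# reproduce; element values are illustrative assumptions.
import cmath
import math

def randles_impedance(freq, rs, rp, c):
    w = 2 * math.pi * freq
    return rs + rp / (1 + 1j * w * rp * c)

rs, rp, c = 50.0, 1.0e5, 1.0e-6   # ohms, ohms, farads (assumed)
for f in (0.01, 10.0, 1.0e5):
    z = randles_impedance(f, rs, rp, c)
    print(f, round(abs(z), 1), round(math.degrees(cmath.phase(z)), 1))
```

At low frequency the magnitude approaches Rs + Rp (the dc polarization resistance the paper's dc measurements probe), and at high frequency it collapses to Rs, which is why fitted Rp and C track coating degradation over immersion time.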
Entropy viscosity method applied to Euler equations
Delchini, M. O.; Ragusa, J. C.; Berry, R. A.
2013-07-01
The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers and Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it can efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with the time-implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
Adaptable DC offset correction
NASA Technical Reports Server (NTRS)
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
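The patent does not publish its selection logic, but the evaluate-then-remove structure can be illustrated with a minimal sketch: inspect the block to decide whether the offset is static or drifting, then apply mean subtraction or a tracking estimator accordingly. The decision rule and constants are assumptions for illustration.

```python
# Hedged sketch of an "adaptable" DC offset corrector: evaluate the incoming
# baseband block to pick a removal scheme (static mean subtraction for a
# near-constant offset, a tracking estimator for a drifting one), then
# subtract the estimate. Thresholds and gains are illustrative assumptions.
import math

def remove_dc(block, drift_threshold=0.05, alpha=0.02):
    half = len(block) // 2
    mean_a = sum(block[:half]) / half
    mean_b = sum(block[half:]) / (len(block) - half)
    if abs(mean_b - mean_a) < drift_threshold:       # offset ~ constant
        mean = sum(block) / len(block)
        return [x - mean for x in block], "static"
    est, out = block[0], []                          # offset drifting: track
    for x in block:
        est += alpha * (x - est)
        out.append(x - est)
    return out, "tracking"

static_block = [0.2 + math.sin(2 * math.pi * i / 50) for i in range(400)]
drift_block = [0.2 + 0.002 * i + math.sin(2 * math.pi * i / 50)
               for i in range(400)]
out1, scheme1 = remove_dc(static_block)
out2, scheme2 = remove_dc(drift_block)
print(scheme1, scheme2)
```
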
A novel method for simulation of brushless DC motor servo-control system based on MATLAB
NASA Astrophysics Data System (ADS)
Tao, Keyan; Yan, Yingmin
2006-11-01
This paper presents research on the simulation of a brushless DC motor (BLDCM) servo control system. Based on the mathematical model of the BLDCM, the system simulation model was built with the MATLAB software. In building the system model, the individual functional blocks, such as the BLDCM block, the rotor-position detection block, and the commutation logic block, were modeled. By the organic combination of these blocks, the model of the BLDCM can be established easily. The reasonability and validity have been verified by the simulation results, and this novel method offers a new way of thinking for designing and debugging actual motors.
A hierarchical voltage control method for multi-terminal AC/DC distribution system
NASA Astrophysics Data System (ADS)
Ma, Zhoujun; Zhu, Hong; Zhou, Dahong; Wang, Chunning; Tang, Renquan; Xu, Honghua
2017-08-01
A hierarchical control system is proposed in this paper to control the voltage of a multi-terminal AC/DC distribution system. The hierarchical control system consists of a PCC voltage control system, a DG voltage control system, and a voltage regulator control system; their functions are to control the DC distribution network voltage, the AC bus voltage, and the area voltage, respectively. A coordination method is proposed for the whole control system, and a case study indicates that, when the voltage fluctuates, the three layers of the control system operate in an orderly manner and can maintain voltage stability.
New Current Control Method of DC Power Supply for Magnetic Perturbation Coils on J-TEXT
NASA Astrophysics Data System (ADS)
Zeng, Wubing; Ding, Yonghua; Yi, Bin; Xu, Hangyu; Rao, Bo; Zhang, Ming; Liu, Minghai
2014-11-01
In order to advance the research on suppressing tearing modes and driving plasma rotation, a DC power supply (PS) system has been developed for dynamic resonant magnetic perturbation (DRMP) coils and applied in the J-TEXT experiment. To enrich experimental phenomena in the J-TEXT tokamak, applying the circulating current four-quadrant operation mode in the DRMP DC PS system is proposed. By using the circulating current four-quadrant operation, DRMP coils can be smoothly controlled without the dead-time when the current polarity reverses. Essential circuit analysis, control optimization and simulation of desired scenarios have been performed for normal current. Relevant simulation and test results are also presented.
Novel PWM Modifying Method for Detecting DC-bus Current to Facilitate Noise Adaptation
NASA Astrophysics Data System (ADS)
Arakawa, Yoichiro; Aoyagi, Shigehisa; Nagata, Koichiro; Arao, Yusuke
A well-known problem encountered when detecting the DC-bus current in order to reconstruct the three-phase currents is the short DC pulse duration. To increase the pulse duration, several PWM modifying methods have been proposed. The "Half Pulse Shift" (HPS) method is one of the promising methods that are both robust to detection error caused by current ripple and offer the advantage of low acoustic noise. In general, common-mode noise current affects the DC-bus current and causes detection error; therefore, the pulse duration provided by the PWM modifying method must be longer than the decay time of the common-mode noise. This decay time depends on the electrical environment, especially on the power-supply cable from the inverter to the motor. Although the pulse duration required to avoid common-mode noise can be estimated, this pulse duration is limited by the controllable range of output voltages. In this paper, a new PWM modifying method that eases the limit on the pulse duration is proposed. The results of numerical analysis confirmed that the proposed method broadens the operation area over which the current can be detected. The experimental results confirm that the proposed method guarantees stable operation and robustness in noisy environments.
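The underlying reconstruction, which the PWM-modification methods exist to make feasible, can be sketched directly: during each active switching state the single dc-bus sensor sees one (signed) phase current, and the third phase follows from the currents summing to zero. The mapping and sample values are illustrative.

```python
# Three-phase current reconstruction from one dc-bus sensor. For each active
# switching state (upper switches a,b,c), the bus carries one signed phase
# current: e.g. state (1,0,0) gives +ia, state (1,1,0) gives ia+ib = -ic.
# The third phase follows from ia+ib+ic = 0. Values are illustrative.

ACTIVE_VECTOR_MAP = {
    (1, 0, 0): ("a", +1), (0, 1, 1): ("a", -1),
    (0, 1, 0): ("b", +1), (1, 0, 1): ("b", -1),
    (0, 0, 1): ("c", +1), (1, 1, 0): ("c", -1),
}

def reconstruct(sample1, sample2):
    """Each sample is (switch_state, measured_bus_current); the two samples
    must come from two different active vectors within one PWM period."""
    currents = {}
    for state, i_bus in (sample1, sample2):
        phase, sign = ACTIVE_VECTOR_MAP[state]
        currents[phase] = sign * i_bus
    (missing,) = {"a", "b", "c"} - set(currents)
    currents[missing] = -sum(currents.values())
    return currents

# True currents ia=5, ib=-2, ic=-3; vectors (1,0,0) and (1,1,0) are applied,
# so the bus sensor reads +5.0 and then -ic = +3.0.
i = reconstruct(((1, 0, 0), 5.0), ((1, 1, 0), 3.0))
print(i)
```

The short-pulse problem in the abstract is exactly the case where one of these two active vectors is applied too briefly for the sample (and for common-mode noise to decay), which is what the proposed PWM modification relaxes.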
Forward modeling of marine DC resistivity method for a layered anisotropic earth
NASA Astrophysics Data System (ADS)
Yin, Chang-Chun; Zhang, Ping; Cai, Jing
2016-06-01
Since the ocean bottom is a sedimentary environment wherein stratification is well developed, the use of an anisotropic model is best for studying its geology. Beginning with Maxwell's equations for an anisotropic model, we introduce scalar potentials based on the divergence-free characteristic of the electric and magnetic (EM) fields. We then continue the EM fields down into the deep earth and upward into the seawater and couple them at the ocean bottom to the transmitting source. By studying both the DC apparent resistivity curves and their polar plots, we can resolve the anisotropy of the ocean bottom. Forward modeling of a high-resistivity thin layer in an anisotropic half-space demonstrates that the marine DC resistivity method in shallow water is very sensitive to the resistive reservoir but is not influenced by airwaves. As such, it is very suitable for oil and gas exploration in shallow-water areas; to date, however, most modeling algorithms for studying marine DC resistivity are based on isotropic models. In this paper, we investigate one-dimensional anisotropic forward modeling for the marine DC resistivity method, prove the algorithm to have high accuracy, and thus provide a theoretical basis for 2D and 3D forward modeling.
Method and apparatus for generating radiation utilizing DC to AC conversion with a conductive front
Dawson, John M.; Mori, Warren B.; Lai, Chih-Hsiang; Katsouleas, Thomas C.
1998-01-01
Method and apparatus for generating radiation of high power, variable duration and broad tunability over several orders of magnitude from a laser-ionized gas-filled capacitor array. The method and apparatus convert a DC electric field pattern into a coherent electromagnetic wave train when a relativistic ionization front passes between the capacitor plates. The frequency and duration of the radiation is controlled by the gas pressure and capacitor spacing.
Method and apparatus for generating radiation utilizing DC to AC conversion with a conductive front
Dawson, J.M.; Mori, W.B.; Lai, C.H.; Katsouleas, T.C.
1998-07-14
Method and apparatus are disclosed for generating radiation of high power, variable duration and broad tunability over several orders of magnitude from a laser-ionized gas-filled capacitor array. The method and apparatus convert a DC electric field pattern into a coherent electromagnetic wave train when a relativistic ionization front passes between the capacitor plates. The frequency and duration of the radiation is controlled by the gas pressure and capacitor spacing. 4 figs.
Probability methods applied to electric power systems
Not Available
1989-11-01
The roots of understanding probabilistic phenomena go back to antiquity. We have been willing to bet our money on the roll of dice or similar events for centuries. Yet, when it comes to betting our lives or livelihood, we have been slow to adopt probabilistic methods to describe uncertainty. As a matter of fact, we are loath to admit a probability of failure when it comes to such structures as bridges, buildings or airplanes. Electric utility engineers the world over realize that the reliability of structures and systems can be improved and money can be saved if we use a more enlightened approach to uncertainty. The technology and analytical power now being made available to the working engineer make that possible. It is for this reason that the International Council on Probability Methods Applied to Power Systems (PMAPS) was formed in 1985. It is important that engineers have a forum for the exchange of knowledge and ideas on subjects related to describing and coping with uncertainty. As the world becomes more complex and demands increase, it is the engineer's lot to make the most efficient use possible of scarce resources. The papers contained within this document cover the design and analysis of transmission components, systems analysis and reliability assessment, testing of power components, systems operations planning and probabilistic analysis, power distribution systems, cost, and mathematical modeling. The individual papers have been individually cataloged and indexed.
Blood viscometer applying electromagnetically spinning method.
Fukunaga, Kazuyoshi; Onuki, Masaya; Ohtsuka, Yoshinori; Hirano, Taichi; Sakai, Keiji; Ohgoe, Yasuharu; Katoh, Ayako; Yaguchi, Toshiyuki; Funakubo, Akio; Fukui, Yasuhiro
2013-09-01
Viscosity is an important parameter which affects hemodynamics during extracorporeal circulation and long-term cardiac support. In this study, we have aimed to develop a novel viscometer with which we can easily measure blood viscosity by applying the electromagnetically spinning (EMS) method. In the EMS method, we can rotate an aluminum ball 2 mm in diameter indirectly in a test tube containing a 0.3 ml liquid sample, by utilizing the moment caused by the Lorentz force, and the test tube can be separated from the viscometer body. First, we calibrated the EMS viscometer by means of liquid samples with known viscosities and computational fluid dynamics. Then, when we measured the viscosity of 9.4 mPa s silicone oil in order to evaluate the performance of the EMS viscometer, the mean viscosity was found to be 9.55 ± 0.10 mPa s at available shear rates from 10 to 240 s⁻¹. Finally, we measured the viscosity of bovine blood. We prepared four blood samples whose hematocrit levels were adjusted to 23, 45, 50, and 70% and a plasma sample without hemocyte components. As a result, the measured blood viscosities obeyed Casson's equation. We found that the viscosity was approximately constant in Newtonian silicone oil, whereas the viscosity decreased with increasing shear rate in non-Newtonian bovine blood. These results suggest that the EMS viscometer will be useful for measuring blood viscosity at the clinical site.
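The Casson check mentioned above is a linear fit in square-root coordinates: the model sqrt(τ) = sqrt(τ₀) + sqrt(η∞·γ̇) predicts that sqrt(stress) is linear in sqrt(shear rate). The sketch below fits synthetic, blood-like data; the parameter values are illustrative, not the paper's measurements.

```python
# Casson-model fit: sqrt(tau) = sqrt(tau0) + sqrt(eta_inf * gamma_dot).
# A least-squares line of sqrt(stress) vs sqrt(shear rate) recovers the
# yield stress tau0 and asymptotic viscosity eta_inf. Data are synthetic.
import math

def casson_fit(shear_rates, stresses):
    xs = [math.sqrt(g) for g in shear_rates]
    ys = [math.sqrt(t) for t in stresses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept ** 2, slope ** 2     # tau0, eta_inf

tau0_true, eta_true = 5e-3, 3.5e-3        # Pa, Pa*s (illustrative)
rates = [10, 30, 60, 120, 240]            # s^-1, as in the abstract's range
stresses = [(math.sqrt(tau0_true) + math.sqrt(eta_true * g)) ** 2
            for g in rates]
tau0, eta_inf = casson_fit(rates, stresses)
print(round(tau0, 4), round(eta_inf, 4))
# Apparent viscosity tau/gamma_dot falls with shear rate (shear thinning).
print(round(stresses[0] / rates[0], 5), round(stresses[-1] / rates[-1], 5))
```
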
Clinical practice is not applied scientific method.
Cox, K
1995-08-01
Practice is often described as applied science, but real life is far too complex and interactive to be handled by analytical scientific methods. The limitations of the usefulness of scientific method in clinical practice result from several factors. (1) The complexity of the large number of ill-defined variables at many levels of the problem: scientific method focuses on one variable at a time across a hundred identical animals to extract a single, generalizable 'proof' or piece of 'truth', whereas clinical practice deals with a hundred variables at one time within one animal, from among a clientele of non-identical animals, in order to optimize a mix of outcomes intended to satisfy that particular animal's current needs and desires. (2) Interdependence among the variables: most factors in the illness, the disease, the patient and the setting are interdependent and cannot be sufficiently isolated to allow their separate study; practice, as a human transaction involving at least two people, is too complex to be analysed one factor at a time when the interaction stimulates unpredictable responses. (3) Ambiguous data: words have many usages, and people not only assign different interpretations to the same words, they assign different 'meanings', especially according to the threat or hope they may imply; the perceptual data gleaned from physical examination may be difficult to specify exactly or to confirm objectively, and the accuracy and precision of investigational data and their reporting can be low, and are frequently unknown. (4) Differing goals between science and practice: science strives for exact points of propositional knowledge, verifiable by logical argument using objective data and repetition of the experiment. (ABSTRACT TRUNCATED AT 250 WORDS)
Computational methods applied to wind tunnel optimization
NASA Astrophysics Data System (ADS)
Lindsay, David
methods, coordinate transformation theorems and techniques including the Method of Jacobians, and a derivation of the fluid flow fundamentals required for the model. It applies the methods to study the effect of cross-section and fillet variation, and to obtain a sample design of a high-uniformity nozzle.
Wang, Panbao; Lu, Xiaonan; Yang, Xu; Wang, Wei; Xu, Dianguo
2016-09-01
This paper proposes an improved distributed secondary control scheme for dc microgrids (MGs), aiming at overcoming the drawbacks of the conventional droop control method. The proposed secondary control scheme can remove the dc voltage deviation and improve the current-sharing accuracy by using voltage-shifting and slope-adjusting approaches simultaneously. Meanwhile, the average value of the droop coefficients is calculated and then controlled by an additional controller included in the distributed secondary control layer to ensure that each droop coefficient converges to a reasonable value. Hence, by adjusting the droop coefficient, each participating converter has equal output impedance, and accurate proportional load current sharing can be achieved with different line resistances. Furthermore, the current-sharing performance in steady and transient states is enhanced by using the proposed method. The effectiveness of the proposed method is verified by detailed experimental tests based on a 3 × 1 kW prototype with three interface converters.
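The interaction of the two correction terms can be sketched for a two-converter dc microgrid: primary droop alone leaves a bus-voltage sag and unequal sharing when line resistances differ; a secondary layer that shifts the voltage reference and adjusts the droop slopes removes both. Gains, resistances, and the two-converter reduction are illustrative assumptions, not the paper's 3 × 1 kW setup.

```python
# Droop + secondary control sketch for a two-converter dc microgrid.
# Primary droop: v_i = vref_i - rd_i * i_i, through line resistance to a
# common resistive load. Secondary layer: shift vref (voltage restoration)
# and adjust droop slopes (sharing correction). Values are illustrative.

V_NOM, R_LOAD = 48.0, 4.0
r_line = [0.1, 0.4]                 # unequal line resistances, ohms

def solve_bus(vref, rd):
    """Steady-state bus voltage and converter currents for given settings."""
    g = [1.0 / (rd[i] + r_line[i]) for i in range(2)]
    v_bus = (sum(vref[i] * g[i] for i in range(2))
             / (1.0 / R_LOAD + sum(g)))
    i_out = [(vref[i] - v_bus) * g[i] for i in range(2)]
    return v_bus, i_out

vref = [V_NOM, V_NOM]
rd = [0.5, 0.5]
for _ in range(400):                # secondary control iterations
    v_bus, i_out = solve_bus(vref, rd)
    avg_i = sum(i_out) / 2
    for i in range(2):
        vref[i] += 0.05 * (V_NOM - v_bus)      # voltage-shifting term
        rd[i] += 0.02 * (i_out[i] - avg_i)     # slope-adjusting term
        rd[i] = max(rd[i], 0.05)
v_bus, i_out = solve_bus(vref, rd)
print(round(v_bus, 3), [round(x, 3) for x in i_out])
```

At convergence the bus voltage is restored to nominal and the converter with the smaller line resistance has acquired the larger droop slope, equalizing the effective output impedances exactly as the abstract describes.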
Applying the Scientific Method of Cybersecurity Research
Tardiff, Mark F.; Bonheyo, George T.; Cort, Katherine A.; Edgar, Thomas W.; Hess, Nancy J.; Hutton, William J.; Miller, Erin A.; Nowak, Kathleen E.; Oehmen, Christopher S.; Purvine, Emilie AH; Schenter, Gregory K.; Whitney, Paul D.
2016-09-15
The cyber environment has rapidly evolved from a curiosity to an essential component of the contemporary world. As the cyber environment has expanded and become more complex, so have the nature of adversaries and styles of attacks. Today, cyber incidents are an expected part of life. As a result, cybersecurity research emerged to address adversarial attacks interfering with or preventing normal cyber activities. Historical response to cybersecurity attacks is heavily skewed to tactical responses with an emphasis on rapid recovery. While threat mitigation is important and can be time critical, a knowledge gap exists with respect to developing the science of cybersecurity. Such a science will enable the development and testing of theories that lead to understanding the broad sweep of cyber threats and the ability to assess trade-offs in sustaining network missions while mitigating attacks. The Asymmetric Resilient Cybersecurity Initiative at Pacific Northwest National Laboratory is a multi-year, multi-million dollar investment to develop approaches for shifting the advantage to the defender and sustaining the operability of systems under attack. The initiative established a Science Council to focus attention on the research process for cybersecurity. The Council shares science practices, critiques research plans, and aids in documenting and reporting reproducible research results. The Council members represent ecology, economics, statistics, physics, computational chemistry, microbiology and genetics, and geochemistry. This paper reports the initial work of the Science Council to implement the scientific method in cybersecurity research. The second section describes the scientific method. The third section in this paper discusses scientific practices for cybersecurity research. Section four describes initial impacts of applying the science practices to cybersecurity research.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-05
... Corporation Model DC-9-14, DC-9-15, and DC-9-15F Airplanes; and Model DC-9-20, DC-9-30, DC-9-40, and DC-9-50... airworthiness directive (AD) that applies to certain Model DC-9-14 and DC-9-15 airplanes; and Model DC-9-20, DC-9-30, DC-9-40, and DC-9-50 series airplanes. The existing AD currently...
Method of measuring the dc electric field and other tokamak parameters
Fisch, Nathaniel J.; Kirtz, Arnold H.
1992-01-01
A method including externally imposing an impulsive momentum-space flux to perturb hot tokamak electrons thereby producing a transient synchrotron radiation signal, in frequency-time space, and the inference, using very fast algorithms, of plasma parameters including the effective ion charge state Z_eff, the direction of the magnetic field, and the position and width in velocity space of the impulsive momentum-space flux, and, in particular, the dc toroidal electric field.
Applied Mathematical Methods in Theoretical Physics
NASA Astrophysics Data System (ADS)
Masujima, Michio
2005-04-01
All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: The first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.
Zhao, Kai; Peng, Ran; Li, Dongqing
2016-12-07
A novel DC-dielectrophoresis (DEP) method employing a pressure-driven flow for the continuous separation of micro/nano-particles is presented in this paper. To generate the DEP force, a small voltage difference is applied to produce a non-uniformity of the electric field across a microchannel via a larger orifice of several hundred microns on one side of the channel wall and a smaller orifice of several hundred nanometers on the opposite channel wall. The particles experience a DEP force when they move with the flow through the vicinity of the small orifice, where the strongest electrical field gradient exists. Experiments were conducted to demonstrate the separation of 1 μm and 3 μm polystyrene particles by size by adjusting the applied electrical potentials. In order to separate smaller nanoparticles, the electrical conductivity of the suspending solution is adjusted so that the polystyrene nanoparticles of a given size experience positive DEP while the polystyrene nanoparticles of another size experience negative DEP. Using this method, the separation of 51 nm and 140 nm nanoparticles and the separation of 140 nm and 500 nm nanoparticles were demonstrated. In comparison with the microfluidic DC-DEP methods reported in the literature, which utilize hurdles or obstacles to induce the non-uniformity of an electric field, this method uses a pair of asymmetrical orifices on the channel side walls to generate a strong electrical field gradient, and offers advantages such as the capability of separating nanoparticles and the use of locally applied lower electrical voltages, which minimizes the Joule heating effect.
[Method and apparatus for applying metal cladding
Bosna, A.A.
1989-09-27
This progress report contains information related to the development of durable corrosion resistant coating materials for the hulls of ships and offshore platforms. Also contained are the details of an apparatus used to apply this material. (JEF)
Study of defuzzification methods of fuzzy logic controller for speed control of a DC motor
Rao, D.H.; Saraf, S.S.
1995-12-31
A typical Fuzzy Logic Controller (FLC) has the following components: fuzzification, knowledge base, decision making and defuzzification. Various defuzzification techniques have been proposed in the literature. The efficacy of a FLC depends very much on the defuzzification process. This is so because the overall performance of the system under control is determined by the controlling signal (the defuzzified output of the FLC) the system receives. The aim of this paper is to evaluate qualitatively the performance of the different defuzzification techniques as applied to speed control of a DC motor.
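The contrast between two widely used defuzzification rules can be sketched directly. The aggregated membership function below is illustrative, not one from the paper; the point is that different rules generally yield different crisp control signals from the same fuzzy output.

```python
# Two common defuzzification rules applied to the same aggregated fuzzy
# output; the membership values are illustrative, not from the paper.
universe = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]               # candidate outputs
mu       = [0.0, 0.2, 0.5, 1.0, 1.0, 0.6, 0.3, 0.1, 0.0, 0.0, 0.0]

def centroid(xs, mus):
    # Center of area: membership-weighted average of the universe.
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)

def mean_of_maximum(xs, mus):
    # Average of the points where the membership peaks.
    peak = max(mus)
    hits = [x for x, m in zip(xs, mus) if m == peak]
    return sum(hits) / len(hits)

print(centroid(universe, mu))         # ≈ 3.70
print(mean_of_maximum(universe, mu))  # → 3.5
```

Because the asymmetric tail pulls the center of area away from the peak, the two rules disagree here, which is exactly the kind of difference such a qualitative evaluation would examine in the motor's speed response.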
Bootstrapping Methods Applied for Simulating Laboratory Works
ERIC Educational Resources Information Center
Prodan, Augustin; Campean, Remus
2005-01-01
Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
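A minimal sketch of the kind of bootstrapping such e-tools simulate, using only the standard library and an illustrative sample: resample with replacement many times, then read a percentile confidence interval for the mean off the resampled statistics.

```python
import random

# Percentile bootstrap confidence interval for a sample mean.
# The data values and seed are illustrative.
random.seed(42)
sample = [5.1, 4.8, 6.2, 5.5, 4.9, 5.8, 6.0, 5.2, 4.7, 5.6]

def bootstrap_mean_ci(data, n_boot=10000, alpha=0.05):
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]  # draw with replacement
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_mean_ci(sample)
print(lo, hi)  # interval bracketing the sample mean
```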
Statistical classification methods applied to seismic discrimination
Ryan, F.M.; Anderson, D.N.; Anderson, K.K.; Hagedorn, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.
1996-06-11
To verify compliance with a Comprehensive Test Ban Treaty (CTBT), low energy seismic activity must be detected and discriminated. Monitoring small-scale activity will require regional (within ~2000 km) monitoring capabilities. This report provides background information on various statistical classification methods and discusses the relevance of each method in the CTBT seismic discrimination setting. Criteria for classification method selection are explained and examples are given to illustrate several key issues. This report describes in more detail the issues and analyses that were initially outlined in a poster presentation at a recent American Geophysical Union (AGU) meeting. Section 2 of this report describes both the CTBT seismic discrimination setting and the general statistical classification approach to this setting. Seismic data examples illustrate the importance of synergistically using multivariate data as well as the difficulties due to missing observations. Classification method selection criteria are presented and discussed in Section 3. These criteria are grouped into the broad classes of simplicity, robustness, applicability, and performance. Section 4 follows with a description of several statistical classification methods: linear discriminant analysis, quadratic discriminant analysis, variably regularized discriminant analysis, flexible discriminant analysis, logistic discriminant analysis, K-th Nearest Neighbor discrimination, kernel discrimination, and classification and regression tree discrimination. The advantages and disadvantages of these methods are summarized in Section 5.
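Of the methods listed, K-th Nearest Neighbor discrimination is the simplest to sketch. The two-feature "events" below are synthetic stand-ins (imagine two seismic discriminants per event), not data from the report:

```python
import math
from collections import Counter

# Illustrative K-th nearest neighbor discrimination on toy 2-D features;
# the training events and labels are synthetic.
train = [((0.2, 0.3), "earthquake"), ((0.3, 0.2), "earthquake"),
         ((0.1, 0.4), "earthquake"), ((0.8, 0.9), "explosion"),
         ((0.9, 0.8), "explosion"), ((0.7, 0.7), "explosion")]

def knn_classify(x, train, k=3):
    # Sort training events by distance to x, then vote among the k nearest.
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_classify((0.25, 0.25), train))  # → earthquake
print(knn_classify((0.85, 0.80), train))  # → explosion
```

Unlike the discriminant-analysis methods, k-NN makes no distributional assumption, which is part of the robustness-versus-performance trade-off the report's selection criteria address.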
Energy adjustment methods applied to alcohol analyses.
Johansen, Ditte; Andersen, Per K; Overvad, Kim; Jensen, Gorm; Schnohr, Peter; Sørensen, Thorkild I A; Grønbaek, Morten
2003-01-01
When alcohol consumption is related to outcome, associations between alcohol type and health outcomes may occur simply because of the ethanol in the beverage type. When one analyzes the consequences of consumption of beer, wine, and spirits, the total alcohol intake must therefore be taken into account. However, owing to the linear dependency between total alcohol intake and the alcohol content of each beverage type, the effects cannot be separated from each other or from the effect of ethanol. In nutritional epidemiology, similar problems regarding intake of macronutrients and total energy intake have been addressed, and four methods have been proposed to solve the problem: energy partition, standard, density, and residual. The aim of this study was to evaluate the usefulness of the energy adjustment methods in alcohol analyses by using coronary heart disease as an example. Data obtained from the Copenhagen City Heart Study were used. The standard and energy partition methods yielded similar results for continuous, and almost similar results for categorical, alcohol variables. The results from the density method differed, but nevertheless were concordant with these. Beer and wine drinkers, in comparison with findings for nondrinkers, had lower risk of coronary heart disease. Except for the case of men drinking beer, the effect seemed to be associated with drinking one drink per week. The standard method derives the influence of substituting one alcohol type for another at constant total alcohol intake, and complements the estimates of adding consumption of a particular alcohol type to the total intake. For most diseases, the effect of ethanol predominates over that of substances in the beverage type, which makes the density method less relevant in alcohol analyses.
Applying Mixed Methods Techniques in Strategic Planning
ERIC Educational Resources Information Center
Voorhees, Richard A.
2008-01-01
In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…
An essay on method in applied psychoanalysis.
Baudry, F
1984-10-01
The author attempts to evaluate critically the application of psychoanalysis to literature by examining problems of method and the assumptions psychoanalysts unwittingly make about texts they are about to interpret. The special advantages of psychoanalysis over other interpretive systems are discussed, and several examples of the possible use of psychoanalysis in the study of literary texts are presented.
Applying Human Computation Methods to Information Science
ERIC Educational Resources Information Center
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
[The diagnostic methods applied in mycology].
Kurnatowska, Alicja; Kurnatowski, Piotr
2008-01-01
The systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but remains a problem because there is lack of sensitive tests to aid in the diagnosis of systemic mycoses on the one hand, and on the other the patients only present unspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. The successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigations and on the selection of appropriate microbiological test procedures. So these problems (collection of specimens, direct techniques, staining methods, cultures on different media and non-culture-based methods) are presented in article.
Applying New Methods to Diagnose Coral Diseases
Kellogg, Christina A.; Zawada, David G.
2009-01-01
Coral disease, one of the major causes of reef degradation and coral death, has been increasing worldwide since the 1970s, particularly in the Caribbean. Despite increased scientific study, simple questions about the extent of disease outbreaks and the causative agents remain unanswered. A component of the U.S. Geological Survey Coral Reef Ecosystem STudies (USGS CREST) project is focused on developing and using new methods to approach the complex problem of coral disease.
Metal alloy coatings and methods for applying
Merz, Martin D.; Knoll, Robert W.
1991-01-01
A method of coating a substrate comprises plasma spraying a prealloyed feed powder onto a substrate, where the prealloyed feed powder comprises a significant amount of an alloy of stainless steel and at least one refractory element selected from the group consisting of titanium, zirconium, hafnium, niobium, tantalum, molybdenum, and tungsten. The plasma spraying of such a feed powder is conducted in an oxygen containing atmosphere and forms an adherent, corrosion resistant, and substantially homogenous metallic refractory alloy coating on the substrate.
METHOD OF APPLYING COPPER COATINGS TO URANIUM
Gray, A.G.
1959-07-14
A method is presented for protecting metallic uranium, which comprises anodic etching of the uranium in an aqueous phosphoric acid solution containing chloride ions, cleaning the etched uranium in aqueous nitric acid solution, promptly electro-plating the cleaned uranium in a copper electro-plating bath, and then electro-plating thereupon lead, tin, zinc, cadmium, chromium or nickel from an aqueous electro-plating bath.
A hybrid method for inversion of 3D DC resistivity logging measurements.
Gajda-Zagórska, Ewa; Schaefer, Robert; Smołka, Maciej; Paszyński, Maciej; Pardo, David
This paper focuses on the application of the hp hierarchic genetic strategy (hp-HGS) to the solution of a challenging problem, the inversion of 3D direct current (DC) resistivity logging measurements. The problem under consideration has been formulated as a global optimization problem, for which the objective function (the misfit between computed and reference data) exhibits multiple minima. In this paper, we consider an extension of the hp-HGS strategy: we couple the hp-HGS algorithm with a gradient-based optimization method for the local search. Forward simulations are performed with a self-adaptive hp finite element method, hp-FEM. The computational cost of misfit evaluation by hp-FEM depends strongly on the assumed accuracy. This accuracy is adapted to the tree of populations generated by the hp-HGS algorithm, which makes the global phase significantly cheaper. Moreover, the tree structure of demes, together with the branch reduction and conditional sprouting mechanisms, reduces the number of expensive local searches to roughly the number of minima to be recognized. The common (direct and inverse) accuracy control, crucial for the hp-HGS efficiency, has been motivated by precise mathematical considerations. Numerical results demonstrate the suitability of the proposed method for the inversion of 3D DC resistivity logging measurements.
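The global-plus-local coupling can be caricatured with a much simpler stand-in: a random population supplies starting points (standing in for the genetic phase) and gradient descent refines the best of them. The objective function and every name below are illustrative; this is not the hp-HGS misfit or its hp-FEM forward solver.

```python
import random

# Toy hybrid global/local search: random "population" for the global phase,
# numerical gradient descent for the local phase. Purely illustrative.
def misfit(x):                          # multimodal 1-D objective
    return (x**2 - 1.0)**2 + 0.1 * x    # two minima, near x = -1 and x = +1

def grad(x, h=1e-6):
    # Central-difference gradient, standing in for an adjoint/analytic one.
    return (misfit(x + h) - misfit(x - h)) / (2 * h)

def local_descent(x, lr=0.01, iters=2000):
    for _ in range(iters):
        x -= lr * grad(x)
    return x

random.seed(0)
population = [random.uniform(-2, 2) for _ in range(50)]   # global phase
seeds = sorted(population, key=misfit)[:5]                # best individuals
minima = sorted(local_descent(x) for x in seeds)          # local phase
print(minima[0], minima[-1])   # refined local minima (near ±1)
```

The hp-HGS refinement in the paper is far more structured (trees of demes, adaptive forward-solver accuracy), but the division of labor, cheap global exploration feeding a small number of expensive local searches, is the same.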
Point of collapse and continuation methods for large ac/dc systems
Canizares, C.A.; Alvarado, F.L.
1993-02-01
This paper describes the implementation of both Point of Collapse (PoC) methods and continuation methods for the computation of voltage collapse points (saddle-node bifurcations) in large ac/dc systems. A comparison of the performance of these methods is presented for real systems of up to 2,158 buses. The paper discusses computational details of the implementation of the PoC and continuation methods, and the unique challenges encountered due to the presence of high voltage direct current (HVDC) transmission, area interchange power control regulating transformers, and voltage and reactive power limits. The characteristics of a robust PoC power flow program are presented, and its application to detection and solution of voltage stability problems is demonstrated.
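The saddle-node bifurcation such programs locate can be seen in miniature on a two-bus system, a standard textbook reduction rather than the paper's 2,158-bus model. For a unity-power-factor load P fed through reactance X from a stiff source E, the high-voltage power-flow solution is V² = E²/2 + sqrt(E⁴/4 − (XP)²), and the nose (point of collapse) sits where the discriminant vanishes, at P_max = E²/(2X). A crude continuation steps the load toward that nose:

```python
import math

# Two-bus toy system: continuation of the load P toward the saddle-node.
# E, X and the stepping scheme are illustrative, not from the paper.
E, X = 1.0, 0.25

def high_voltage_solution(p):
    disc = E**4 / 4 - (X * p) ** 2
    if disc < 0:
        return None          # past the nose: no power-flow solution exists
    return math.sqrt(E**2 / 2 + math.sqrt(disc))

p, step = 0.0, 0.1
while step > 1e-9:
    if high_voltage_solution(p + step) is None:
        step /= 2            # power flow diverged: halve the step
    else:
        p += step            # power flow converged: accept the step

print(p)                     # ≈ E**2 / (2*X) = 2.0, the collapse point
```

Real continuation methods parameterize around the nose to stay well-conditioned, and PoC methods solve for the bifurcation point directly; the step-halving above only mimics the outermost logic.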
ALLOY COATINGS AND METHOD OF APPLYING
Eubank, L.D.; Boller, E.R.
1958-08-26
A method for providing uranium articles with a protective coating by a single dip coating process is presented. The uranium article is dipped into a molten zinc bath containing a small percentage of aluminum. The resultant product is a uranium article covered with a thin undercoat consisting of a uranium-aluminum alloy with a small amount of zinc, and an outer layer consisting of zinc and aluminum. The article may be used as is, or aluminum sheathing may then be bonded to the aluminum zinc outer layer.
METHOD OF APPLYING NICKEL COATINGS ON URANIUM
Gray, A.G.
1959-07-14
A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.
Scanning methods applied to bitemark analysis
NASA Astrophysics Data System (ADS)
Bush, Peter J.; Bush, Mary A.
2010-06-01
The 2009 National Academy of Sciences report on forensics focused criticism on pattern evidence subdisciplines in which statements of unique identity are utilized. One principle of bitemark analysis is that the human dentition is unique to the extent that a perpetrator may be identified based on dental traits in a bitemark. Optical and electron scanning methods were used to measure dental minutia and to investigate replication of detail in human skin. Results indicated that being a visco-elastic substrate, skin effectively reduces the resolution of measurement of dental detail. Conclusions indicate caution in individualization statements.
Versatile Formal Methods Applied to Quantum Information.
Witzel, Wayne; Rudinger, Kenneth Michael; Sarovar, Mohan
2015-11-01
Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach, which allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
APPLYING NEW METHODS TO RESEARCH REACTOR ANALYSIS.
Diamond, D.J.; Cheng, L.; Hanson, A.; Xu, J.; Carew, J.F.
2004-02-05
Detailed reactor physics and safety analyses are being performed for the 20 MW D2O-moderated research reactor at the National Institute of Standards and Technology (NIST). The analyses employ state-of-the-art calculational methods and will contribute to an update to the Final Safety Analysis Report (FSAR). Three-dimensional MCNP Monte Carlo neutron and photon transport calculations are performed to determine power and reactivity parameters, including feedback coefficients and control element worths. The core depletion and determination of the fuel compositions are performed with MONTEBURNS to model the reactor at the beginning, middle, and end-of-cycle. The time-dependent analysis of the primary loop is determined with a RELAP5 transient analysis model that includes the pump, heat exchanger, fuel element geometry, and flow channels. A statistical analysis used to assure protection from critical heat flux (CHF) is performed using a Monte Carlo simulation of the uncertainties contributing to the CHF calculation. The power distributions used to determine the local fuel conditions and margin to CHF are determined with MCNP. Evaluations have been performed for the following accidents: (1) the control rod withdrawal startup accident, (2) the maximum reactivity insertion accident, (3) loss-of-flow resulting from loss of electrical power, (4) loss-of-flow resulting from a primary pump seizure, (5) loss-of-flow resulting from inadvertent throttling of a flow control valve, (6) loss-of-flow resulting from failure of both shutdown cooling pumps and (7) misloading of a fuel element. These analyses are significantly more rigorous than those performed previously. They have provided insights into reactor behavior and additional assurance that previous analyses were conservative and the reactor was being operated safely.
NASA Astrophysics Data System (ADS)
Bradley, A. M.
2013-12-01
My poster will describe dc3dm, a free open source software (FOSS) package that efficiently forms and applies the linear operator relating slip and traction components on a nonuniformly discretized rectangular planar fault in a homogeneous elastic (HE) half space. This linear operator implements what is called the displacement discontinuity method (DDM). The key properties of dc3dm are: 1. The mesh can be nonuniform. 2. Work and memory scale roughly linearly in the number of elements (rather than quadratically). 3. The order of accuracy of my method on a nonuniform mesh is the same as that of the standard method on a uniform mesh. Property 2 is achieved using my FOSS package hmmvp [AGU 2012]. A nonuniform mesh (property 1) is natural for some problems. For example, in a rate-state friction simulation, nucleation length, and so required element size, scales reciprocally with effective normal stress. Property 3 assures that if a nonuniform mesh is more efficient than a uniform mesh (in the sense of accuracy per element) at one level of mesh refinement, it will remain so at all further mesh refinements. I use the routine DC3D of Y. Okada, which calculates the stress tensor at a receiver resulting from a rectangular uniform dislocation source in an HE half space. On a uniform mesh, straightforward application of this Green's function (GF) yields a DDM I refer to as DDMu. On a nonuniform mesh, this same procedure leads to artifacts that degrade the order of accuracy of the DDM. I have developed a method I call IGA that implements the DDM using this GF for a nonuniformly discretized mesh having certain properties. Importantly, IGA's order of accuracy on a nonuniform mesh is the same as DDMu's on a uniform one. Boundary conditions can be periodic in the surface-parallel direction (in both directions if the GF is for a whole space), velocity on any side, and free surface. The mesh must have the following main property: each uniquely sized element must tile each element
Improved efficient bounding method for DC contingency analysis using reciprocity properties
Carpentier, J.L.; Di Bono, P.J.; Tournebise, P.J.
1994-02-01
The efficient bounding method for DC contingency analysis is improved using reciprocity properties. Knowing the consequences of the outage of a branch, these properties provide the consequences on that branch of various kinds of outages. This is used in order to reduce computation times and to get rid of some difficulties, such as those occurring when a branch flow is close to its limit before the outage. Compensation, sparse vector, sparse inverse and bounding techniques are also used. A program has been implemented for single branch outages and tested on an actual 650-bus French EHV network. Computation times are 60% of those of the Efficient Bounding method. The relevant algorithm is described in detail in the first part of this paper. In the second part, reciprocity properties and bounding formulas are extended to multiple branch outages and to multiple generator or load outages. An algorithm is proposed in order to handle all these cases simultaneously.
NASA Astrophysics Data System (ADS)
Riba, Jordi-Roger
2015-09-01
This paper analyzes the skin and proximity effects in different conductive nonmagnetic straight conductor configurations subjected to applied alternating currents and voltages. These effects have important consequences, including a rise of the ac resistance, which in turn increases power loss, thus limiting the rating for the conductor. Alternating current (ac) resistance is important in power conductors and bus bars for line frequency applications, as well as in smaller conductors for high frequency applications. Despite the importance of this topic, it is not usually analyzed in detail in undergraduate and even in graduate studies. To address this, this paper compares the results provided by available exact formulas for simple geometries with those obtained by means of two-dimensional finite element method (FEM) simulations and experimental results. The paper also shows that FEM results are very accurate and more general than those provided by the formulas, since FEM models can be applied in a wide range of electrical frequencies and configurations.
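For the simplest geometry the paper's exact formulas reduce to familiar closed forms. The sketch below computes the skin depth for copper and a shell-model estimate of the ac/dc resistance ratio of a round conductor (all current assumed to flow in a surface layer one skin depth thick). The shell model is a rough approximation, reasonable only when the radius spans several skin depths; the paper's FEM results are what one would trust across the full frequency range.

```python
import math

# Skin depth and a shell-model ac/dc resistance ratio for a round copper
# conductor. The shell model is a rough approximation, not the exact
# Bessel-function solution.
RHO_CU = 1.724e-8      # copper resistivity (ohm·m) at 20 °C
MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)

def skin_depth(f, rho=RHO_CU, mu=MU0):
    # delta = sqrt(rho / (pi * f * mu))
    return math.sqrt(rho / (math.pi * f * mu))

def rac_over_rdc(radius, f):
    d = skin_depth(f)
    if d >= radius:
        return 1.0                                   # low frequency: ratio ≈ 1
    shell = math.pi * (radius**2 - (radius - d)**2)  # conducting annulus area
    full = math.pi * radius**2                       # full cross-section area
    return full / shell

print(skin_depth(50.0))              # ≈ 9.3 mm at 50 Hz
print(rac_over_rdc(0.01, 10_000.0))  # ratio grows with frequency
```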
Reflections on Mixing Methods in Applied Linguistics Research
ERIC Educational Resources Information Center
Hashemi, Mohammad R.
2012-01-01
This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…
Control method for peak power delivery with limited DC-bus voltage
Edwards, John; Xu, Longya; Bhargava, Brij B.
2006-09-05
A method for driving a neutral-point-clamped multi-level voltage source inverter supplying a synchronous motor is provided. A DC current is received at a neutral-point-clamped multi-level voltage source inverter. The inverter has first, second, and third output nodes and a plurality of switches. A desired speed of a synchronous motor connected to the inverter by the first, second, and third nodes is received by the inverter. The synchronous motor has a rotor, and the speed of the motor is defined by the rotational rate of the rotor. A position of the rotor is sensed, current flowing to the motor out of at least two of the first, second, and third output nodes is sensed, and predetermined switches are automatically activated by the inverter responsive to the sensed rotor position, the sensed current, and the desired speed.
Characterization of undoped and Co doped ZnO nanoparticles synthesized by DC thermal plasma method
NASA Astrophysics Data System (ADS)
Nirmala, M.; Anukaliani, A.
2011-02-01
ZnO nanopowders doped with 5 and 10 at.% cobalt were synthesized and their antibacterial activity was studied. Cobalt-doped ZnO powders were prepared using the dc thermal plasma method. Crystal structure and grain size of the particles were characterized by X-ray diffractometry, and optical properties were studied using UV-vis spectroscopy. The particle size and morphology were observed by SEM and HRTEM, revealing rod-like morphology. The antibacterial activity of undoped ZnO and cobalt-doped ZnO nanoparticles against a Gram-negative bacterium, Escherichia coli, and a Gram-positive bacterium, Bacillus atrophaeus, was investigated. Both undoped and cobalt-doped ZnO exhibited antibacterial activity against both bacteria, but the cobalt-doped ZnO was considerably more effective.
Photovoltaic system with improved DC connections and method of making same
Cioffi, Philip Michael; Todorovic, Maja Harfman; Herzog, Michael Scott; Korman, Charles Steven; Doherty, Donald M.; Johnson, Neil Anthony
2017-06-20
A micro-inverter assembly includes a housing having an opening formed in a bottom surface thereof, and a direct current (DC)-to-alternating current (AC) micro-inverter disposed within the housing at a position adjacent to the opening. The micro-inverter assembly further includes a micro-inverter DC connector electrically coupled to the DC-to-AC micro-inverter and positioned within the opening of the housing, the micro-inverter DC connector having a plurality of exposed electrical contacts.
PLURAL METALLIC COATINGS ON URANIUM AND METHOD OF APPLYING SAME
Gray, A.G.
1958-09-16
A method is described of applying protective coatings to uranium articles. It consists in applying chromium plating to such uranium articles by electrolysis in a chromic acid bath and subsequently applying, to this chromium plating, an aluminum-containing alloy. This aluminum-containing alloy (for example, one of aluminum and silicon) may then be used as a bonding alloy between the chromized surface and an aluminum can.
DC/DC Converter Stability Testing Study
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2008-01-01
This report presents study results on hybrid DC/DC converter stability testing methods. An input impedance measurement method and a gain/phase margin measurement method were evaluated and found effective for detecting front-end oscillation and feedback-loop oscillation. In particular, the power levels in certain channels of the converter input noise were found to correlate strongly with the gain/phase margins. Spectral analysis of converter input noise is therefore a potential new method for evaluating the stability of all types of DC/DC converters.
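The gain/phase margin measurement the report evaluates can be mimicked numerically: sweep a loop transfer function over frequency, locate the gain crossover, and read the phase margin there. The loop gain below is purely illustrative (an integrator plus one pole with invented corner frequencies), not the hybrid converter's actual loop:

```python
import numpy as np

# Illustrative (hypothetical) loop gain: T(s) = K / ((s/w0) * (1 + s/wp)),
# an integrator with a 1 kHz unity-gain corner and a 20 kHz pole.
K, w0, wp = 1.0, 2*np.pi*1e3, 2*np.pi*20e3

w = 2*np.pi * np.logspace(2, 6, 20000)   # 100 Hz .. 1 MHz sweep
s = 1j * w
T = K / ((s/w0) * (1 + s/wp))

i = np.argmin(np.abs(np.abs(T) - 1.0))   # gain-crossover index (|T| = 1)
f_c = w[i] / (2*np.pi)                   # crossover frequency (Hz)
pm = 180.0 + np.degrees(np.angle(T[i]))  # phase margin in degrees
```

For this loop the crossover sits near 1 kHz and the phase margin is a comfortable ~87°; a real converter measurement would extract the same two numbers from a network-analyzer sweep of the injected loop.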
Golovnev, Anatoly; Trimper, Steffen
2011-04-21
The analytical solution of the Poisson-Nernst-Planck equations is found in the linear regime as the response to a dc voltage. In deriving the results, a new approach is suggested that allows all initial and boundary conditions to be fulfilled and explicitly guarantees the absence of Faradaic processes. We obtain the spatiotemporal distribution of the electric field and the concentration of the charge carriers, valid over the whole time interval and for an arbitrary initial concentration of ions. Different behavior is observed in the short- and long-time regimes, and the crossover between these regimes is estimated.
Kurita, A.; Takahashi, E.; Ozawa, J.; Watanabe, M.; Okuyana, K.
1986-07-01
The effects of the laminated paper layer direction on the conductivity and the dc breakdown strength in oil-impregnated paper were experimentally clarified. It was confirmed that not only oil and paper discharges through the laminated paper layers, but also paper discharges along them triggered total flashover in oil and oil-impregnated paper composite insulation. Analyses verified the usefulness of the dc flashover voltage calculation method, based on non-linear directional field calculations and the lowest flashover voltage estimations between paper discharges along the laminated paper layers and oil and paper discharges through them.
Zolper, John C.; Sherwin, Marc E.; Baca, Albert G.
2000-01-01
A method for making compound semiconductor devices including the use of a p-type dopant is disclosed wherein the dopant is co-implanted with an n-type donor species at the time the n-channel is formed and a single anneal at moderate temperature is then performed. Also disclosed are devices manufactured using the method. In the preferred embodiment n-MESFETs and other similar field effect transistor devices are manufactured using C ions co-implanted with Si atoms in GaAs to form an n-channel. C exhibits a unique characteristic in the context of the invention in that it exhibits a low activation efficiency (typically, 50% or less) as a p-type dopant, and consequently, it acts to sharpen the Si n-channel by compensating Si donors in the region of the Si-channel tail, but does not contribute substantially to the acceptor concentration in the buried p region. As a result, the invention provides for improved field effect semiconductor and related devices with enhancement of both DC and high-frequency performance.
Building "Applied Linguistic Historiography": Rationale, Scope, and Methods
ERIC Educational Resources Information Center
Smith, Richard
2016-01-01
In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…
Applying Mixed Methods Research at the Synthesis Level: An Overview
ERIC Educational Resources Information Center
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
Qualitative Theory and Methods in Applied Linguistics Research.
ERIC Educational Resources Information Center
Davis, Kathryn A.
1995-01-01
This article reviews basic issues of theory and method in qualitative research approaches to applied linguistics research, focusing on the ways in which qualitative research can contribute to an understanding of second-language acquisition and use. (83 references) (MDM)
Design and development of DC high current sensor using Hall-Effect method
NASA Astrophysics Data System (ADS)
Dewi, Sasti Dwi Tungga; Panatarani, C.; Joni, I. Made
2016-02-01
This paper reports a newly developed high-current DC sensor based on the Hall-effect method, together with its measurement system. The Hall-effect sensor receives the magnetic field generated by a current-carrying conductor wire. An SS49E (Honeywell) linear Hall-effect sensor was employed to sense the magnetic field from the field concentrator. The voltage from the SS49E is then converted to digital form by a 10-bit analog-to-digital converter (ADC) and processed in a microcontroller, which displays the value of the electric current on an LCD. In addition, the measurement system was interfaced to a personal computer (PC) over the RS-232 communication protocol and displayed in real time in graphical form on the PC. A performance test over the range ±40 A showed a maximum relative error of 5.26%. It is concluded that the sensor and the measurement system worked properly according to the design, with acceptable accuracy.
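The ADC-to-current conversion step such a meter performs can be sketched as a two-point linear calibration. All constants below are hypothetical: a linear Hall sensor idling at mid-supply for zero current, a 10-bit ADC with a 5 V reference, and an invented amperes-per-volt gain standing in for whatever the concentrator and calibration actually yield:

```python
# Hypothetical calibration constants for a Hall-effect DC current meter.
VREF, ADC_FULL = 5.0, 1023   # 10-bit ADC spanning 0..5 V
V_ZERO = 2.5                 # sensor output at 0 A (assumed mid-supply)
A_PER_VOLT = 26.7            # gain from a two-point calibration (invented)

def counts_to_current(counts):
    """Map raw ADC counts to amperes via the linear calibration."""
    v = counts * VREF / ADC_FULL
    return (v - V_ZERO) * A_PER_VOLT

print(counts_to_current(512))   # a reading just above mid-scale: near 0 A
```

With these numbers the ±40 A range maps to roughly ±1.5 V of sensor swing around mid-supply, which is why the zero offset and gain are the two quantities worth calibrating.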
Lawler, J.S.
2001-10-29
The brushless dc motor (BDCM) has high power density and efficiency relative to other motor types. These properties make the BDCM well suited for applications in electric vehicles, provided a method can be developed for driving the motor over the 4:1 to 6:1 constant power speed range (CPSR) required by such applications. The present state of the art for constant power operation of the BDCM is conventional phase advance (CPA) [1]. In this paper, we identify key limitations of CPA. It is shown that CPA has effective control over the developed power but that the current magnitude is relatively insensitive to power output and is inversely proportional to motor inductance. If the motor inductance is low, the rms current at rated power and high speed may be several times larger than the current rating. The inductance required to maintain rms current within rating is derived analytically and is found to be large relative to that of BDCM designs using high-strength rare-earth magnets. Thus, the CPA requires a BDCM with a large equivalent inductance.
NASA Astrophysics Data System (ADS)
Karam, Pascal; Pennathur, Sumita
2016-11-01
Characterization of the electrophoretic mobility and zeta potential of micro- and nanoparticles is important for assessing properties such as stability, charge, and size. In electrophoretic techniques for such characterization, the bulk fluid motion due to the interaction between the fluid and the charged surface must be accounted for. Unlike current industrial systems, which rely on DLS and oscillating potentials to mitigate electroosmotic flow (EOF), we propose a simple alternative electrophoretic method for optically determining electrophoretic mobility using a DC electric field. Specifically, we create a system in which an adverse pressure gradient counters EOF, and we design the geometry of the channel so that the flow profile of the pressure-driven flow matches that of the EOF in large regions of the channel (i.e., where we observe particle flow). Our specific COMSOL-optimized geometry is two large cross-sectional areas adjacent to a central, high-aspect-ratio channel. We show that this effectively removes EOF from a large region of the channel and allows for the accurate optical characterization of electrophoretic particle mobility, regardless of the wall charge or particle size.
Applied AC and DC magnetic fields cause alterations in the mitotic cycle of early sea urchin embryos
Levin, M.; Ernst, S.G.
1995-09-01
This study demonstrates that exposure to 60 Hz magnetic fields (3.4-8.8 mT) and magnetic fields over the range DC-600 kHz (2.5-6.5 mT) can alter the early embryonic development of sea urchin embryos by inducing alterations in the timing of the cell cycle. Batches of fertilized eggs were exposed to the fields produced by a coil system. Samples of the continuous cultures were taken and scored for cell division. The times of both the first and second cell divisions were advanced by ELF AC fields and by static fields. The magnitude of the 60 Hz effect appears proportional to the field strength over the range tested, while the relationship to field frequency was nonlinear and complex. For certain frequencies above the ELF range, exposure resulted in a delay of the onset of mitosis. The advance of mitosis was also dependent on the duration of exposure and on the timing of exposure relative to fertilization.
Lareau, Caleb A.; White, Bill C.; Montgomery, Courtney G.; McKinney, Brett A.
2015-01-01
Recent studies have implicated the role of differential co-expression or correlation structure in gene expression data to help explain phenotypic differences. However, few attempts have been made to characterize the function of variants based on their role in regulating differential co-expression. Here, we describe a statistical methodology that identifies pairs of transcripts that display differential correlation structure conditioned on genotypes of variants that regulate co-expression. Additionally, we present a user-friendly, computationally efficient tool, dcVar, that can be applied to expression quantitative trait loci (eQTL) or RNA-Seq datasets to infer differential co-expression variants (dcVars). We apply dcVar to the HapMap3 eQTL dataset and demonstrate the utility of this methodology at uncovering novel function of variants of interest with examples from a height genome-wide association and cancer drug resistance. We provide evidence that differential correlation structure is a valuable intermediate molecular phenotype for further characterizing the function of variants identified in GWAS and related studies. PMID:26539209
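The core dcVar idea, testing whether a transcript pair's correlation differs between genotype groups, can be sketched with a Fisher z-test for the difference of two Pearson correlations. This is a standard stand-in on synthetic data; the dcVar tool's actual statistic and implementation may differ:

```python
import math, random

def pearson(x, y):
    """Pearson correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    sxy = sum((a-mx)*(b-my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a-mx)**2 for a in x))
    sy = math.sqrt(sum((b-my)**2 for b in y))
    return sxy / (sx * sy)

def fisher_z(r):
    """Fisher z-transform: atanh(r), variance ~ 1/(n-3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def dc_z(x0, y0, x1, y1):
    """z-statistic for a difference in correlation between two genotype groups."""
    z0, z1 = fisher_z(pearson(x0, y0)), fisher_z(pearson(x1, y1))
    se = math.sqrt(1/(len(x0)-3) + 1/(len(x1)-3))
    return (z0 - z1) / se

random.seed(1)
n = 200
# Genotype group 0: strongly co-expressed transcript pair; group 1: independent.
x0 = [random.gauss(0, 1) for _ in range(n)]
y0 = [a + random.gauss(0, 0.3) for a in x0]
x1 = [random.gauss(0, 1) for _ in range(n)]
y1 = [random.gauss(0, 1) for _ in range(n)]
z = dc_z(x0, y0, x1, y1)
```

A large |z| flags the variant as a candidate differential co-expression variant; in practice the scan repeats this over many transcript pairs and corrects for multiple testing.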
Effects of Reality Therapy Methods Applied in the Classroom
ERIC Educational Resources Information Center
Shearn, Donald F.; Randolph, Daniel Lee
1978-01-01
Reality therapy methods in the classroom were examined via a four-group experimental design. The groups were as follows: (a) pretested reality therapy, (b) unpretested reality therapy, (c) pretested placebo, and (d) unpretested placebo. Findings were not supportive of reality therapy methods as applied in the classroom. (Author)
The method of averages applied to the KS differential equations
NASA Technical Reports Server (NTRS)
Graf, O. F., Jr.; Mueller, A. C.; Starke, S. E.
1977-01-01
A new approach for the solution of artificial satellite trajectory problems is proposed. The basic idea is to apply an analytical solution method (the method of averages) to an appropriate formulation of the orbital mechanics equations of motion (the KS-element differential equations). The result is a set of transformed equations of motion that are more amenable to numerical solution.
Consultants' Showcase: Applying the Assessment Center Method to Selection Interviewing
ERIC Educational Resources Information Center
Dapra, Richard A.; Byham, William C.
1978-01-01
Targeted selection applies the five elements of the assessment-center method to selection interviewing of job applicants. Discusses the five reasons for the validity of the assessment-center method along with a parallel list of reasons for the validity of Targeted Selection. (EM)
Applying an analytical method to study neutron behavior for dosimetry
NASA Astrophysics Data System (ADS)
Shirazi, S. A. Mousavi
2016-12-01
In this investigation, a new dosimetry process is studied by applying an analytical method. The process is associated with human liver tissue, whose composition includes water, glycogen, and other organic compounds. In this study, the organic compounds of the liver are decomposed into their constituent elements based upon the mass percentage and density of each element. The absorbed doses are computed by an analytical method in all constituent elements of liver tissue. This analytical method is introduced through mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.
Matrix Factorization Methods Applied in Microarray Data Analysis
Kossenkov, Andrew V.
2010-01-01
Numerous methods have been applied to microarray data to group genes into clusters that show similar expression patterns. These methods assign each gene to a single group, which does not reflect the widely held view among biologists that most, if not all, genes in eukaryotes are involved in multiple biological processes and therefore will be multiply regulated. Here, we review several methods that have been developed that are capable of identifying patterns of behavior in transcriptional response and assigning genes to multiple patterns. Broadly speaking, these methods define a series of mathematical approaches to matrix factorization with different approaches to the fitting of the model to the data. We focus on these methods in contrast to traditional clustering methods applied to microarray data, which assign one gene to one cluster. PMID:20376923
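One of the matrix factorization approaches this family of work reviews, non-negative matrix factorization, naturally assigns each gene to multiple patterns through its row of weights. A generic sketch using the classic Lee-Seung multiplicative updates on synthetic data (an illustration of the technique, not the review's specific algorithms):

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss.
    Rows of W give each gene's (non-exclusive) weights across k patterns."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                            # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
# Synthetic rank-3 non-negative "expression" matrix: 30 genes x 40 samples.
V = rng.random((30, 3)) @ rng.random((3, 40))
W, H = nmf(V, 3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Unlike hard clustering, a gene with comparable weight in two rows of W is simply read as participating in both transcriptional patterns.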
Two-beam coupling gain in undoped GaAs with applied dc electric field and moving grating
NASA Technical Reports Server (NTRS)
Liu, Duncan T. H.; Cheng, Li-Jen; Chiou, Arthur E.; Yeh, Pochi
1989-01-01
Experimental results of two-beam coupling gain efficiency in an undoped, semiinsulating GaAs crystal are reported. The highest gain coefficient measured is about 4.5/cm under the condition of an applied electric field of 13 kV/cm and a grating periodicity of 20 microns. The experimental results and theoretical calculations are in reasonable agreement with each other.
Extension of the validation of AOAC Official Method 2005.06 for dc-GTX2,3: interlaboratory study.
Ben-Gigirey, Begoña; Rodríguez-Velasco, María L; Gago-Martínez, Ana
2012-01-01
AOAC Official Method(SM) 2005.06 for the determination of saxitoxin (STX)-group toxins in shellfish by LC with fluorescence detection with precolumn oxidation was previously validated and adopted First Action following a collaborative study. However, the method was not validated for all key STX-group toxins, and procedures to quantify some of them were not provided. With more STX-group toxin standards commercially available and modifications to procedures, it was possible to overcome some of these difficulties. The European Union Reference Laboratory for Marine Biotoxins conducted an interlaboratory exercise to extend AOAC Official Method 2005.06 validation for dc-GTX2,3 and to compile precision data for several STX-group toxins. This paper reports the study design and the results obtained. The performance characteristics for dc-GTX2,3 (intralaboratory and interlaboratory precision, recovery, and theoretical quantification limit) were evaluated. The mean recoveries obtained for dc-GTX2,3 were, in general, low (53.1-58.6%). The RSD for reproducibility (RSD(r)%) for dc-GTX2,3 in all samples ranged from 28.2 to 45.7%, and HorRat values ranged from 1.5 to 2.8. The article also describes a hydrolysis protocol to convert GTX6 to NEO, which has been proven to be useful for the quantification of GTX6 while the GTX6 standard is not available. The performance of the participant laboratories in the application of this method was compared with that obtained from the original collaborative study of the method. Intralaboratory and interlaboratory precision data for several STX-group toxins, including dc-NEO and GTX6, are reported here. This study can be useful for those laboratories determining STX-group toxins to fully implement AOAC Official Method 2005.06 for official paralytic shellfish poisoning control. However the overall quantitative performance obtained with the method was poor for certain toxins.
Modeling of DC spacecraft power systems
NASA Technical Reports Server (NTRS)
Berry, F. C.
1995-01-01
Future spacecraft power systems must be capable of supplying power to various loads. This delivery of power may necessitate the use of high-voltage, high-power dc distribution systems to transmit power from the source to the loads. Using state-of-the-art power conditioning electronics such as dc-dc converters, complex series and parallel configurations may be required at the interface between the source and the distribution system and between the loads and the distribution system. This research will use state-variables to model and simulate a dc spacecraft power system. Each component of the dc power system will be treated as a multiport network, and a state model will be written with the port voltages as the inputs. The state model of a component will be solved independently from the other components using its state transition matrix. A state-space averaging method is developed first in general for any dc-dc switching converter, and then demonstrated in detail for the particular case of the boost power stage. General equations for both steady-state (dc) and dynamic effects (ac) are obtained, from which important transfer functions are derived and applied to a special case of the boost power stage.
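The state-space averaging step described above can be sketched for the ideal boost stage: average the on-state and off-state system matrices by the duty cycle D and solve A x + B V_in = 0 for the steady state, which reproduces the familiar V_out = V_in/(1 - D). Component values are invented and losses are ignored:

```python
import numpy as np

# Averaged state model of an ideal boost stage, state x = [iL, vC].
# Switch on (duty D): inductor across the input; off: inductor feeds the output.
L, C, R = 100e-6, 470e-6, 10.0      # invented component values
Vin, D = 12.0, 0.5

A1 = np.array([[0.0,      0.0],
               [0.0, -1.0/(R*C)]])   # on-state
A2 = np.array([[0.0,     -1.0/L],
               [1.0/C, -1.0/(R*C)]]) # off-state
B = np.array([1.0/L, 0.0])           # input vector (same in both states)

A = D*A1 + (1.0 - D)*A2              # duty-cycle-weighted average
x_ss = np.linalg.solve(A, -B*Vin)    # steady state: A x + B Vin = 0
iL, vC = x_ss
```

With D = 0.5 this gives vC = 24 V = Vin/(1-D) and iL = 4.8 A, the input current that balances the 24 V / 10 Ω load power; perturbing D and Vin around this point yields the small-signal (ac) transfer functions the abstract mentions.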
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
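The linear-algebraic fitting step can be sketched as a least-squares solve for beamlet weights against a dose prescription, with a crude non-negativity clip. This is a toy stand-in with a random dose-deposition matrix, not a clinical IMRT optimizer:

```python
import numpy as np

# Hypothetical dose-deposition matrix Dmat (voxels x beamlets): dose = Dmat @ w.
rng = np.random.default_rng(0)
n_vox, n_beam = 50, 8
Dmat = rng.random((n_vox, n_beam))

# Make the prescription achievable by construction so the fit is exact.
w_true = rng.random(n_beam)
d_rx = Dmat @ w_true

# Quadratic objective ||Dmat @ w - d_rx||^2 solved in closed form,
# then clipped to enforce physically meaningful non-negative weights.
w, *_ = np.linalg.lstsq(Dmat, d_rx, rcond=None)
w = np.clip(w, 0.0, None)
resid = np.linalg.norm(Dmat @ w - d_rx) / np.linalg.norm(d_rx)
```

Real plans need inequality constraints (organ-at-risk limits) on top of this quadratic form, which is where dedicated solvers replace the plain least-squares step.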
Minami, Tadatsugu; Ohtani, Yuusuke; Miyata, Toshihiro; Kuboi, Takeshi
2007-07-15
A newly developed Al-doped ZnO (AZO) thin-film magnetron-sputtering deposition technique that decreases resistivity, improves resistivity distribution, and produces high-rate depositions has been demonstrated by dc magnetron-sputtering depositions that incorporate rf power (dc+rf-MS), either with or without the introduction of H2 gas into the deposition chamber. The dc+rf-MS preparations were carried out in a pure Ar or an Ar+H2 (0%-2%) gas atmosphere at a pressure of 0.4 Pa by adding an rf component (13.56 MHz) to a constant dc power of 80 W. The deposition rate in a dc+rf-MS deposition incorporating an rf power of 150 W was approximately 62 nm/min, an increase from the approximately 35 nm/min observed in dc magnetron sputtering with a dc power of 80 W. A resistivity as low as 3×10⁻⁴ Ω cm and an improved resistivity distribution could be obtained in AZO thin films deposited on substrates at a low temperature of 150 °C by dc+rf-MS with the introduction of hydrogen gas at a content of 1.5%. This article describes the effects of adding an rf power component (i.e., dc+rf-MS deposition) as well as introducing H2 gas into dc magnetron-sputtering preparations of transparent conducting AZO thin films.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
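Two of the three propagation methods, the first-order method of moments and Monte Carlo simulation, can be compared on a toy model. The function and the input uncertainties below are invented stand-ins for the aircraft weight analysis; for small input uncertainties the two estimates should agree closely:

```python
import math, random

# Toy "weight" model standing in for the design code (hypothetical).
def weight(x, y):
    return x**2 + 3.0*y

mx, my, sx, sy = 2.0, 1.0, 0.05, 0.05    # input means and small std devs

# First-order method of moments: sigma_w^2 ~ (dw/dx)^2 sx^2 + (dw/dy)^2 sy^2.
dwdx, dwdy = 2.0*mx, 3.0
mom_sigma = math.sqrt((dwdx*sx)**2 + (dwdy*sy)**2)

# Monte Carlo simulation with independent Gaussian inputs.
random.seed(0)
N = 200_000
samples = [weight(random.gauss(mx, sx), random.gauss(my, sy)) for _ in range(N)]
mean = sum(samples) / N
mc_sigma = math.sqrt(sum((s - mean)**2 for s in samples) / (N - 1))
```

The moments method costs two derivative evaluations but, being a local linearization, is exactly the approach that struggles on the discontinuous design spaces the abstract mentions, where sampling-based methods remain usable.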
Applying modern collaboration methods to distributed engineering projects
NASA Astrophysics Data System (ADS)
Conrad, Albert; Weiss, Jason L.; Honey, Allan
2002-11-01
Developing state-of-the-art instrumentation for astronomy is often best done by geographically disparate teams that span several institutions. These efforts necessarily require costly face-to-face meetings and site visits. The benefits of the World Wide Web, video conferencing, and modular design techniques, however, have recently increased the efficiency and lowered the costs of these efforts. In this paper we discuss how these methods were applied during the development of an emerging collaboration to produce common detector systems.
Algebraic Methods Applied to Network Reliability Problems. Revision.
1986-09-01
Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes
NASA Technical Reports Server (NTRS)
Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.
1995-01-01
Report describes experimental study in which Taguchi Methods applied with view toward improving brazing of coolant tubes in nozzle of main engine of space shuttle. Dr. Taguchi's parameter design technique used to define proposed modifications of brazing process reducing manufacturing time and cost by reducing number of furnace brazing cycles and number of tube-gap inspections needed to achieve desired small gaps between tubes.
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, compared to an operator-split approach in which nonlinearities are not converged within a time step.
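The Newton iteration with residual-based convergence monitoring can be sketched on a backward-Euler step of a 1-D nonlinear diffusion problem. For brevity this sketch forms a dense finite-difference Jacobian, which is exactly the cost a matrix-free Newton-Krylov method avoids by supplying only Jacobian-vector products to the Krylov solver; problem sizes and coefficients here are invented:

```python
import numpy as np

# One backward-Euler step of u_t = (u^3 u_x)_x on (0,1), Dirichlet BCs u=1, u=2.
n, dx, dt = 40, 1.0/41, 1e-3
u_old = np.linspace(1.0, 2.0, n)          # previous time level (interior nodes)

def residual(u):
    """Nonlinear residual F(u) = (u - u_old)/dt - div(u^3 grad u)."""
    ue = np.concatenate(([1.0], u, [2.0]))        # append boundary values
    k = 0.5 * (ue[:-1]**3 + ue[1:]**3)            # face diffusivities
    flux = k * (ue[1:] - ue[:-1]) / dx
    return (u - u_old)/dt - (flux[1:] - flux[:-1])/dx

u = u_old.copy()
for it in range(30):
    F = residual(u)
    if np.linalg.norm(F) < 1e-9:                  # monitor the nonlinear residual
        break
    J = np.empty((n, n))
    h = 1e-7
    for j in range(n):                            # dense FD Jacobian, column by column
        up = u.copy(); up[j] += h
        J[:, j] = (residual(up) - F) / h
    u -= np.linalg.solve(J, F)                    # Newton update
```

In the matrix-free variant, `np.linalg.solve` is replaced by a preconditioned Krylov iteration fed with finite-difference products J·v ≈ (F(u + h v) - F(u))/h, so J is never stored.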
NASA Astrophysics Data System (ADS)
Liu, K.; Hu, H.; Lei, J.; Hu, Y.; Zheng, Z.
2016-12-01
Most air-water plasma jets are rich in hydroxyl radicals (•OH), but the plasma has a higher temperature compared to that of pure gas, especially when using air as the working gas. In this paper, pulsating direct current (PDC) power was used to excite the air-water plasma jet to reduce the plume temperature. Apart from the temperature, other differences between PDC and DC plasma jets are not yet clear. Thus, comparative studies of those plasmas are performed to evaluate characteristics such as breakdown voltage, temperature, and reactive oxygen species. The results show that the plume temperature of the PDC plasma is roughly 5-10 °C lower than that of the DC plasma under the same conditions. The •OH content of the PDC plasma is lower than that of the DC plasma, whereas the O content of the PDC plasma is higher. The addition of water leads to an increase in the plume temperature and in the production of •OH with both types of power supplies. The production of O, by contrast, shows a declining tendency with a higher water ratio. The most important finding is that the PDC plasma with 100% water ratio achieves a lower temperature and more abundant production of •OH and O, compared with the DC plasma with 0% water ratio.
Staining methods applied to glycol methacrylate embedded tissue sections.
Cerri, P S; Sasso-Cerri, E
2003-01-01
The use of glycol methacrylate (GMA) avoids some technical artifacts that are usually observed in paraffin-embedded sections, providing good morphological resolution. On the other hand, weak staining has been reported with different methods in plastic sections. In the present study, changes in the histological staining procedures were assayed using staining and histochemical methods on different GMA-embedded tissues. Samples of tongue, submandibular and sublingual glands, cartilage, portions of the respiratory tract, and nervous ganglion were fixed in 4% formaldehyde and embedded in glycol methacrylate. The sections of tongue and nervous ganglion were stained with H&E. The Picrosirius, Toluidine Blue, and Sudan Black B methods were applied, respectively, for identification of collagen fibers in the submandibular gland, sulfated glycosaminoglycans in cartilage (metachromasia), and myelin lipids in the nervous ganglion. The Periodic Acid-Schiff (PAS) method was used for detection of glycoconjugates in the submandibular gland and cartilage, while combined AB/PAS methods were applied for detection of mucins in the respiratory tract. In addition, a combination of the Alcian Blue (AB) and Picrosirius methods was also assayed on the sublingual gland sections. The GMA-embedded tissue sections showed optimal morphological integrity and were favorable to the staining methods employed in the present study. In the sections of tongue and nervous ganglion, a good contrast of basophilic and acidophilic structures was obtained by H&E. An intense eosinophilia was observed in both the striated muscle fibers and the myelin sheaths, in which the lipids were preserved and revealed by Sudan Black B. In the cartilage matrix, a strong metachromasia was revealed by Toluidine Blue in the negatively charged glycosaminoglycans. In the chondrocytes, glycogen granules were intensely positive to the PAS method. Extracellular glycoproteins were also PAS-positive in the basal membrane and in the
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83 (MD-83), DC-9-87 (MD-87), and MD-88 Airplanes AGENCY... Jersey Avenue, SE., Washington, DC 20590. Hand Delivery: Deliver to Mail address above between 9 a.m. and...) This AD applies to The Boeing Company Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83 (MD-83),...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
... Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83 (MD-83), DC-9-87 (MD-87), and MD-88 Airplanes AGENCY... Ground Floor, Room W12-140, 1200 New Jersey Avenue, SE., Washington, DC 20590. FOR FURTHER INFORMATION...) This AD applies to The Boeing Company Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83 (MD-83),...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
... Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83 (MD-83), DC-9-87 (MD-87), and MD-88 Airplanes AGENCY... Jersey Avenue, SE., Washington, DC 20590. FOR FURTHER INFORMATION CONTACT: Roger Durbin, Aerospace.... Applicability (c) This AD applies to all The Boeing Company Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83...
Development of a New Method for Assembling a Bipolar DC Motor as a Teaching Material
NASA Astrophysics Data System (ADS)
Matsumoto, Yuki; Sakaki, Kei; Sakaki, Mamoru
2017-05-01
A simple handmade motor is a commonly used teaching aid for explaining the theory of the DC motor in science classes around the world. Kits that can be used by children to craft a simple motor are commercially available, and videos of assembling these motors are easily found on the internet. Although the design of this motor is simple, it is unipolar, meaning that the rotor consists of a single dipole. Thus, the Lorentz force acts only on one side of the coil per revolution. This decreases the energy conversion efficiency and requires the learners to turn the rotor using their hands in order to initiate rotation.
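The efficiency argument can be made concrete with a small numerical sketch (all values hypothetical): modelling the coil torque as k·|sin θ|, a unipolar rotor that is driven over only half of each revolution develops half the average torque of a bipolar one.

```python
import math

def avg_torque(bipolar: bool, k: float = 1.0, steps: int = 100000) -> float:
    """Average torque over one revolution.

    Torque of a current loop in a field: tau = k * |sin(theta)| while driven.
    A bipolar (commutated) rotor is driven through the whole revolution;
    the simple unipolar motor is driven during only half of it.
    """
    total = 0.0
    for i in range(steps):
        theta = 2 * math.pi * i / steps
        driven = bipolar or theta < math.pi   # unipolar: force on one side only
        if driven:
            total += k * abs(math.sin(theta))
    return total / steps

uni = avg_torque(bipolar=False)   # ~ k/pi
bi = avg_torque(bipolar=True)     # ~ 2k/pi
print(round(bi / uni, 2))         # the bipolar rotor averages twice the torque
```

This also explains why the unipolar rotor must be started by hand: over half the revolution no torque acts at all.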
An ultrasonic guided wave method to estimate applied biaxial loads
NASA Astrophysics Data System (ADS)
Shi, Fan; Michaels, Jennifer E.; Lee, Sang Jun
2012-05-01
Guided waves propagating in a homogeneous plate are known to be sensitive to both temperature changes and applied stress variations. Here we consider the inverse problem of recovering homogeneous biaxial stresses from measured changes in phase velocity at multiple propagation directions using a single mode at a specific frequency. Although there is no closed form solution relating phase velocity changes to applied stresses, prior results indicate that phase velocity changes can be closely approximated by a sinusoidal function with respect to angle of propagation. Here it is shown that all sinusoidal coefficients can be estimated from a single uniaxial loading experiment. The general biaxial inverse problem can thus be solved by fitting an appropriate sinusoid to measured phase velocity changes versus propagation angle, and relating the coefficients to the unknown stresses. The phase velocity data are obtained from direct arrivals between guided wave transducers whose direct paths of propagation are oriented at different angles. This method is applied and verified using sparse array data recorded during a fatigue test. The additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of transducer pairs. Results show that applied stresses can be successfully recovered from the measured changes in guided wave signals.
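The sinusoidal fit described above can be sketched numerically. With equally spaced propagation angles, the least-squares coefficients of dv(θ) = A + B·cos 2θ + C·sin 2θ reduce to discrete projections; the coefficient values below are hypothetical stand-ins for the acoustoelastic constants that relate the phase velocity changes to the biaxial stresses.

```python
import math

def fit_sinusoid(angles_deg, dv):
    """Fit dv(theta) = A + B*cos(2*theta) + C*sin(2*theta).

    For N equally spaced angles covering one period of 2*theta, discrete
    orthogonality of cos/sin turns the least-squares fit into projections.
    """
    n = len(dv)
    A = sum(dv) / n
    B = 2.0 / n * sum(y * math.cos(2 * math.radians(t)) for t, y in zip(angles_deg, dv))
    C = 2.0 / n * sum(y * math.sin(2 * math.radians(t)) for t, y in zip(angles_deg, dv))
    return A, B, C

# Synthetic phase-velocity changes at 12 propagation directions (hypothetical)
angles = [i * 180.0 / 12 for i in range(12)]
data = [0.5 + 0.3 * math.cos(2 * math.radians(t)) - 0.1 * math.sin(2 * math.radians(t))
        for t in angles]
A, B, C = fit_sinusoid(angles, data)
print(round(A, 3), round(B, 3), round(C, 3))   # recovers 0.5, 0.3, -0.1
```

In the actual method the recovered coefficients would then be related to the two unknown principal stresses through the calibration from the uniaxial loading experiment.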
The crowding factor method applied to parafoveal vision
Ghahghaei, Saeideh; Walker, Laura
2016-01-01
Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170
Applying Quantitative Genetic Methods to Primate Social Behavior
Brent, Lauren J. N.
2013-01-01
Increasingly, behavioral ecologists have applied quantitative genetic methods to investigate the evolution of behaviors in wild animal populations. The promise of quantitative genetics in unmanaged populations opens the door for simultaneous analysis of inheritance, phenotypic plasticity, and patterns of selection on behavioral phenotypes all within the same study. In this article, we describe how quantitative genetic techniques provide studies of the evolution of behavior with information that is unique and valuable. We outline technical obstacles for applying quantitative genetic techniques that are of particular relevance to studies of behavior in primates, especially those living in noncaptive populations (e.g., the need for pedigree information and non-Gaussian phenotypes), and demonstrate how many of these barriers are now surmountable. We illustrate this by applying recent quantitative genetic methods to spatial proximity data, a simple and widely collected primate social behavior, from adult rhesus macaques on Cayo Santiago. Our analysis shows that proximity measures are consistent across repeated measurements on individuals (repeatable) and that kin have similar mean measurements (heritable). Quantitative genetics may hold lessons of considerable importance for studies of primate behavior, even those without a specific genetic focus. PMID:24659839
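The repeatability reported for the proximity measures is, in the simplest balanced case, an intraclass correlation from a one-way ANOVA. A minimal sketch with hypothetical scores (not the Cayo Santiago data):

```python
def repeatability(groups):
    """One-way ANOVA intraclass correlation (repeatability):
        R = (MS_among - MS_within) / (MS_among + (k - 1) * MS_within)
    for a individuals with k repeated measures each (balanced design)."""
    a, k = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (a * k)
    ss_among = k * sum((sum(g) / k - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / k) ** 2 for g in groups for x in g)
    ms_among = ss_among / (a - 1)
    ms_within = ss_within / (a * (k - 1))
    return (ms_among - ms_within) / (ms_among + (k - 1) * ms_within)

# Hypothetical proximity scores: three individuals measured twice each
scores = [[1.0, 1.1], [2.0, 2.1], [3.0, 2.9]]
print(round(repeatability(scores), 2))   # close to 1: highly repeatable
```

High values indicate that differences between individuals dominate the within-individual measurement noise, a prerequisite for estimating heritability.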
Methods for model selection in applied science and engineering.
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
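As a minimal illustration of classical, likelihood-based selection (which, as noted above, ignores model use), candidate models can be ranked by an information criterion such as AIC; the data and candidate models below are hypothetical:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def gaussian_loglik(data, mean, var):
    """Log-likelihood of i.i.d. Gaussian observations."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mean) ** 2 for x in data) / (2 * var))

# Two hypothetical candidate models for the same data set:
data = [2.9, 3.1, 3.0, 2.8, 3.2]
mu_hat = sum(data) / len(data)
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)
scores = {
    "fixed mean 0": aic(gaussian_loglik(data, 0.0, var_hat), 1),
    "fitted mean": aic(gaussian_loglik(data, mu_hat, var_hat), 2),
}
best = min(scores, key=scores.get)
print(best)   # the fitted-mean model wins despite its extra parameter
```

The decision-theoretic approach advocated in the report would instead rank models by expected utility under the intended use, which matters precisely when data are too limited for criteria like AIC to be reliable.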
The Lattice Boltzmann Method applied to neutron transport
Erasmus, B.; Van Heerden, F. A.
2013-07-01
In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport and the results are compared to a reference solution calculated, using MCNP. (authors)
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk in this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1,2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present the advances in the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras Iván "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)
About the method of investigation of applied unstable process
NASA Astrophysics Data System (ADS)
Romanova, O. V.; Sapega, V. F.
2003-04-01
ABOUT THE METHOD OF INVESTIGATION OF APPLIED UNSTABLE PROCESS O.V. Romanova (1), V.F. Sapega (1) (1) All-Russian Geological Institute (VSEGEI) zapgeo@mail.wpus.net (mark: for Romanova)/Fax: +7-812-3289283 Samples of Late Proterozoic (Riphean) rocks from the Arkhangelsk, Jaroslav and Leningrad regions were prepared by the developed method of sample preparation and examined by X-ray analysis. The presence of a mantle fluid process had been previously established in some of the samples (injecting tuffizites) (Kazak, Jakobsson, 1999). It appears that unchanged Riphean rocks contain a set of low-temperature minerals such as illite, chlorite, vermiculite and goethite, indicating conditions of diagenesis with temperatures of less than 300°C. The presence of corrensite, rectorite and illite-montmorillonite indicates application of a post-diagenesis low-temperature process to the original sedimentary rock. At the same time the rocks involved in the fluid process contain such minerals as olivine, pyrope and graphite, indicating application of a high-temperature process of not less than 650-800°C. Within these samples a set of low-temperature minerals also occurs, which demonstrates the short duration and disequilibrium of the applied high-temperature process. Therefore implementation of the X-ray method provides an unambiguous criterion for establishing the fluid process, which as a rule is coupled with the development of kimberlite rock fields.
Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen
2006-01-01
Two types of double high density polyethylene (HDPE) liner landfills were studied, in which clay or geogrid was added between the two HDPE liners. The general resistance of the second type is 15% larger than that of the first type in the primary HDPE liner detection, and 20% larger in the secondary HDPE liner detection. The high voltage DC method can accomplish leakage detection and location for these two types of landfill, and the error of leakage location is less than 10 cm when the electrode spacing is 1 m.
"Influence Method" applied to measure a moderated neutron flux
NASA Astrophysics Data System (ADS)
Rios, I. J.; Mayer, R. E.
2016-01-01
The "Influence Method" is conceived for the absolute determination of a nuclear particle flux in the absence of known detector efficiency. This method exploits the influence of the presence of one detector on the count rate of another detector when they are placed one behind the other, and defines statistical estimators for the absolute number of incident particles and for the efficiency. The method and its detailed mathematical description were recently published (Rios and Mayer, 2015 [1]). In this article we apply it to the measurement of the moderated neutron flux produced by an 241AmBe neutron source surrounded by a light water sphere, employing a pair of 3He detectors. For this purpose, the method is extended for application where particles arriving at the detector obey a Poisson distribution and also for the case when efficiency is not constant over the energy spectrum of interest. Experimental distributions and derived parameters are compared with theoretical predictions of the method, and implications concerning the potential application to the absolute calibration of neutron sources are considered.
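The published estimators are developed in [1]; as a rough illustration of the underlying idea only (a simplified version, not the authors' estimators), suppose a particle detected in the front detector is removed from the beam seen by the back detector:

```python
def influence_estimates(c1, c2_with, c2_without):
    """Illustrative two-detector estimators (a simplified version, NOT the
    published estimators of Rios and Mayer): a particle detected in the
    front detector is assumed removed from the beam, so
        c2_with = eps2 * (1 - eps1) * N   and   c2_without = eps2 * N,
    which yields eps1 and the absolute particle number N without knowing eps2.
    """
    eps1 = 1.0 - c2_with / c2_without
    n_hat = c1 / eps1
    return eps1, n_hat

# Synthetic check against known truth: N = 1e6 particles, eps1 = 0.25, eps2 = 0.1
N, e1, e2 = 1_000_000, 0.25, 0.10
eps1_hat, n_hat = influence_estimates(e1 * N, e2 * (1 - e1) * N, e2 * N)
print(round(eps1_hat, 2), round(n_hat))   # 0.25 1000000
```

The point carried over to the real method is the same: the mutual influence of the two detectors supplies the extra equation that makes both the efficiency and the absolute particle number identifiable.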
NASA Astrophysics Data System (ADS)
Machala, Z.; Jedlovský, I.; Chládeková, L.; Pongrác, B.; Giertl, D.; Janda, M.; Ikurová, L. Å.; Polčic, P.
2009-08-01
Three types of DC electrical discharges in atmospheric air (streamer corona, transient spark and glow discharge) were tested for bio-decontamination of bacteria and yeasts in water solution, and of spores on surfaces. Static and flowing treatment of contaminated water were compared; in the latter, the flowing water either covered the grounded electrode or passed through the high voltage needle electrode. The bacteria were killed most efficiently in the flowing regime by the transient spark. Streamer corona was efficient when the treated medium flowed through the active corona region. The spores on plastic foil and paper surfaces were successfully inactivated by negative corona. The microbes were handled and their populations evaluated by standard microbiology cultivation procedures. The emission spectroscopy of the discharges and TBARS (thiobarbituric acid reactive substances) absorption spectrometric detection of the products of lipid peroxidation of bacterial cell membranes indicated a major role of radicals and reactive oxygen species among the bio-decontamination mechanisms.
NASA Astrophysics Data System (ADS)
Saito, Tatsuhito; Kondo, Keiichiro; Koseki, Takafumi
A DC-electrified railway system that is fed by diode rectifiers at a substation is unable to return electric power to the AC grid. Accordingly, the braking cars have to restrict regenerative braking power when the power consumption of the powering cars is not sufficient. However, the characteristics of a DC-electrified railway system, including the powering cars, are not known, and a mathematical model for designing a controller has not been established yet. Hence, the object of this study is to obtain a mathematical model for an analytical design method for the regenerative braking control system. In the first part of this paper, the static characteristics of this system are presented to show the position of the equilibrium point. The linearization of this system at the equilibrium point is then performed to describe the dynamic characteristics of the system. An analytical design method is then proposed on the basis of these characteristics. The proposed design method is verified by experimental tests with a 1 kW class miniature model and by numerical simulations.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
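The iterant-extrapolation idea can be illustrated with Aitken's delta-squared process applied to power-iteration estimates of a dominant characteristic value (a toy 2x2 stand-in for the reactor matrix, not the report's method in detail):

```python
def aitken(x0, x1, x2):
    """Aitken delta-squared extrapolation of three successive iterants."""
    denom = x2 - 2 * x1 + x0
    return x2 if abs(denom) < 1e-30 else x2 - (x2 - x1) ** 2 / denom

def dominant_eigenvalue(a, iters=30):
    """Power iteration with Rayleigh-quotient estimates; the final answer
    is extrapolated from the last three iterants."""
    n = len(a)
    v = [1.0] + [0.0] * (n - 1)
    history = []
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w[i] * v[i] for i in range(n)) / sum(v[i] * v[i] for i in range(n))
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
        history.append(lam)
    return aitken(*history[-3:])

A = [[2.0, 1.0], [1.0, 2.0]]            # eigenvalues 1 and 3
print(round(dominant_eigenvalue(A), 6))  # ~3.0
```

As in the report, the extrapolation uses only the previous behavior of the iterants, so it adds almost no cost per iteration while accelerating geometric convergence.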
Spectral methods applied to fluidized-bed combustors
Brown, R.C.; Raines, T.S.; Thiede, T.D.
1995-11-01
The goal of this research is to characterize coals and sorbents during the normal operation of an industrial-scale circulating fluidized bed (CFB) boiler. The method determines coal or sorbent properties based on the analysis of transient CO2 or SO2 emissions from the boiler. Fourier Transform Infrared (FTIR) spectroscopy is used to qualitatively and quantitatively analyze the gaseous products of combustion. Spectral analysis applied to the transient response of CO2 and SO2 resulting from introduction of a batch of coal or limestone into the boiler yields characteristic time constants from which combustion or sorbent models are developed. The method is non-intrusive and is performed under realistic combustion conditions. Results are presented from laboratory studies and power plant monitoring.
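A characteristic time constant can be recovered from a transient by log-linear least squares, assuming a single-exponential response (the data below are synthetic, not plant measurements):

```python
import math

def fit_time_constant(times, conc):
    """Least-squares fit of c(t) = c0 * exp(-t/tau) via log-linearisation:
    ln c = ln c0 - t/tau, a straight line whose slope is -1/tau."""
    n = len(times)
    y = [math.log(c) for c in conc]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (v - ybar) for t, v in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope

# Synthetic batch-injection transient with a hypothetical 40 s time constant
ts = [float(i) for i in range(0, 200, 10)]
cs = [5.0 * math.exp(-t / 40.0) for t in ts]
print(round(fit_time_constant(ts, cs), 1))   # 40.0
```

In practice the measured transients are noisy and may contain more than one time scale, which is where the spectral analysis described above earns its keep.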
Where do Students Go Wrong in Applying the Scientific Method?
NASA Astrophysics Data System (ADS)
Rubbo, Louis; Moore, Christopher
2015-04-01
Non-science majors completing a liberal arts degree are frequently required to take a science course. Ideally with the completion of a required science course, liberal arts students should demonstrate an improved capability in the application of the scientific method. In previous work we have demonstrated that this is possible if explicit instruction is spent on the development of scientific reasoning skills. However, even with explicit instruction, students still struggle to apply the scientific process. Counter to our expectations, the difficulty is not isolated to a single issue such as stating a testable hypothesis, designing an experiment, or arriving at a supported conclusion. Instead students appear to struggle with every step in the process. This talk summarizes our work looking at and identifying where students struggle in the application of the scientific method. This material is based upon work supported by the National Science Foundation under Grant No. 1244801.
Doshi, J B; Ravetkar, S D; Ghole, V S; Rehani, K
2003-09-01
DPT, a combination vaccine against diphtheria, tetanus and pertussis, has been available for many years and is still included in the national immunisation schedules of many countries. Although the vaccine is highly potent, reactions to DPT are well known, mainly attributed to factors such as the pertussis component, the aluminium adjuvant, and the lower purity of the tetanus and diphtheria toxoids. The last aspect, which is the most important, has become a matter of concern, especially for the preparation of next-generation combination vaccines with a greater number of antigens in combination with DPT. Purity of a toxoid is expressed as Lf (Limes flocculation) per mg of protein nitrogen. The Kjeldahl method (KM) of protein nitrogen estimation suggested by the WHO and the British Pharmacopoeia is time consuming and less specific. A need has been felt to explore an alternative method which is quicker and more specific for toxoid protein determination. The DC (detergent compatible) protein assay, an improved Lowry method, has been found to be much more advantageous than the Kjeldahl method.
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
NASA Astrophysics Data System (ADS)
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for geochemical classification of rocks, but was also applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, rewritten in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters are going to evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
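For reference, the distance that scipy.spatial.distance.mahalanobis evaluates can be written out explicitly in the two-dimensional case (a pure-Python sketch with a hypothetical covariance matrix):

```python
def mahalanobis_2d(u, v, cov):
    """Mahalanobis distance between two 2-D points: sqrt(d^T * C^-1 * d).
    This is the quantity scipy.spatial.distance.mahalanobis computes when
    handed the inverse covariance matrix."""
    a, b = u[0] - v[0], u[1] - v[1]
    (c00, c01), (c10, c11) = cov
    det = c00 * c11 - c01 * c10
    # invert the 2x2 covariance matrix analytically
    i00, i01, i10, i11 = c11 / det, -c01 / det, -c10 / det, c00 / det
    return (a * (i00 * a + i01 * b) + b * (i10 * a + i11 * b)) ** 0.5

# With the identity covariance it reduces to the Euclidean distance
print(mahalanobis_2d((0.0, 0.0), (3.0, 4.0), ((1.0, 0.0), (0.0, 1.0))))   # 5.0
```

Scaling each axis by its variance is what makes the Mahalanobis distance the natural choice for locating normal distributions in a multidimensional variable space.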
ERIC Educational Resources Information Center
Dynarski, Mark; Betts, Julian; Feldman, Jill
2016-01-01
The DC Opportunity Scholarship Program (OSP), established in 2004, is the only federally-funded private school voucher program for low-income parents in the United States. This evaluation brief describes findings using data from more than 2,000 applicants' parents, who applied to the program from spring 2011 to spring 2013 following…
NASA Astrophysics Data System (ADS)
Pedersen, F.
2008-09-01
The presented bidirectional DC/DC converter design concept is a further development of an already existing converter used for low battery voltage operation. For low battery voltage operation a highly efficient, low parts count DC/DC converter was developed and used in a satellite for the battery charge and battery discharge functions. The converter consists of a bidirectional, non-regulating DC/DC converter connected to a discharge-regulating Buck converter and a charge-regulating Buck converter. The bidirectional non-regulating DC/DC converter performs with relatively high efficiency even at relatively high currents, which here means up to 35 A. This performance was obtained through the use of power MOSFETs with on-resistances of only a few milliohms connected to a special transformer allowing several transistor stages to be paralleled on the low voltage side of the transformer. The design is patent protected. Synchronous rectification leads to high efficiency at the low battery voltages considered, which were in the range 2.7-4.3 V DC. The converter performs with low switching losses, as zero voltage, zero current switching is implemented in all switching positions of the converter. Now, the drive power needed to switch a relatively large number of low-resistance, hence high drive capacitance, power MOSFETs using conventional drive techniques would limit the overall conversion efficiency. Therefore a resonant drive consuming considerably less power than a conventional drive circuit was implemented in the converter. To the originally built and patent protected bidirectional non-regulating DC/DC converter is added the functionality of regulation. Hereby the need for additional converter stages in the form of a charge Buck regulator and a discharge Buck regulator is eliminated. The bidirectional DC/DC converter can be used in connection with batteries, motors, etc., where the bidirectional feature, simple design and high performance may be useful.
The virtual fields method applied to spalling tests on concrete
NASA Astrophysics Data System (ADS)
Pierron, F.; Forquin, P.
2012-08-01
For one decade spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s-1. However, the processing method, mainly based on the use of the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and an identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the use of the Virtual Fields Method (VFM). First, a digital high speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field has been defined in the VFM equation to use the acceleration map as an alternative `load cell'. This method, applied to three spalling tests, allowed Young's modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test, it was possible to reconstruct average axial stress profiles using only the acceleration data. Then, it was possible to construct local stress-strain curves and derive a tensile strength value.
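The stress reconstruction from acceleration alone follows from Newton's second law applied to the specimen slice between the free end and the section of interest; a one-dimensional sketch with hypothetical values (a simplified reading of the idea, not the paper's exact processing):

```python
def stress_profile(accel, dx, rho):
    """Mean axial stress along a 1-D specimen with a free end at x = 0,
    reconstructed from the acceleration field alone via Newton's second law:
        sigma(x) = rho * integral from 0 to x of a(xi) d(xi)
    (trapezoidal rule on a sampled, surface-averaged acceleration map)."""
    sigma = [0.0]
    for i in range(1, len(accel)):
        sigma.append(sigma[-1] + rho * 0.5 * (accel[i - 1] + accel[i]) * dx)
    return sigma

rho = 2400.0              # typical concrete density, kg/m^3
accel = [1000.0] * 11     # hypothetical uniform acceleration, m/s^2, over 0.1 m
sigma = stress_profile(accel, dx=0.01, rho=rho)
print(round(sigma[-1], 1))   # rho * a * L = 2400 * 1000 * 0.1 = 240000.0 Pa
```

This is what makes the acceleration map an "alternative load cell": no force transducer is needed, only the full-field kinematic measurement.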
P, Rajeeva M.; S, Naveen C.; Lamani, Ashok R.; Jayanna, H. S.; Bothla, V Prasad
2015-06-24
Nanocrystalline tin oxide (SnO2) material of different particle sizes was synthesized using the gel combustion method by varying the oxidizer (HNO3) and keeping the fuel constant. The prepared samples were characterized by X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDAX). The effect of the oxidizer in the gel combustion method was investigated by inspecting the particle size of the nano SnO2 powder. The particle size was found to increase with the increase of oxidizer from 8 to 12 moles. The X-ray diffraction patterns of the calcined product showed the formation of high purity tetragonal tin (IV) oxide with particle sizes in the range of 17 to 31 nm, calculated by Scherrer's formula. The particle size and temperature dependence of the direct current (DC) electrical conductivity of the SnO2 nanomaterial were studied using a Keithley source meter. The DC electrical conductivity of the SnO2 nanomaterial increases with temperature from 80 to 300 K and decreases with particle size at constant temperature.
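Scherrer's formula relates the broadening of a diffraction peak to crystallite size; a small sketch (the peak parameters below are hypothetical, chosen near a typical SnO2 (110) reflection, and Cu K-alpha radiation is assumed):

```python
import math

def scherrer_size_nm(beta_deg, theta_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer estimate of crystallite size:
        D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM (given here in degrees, converted to radians)
    and theta the Bragg angle, i.e. half of the 2-theta peak position.
    Cu K-alpha wavelength and shape factor K = 0.9 assumed."""
    beta = math.radians(beta_deg)
    return k * wavelength_nm / (beta * math.cos(math.radians(theta_deg)))

# Hypothetical peak: FWHM 0.45 deg at 2-theta = 26.6 deg (theta = 13.3 deg)
print(round(scherrer_size_nm(0.45, 26.6 / 2), 1))   # ~18 nm, within 17-31 nm
```

Broader peaks (larger beta) give smaller crystallites, which is why the size can be tracked as the oxidizer content changes.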
NASA Astrophysics Data System (ADS)
Akira, Sasamoto; Krutitskii, P. A.
2012-09-01
Non-destructive testing (NDT) has been used to ensure safety in public structures and public spaces. The D.C. potential difference method, one form of NDT, can detect cracks in conductive material. According to studies of materials, the depth of a crack in metal is the most important parameter for judging whether the crack will lead to catastrophic failure in a short time. Therefore, estimation of the crack depth from the voltage measured on the surface is the key process in the method. However, as far as the authors know, an exact solution of the model equation for the method, that is, the Laplace equation with prescribed boundary condition and a crack, has not been known so far. In this paper we show an exact expression of the solution and computational results for a model problem described in a two-dimensional lower half-space with a crack propagating vertically from the surface.
Hargrove, Douglas L.
2004-09-14
A portable, hand-held meter used to measure direct current (DC) attenuation in low impedance electrical signal cables and signal attenuators. A DC voltage is applied to the signal input of the cable and fed back to the control circuit through the signal cable and attenuators. The control circuit adjusts the applied voltage to the cable until the feedback voltage equals the reference voltage. The "units" of applied voltage required at the cable input give the system attenuation value of the cable and attenuators, which makes this meter unique. The meter may be used to calibrate data signal cables, attenuators, and cable-attenuator assemblies.
Six Sigma methods applied to cryogenic coolers assembly line
NASA Astrophysics Data System (ADS)
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler: the RM2. The name of the project is NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline following five stages: Define, Measure, Analyse, Improve, Control. The objective has been set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and results after the R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process mapping analysis: regenerator filling factor and cleaning procedure. Causes of measurement variability were identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvement in process capability has enabled the introduction of a sample testing procedure before delivery.
THE EXOPLANET CENSUS: A GENERAL METHOD APPLIED TO KEPLER
Youdin, Andrew N.
2011-11-20
We develop a general method to fit the underlying planetary distribution function (PLDF) to exoplanet survey data. This maximum likelihood method accommodates more than one planet per star and any number of planet or target star properties. We apply the method to announced Kepler planet candidates that transit solar-type stars. The Kepler team's estimates of the detection efficiency are used and are shown to agree with theoretical predictions for an ideal transit survey. The PLDF is fit to a joint power law in planet radius, down to 0.5 R⊕, and orbital period, up to 50 days. The estimated number of planets per star in this sample is ~0.7-1.4, where the range covers systematic uncertainties in the detection efficiency. To analyze trends in the PLDF we consider four planet samples, divided between shorter and longer periods at 7 days and between large and small radii at 3 R⊕. The size distribution changes appreciably between these four samples, revealing a relative deficit of ~3 R⊕ planets at the shortest periods. This deficit is suggestive of preferential evaporation and sublimation of Neptune- and Saturn-like planets. If the trend and explanation hold, it would be spectacular observational support of the core accretion and migration hypotheses, and would allow refinement of these theories.
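A maximum likelihood fit of a power-law size distribution can be sketched in one line for the single-variable case (synthetic radii, not Kepler data; the full PLDF fit is a joint fit in radius and period folded through the detection efficiency):

```python
import math
import random

def powerlaw_exponent_mle(radii, r_min):
    """Maximum-likelihood exponent for p(r) ~ r^(-alpha), r >= r_min
    (the continuous Pareto MLE): alpha = 1 + n / sum(ln(r_i / r_min))."""
    n = len(radii)
    return 1.0 + n / sum(math.log(r / r_min) for r in radii)

# Draw synthetic "planet radii" from a known power law via inverse transform:
# P(R > r) = (r / r_min)^-(alpha - 1), so r = r_min * U^(-1 / (alpha - 1))
random.seed(1)
alpha_true, r_min = 2.5, 0.5
radii = [r_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
print(round(powerlaw_exponent_mle(radii, r_min), 1))   # close to 2.5
```

With ~20000 samples the MLE recovers the exponent to a percent or so; the survey case adds the complication that each candidate is weighted by its detection probability.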
Analytical methods applied to diverse types of Brazilian propolis
2011-01-01
Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as the species of bee. Brazil is an important supplier of propolis on the world market and, although the green-colored propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for the extraction; for the analysis of the percentage of resins, wax and insoluble material in crude propolis; and for the determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940
Understanding the impulse response method applied to concrete bridge decks
NASA Astrophysics Data System (ADS)
Clem, D. J.; Popovics, J. S.; Schumacher, T.; Oh, T.; Ham, S.; Wu, D.
2013-01-01
The Impulse Response (IR) method is a well-established form of non-destructive testing (NDT) in which the dynamic response of an element resulting from an impact event (hammer blow) is measured with a geophone to draw conclusions about the element's integrity, stiffness, and/or support conditions. The existing ASTM Standard C1740-10 prescribes a set of parameters that can be used to evaluate the conditions above. These parameters are computed from the so-called `mobility' spectrum, which is obtained by dividing the measured bridge deck response by the measured impact force in the frequency domain. While applying the test method in the laboratory as well as on an actual in-service concrete bridge deck, the authors observed several limitations, which are presented and discussed here. In order to better understand the underlying physics of the IR method, a Finite Element (FE) model was created. Parameters prescribed in the Standard were then computed from the FE data and are discussed. One main limitation appears to be the use of a fixed upper frequency of 800 Hz. Test data from the real bridge deck and the FE model both show that most energy is found above that limit. This paper presents and discusses the limitations of the ASTM Standard found by the authors and suggests ways to improve it.
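The mobility computation prescribed by the Standard (response spectrum divided by force spectrum) can be sketched as follows. The signals here are synthetic stand-ins, not measured data: an idealized unit impulse replaces the hammer force, and a damped 1.2 kHz oscillation stands in for the deck response, so the numbers are illustrative only:

```python
import numpy as np

fs = 10_000            # sampling rate, Hz (illustrative)
t = np.arange(2048) / fs

# idealized impact force: a unit impulse (flat force spectrum)
force = np.zeros_like(t)
force[0] = 1.0

# synthetic geophone response: damped oscillation of the deck at 1.2 kHz
f0, zeta = 1200.0, 0.05
response = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

# mobility spectrum: response spectrum divided by force spectrum
F = np.fft.rfft(force)
V = np.fft.rfft(response)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
mobility = np.abs(V) / np.maximum(np.abs(F), 1e-12)  # guard against nulls

# ASTM-style band: average mobility between 100 and 800 Hz
band = (freqs >= 100) & (freqs <= 800)
avg_mobility = mobility[band].mean()
peak_freq = freqs[np.argmax(mobility)]
```

With a resonance at 1.2 kHz, the dominant mobility peak falls outside the Standard's fixed 100-800 Hz band, mirroring the limitation the abstract discusses.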
Applying an Automatic Image-Processing Method to Synoptic Observations
NASA Astrophysics Data System (ADS)
Tlatov, Andrey G.; Vasil'eva, Valeria V.; Makarova, Valentina V.; Otkidychev, Pavel A.
2014-04-01
We used an automatic image-processing method to detect solar-activity features observed in white light at the Kislovodsk Solar Station. This technique was applied to automatically or semi-automatically detect sunspots and active regions. The results of this automated recognition were verified with statistical data available from other observatories and revealed a high detection accuracy. We also provide parameters of sunspot areas, of the umbra, and of faculae as observed in Solar Cycle 23, as well as the magnetic flux of these active elements, calculated at the Kislovodsk Solar Station, together with white-light images and magnetograms from the Michelson Doppler Imager onboard the Solar and Heliospheric Observatory (SOHO/MDI). The ratio of umbral to total sunspot areas during Solar Cycle 23 is ≈ 0.19. The area of sunspots of the leading polarity was approximately 2.5 times the area of sunspots of the trailing polarity.
Method for applying photographic resists to otherwise incompatible substrates
NASA Technical Reports Server (NTRS)
Fuhr, W. (Inventor)
1981-01-01
A method for applying photographic resists to otherwise incompatible substrates, such as a baking enamel paint surface, is described wherein the uncured enamel paint surface is coated with a non-curing lacquer which is, in turn, coated with a partially cured lacquer. The non-curing lacquer adheres to the enamel, and a photo resist material satisfactorily adheres to the partially cured lacquer. Once normal photo etching techniques are employed, the lacquer coats can be easily removed from the enamel, leaving the photo-etched image. In the case of edge-lighted instrument panels, a coat of uncured enamel is placed over the cured enamel, followed by the lacquer coats and the photo resist, which is exposed and developed. Once the etched uncured enamel is cured, the lacquer coats are removed, leaving an etched panel.
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
ERIC Educational Resources Information Center
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
NASA Astrophysics Data System (ADS)
Syahid, Mohd; Oyama, Seiji; Yasuda, Jun; Yoshizawa, Shin; Umemura, Shin-ichiro
2015-07-01
A fast and accurate ultrasound pressure field measurement is necessary for the progress of ultrasound applications in medicine. In general, a hydrophone is used to measure the ultrasound field, which takes a long measurement time and might disturb the ultrasound field. Hence, we proposed a new optical method, the phase contrast method, to overcome the drawbacks of the hydrophone method. The proposed method makes use of the spatial DC spectrum formed in the focal plane to measure the modulated optical phase induced by ultrasound propagation in water. In this study, we take into account the decreased intensity of the DC spectrum at high ultrasound intensity to increase the measurement accuracy of the modulated optical phase. Then, we apply a non-continuous phase unwrapping algorithm to unwrap the modulated optical phase at high ultrasound intensity. From the unwrapped result, we evaluate the quantitativeness of the proposed method.
Supervised Machine Learning Methods Applied to Predict Ligand- Binding Affinity.
Heck, Gabriela S; Pintro, Val O; Pereira, Richard R; de Ávila, Mauricio B; Levin, Nayara M B; de Azevedo, Walter F
2017-01-01
Calculation of ligand-binding affinity is an open problem in computational medicinal chemistry. The ability to computationally predict affinities has a beneficial impact in the early stages of drug development, since it allows a mathematical model to assess protein-ligand interactions. Due to the availability of structural and binding information, machine learning methods have been applied to generate scoring functions with good predictive power. Our goal here is to review recent developments in the application of machine learning methods to predict ligand-binding affinity. We focus our review on the application of computational methods to predict binding affinity for protein targets. In addition, we also describe the major available databases for experimental binding constants and protein structures. Furthermore, we explain the most successful methods to evaluate the predictive power of scoring functions. Association of structural information with ligand-binding affinity makes it possible to generate scoring functions targeted to a specific biological system. Through regression analysis, this data can be used as a base to generate mathematical models to predict ligand-binding affinities, such as the inhibition constant, dissociation constant and binding energy. Experimental biophysical techniques have been able to determine the structures of over 120,000 macromolecules. Considering also the evolution of binding affinity information, we may say that we have a promising scenario for the development of scoring functions making use of machine learning techniques. Recent developments in this area indicate that building scoring functions targeted to the biological systems of interest shows superior predictive performance, when compared with other approaches.
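As a toy version of the regression step described above, a ridge-regression "scoring function" can be fit in closed form to synthetic descriptor data. The descriptors and coefficients below are invented for illustration and do not correspond to any published scoring function:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical per-complex descriptors (e.g. H-bond count, hydrophobic
# contacts, rotatable bonds, buried surface area) -- all synthetic
n, d = 200, 4
X = rng.normal(size=(n, d))
true_w = np.array([0.9, 0.6, -0.4, 0.3])          # assumed contributions
y = X @ true_w + 5.0 + rng.normal(scale=0.1, size=n)  # synthetic pKd values

# ridge regression in closed form: w = (X'X + lam*I)^-1 X'y, with intercept
Xb = np.hstack([X, np.ones((n, 1))])
lam = 1e-2
A = Xb.T @ Xb + lam * np.eye(d + 1)
w = np.linalg.solve(A, Xb.T @ y)

pred = Xb @ w
r = np.corrcoef(pred, y)[0, 1]   # predictive correlation (training data)
```

Real scoring-function work would of course validate on held-out complexes rather than training data; this only shows the mechanics of the regression.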
Fast multipole method applied to Lagrangian simulations of vortical flows
NASA Astrophysics Data System (ADS)
Ricciardi, Túlio R.; Wolf, William R.; Bimbato, Alex M.
2017-10-01
Lagrangian simulations of unsteady vortical flows are accelerated by the multi-level fast multipole method, FMM. The combination of the FMM algorithm with a discrete vortex method, DVM, is discussed for free domain and periodic problems with focus on implementation details to reduce numerical dissipation and avoid spurious solutions in unsteady inviscid flows. An assessment of the FMM-DVM accuracy is presented through a comparison with the direct calculation of the Biot-Savart law for the simulation of the temporal evolution of an aircraft wake in the Trefftz plane. The role of several parameters such as time step restriction, truncation of the FMM series expansion, number of particles in the wake discretization and machine precision is investigated and we show how to avoid spurious instabilities. The FMM-DVM is also applied to compute the evolution of a temporal shear layer with periodic boundary conditions. A novel approach is proposed to achieve accurate solutions in the periodic FMM. This approach avoids a spurious precession of the periodic shear layer and solutions are shown to converge to the direct Biot-Savart calculation using a cotangent function.
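The direct Biot-Savart calculation used above as the reference for the FMM can be sketched for 2-D point vortices. This is the O(N^2) baseline, not the FMM itself, and the configuration is a textbook check: a counter-rotating vortex pair translates at speed Γ/(2πd):

```python
import numpy as np

def biot_savart_direct(pos, gamma):
    """O(N^2) direct Biot-Savart summation for 2-D point vortices.
    pos: (N, 2) positions; gamma: (N,) circulations. Returns (N, 2) velocities."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]   # y_i - y_j
    r2 = dx**2 + dy**2
    np.fill_diagonal(r2, 1.0)          # avoid division by zero on the diagonal
    coef = gamma[None, :] / (2.0 * np.pi * r2)
    np.fill_diagonal(coef, 0.0)        # exclude self-induction
    u = -(coef * dy).sum(axis=1)
    v = (coef * dx).sum(axis=1)
    return np.stack([u, v], axis=1)

# counter-rotating pair: both vortices translate at U = Gamma / (2*pi*d)
gamma_v, d = 1.0, 0.5
pos = np.array([[0.0, 0.0], [0.0, d]])
circ = np.array([gamma_v, -gamma_v])
vel = biot_savart_direct(pos, circ)
U = gamma_v / (2 * np.pi * d)          # analytical pair translation speed
```

Both vortices should come out moving together in the -x direction at speed U, which is the kind of analytical check used before comparing against the FMM.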
Random-breakage mapping method applied to human DNA sequences
NASA Technical Reports Server (NTRS)
Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)
1996-01-01
The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site to each end of the restriction fragment. By analyzing the positions of those discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping method.
RAMSES: Applied research on separation methods using space electrophoresis
NASA Astrophysics Data System (ADS)
Jamin Changeart, F.; Faure, F.; Sanchez, V.; Schoot, B.; Simonis, M.; Renard, A.; Collete, J. P.; Perez, D.; Val, J. M.; de Olano, A. l.
Eight European industrial companies, the CNRS, the University Paul Sabatier, and CNES (Centre National d'Etudes Spatiales) collaborate on the SBS (Space Bio Separation) project, which aims at demonstrating the possibility of preparing high-purity biomaterials under microgravity conditions. As a first step of SBS, the proposal of a cooperative flight of the RAMSES facility on board Spacelab during the IML-2 mission, scheduled for January 1993, has been selected by NASA. RAMSES allows basic and applied research on free-flow zone electrophoresis, in order to assess the influence of a low-gravity environment on the purification of biological products. Experiments will be performed by European and American scientists. The facility will be integrated in a Spacelab single rack. Using in situ diagnostics with a UV photometer and a cross illuminator, RAMSES investigates a wide variety of transport phenomena to better understand the basic mechanisms which govern the electrophoresis method. RAMSES should be a basis for a more complete facility dedicated to the purification of biomaterials, associating various separation methods. This paper provides an overview of the RAMSES space facility, with emphasis on the continuous-flow zone electrophoresis technique, the scientific background, the RAMSES experimental program, the RAMSES main functions and an overall description of the RAMSES main units.
Turbulence profiling methods applied to ESO's adaptive optics facility
NASA Astrophysics Data System (ADS)
Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.
2014-07-01
Two algorithms were recently studied for Cn2 profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the Cn2 profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to various turbulent layer heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO's Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The important influence of the outer scale on the results leads to the development of a new outer-scale fitting step included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.
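The first algorithm's truncated least-squares inversion can be sketched with a generic linear model. The response matrix below is random stand-in data, not a real SLODAR kernel, and the layer heights and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in response matrix: 40 slope-covariance baselines responding to
# 8 candidate turbulent-layer heights (a real SLODAR kernel would be modeled)
n_cov, n_layers = 40, 8
A = rng.random((n_cov, n_layers))

# synthetic Cn2 profile: strong ground layer plus one high-altitude layer
p_true = np.zeros(n_layers)
p_true[0], p_true[5] = 1.0, 0.4

cov = A @ p_true + rng.normal(scale=1e-3, size=n_cov)  # noisy covariances

# truncated least-squares inverse: singular values below rcond * s_max
# are discarded, which regularizes the inversion against noise
p_est, *_ = np.linalg.lstsq(A, cov, rcond=1e-2)
```

With well-conditioned stand-in data the truncation is barely active; on a real, near-singular SLODAR response matrix it is what keeps the profile estimate stable.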
Early Oscillation Detection for DC/DC Converter Fault Diagnosis
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2011-01-01
The electrical power system of a spacecraft plays a critical role in space mission success. Such a modern power system may contain numerous hybrid DC/DC converters, both inside the power system electronics (PSE) units and onboard most of the flight electronics modules. One of the fault conditions of DC/DC converters that poses serious threats to mission safety is the random occurrence of oscillation, related to the inherent instability characteristics of the DC/DC converters and design deficiencies of the power systems. To ensure the highest reliability of the power system, oscillations in any form shall be promptly detected during part-level testing, system integration tests, flight health monitoring, and onboard fault diagnosis. The popular gain/phase margin analysis method is capable of predicting stability levels of DC/DC converters, but it is limited to verification of designs and to part-level testing on some of the models. This method has to inject noise signals into the control loop circuitry as required and thus interrupts the DC/DC converter's normal operation and increases the risk of degrading and damaging the flight unit. A novel technique to detect oscillations at an early stage for flight hybrid DC/DC converters was developed.
NASA Technical Reports Server (NTRS)
Mclyman, C. W.
1983-01-01
Compact dc/dc inverter uses single integrated-circuit package containing six inverter gates that generate and amplify 100-kHz square-wave switching signal. Square-wave switching inverts 10-volt local power to isolated voltage at another desired level. Relatively high operating frequency reduces size of filter capacitors required, resulting in small package unit.
Advanced Signal Processing Methods Applied to Digital Mammography
NASA Technical Reports Server (NTRS)
Stauduhar, Richard P.
1997-01-01
without further support. Task 5: Better modeling does indeed make an improvement in the detection output. After the proposal ended, we came up with some new theoretical explanations that helps in understanding when the D4 filter should be better. This work is currently in the review process. Task 6: N/A. This no longer applies in view of Tasks 4-5. Task 7: Comprehensive plans for further work have been completed. These plans are the subject of two proposals, one to NASA and one to HHS. These proposals represent plans for a complete evaluation of the methods for identifying normal mammograms, augmented with significant further theoretical work.
Applying analytical and experimental methods to characterize engineered components
NASA Astrophysics Data System (ADS)
Munn, Brian S.
A variety of analytical and experimental methods were employed to characterize two very different types of engineered components: monolithic silicon carbide tiles and M12 x 1.75 Class 9.8 steel fasteners. A new application of the hole-drilling technique was developed on monolithic silicon carbide tiles of varying thicknesses. This work was driven by a need first to develop a reliable method to measure residual stresses and then to validate the methodology by characterizing residual stresses on the tiles of interest. The residual stresses measured in all tiles were tensile in nature. The highest residual stresses were measured at the surface and decreased exponentially with depth. There was also a trend for the residual tensile stresses to decrease with increasing specimen thickness. Thermal-mechanical interactions were successfully analyzed via a one-way coupled FEA modeling approach. The key input for a successful FEA analysis was an appropriate heat transfer rate. Varying the heat transfer rate in the FEA model and then comparing the stress output to experimental residual stress values provided a favorable numerical determination of the heat transfer rate. The fatigue behavior of an M12 x 1.75 Class 9.8 steel test fastener was extensively studied through a variety of experimental and analytical techniques. Of particular interest was the underlying interaction between notch plasticity and overall fatigue behavior. A very large data set of fastener fatigue behavior was generated with respect to mean stress. A series of endurance limit curves was established for different mean stress values ranging from minimal to the yield strength of the steel fastener (0 ≤ σm ≤ σy). This wide range in mean stress values created a change in notch tip plasticity, which caused a local diminishing of the mean stress, increasing expected fatigue life. The change in notch plasticity was identified by residual stress
Applying sociodramatic methods in teaching transition to palliative care.
Baile, Walter F; Walters, Rebecca
2013-03-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using the sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and the use of "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented.
Kernel energy method applied to vesicular stomatitis virus nucleoprotein
Huang, Lulu; Massa, Lou; Karle, Jerome
2009-01-01
The kernel energy method (KEM) is applied to the vesicular stomatitis virus (VSV) nucleoprotein (PDB ID code 2QVJ). The calculations employ atomic coordinates from the crystal structure at 2.8-Å resolution, except for the hydrogen atoms, whose positions were modeled by using the computer program HYPERCHEM. The calculated KEM ab initio limited basis Hartree-Fock energy for the full 33,175 atom molecule (including hydrogen atoms) is obtained. In the KEM, a full biological molecule is represented by smaller “kernels” of atoms, greatly simplifying the calculations. Collections of kernels are well suited for parallel computation. VSV consists of five similar chains, and we obtain the energy of each chain. Interchain hydrogen bonds contribute to the interaction energy between the chains. These hydrogen bond energies are calculated in Hartree-Fock (HF) and Møller-Plesset perturbation theory to second order (MP2) approximations by using 6–31G** basis orbitals. The correlation energy, included in MP2, is a significant factor in the interchain hydrogen bond energies. PMID:19188588
Naveen, C. S.; Jayanna, H. S.; Lamani, Ashok R.; Rajeeva, M. P.
2014-04-24
ZnO nanoparticles of different sizes were prepared by varying the molar ratio of glycine and zinc nitrate hexahydrate as fuel and oxidizer (F/O = 0.8, 1.11, 1.7) by a simple solution combustion method. The powder samples were characterized by UV-Visible spectrophotometry, X-ray diffractometry and scanning electron microscopy (SEM). DC electrical conductivity measurements at room temperature and in the temperature range of 313-673 K were carried out on the prepared thick films; the conductivity was found to increase with increasing temperature, which confirms the semiconducting nature of the samples. Activation energies were calculated, and it was found that the F/O molar ratio of 1.7 has a low E_AL (low-temperature activation energy) and a high E_AH (high-temperature activation energy) compared with the other samples.
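Activation energies of the kind reported above are conventionally extracted from an Arrhenius fit of conductivity against inverse temperature. The sketch below uses synthetic, noise-free data over the abstract's temperature range; the conductivity prefactor and activation energy are illustrative values, not the paper's measurements:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# synthetic conductivity following sigma = sigma0 * exp(-Ea / (k_B * T))
Ea_true, sigma0 = 0.35, 1.0e3          # eV and S/cm (illustrative)
T = np.linspace(313.0, 673.0, 25)      # temperature range from the abstract
sigma = sigma0 * np.exp(-Ea_true / (k_B * T))

# Arrhenius fit: ln(sigma) vs 1/T is a line with slope -Ea / k_B
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_fit = -slope * k_B
```

The rising conductivity with temperature is the semiconducting signature the abstract mentions; the fitted slope returns the activation energy.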
NASA Astrophysics Data System (ADS)
Li, Bin; Zhang, Qin-Jian; Shi, Yan-Chao; Li, Jia-Jun; Li, Hong; Lu, Fan-Xiu; Chen, Guang-Chao
2014-08-01
A nano-crystalline diamond film is grown by the dc arcjet chemical vapor deposition method. The film is characterized by scanning electron microscopy, high-resolution transmission electron microscopy (HRTEM), x-ray diffraction (XRD) and Raman spectroscopy, respectively. The nanocrystalline grains have an average size of 80 nm as measured by XRD, further confirmed by Raman and HRTEM. The observed novel morphology of the growth surface, a pineapple-like morphology, is constructed from cubo-octahedral growth zones with a smooth faceted top surface and coarse side surfaces. The as-grown film possesses a (100)-dominant surface containing a little amorphous sp2 component, which is far different from nano-crystalline films with the usual cauliflower-like morphology.
Applying multi-resolution numerical methods to geodynamics
NASA Astrophysics Data System (ADS)
Davies, David Rhodri
Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled
Neural network method applied to particle image velocimetry
NASA Astrophysics Data System (ADS)
Grant, Ian; Pan, X.
1993-12-01
realised. An important class of neural network is the multi-layer perceptron. The neurons are distributed on surfaces and linked by weighted interconnections. In the present paper we demonstrate how this type of net can be developed into a competitive, adaptive filter which will identify PIV image pairs in a number of commonly occurring flow types. Previous work by the authors in particle tracking analysis (1, 2) has shown the efficiency of statistical windowing techniques in flows without systematic (in time or space) variations. The effectiveness of the present neural net is illustrated by applying it to digital simulations of turbulent and rotating flows. Work reported by Cenedese et al (3) has taken a different approach in examining the potential for neural net methods applied to PIV.
NASA Astrophysics Data System (ADS)
Deceuster, John; Etienne, Adélaïde; Robert, Tanguy; Nguyen, Frédéric; Kaufmann, Olivier
2014-04-01
Several techniques are available to estimate the depth of investigation or to identify possible artifacts in dc resistivity surveys. Commonly, the depth of investigation (DOI) is estimated by applying an arbitrarily chosen cut-off value to a selected indicator (resolution, sensitivity or DOI index). Ranges of cut-off values are recommended in the literature for the different indicators. However, small changes in threshold values may induce strong variations in the estimated depths of investigation. To overcome this problem, we developed a new statistical method to estimate the DOI of dc resistivity surveys based on a modified DOI index approach. This method is composed of 5 successive steps. First, two inversions are performed using different resistivity reference models for the inversion (0.1 and 10 times the arithmetic mean of the logarithm of the observed apparent resistivity values). The inversion models are extended to the edges of the survey line and to a depth range of three times the pseudodepth of investigation of the largest array spacing used. In step 2, we compute the histogram of a newly defined scaled DOI index. Step 3 consists of fitting a mixture of two Gaussian distributions (G1 and G2) to the cumulative distribution function of the scaled DOI index values. Based on this fitting, step 4 focuses on the computation of an interpretation index (II), defined for every cell j of the model as the relative probability density that the cell j belongs to G1, which describes the Gaussian distribution of the cells with a scaled DOI index close to 0.0. In step 5, a new inversion is performed using a third resistivity reference model (the arithmetic mean of the logarithm of the observed apparent resistivity values). The final electrical resistivity image is produced by using II as alpha-blending values, allowing the visual discrimination between well-constrained areas and poorly-constrained cells.
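Steps 3-4 above (fitting a two-component Gaussian mixture to the scaled DOI index and taking the interpretation index as the relative probability of membership in G1) can be sketched with a small EM iteration. The data are synthetic and the implementation is a generic 1-D mixture fit, not the authors' code:

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Fit a 2-component 1-D Gaussian mixture by EM; returns the parameters
    and the responsibility (posterior membership) of each component."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])                 # spread-out init
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each component for every point
        pdf = (w / (sd * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return mu, sd, w, resp

# synthetic scaled DOI indices: well-constrained cells near 0.0 (G1) and
# poorly-constrained cells near 1.0 (G2)
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.05, 0.03, 600), rng.normal(0.9, 0.1, 400)])
mu, sd, w, resp = em_two_gaussians(x)

g1 = np.argmin(mu)     # component describing indices close to 0.0
II = resp[:, g1]       # interpretation index per model cell
```

Cells drawn from the low-index cluster come out with II near 1 (well constrained) and the high-index cluster with II near 0, which is the contrast the alpha-blended image exploits.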
A GIS modeling method applied to predicting forest songbird habitat
Dettmers, Randy; Bart, Jonathan
1999-01-01
We have developed an approach for using "presence" data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be applied over large areas. Our method consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators "and" and "or." We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this method by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models. The performance of these
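The first step above (choosing the range of a habitat variable that maximizes the difference between the presence CDF and the study-area CDF) can be sketched as follows; the variable and both samples are synthetic, and the bound-selection rule is one plausible reading of the CDF-difference idea, not the authors' exact procedure:

```python
import numpy as np

def optimal_range(presence, background):
    """Range [a, b] of one habitat variable maximizing the difference between
    the presence CDF and the background CDF: capture many presence points
    while covering as little of the study area as possible."""
    grid = np.sort(np.unique(np.concatenate([presence, background])))
    Fp = np.searchsorted(np.sort(presence), grid, side="right") / len(presence)
    Fb = np.searchsorted(np.sort(background), grid, side="right") / len(background)
    diff = Fp - Fb
    a = grid[np.argmin(diff)]   # lower bound: most negative CDF difference
    b = grid[np.argmax(diff)]   # upper bound: most positive CDF difference
    return a, b

# synthetic example: presences concentrated where the variable is 0.4-0.6,
# background (all study-area locations) uniform over the full range
rng = np.random.default_rng(4)
background = rng.uniform(0.0, 1.0, 5000)
presence = rng.uniform(0.4, 0.6, 300)

a, b = optimal_range(presence, background)
```

On this synthetic variable the recovered range closes in on the 0.4-0.6 band where presences cluster; in the GIS model such ranges for several variables would then be combined with Boolean "and"/"or".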
A novel sintering method to obtain fully dense gadolinia doped ceria by applying a direct current
NASA Astrophysics Data System (ADS)
Hao, Xiaoming; Liu, Yajie; Wang, Zhenhua; Qiao, Jinshuo; Sun, Kening
2012-07-01
Fully dense Ce0.8Gd0.2O1.9 (gadolinia doped ceria, GDC) is obtained by a novel flash sintering technique in several seconds at 545 °C by applying a direct current (DC) electrical field of 70 V cm-1. The onset applied field for this phenomenon is 20 V cm-1, and the volume specific power dissipation at the onset of flash sintering is about 10 mW mm-3. By comparison with the shrinkage strain of conventional sintering, as well as scanning electron microscopy (SEM) analysis, we conclude that the GDC specimens are sintered to full density under the various applied fields. In addition, we demonstrate that the grain size of GDC decreases with increasing applied field and decreasing sintering temperature. Through calculation, we find that the sintering of GDC can be explained by Joule heating from the applied electrical field.
Valuing national effects of digital health investments: an applied method.
Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad
2015-01-01
This paper describes an approach which has been applied to value national outcomes of investments by federal, provincial and territorial governments, clinicians and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.
a Novel Method for Improving Plasma Nitriding Efficiency: Pre-Magnetization by DC Magnetic Field
NASA Astrophysics Data System (ADS)
Kovaci, Halim; Yetim, Ali Fatih; Bozkurt, Yusuf Burak; Çelik, Ayhan
2017-06-01
In this study, a novel pre-magnetization process, which enables easy diffusion of nitrogen, was used to enhance plasma nitriding efficiency. First, magnetic fields with intensities of 1500 G and 2500 G were applied to untreated samples before nitriding. After the pre-magnetization, the untreated and pre-magnetized samples were plasma nitrided for 4 h in a gas mixture of 50% N2-50% H2 at 500 °C and 600 °C. The structural, mechanical and morphological properties of the samples were examined by X-ray diffraction (XRD), scanning electron microscopy (SEM), a microhardness tester and a surface tension meter. It was observed that pre-magnetization increased the surface energy of the samples. Therefore, both the compound and diffusion layer thicknesses increased with the pre-magnetization process before the nitriding treatment. As the modified layer thickness increased, higher surface hardness values were obtained.
NASA Astrophysics Data System (ADS)
Riefler, Norbert; Eremina, Elena; Hertlein, Christopher; Helden, Laurent; Eremin, Yuri; Wriedt, Thomas; Bechinger, Clemens
2007-07-01
In this paper we applied a variant of the T-matrix method, the null-field method with discrete sources (NFM-DS), and the discrete sources method (DSM) to model light scattering by a particle near a plane surface in an evanescent wave field. Such investigations have great practical value for total internal reflection microscopy (TIRM). The numerical algorithms of the DSM and NFM-DS have been modified to model the specific conditions of real measurement experiments carried out at Stuttgart University. The objective response and scattering cross-section have been calculated. Numerical results of both methods have been compared and demonstrate good agreement with measurements.
Melt Conditioned DC (MC-DC) Casting of Magnesium Alloys
NASA Astrophysics Data System (ADS)
Zuo, Y. B.; Jiang, B.; Zhang, Y.; Fan, Z.
A new melt conditioned direct chill (MC-DC) casting process has been developed for producing high quality magnesium alloy billets and slabs. In the MC-DC casting process, intensive melt shearing provided by a high shear device is applied directly to the alloy melt in the sump during DC casting. The high shear device provides intensive melt shearing to disperse potential nucleating particles, creates a macroscopic melt flow to uniformly distribute the dispersed particles, and maintains a uniform temperature and chemical composition throughout the melt in the sump. Experimental results have demonstrated that the MC-DC casting process can produce magnesium alloy billets with significantly refined microstructure and reduced casting defects. In this paper, we introduce the new MC-DC casting process, report the grain refining effect of intensive melt shearing during the MC-DC casting process, and discuss the grain refining mechanism.
dc electric invisibility cloak.
Yang, Fan; Mei, Zhong Lei; Jin, Tian Yu; Cui, Tie Jun
2012-08-03
We present the first experimental demonstration of a dc electric cloak for steady current fields. Using the analogy between electrically conducting materials and resistor networks, a dc invisibility cloak is designed, fabricated, and tested using circuit theory. We show that the dc cloak can guide electric currents around the cloaked region smoothly and confine perturbations to the inside of the cloak. Outside the cloak, the current lines return to their original directions as if nothing had happened. The measurement data agree exceptionally well with the theoretical prediction and simulation results, with nearly perfect cloaking performance. The proposed method can be directly used to realize other dc electric devices with anisotropic conductivities designed by transformation optics. Manipulation of steady currents through the control of anisotropic conductivities has many potential applications, such as electric impedance tomography, graphene, natural resource exploration, and military camouflage.
Use Conditions and Efficiency Measurements of DC Power Optimizers for Photovoltaic Systems: Preprint
Deline, C.; MacAlpine, S.
2013-10-01
No consensus standard exists for estimating annual conversion efficiency of DC-DC converters or power optimizers in photovoltaic (PV) applications. The performance benefits of PV power electronics, including per-panel DC-DC converters, depend in large part on the operating conditions of the PV system, along with the performance characteristics of the power optimizer itself. This work presents a case study of three system configurations that take advantage of the capabilities of DC power optimizers. Measured conversion efficiencies of DC-DC converters are applied to these scenarios to determine the annual weighted operating efficiency. A simplified general method of reporting weighted efficiency is given, based on the California Energy Commission's CEC efficiency rating and several input/output voltage ratios. Efficiency measurements of commercial power optimizer products are presented using the new performance metric, along with a description of the limitations of the approach.
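As context for the CEC-style weighting mentioned above, a weighted efficiency is simply a fixed-weight average of efficiencies measured at several fractional loading points. The sketch below uses the commonly quoted CEC inverter loading weights; the measured efficiencies are illustrative numbers, not data from this report:

```python
# A minimal sketch of a CEC-style weighted efficiency. The loading-point
# weights below are the commonly quoted CEC inverter weights; the measured
# efficiencies are illustrative numbers, not the report's data.

CEC_WEIGHTS = {0.10: 0.04, 0.20: 0.05, 0.30: 0.12, 0.50: 0.21, 0.75: 0.53, 1.00: 0.05}

def cec_weighted_efficiency(eff_at_load):
    """Weighted average of conversion efficiency measured at fractional loads."""
    return sum(w * eff_at_load[load] for load, w in CEC_WEIGHTS.items())

# Illustrative converter efficiencies (fraction of rated power -> efficiency).
measured = {0.10: 0.955, 0.20: 0.970, 0.30: 0.975,
            0.50: 0.978, 0.75: 0.977, 1.00: 0.975}
print(round(cec_weighted_efficiency(measured), 4))  # prints 0.9756
```

The heavy weight at 75% loading is why converters optimized for partial-load efficiency score well on this metric.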
Genualdi, Susie; MacMahon, Shaun; Robbins, Katherine; Farris, Samantha; Shyong, Nicole; DeJager, Lowri
2016-01-01
Sudan I, II, III and IV dyes are banned for use as food colorants in the United States and European Union because they are toxic and carcinogenic. These dyes have been illegally used as food additives in products such as chilli spices and palm oil to enhance their red colour. From 2003 to 2005, the European Union made a series of decisions requiring chilli spices and palm oil imported to the European Union to contain analytical reports declaring them free of Sudan I-IV. In order for the USFDA to investigate the adulteration of palm oil and chilli spices with unapproved colour additives in the United States, a method was developed for the extraction and analysis of Sudan dyes in palm oil, and previous methods were validated for Sudan dyes in chilli spices. Both LC-DAD and LC-MS/MS methods were examined for their limitations and effectiveness in identifying adulterated samples. Method validation was performed for both chilli spices and palm oil by spiking samples known to be free of Sudan dyes at concentrations close to the limit of detection. Reproducibility, matrix effects, and selectivity of the method were also investigated. Additionally, for the first time a survey of palm oil and chilli spices was performed in the United States, specifically in the Washington, DC, area. Illegal dyes, primarily Sudan IV, were detected in palm oil at concentrations from 150 to 24 000 ng ml(-1). Low concentrations (< 21 µg kg(-1)) of Sudan dyes were found in 11 out of 57 spices and are most likely a result of cross-contamination during preparation and storage and not intentional adulteration.
Spatial location weighted optimization scheme for DC optical tomography.
Zhou, Jun; Bai, Jing; He, Ping
2003-01-27
In this paper, a spatial location weighted gradient-based optimization scheme for reducing the computational burden and increasing the reconstruction precision is presented. The method applies to DC diffusion-based optical tomography, where the reconstruction otherwise suffers from slow convergence. The inverse approach employs a weighted steepest descent method combined with a conjugate gradient method. A reverse differentiation method is used to derive the gradient efficiently. The reconstruction results confirm that the spatial location weighted optimization method offers a more efficient approach to the DC optical imaging problem than the unweighted method does.
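As a toy illustration of the weighting idea (not the paper's forward model or data), the following Python sketch rescales each gradient component by a location-dependent weight before the descent update, so that a weakly sensed coordinate converges as fast as a strongly sensed one:

```python
# Illustrative sketch of spatially weighted steepest descent: each gradient
# component is rescaled by a location-dependent weight before the update, so
# poorly sensed (e.g. deep) unknowns are not systematically under-corrected.
# The objective and the weights here are toy choices, not the paper's model.

def weighted_steepest_descent(grad, x0, weights, step=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * wi * gi for xi, wi, gi in zip(x, weights, g)]
    return x

# Toy objective: f(x) = sum(a_i * x_i^2), minimized at the origin.
a = [1.0, 0.05]                      # the second coordinate is weakly sensed
grad = lambda x: [2 * ai * xi for ai, xi in zip(a, x)]
weights = [1.0, 1.0 / 0.05]          # boost the weak coordinate's update

x = weighted_steepest_descent(grad, [1.0, 1.0], weights)
print([round(v, 6) for v in x])      # both coordinates reach ~0 together
```

Without the weight, the second coordinate would contract by only a factor 0.99 per step and converge far more slowly than the first.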
Optimal Scheduling Method of Controllable Loads in DC Smart Apartment Building
NASA Astrophysics Data System (ADS)
Shimoji, Tsubasa; Tahara, Hayato; Matayoshi, Hidehito; Yona, Atsushi; Senjyu, Tomonobu
2015-12-01
From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as solar collectors (SC) and photovoltaic generation (PV) have been gaining attention worldwide. Houses and buildings with PV and heat pumps (HPs) are now widely used in residential areas due to the time-of-use (TOU) electricity pricing scheme, which is inexpensive at night and expensive during the daytime. If fixed batteries and electric vehicles (EVs) are introduced on the premises, the electricity cost can be reduced even further. However, if occupants use these controllable loads arbitrarily, the power demand of residential buildings may fluctuate in the future. Thus, the operation of controllable loads such as HPs, batteries and EVs should be optimally scheduled in order to prevent the power flow from fluctuating rapidly. This paper proposes an optimal scheduling method for controllable loads whose purpose is not only to minimize the electricity cost for consumers but also to suppress fluctuations of the power flow on the supply side. Furthermore, a novel electricity pricing scheme is also suggested.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... Model DC-9-81 (MD- 81), DC-9-82 (MD-82), DC-9-83 (MD-83), DC-9-87 (MD-87), and MD-88 Airplanes AGENCY... Ground Floor, Room W12-140, 1200 New Jersey Avenue, SE., Washington, DC 20590. FOR FURTHER INFORMATION... ADs (b) None. Applicability (c) This AD applies to The Boeing Company Model DC-9-81 (MD-81),...
Applying Statistical Methods To The Proton Radius Puzzle
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas
2016-03-01
In recent nuclear physics publications, one can find many examples where chi2 and reduced chi2 are the only tools used for model selection, even though a chi2 difference test is only meaningful for nested models. With this in mind, we reanalyze electron scattering data, being careful to clearly define our selection criteria as well as using a covariance matrix and confidence levels as per the statistics section of the particle data book. We show that when applying such techniques to hydrogen elastic scattering data, the nested models often require fewer parameters than typically used and that non-nested models are often rejected inappropriately.
Kinetic Monte Carlo method applied to nucleic acid hairpin folding.
Sauerwine, Ben; Widom, Michael
2011-12-01
Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
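The rate picture described above can be illustrated with a minimal two-state (open/folded) kinetic Monte Carlo sketch; the Arrhenius prefactor and barrier heights below are illustrative values, not the paper's fitted parameters:

```python
# A minimal kinetic Monte Carlo (Gillespie-type) sketch of a two-state hairpin:
# open <-> folded transitions with Arrhenius rates k = A * exp(-E_barrier / kT).
# Prefactor and barriers are illustrative, not parameters from the paper.
import math
import random

def arrhenius(prefactor, barrier, kT):
    return prefactor * math.exp(-barrier / kT)

def simulate(k_fold, k_unfold, t_end, seed=0):
    """Return the fraction of time spent folded up to t_end."""
    rng = random.Random(seed)
    t, state, t_folded = 0.0, "open", 0.0
    while t < t_end:
        rate = k_fold if state == "open" else k_unfold
        dt = min(rng.expovariate(rate), t_end - t)   # exponential waiting time
        if state == "folded":
            t_folded += dt
        t += dt
        state = "folded" if state == "open" else "open"
    return t_folded / t_end

kT = 0.6                                  # ~kcal/mol at room temperature
k_fold = arrhenius(1e6, 3.0, kT)          # barrier heights in kcal/mol
k_unfold = arrhenius(1e6, 4.0, kT)
occ = simulate(k_fold, k_unfold, t_end=1.0)
# Detailed balance predicts a folded occupancy of k_fold / (k_fold + k_unfold).
print(round(occ, 2), round(k_fold / (k_fold + k_unfold), 2))
```

The simulated occupancy converges to the detailed-balance prediction as `t_end` grows, which is a useful sanity check on any KMC rate model.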
Moon, Jaeyun; Weaver, Keith; Feng, Bo; Chae, Han Gi; Kumar, Satish; Baek, Jong-Beom; Peterson, G P
2012-01-01
Customized engineered fibers are currently being used extensively in the aerospace and automobile industries due to the ability to "design in" specific engineering characteristics. Understanding the thermal conductivity of these new fibers is critical for thermal management and design optimization. In the current investigation, a steady-state dc thermal bridge method (DCTBM) is developed to measure the thermal conductivity of individual poly(ether ketone) (PEK)/carbon nanotube (CNT) fibers. For non-conductive fibers, a thin platinum layer was deposited on the test articles to serve as the heater and temperature sensor. The effect of the platinum layer on the thermal conductivity is presented and discussed. The DCTBM is first validated using gold and platinum wires (25 μm in diameter) over a temperature range from room temperature to 400 K with ±11% uncertainty, and then applied to PEK/CNT fibers with various CNT loadings. At a 28 wt.% CNT loading, the thermal conductivity of the fibers at 390 K is over 27 W m-1 K-1, which is comparable to some engineering alloys.
Methods applied in studies of benthic marine debris.
Spengler, Angela; Costa, Monica F
2008-02-01
The ocean floor is one of the main accumulation sites of marine debris. The study of this kind of debris still lags behind that of shorelines. It is necessary to identify the methods used to evaluate this debris and how the results are presented and interpreted. From the available literature on benthic marine debris (26 studies), six sampling methods were registered: bottom trawl net, sonar, submersible, snorkeling, scuba diving and manta tow. The most frequently used method was the bottom trawl net, followed by the three methods of diving. The majority of the debris was classified according to its former use, and the results were usually expressed as items per unit of area. To facilitate comparisons of contamination levels among sites and regions, some standardization requirements are suggested.
QSAGE iterative method applied to fuzzy parabolic equation
NASA Astrophysics Data System (ADS)
Dahalan, A. A.; Muthuvalu, M. S.; Sulaiman, J.
2016-02-01
The aim of this paper is to examine the effectiveness of the Quarter-Sweep Alternating Group Explicit (QSAGE) iterative method by solving linear system generated from the discretization of one-dimensional fuzzy diffusion problems. In addition, the formulation and implementation of the proposed method are also presented. The results obtained are then compared with Full-Sweep Gauss-Seidel (FSGS), Full-Sweep AGE (FSAGE) and Half-Sweep AGE (HSAGE) to illustrate their feasibility.
Spectral methods applied to fluidized bed combustors. Final report
Brown, R.C.; Christofides, N.J.; Junk, K.W.; Raines, T.S.; Thiede, T.D.
1996-08-01
The objective of this project was to develop methods for characterizing fuels and sorbents from time-series data obtained during transient operation of fluidized bed boilers. These methods aimed at determining time constants for devolatilization and char burnout using carbon dioxide (CO2) profiles, and time constants for the calcination and sulfation processes using CO2 and sulfur dioxide (SO2) profiles.
Efficiency transfer method applied to surface beta contamination measurements.
Stanga, D; De Felice, P; Capogni, M
2017-08-09
In this paper, the application of the efficiency transfer method to the evaluation of surface beta contamination is described. Using efficiency transfer factors, the reference calibration factor of contamination monitors is corrected to obtain the calibration factor for an actual contamination source. The experimental part of the paper illustrates the applicability of the method to the direct measurement of surface beta contamination.
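The correction described above reduces to rescaling the reference calibration factor by the ratio of computed detection efficiencies. A schematic Python sketch with illustrative numbers (not the paper's data):

```python
# Schematic sketch of the efficiency transfer idea: a monitor's calibration
# factor, established with a reference source, is corrected by the ratio of
# detection efficiencies computed for the reference and for the actual
# contamination source. All numbers below are illustrative.

def transferred_calibration_factor(ref_factor, eff_ref, eff_actual):
    """Rescale the reference calibration factor to an actual source geometry."""
    return ref_factor * eff_ref / eff_actual

# Reference source: calibration factor 2.5 (activity per count-rate unit) at a
# computed efficiency of 0.30; the actual source geometry yields only 0.24.
factor = transferred_calibration_factor(2.5, 0.30, 0.24)
print(round(factor, 4))
```

A lower efficiency for the actual source means each detected count represents more surface activity, so the calibration factor is scaled up accordingly.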
[Molecular and immunological methods applied in diagnosis of mycoses].
Kuba, Katarzyna
2008-01-01
The diagnosis of fungal infections remains a problem for the management of fungal diseases, particularly in immunocompromised patients. Systemic Candida infections and invasive aspergillosis can be a serious problem for individuals who need intensive care. Traditional methods used for the identification and typing of medically important fungi, such as morphological and biochemical analysis, are time-consuming. For the diagnosis of mycoses caused by pathogenic fungi, faster and more specific methods are needed, especially after the dramatic increase in nosocomial invasive mycoses. New diagnostic tools to detect circulating fungal antigens in biological fluids and PCR-based methods to detect species- or genus-specific DNA or RNA have been developed. Antigen detection is limited to searching for only one genus. Molecular genetic methods, especially PCR analysis, are becoming increasingly important as a part of diagnostics in the clinical mycology laboratory. Various modifications of the PCR method are used to detect DNA in clinical material, particularly multiplex, nested and real-time PCR. Molecular methods may be used to detect nucleic acids of fungi in clinical samples, to identify fungal cultures at the species level, or to evaluate strain heterogeneity within a species. This article reviews some of the recent advances in the molecular diagnosis of fungal infections.
Newton-like minimal residual methods applied to transonic flow calculations
NASA Technical Reports Server (NTRS)
Wong, Y. S.
1984-01-01
A computational technique for the solution of the full potential equation is presented. The method consists of outer and inner iterations. The outer iterate is based on a Newton-like algorithm, and a preconditioned minimal residual method is used to seek an approximate solution of the system of linear equations arising at each inner iterate. The present iterative scheme is formulated so that the uncertainties and difficulties associated with many iterative techniques, namely the requirement of acceleration parameters and the treatment of additional boundary conditions for the intermediate variables, are eliminated. Numerical experiments based on the new method for transonic potential flows around the NACA 0012 airfoil at different Mach numbers and different angles of attack are presented, and these results are compared with those obtained by the approximate factorization technique. Extension to three-dimensional flow calculations and application of the present method in finite element methods for fluid dynamics problems are also discussed. The inexact Newton-like method produces a smoother reduction in the residual norm, and the number of supersonic points and the circulation are rapidly established as the number of iterations is increased.
Methods of Assessing and Achieving Normality Applied to Environmental Data
Mateu
1997-09-01
It has been recognized for a long time that data transformation methods capable of achieving normality of distributions could have a crucial role in statistical analysis, especially towards an efficient application of techniques such as analysis of variance and multiple regression analysis. Normality is a basic assumption in many of the statistical methods used in the environmental sciences and is very often neglected. In this paper several techniques to test normality of distributions are proposed and analyzed. Confidence intervals and nonparametric tests are used and discussed. Basic and Box-Cox transformations are the suggested methods to achieve normal variables. Finally, we develop an application related to environmental data with atmospheric parameters and SO2 and particle concentrations. Results show that the analyzed transformations work well and are very useful to achieve normal distributions. Key words: normal distribution; kurtosis; skewness; confidence intervals; Box-Cox transformations; nonparametric tests.
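As a brief illustration of the suggested Box-Cox approach (with synthetic data, not the study's measurements), the log case (lambda = 0) removes the skewness of exponentially growing values:

```python
# Sketch of the Box-Cox transformation for pushing right-skewed environmental
# data (e.g. SO2 concentrations) toward normality:
#   y = (x^lam - 1) / lam  for lam != 0,  and  y = ln(x)  for lam = 0.
# The data below are synthetic, not from the study.
import math

def boxcox(x, lam):
    if lam == 0:
        return [math.log(v) for v in x]
    return [(v ** lam - 1) / lam for v in x]

def skewness(x):
    """Sample skewness (third standardized moment, biased form)."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    return sum(((v - mean) / sd) ** 3 for v in x) / n

# Right-skewed synthetic concentrations: exponentials of evenly spaced values.
data = [math.exp(i / 10) for i in range(1, 41)]
print(round(skewness(data), 3), round(skewness(boxcox(data, 0)), 3))
```

In practice lambda is estimated from the data (e.g. by maximum likelihood); the fixed lambda = 0 here is just the simplest case.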
The lambda-scheme method applied to Stirling engines
NASA Astrophysics Data System (ADS)
Franco, R.
1985-12-01
An integration method of the motion equations is the so-called 'lambda-scheme': such a method suggests that, in the numerical procedure of the approximation of the derivatives in space with finite differences, the physical domains of dependence have to be correctly taken into account, according to the wave propagation through the flow. In the lambda-scheme method, the codes are simple, the computing time is kept very low, while accuracy (second-order in space and time) of the results is very satisfactory. As a matter of fact, the simulation model here discussed leads to a deeper analysis and a closer prediction of Stirling engine performances. As a first approach, a feasibility analysis is carried out for an expansion space-heat exchanger flow duty simulation.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
DAKOTA reliability methods applied to RAVEN/RELAP-7.
Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea
2013-09-01
This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a station blackout analysis of a simplified pressurized water reactor (PWR).
Method for applying pyrolytic carbon coatings to small particles
Beatty, Ronald L.; Kiplinger, Dale V.; Chilcoat, Bill R.
1977-01-01
A method for coating small diameter, low density particles with pyrolytic carbon is provided by fluidizing a bed of particles wherein at least 50 per cent of the particles have a density and diameter at least two times those of the remainder of the particles, and thereafter recovering the small-diameter, coated particles.
[Synchrotron-based characterization methods applied to ancient materials (I)].
Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc
2015-12-01
This article aims at presenting the first results of a transdisciplinary research programme in heritage sciences. Based on the growing use and on the potentialities of micro- and nano-characterization synchrotron-based methods to study ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution will identify and test conceptual and methodological elements of convergence between physicochemical and historical sciences.
GENERAL CONSIDERATIONS FOR GEOPHYSICAL METHODS APPLIED TO AGRICULTURE
USDA-ARS?s Scientific Manuscript database
Geophysics is the application of physical quantity measurement techniques to provide information on conditions or features beneath the earth’s surface. With the exception of borehole geophysical methods and soil probes like a cone penetrometer, these techniques are generally noninvasive with physica...
System Identification and POD Method Applied to Unsteady Aerodynamics
NASA Technical Reports Server (NTRS)
Tang, Deman; Kholodar, Denis; Juang, Jer-Nan; Dowell, Earl H.
2001-01-01
The representation of unsteady aerodynamic flow fields in terms of global aerodynamic modes has proven to be a useful method for reducing the size of the aerodynamic model relative to representations that use local variables at discrete grid points in the flow field. Eigenmodes and Proper Orthogonal Decomposition (POD) modes have been used for this purpose with good effect. This suggests that system identification models may also be used to represent the aerodynamic flow field. Implicit in the use of a system identification technique is the notion that a relatively small state-space model can be useful in describing a dynamical system. The POD model is first used to show that a reduced order model can indeed be obtained from a much larger numerical aerodynamic model (the vortex lattice method is used for illustrative purposes), and the results from the POD and the system identification methods are then compared. For the example considered, the two methods are shown to give comparable results in terms of accuracy and reduced model size. The advantages and limitations of each approach are briefly discussed. Both appear promising and complementary in their characteristics.
Comparing Two Attachment Classification Methods Applied to Preschool Strange Situations
Spieker, Susan; Crittenden, Patricia Mckinsey
2013-01-01
This study compared two methods for classifying preschool-age children's behavior in the Strange Situation procedure, the MacArthur (MAC) and the Preschool Assessment of Attachment (PAA), to determine whether they operationalized converging or diverging approaches to attachment theory. Strange Situations of 306 randomly selected 3-year-old children and their mothers in the NICHD Study of Early Child Care and Youth Development were classified with the MAC and PAA. The methods showed 50% agreement. A block of seven demographic, child and family predictors was unrelated to MAC classifications, but accounted for 19% of the variance in PAA classifications. The MAC and PAA each had associations with some child outcomes in grades 1-5 (ages 6-10) totalling 5% and 12% of the variance respectively, but some of the MAC associations were counter to the hypothesis. The MAC and PAA were sufficiently different to reflect both different classificatory methods and different theoretical understandings of attachment. Results are discussed in terms of limitations of the sample and measures available to compare the two methods, and clinical implications. PMID:19914941
Inversion method applied to the rotation curves of galaxies
NASA Astrophysics Data System (ADS)
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods for matching both numerical data of density and velocity profiles in some low surface brightness galaxies with theoretical models of Boehmer-Harko, Navarro-Frenk-White and pseudo-isothermal profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, genetic algorithm and simulated annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 2007, ApJ, 490, 493) and pseudo-isothermal profile PI (Robles & Matos 2012, MNRAS, 422, 282). The results obtained showed that the BH and PI profile dark matter models fit very well both the density and the velocity profiles; in contrast, the NFW model did not fit the profiles well in any analyzed galaxy.
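The simulated annealing variant of such a parametric inversion can be sketched compactly. The following Python example (an illustrative implementation, not the authors' code) fits the two parameters of a pseudo-isothermal profile, rho(r) = rho0 / (1 + (r/rc)^2), to a synthetic density profile:

```python
# Compact simulated-annealing sketch of a two-parameter inversion: fit the
# central density rho0 and core radius rc of a pseudo-isothermal halo to a
# (here synthetic) density profile. Cooling schedule and step sizes are
# illustrative choices, not the authors' settings.
import math
import random

def pseudo_isothermal(r, rho0, rc):
    return rho0 / (1.0 + (r / rc) ** 2)

def chi2(params, radii, densities):
    rho0, rc = params
    return sum((pseudo_isothermal(r, rho0, rc) - d) ** 2
               for r, d in zip(radii, densities))

def anneal(radii, densities, start=(1.0, 1.0), t0=1.0, cooling=0.995,
           steps=4000, seed=1):
    rng = random.Random(seed)
    params, cost, temp = list(start), chi2(start, radii, densities), t0
    for _ in range(steps):
        trial = [max(1e-6, p + rng.gauss(0, 0.05)) for p in params]
        trial_cost = chi2(trial, radii, densities)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if trial_cost < cost or rng.random() < math.exp((cost - trial_cost) / temp):
            params, cost = trial, trial_cost
        temp *= cooling
    return params

radii = [0.5 * i for i in range(1, 21)]
true_rho0, true_rc = 2.0, 1.5
densities = [pseudo_isothermal(r, true_rho0, true_rc) for r in radii]
rho0, rc = anneal(radii, densities)
print(round(rho0, 2), round(rc, 2))
```

The same driver can be pointed at an NFW or Boehmer-Harko profile by swapping out `pseudo_isothermal`, which is what makes these inversion methods convenient for model comparison.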
Ideal and computer mathematics applied to meshfree methods
NASA Astrophysics Data System (ADS)
Kansa, E.
2016-10-01
Early numerical methods for solving ordinary and partial differential equations relied upon human computers who used mechanical devices. The algorithms changed little over the evolution of electronic computers, achieving only low-order convergence rates. Using the latest computational science toolkit, a meshfree scheme was developed that converges exponentially.
Tensor product decomposition methods applied to complex flow data
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2017-04-01
Low-rank multilevel approximation methods are an important tool in numerical analysis and in scientific computing. Those methods are often suited to attack high-dimensional problems successfully and allow very compact representations of large data sets. Specifically, hierarchical tensor product decomposition methods emerge as a promising approach for application to data concerned with cascade-of-scales problems, e.g., in turbulent fluid dynamics. We focus on two particular objectives: first, representing turbulent data in an appropriately compact form and, second, as a long-term goal, finding self-similar vortex structures in multiscale problems. The question here is whether tensor product methods can support the development of an improved understanding of multiscale behavior and whether they are a better starting point than linear ansatz spaces for developing compact storage schemes for solutions of such problems. We present the reconstruction capabilities of a tensor-decomposition-based modeling approach tested against 3D turbulent channel flow data.
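As a toy analogue of the low-rank compression discussed above, the sketch below compresses a rank-dominated 2-D "flow snapshot" with a truncated SVD (the matrix case of a tensor product decomposition); the synthetic field and rank choice are assumptions for illustration.

```python
import numpy as np

def truncated_svd_compress(field, rank):
    """Compress a 2-D snapshot matrix to the given rank via truncated SVD."""
    u, s, vt = np.linalg.svd(field, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank, :]

def reconstruct(u, s, vt):
    """Rebuild the (approximate) snapshot from its low-rank factors."""
    return (u * s) @ vt

# Toy "cascade of scales": a sum of two separable modes plus weak noise
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
field = (np.outer(np.sin(x), np.cos(x))
         + 0.5 * np.outer(np.sin(3 * x), np.sin(3 * x))
         + 0.01 * rng.standard_normal((64, 64)))

u, s, vt = truncated_svd_compress(field, rank=2)
approx = reconstruct(u, s, vt)
rel_err = np.linalg.norm(field - approx) / np.linalg.norm(field)
```

The factors store far fewer numbers than the full snapshot while reproducing it to within the noise level, which is the essence of the compact-representation argument; hierarchical tensor formats extend the same idea to higher-dimensional data.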
Energy minimization methods applied to riboswitches: a perspective and challenges.
Barash, Danny; Gabdank, Idan
2010-01-01
Energy minimization methods for RNA secondary structure prediction have been used extensively for studying a variety of biological systems. Here, we demonstrate their applicability in riboswitch studies, exemplified in both the expression platform and aptamer domains. In the expression platform domain, energy minimization methods can be used to predict in silico a unique point mutation positioned in the non-conserved region of the TPP riboswitch that will transform it from a termination to an anti-termination state, thus backing the prediction experimentally. Furthermore, a successive prediction can be made for a compensatory mutation that is positioned over half the sequence length of the riboswitch from the original mutation and that completely overturns the anti-termination effect of the original mutation. This approach can be used to computationally predict rational modifications in riboswitches for both research and practical applications. In the aptamer domain, energy minimization methods can be used when attempting to detect a novel purine riboswitch in eukaryotes based on the consensus sequence and structure of the bacterial guanine binding aptamer. In the process, some interesting candidates are identified, and although they are attractive enough to be tested experimentally, they are not detectable by sequence based methods alone. These brief examples represent the important lessons to be learned as to the strengths and limitations of energy minimization methods. In light of our growing knowledge in the energy minimization field, future challenges can be advanced for the rational design of known riboswitches and the detection of novel riboswitches. Unlike analyses of specific cases, it is stressed that all the results described here are predictive in scope with direct applicability and an attempt to validate the predictions experimentally.
Over Current Protection for PFM Control DC-DC Converter
NASA Astrophysics Data System (ADS)
Yamada, Kouhei; Sugahara, Satoshi; Fujii, Nobuo
An over current protection method suitable for fixed ON-time PFM (Pulse Frequency Modulation) control DC-DC converters is proposed. It is based on limiting the inductor bottom current, realized by monitoring the synchronous rectifier current and extending the OFF-phase of the main switch until the current decreases to a predetermined limit, and it can properly limit the output current even in the case of a short circuit. A fixed ON-time PFM DC-DC converter with the proposed over current protection was designed and fabricated in a CMOS IC. Its current limiting operation was verified with simulations and measurements.
Analysis of self-oscillating dc-to-dc converters
NASA Technical Reports Server (NTRS)
Burger, P.
1974-01-01
The basic operational characteristics of dc-to-dc converters are analyzed along with the basic physical characteristics of power converters. A simple class of dc-to-dc power converters that could satisfy any set of operating requirements is chosen, and three different control methods in this class are described in detail. Necessary conditions for the stability of these converters are measured through analog computer simulation, whose curves are related to other operational characteristics, such as ripple and regulation. Further research is suggested toward solving absolute stability and efficient physical design for this class of power converters.
Determination of appropriate DC voltage for switched mode power supply (SMPS) loads
NASA Astrophysics Data System (ADS)
Setiawan, Eko Adhi; Setiawan, Aiman; Purnomo, Andri; Djamal, Muchlishah Hadi
2017-03-01
Nowadays, most modern, efficient household electronic devices operate on Switched Mode Power Supply (SMPS) technology, which converts the grid's AC voltage to DC voltage. Theory and experiment show that SMPS loads can be supplied with DC voltage; however, the DC voltage rating for energizing electronic home appliances is not yet standardized. This paper proposes a method to determine an appropriate DC voltage and investigates the power consumption of SMPS loads supplied from AC versus DC sources. To determine the appropriate DC voltage, the lux output of several lamps of identical specification energized with AC voltage was measured and used as a reference. The lamps were then supplied with various DC voltages to obtain the trend of lux value versus applied DC voltage. From this trend and the reference lux value, the appropriate DC voltage was determined. Power consumption measurements were then conducted on home appliances such as a mobile phone, a laptop and a personal computer under AC voltage and under the appropriate DC voltage. The results show that the total power consumption of the AC system is higher than that of the DC system: the total (apparent) power consumed by the lamp, mobile phone and personal computer operated at 220 VAC was 6.93 VA, 34.31 VA and 105.85 VA respectively, whereas at 277 VDC the same loads consumed 5.83 W, 19.11 W and 74.46 W respectively.
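The voltage-selection step can be illustrated with a small least-squares sketch: fit the lux-versus-VDC trend line, then solve it for the voltage whose lux matches the AC reference. All numbers below are made up for illustration, not the paper's measurements.

```python
# Hypothetical measurements: (DC supply voltage, lux) for the same lamp
dc_points = [(180, 410.0), (200, 455.0), (220, 500.0), (240, 545.0), (260, 590.0)]
lux_ref = 515.0   # lux measured under the nominal AC supply (assumed reference)

# Ordinary least-squares line lux = a*V + b through the DC measurements
n = len(dc_points)
sx = sum(v for v, _ in dc_points)
sy = sum(l for _, l in dc_points)
sxx = sum(v * v for v, _ in dc_points)
sxy = sum(v * l for v, l in dc_points)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The "appropriate" DC voltage is where the trend line meets the AC reference lux
v_dc = (lux_ref - b) / a
```

With a linear trend this reduces to inverting the fitted line; a nonlinear lux-voltage trend would need a curve fit plus a root solve, but the logic is the same.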
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management, since the need to update hardcopy procedures is eliminated. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Advanced in situ characterization methods applied to carbonaceous materials
NASA Astrophysics Data System (ADS)
Novák, P.; Goers, D.; Hardwick, L.; Holzapfel, M.; Scheifele, W.; Ufheil, J.; Würsig, A.
This paper is an overview of the progress recently achieved in our laboratory in the development and application of four in situ methods, namely X-ray diffraction (both synchrotron-based and standard), Raman microscopy, differential electrochemical mass spectrometry (DEMS), and infrared spectroscopy. We show representative results on graphite electrodes for each method as an illustration, in particular (i) the influence of lithium intercalation and graphite exfoliation on the shift of the (0 0 2)-reflection of graphite, (ii) Raman single-point and mapping measurements of the graphite surface, (iii) gas evolution during solid electrolyte interphase (SEI) formation on graphite electrodes, and (iv) the development of infrared spectra during SEI formation in γ-butyrolactone based electrolytes.
Generic Methods for Formalising Sequent Calculi Applied to Provability Logic
NASA Astrophysics Data System (ADS)
Dawson, Jeremy E.; Goré, Rajeev
We describe generic methods for reasoning about multiset-based sequent calculi which allow us to combine shallow and deep embeddings as desired. Our methods are modular, permit explicit structural rules, and are widely applicable to many sequent systems, even to other styles of calculi like natural deduction and term rewriting systems. We describe new axiomatic type classes which enable simplification of multiset or sequent expressions using existing algebraic manipulation facilities. We demonstrate the benefits of our combined approach by formalising in Isabelle/HOL a variant of a recent, non-trivial, pen-and-paper proof of cut-admissibility for the provability logic GL, where we abstract a large part of the proof in a way which is immediately applicable to other calculi. Our work also provides a machine-checked proof to settle the controversy surrounding the proof of cut-admissibility for GL.
Methods of parallel computation applied on granular simulations
NASA Astrophysics Data System (ADS)
Martins, Gustavo H. B.; Atman, Allbens P. F.
2017-06-01
Every year, parallel computing becomes cheaper and more accessible; as a consequence, its applications have spread over all research areas. Granular materials are a promising area for parallel computing. To support this statement, we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect), the remarkable rise of an intruder confined in a granular medium when it is vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study code performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts, using Verlet's cells.
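The contact-finding optimization mentioned at the end (Verlet's cells) amounts to binning particles into a uniform grid so that each particle is tested only against its own and neighboring cells, turning an O(N²) all-pairs check into roughly O(N). A minimal serial 2-D sketch follows (the paper's DEM code uses OpenMP in a compiled language; this is an illustration of the cell idea only).

```python
from collections import defaultdict

def find_contacts(positions, radius):
    """Pairs of equal-radius discs that overlap, found via a uniform cell grid.

    Cell side equals the contact distance, so any touching pair lies in the
    same or an adjacent cell: each particle checks 9 cells, not all N particles.
    """
    cell = 2.0 * radius
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    contacts = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in grid.get((cx + dx, cy + dy), ()):
                        if i < j:  # count each pair once
                            (xi, yi), (xj, yj) = positions[i], positions[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 < (2 * radius) ** 2:
                                contacts.add((i, j))
    return contacts

parts = [(0.0, 0.0), (0.9, 0.0), (5.0, 5.0), (5.0, 5.5)]
touching = find_contacts(parts, radius=0.5)
```

Parallelizing this with OpenMP typically means distributing the outer loop over cells across threads, which is why the contact function is the natural optimization target.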
Applying flow chemistry: methods, materials, and multistep synthesis.
McQuade, D Tyler; Seeberger, Peter H
2013-07-05
The synthesis of complex molecules requires control over both chemical reactivity and reaction conditions. While reactivity drives the majority of chemical discovery, advances in reaction condition control have accelerated method development/discovery. Recent tools include automated synthesizers and flow reactors. In this Synopsis, we describe how flow reactors have enabled chemical advances in our groups in the areas of single-stage reactions, materials synthesis, and multistep reactions. In each section, we detail the lessons learned and propose future directions.
The Recursion Method Applied to One-Dimensional Spin Systems
NASA Astrophysics Data System (ADS)
Viswanath, V. S.
The recursion method is used for the study of the dynamics of quantum spin models at zero and infinite temperatures. Two alternative formulations of the recursion method are described in detail. Application of either formulation to quantum many-body systems yields a set of continued-fraction coefficients. Several new calculational techniques for the analysis of these continued-fraction coefficients developed during the course of my research are presented. The efficacy and accuracy of these techniques are demonstrated by applications to the few situations where exact nontrivial results are available. For the s = 1/2 XXZ model on a linear chain, new and reliable quantitative information has been obtained on the type of ordering in the ground state, on the size of gaps in the dynamically relevant excitation spectrum, on the bandwidths of dominant structures in spectral densities, on the exponents of infrared singularities in the same functions, and on the detailed shape of spectral-weight distributions. Zero-temperature dynamic structure factors for the one-dimensional spin-s XYZ model in a magnetic field have been calculated for systems with s = 1/2, 1, 3/2. The line shapes and peak positions have been shown to differ considerably from the corresponding spin-wave results. Time-dependent spin autocorrelation functions and their spectral densities for the semi-infinite one-dimensional s = 1/2 XY model at infinite temperature have been determined in part by rigorous calculations in the fermion representation and in part by the recursion method in the spin representation. The study of boundary effects yields valuable new insight into the dynamical processes which govern the transport of spin fluctuations in that model. The exact results also provide a benchmark against which the results of the recursion method have been compared and calibrated.
Computing methods in applied sciences and engineering. VII
Glowinski, R.; Lions, J.L.
1986-01-01
The design of computers with fast memories, capable of up to one billion floating point operations per second, is important for the attempts being made to solve problems in Scientific Computing. The role of numerical algorithm designers is important because of the architectures and programming necessary to utilize the full potential of these machines. Efficient use of such computers requires sophisticated programming tools, and in the case of parallel computers new algorithmic concepts have to be introduced. These new methods and concepts are presented.
Matrix methods applied to acoustic waves in multilayers
NASA Astrophysics Data System (ADS)
Adler, Eric L.
1990-11-01
Matrix methods for analyzing the electroacoustic characteristics of anisotropic piezoelectric multilayers are described. The conceptual usefulness of the methods is demonstrated in a tutorial fashion by examples showing how formal statements of propagation, transduction, and boundary-value problems in complicated acoustic layered geometries such as those which occur in surface acoustic wave (SAW) devices, in multicomponent laminates, and in bulk-wave composite transducers are simplified. The formulation given reduces the electroacoustic equations to a set of first-order matrix differential equations, one for each layer, in the variables that must be continuous across interfaces. The solution to these equations is a transfer matrix that maps the variables from one layer face to the other. Interface boundary conditions for a planar multilayer are automatically satisfied by multiplying the individual transfer matrices in the appropriate order, thus reducing the problem to just having to impose boundary conditions appropriate to the remaining two surfaces. The computational advantages of the matrix method result from the fact that the problem rank is independent of the number of layers, and from the availability of personal computer software that makes interactive numerical experimentation with complex layered structures practical.
MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS
R. ESTEP; ET AL
2000-06-01
Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
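The replicate-randomization idea above can be sketched as follows: Poisson-randomize the raw counts into N replicate data sets, push each replicate through the analysis algorithm, and take the spread of the results as the error estimate. The "analysis" here is a stand-in weighted sum, not the TGS or CTEN reconstruction.

```python
import random
import statistics

def analysis(counts):
    """Stand-in for a complex assay algorithm: an assumed weighted sum."""
    return sum(c * w for c, w in zip(counts, (0.5, 1.0, 2.0)))

def mc_error(counts, n_replicates=500, seed=7):
    """Estimate the assay std. dev. by Poisson-randomizing the raw counts
    into N replicate data sets and re-running the analysis on each."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication algorithm; adequate for these modest rates
        limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= limit:
                return k - 1

    results = [analysis([poisson(c) for c in counts]) for _ in range(n_replicates)]
    return statistics.mean(results), statistics.stdev(results)

mean, sd = mc_error([100, 200, 50])
```

For this linear stand-in the propagated variance is known analytically (0.25·100 + 1·200 + 4·50 = 425, i.e. σ ≈ 20.6), so the Monte Carlo estimate can be checked directly; with a neural network or tomographic reconstruction in place of `analysis`, the same loop is the only practical option.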
Protein engineering methods applied to membrane protein targets.
Lluis, M W; Godfroy, J I; Yin, H
2013-02-01
Genes encoding membrane proteins have been estimated to comprise as much as 30% of the human genome. Among these membrane proteins are a large number of signaling receptors, transporters, ion channels and enzymes that are vital to cellular regulation, metabolism and homeostasis. While many membrane proteins are considered high-priority targets for drug design, there is a dearth of structural and biochemical information on them. This lack of information stems from the inherent insolubility and instability of transmembrane domains, which prevents easy production of high-resolution crystals for studying structure-function relationships. In part, this lack of structures has greatly impeded our understanding of membrane proteins. One method that can enhance our understanding is directed evolution, a molecular biology method that mimics natural selection to engineer proteins with specific phenotypes. It is a powerful technique that has had considerable success with globular proteins, notably in the engineering of protein therapeutics. With respect to transmembrane protein targets, this tool may be underutilized. Another powerful tool for investigating membrane protein structure-function relationships is computational modeling. This review will discuss these protein engineering methods and their tremendous potential in the study of membrane proteins.
Differential correction method applied to measurement of the FAST reflector
NASA Astrophysics Data System (ADS)
Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng
2016-08-01
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an actively deformable main reflector composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid and directed toward a source. To achieve accurate control of the reflector shape, the positions of 2226 nodes distributed over the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of total stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total station measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show an improvement of 70%-80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-05
... Douglas Corporation Model DC- 9-10 Series Airplanes, DC-9-30 Series Airplanes, DC-9-81 (MD-81) Airplanes, DC-9-82 (MD-82) Airplanes, DC-9-83 (MD-83) Airplanes, DC-9- 87 (MD-87) Airplanes, MD-88 Airplanes... directive (AD), which applies to all McDonnell Douglas Model DC-9-10 series airplanes, DC-9-30 series...
Applied high resolution geophysical methods: Offshore geoengineering hazards
Trabant, P.K.
1984-01-01
This book is an examination of the purpose, methodology, equipment, and data interpretation of high-resolution geophysical methods, which are used to assess geological and manmade engineering hazards at offshore construction locations. It is a state-of-the-art review. Contents: 1. Introduction. 2. Marine geophysics, an overview. 3. Marine geotechnique, an overview. 4. Echo sounders. 5. Side scan sonar. 6. Subbottom profilers. 7. Seismic sources. 8. Single-channel seismic reflection systems. 9. Multifold acquisition and digital processing. 10. Marine magnetometers. 11. Marine geoengineering hazards. 12. Survey organization, navigation, and future developments. Appendix. Glossary. References. Index.
[Dichotomizing method applied to calculating equilibrium constant of dimerization system].
Cheng, Guo-zhong; Ye, Zhi-xiang
2002-06-01
The arbitrary trivariate algebraic equations are formed based on the combination principle. The univariate algebraic equation for the equilibrium constant kappa of a dimerization system is obtained through a series of algebraic transformations, and whether the equation is solvable depends on the properties of monotonic functions. If the equation is solvable, the equilibrium constant of the dimerization system is obtained by dichotomy, and its final value is determined according to the principle of fitting error. The equilibrium constants of trisulfophthalocyanine and biosulfophthalocyanine obtained with this method are 47,973.4 and 30,271.8 respectively. These results are much better than those reported previously.
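Dichotomy here is the classic bisection search on a monotonic univariate function. As a hedged illustration (toy numbers, not the paper's phthalocyanine system), the sketch below bisects the monomer mass balance of a dimerization equilibrium 2M <-> D, where the balance g([M]) = [M] + 2K[M]² - c_total is monotonically increasing and therefore has a unique bracketed root.

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Dichotomizing (bisection) search for the root of a monotonic function."""
    flo = f(lo)
    assert flo * f(hi) <= 0, "root must be bracketed by [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:       # sign change in [lo, mid]
            hi = mid
        else:                       # sign change in [mid, hi]
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Toy dimerization 2M <-> D with K = [D]/[M]^2 and mass balance [M] + 2[D] = c_t
K, c_t = 1000.0, 1e-3
g = lambda m: m + 2.0 * K * m * m - c_t   # monotone increasing in m for m >= 0
m_eq = bisect_root(g, 0.0, c_t)           # equilibrium monomer concentration
```

Because the function is monotone, bisection is guaranteed to converge, which is exactly the solvability condition the abstract ties to "the properties of monotonic functions". For these toy values the quadratic solves exactly to [M] = 5e-4.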
System And Method Of Applying Energetic Ions For Sterilization
Schmidt, John A.
2002-06-11
A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.
System and method of applying energetic ions for sterilization
Schmidt, John A.
2003-12-23
A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.
Error behavior of multistep methods applied to unstable differential systems
NASA Technical Reports Server (NTRS)
Brown, R. L.
1977-01-01
The problem of modeling a dynamic system described by a system of ordinary differential equations which has unstable components for limited periods of time is discussed. It is shown that the global error in a multistep numerical method is the solution to a difference equation initial value problem, and the approximate solution is given for several popular multistep integration formulas. Inspection of the solution leads to the formulation of four criteria for integrators appropriate to unstable problems. A sample problem is solved numerically using three popular formulas and two different stepsizes to illustrate the appropriateness of the criteria.
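The error growth discussed above can be observed directly with a small experiment: integrate the unstable test equation y' = λy (λ > 0) with the two-step Adams-Bashforth formula at two stepsizes and compare the global errors at a fixed time. This is an illustrative sketch, not the paper's analysis; the second-order method should show the error shrinking by roughly a factor of four when the stepsize is halved, even though the solution itself grows.

```python
import math

def ab2(lam, y0, h, steps):
    """Two-step Adams-Bashforth for y' = lam*y, seeded with one exact value."""
    ys = [y0, y0 * math.exp(lam * h)]      # exact second starting value
    for n in range(1, steps):
        f_n, f_nm1 = lam * ys[n], lam * ys[n - 1]
        ys.append(ys[n] + h * (1.5 * f_n - 0.5 * f_nm1))
    return ys

lam, t_end = 2.0, 2.0
exact = math.exp(lam * t_end)

coarse = ab2(lam, 1.0, 0.05, 40)           # reaches t = 2.0
fine = ab2(lam, 1.0, 0.025, 80)            # same t, half the stepsize

err_c = abs(coarse[-1] - exact)
err_f = abs(fine[-1] - exact)
ratio = err_c / err_f                      # expect ~4 for a 2nd-order method
```

The absolute errors themselves grow with the solution, which is the core difficulty with unstable problems: order of accuracy is preserved, but the error magnitude tracks the growing exact solution.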
Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.
Ramírez, C L; Martí, M A; Roitberg, A E
2016-01-01
One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM) based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments, such as an enzyme or aqueous solution, and to determine the corresponding free energy profile, which provides direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows obtaining the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are chosen inappropriately. In this small review, using the extensively studied chorismate-to-prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and, in turn, the accuracy of the free energy obtained with JR. Second, and in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows obtaining more accurate free energy profiles using faster pulling speeds and a smaller number of trajectories, and thus a lower computational cost.
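The exponential work average at the heart of JR can be sketched independently of any QM/MM machinery. The snippet below recovers a known ΔF from synthetic Gaussian work values, exploiting the fact that a Gaussian work distribution with mean ΔF + βσ²/2 satisfies Jarzynski's equality exp(-βΔF) = ⟨exp(-βW)⟩ exactly; the numbers are assumptions for illustration.

```python
import math
import random

def jarzynski_free_energy(works, beta=1.0):
    """Estimate dF from nonequilibrium work values via Jarzynski's equality,
    exp(-beta*dF) = <exp(-beta*W)>, with a min-shift for numerical stability."""
    m = min(works)
    avg = sum(math.exp(-beta * (w - m)) for w in works) / len(works)
    return m - math.log(avg) / beta

# Synthetic works: Gaussian with mean dF + beta*sigma^2/2 (dissipated work)
rng = random.Random(3)
dF, sigma, beta = 5.0, 0.5, 1.0
works = [rng.gauss(dF + beta * sigma ** 2 / 2, sigma) for _ in range(20000)]
est = jarzynski_free_energy(works, beta)
```

The exponential average is dominated by rare low-work trajectories, which is precisely why convergence degrades at fast pulling speeds (broad work distributions) and why the choice of speed and trajectory count discussed in the abstract matters.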
Microcanonical ensemble simulation method applied to discrete potential fluids
NASA Astrophysics Data System (ADS)
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition probabilities between macroscopic states and has the advantage, with respect to conventional Monte Carlo NVT (MC-NVT) simulations, that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the properties of SW and square-shoulder fluids.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Microcanonical ensemble simulation method applied to discrete potential fluids.
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition probabilities between macroscopic states and has the advantage, with respect to conventional Monte Carlo NVT (MC-NVT) simulations, that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the properties of SW and square-shoulder fluids.
Method of images applied to driven solid-state emitters
NASA Astrophysics Data System (ADS)
Scerri, Dale; Santana, Ted S.; Gerardot, Brian D.; Gauger, Erik M.
2017-04-01
Increasing the collection efficiency from solid-state emitters is an important step towards achieving robust single-photon sources, as well as optically connecting different nodes of quantum hardware. A metallic substrate may be the most basic method of improving the collection of photons from quantum dots, with predicted collection efficiency increases of up to 50%. The established "method-of-images" approach models the effects of a reflective surface for atomic and molecular emitters by replacing the metal surface with a second fictitious emitter which ensures appropriate electromagnetic boundary conditions. Here, we extend the approach to the case of driven solid-state emitters, where exciton-phonon interactions play a key role in determining the optical properties of the system. We derive an intuitive polaron master equation and demonstrate its agreement with the complementary half-sided cavity formulation of the same problem. Our extended image approach offers a straightforward route towards studying the dynamics of multiple solid-state emitters near a metallic surface.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Astrophysics Data System (ADS)
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be done either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets, which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
NASA Astrophysics Data System (ADS)
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method to detect atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the remaining descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
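Two of the descriptors named above (RMSSD and Shannon entropy) computed over a sliding window can be sketched as below. The window length, step, and histogram bin count are illustrative choices, not the paper's settings:

```python
import numpy as np

def rmssd(rr):
    """Root mean square of successive differences of RR intervals."""
    d = np.diff(rr)
    return np.sqrt(np.mean(d ** 2))

def shannon_entropy(rr, bins=16):
    """Shannon entropy (base 2) of the RR-interval histogram."""
    counts, _ = np.histogram(rr, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def sliding_features(rr, window=64, step=8):
    """One (RMSSD, entropy) feature vector per sliding-window position."""
    return np.array([(rmssd(rr[i:i + window]), shannon_entropy(rr[i:i + window]))
                     for i in range(0, len(rr) - window + 1, step)])
```

A perfectly regular rhythm gives zero successive differences and zero entropy, while the irregular RR intervals typical of AF push both descriptors up.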
Power Network impedance effects on noise emission of DC-DC converters
NASA Astrophysics Data System (ADS)
Esteban, M. C.; Arteche, F.; Iglesias, M.; Gimeno, A.; Arcega, F. J.; Johnson, M.; Cooper, W. E.
2012-01-01
The characterization of electromagnetic noise emissions of DC-DC converters is a critical issue that has been analyzed during the design phase of the CMS tracker upgrade. Previous simulation studies showed important variations in the level of conducted emissions when DC-DC converters are loaded or driven by different impedances and power network topologies. Several tests have been performed on real DC-DC converters to validate the PSpice model and simulation results. This paper presents these test results. Conducted noise emissions at the input and output terminals of DC-DC converters have been measured for different types of power and front-end electronics (FEE) impedances. Special attention has been paid to the influence on common-mode emissions of the carbon fiber material used to build the mechanical structure of the central detector. These results provide important recommendations and criteria to be applied in order to decrease the system noise level when integrating the DC-DC converters.
NASA Astrophysics Data System (ADS)
Cao, Jia; Yan, Zheng; He, Guangyu
2016-06-01
This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). First, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived; it involves an individual learning operator, a social learning operator, a random exploration learning operator, and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGA-II) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to find optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGA-II method. How to choose the final solution from the Pareto front, however, depends mainly on whether the decision makers take an economic point of view or an energy-saving and emission-reduction point of view.
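The two multi-objective ingredients the method combines, non-dominated sorting and the crowding distance index, can be sketched as below (minimization assumed; this is the textbook form of these operators, not the authors' implementation):

```python
import numpy as np

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    """Indices of non-dominated points (the first front), minimizing."""
    pts = np.asarray(points, dtype=float)
    return [i for i in range(len(pts))
            if not any(dominates(pts[j], pts[i]) for j in range(len(pts)) if j != i)]

def crowding_distance(front):
    """Crowding distance, used to keep diversity along a front."""
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(front[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points kept
        span = front[order[-1], k] - front[order[0], k] or 1.0
        dist[order[1:-1]] += (front[order[2:], k] - front[order[:-2], k]) / span
    return dist
```

Boundary solutions get infinite distance so they are always retained, which is what produces the "complete" fronts the comparison emphasizes.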
The Movable Type Method Applied to Protein-Ligand Binding
Zheng, Zheng; Ucisik, Melek N.; Merz, Kenneth M.
2013-01-01
Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term “movable type”. Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and the creation of two databases: one with their associated pairwise distance-dependent energies and another with the probabilities of how these pairs can combine in terms of bonds, angles, dihedrals, and non-bonded interactions. Combining these two databases, coupled with the principles of statistical mechanics, allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of the configuration space of a selected region (the protein active site here) in one shot without resorting to brute-force sampling schemes involving Monte Carlo, genetic algorithms, or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the
Full wave dc-to-dc converter using energy storage transformers
NASA Technical Reports Server (NTRS)
Moore, E. T.; Wilson, T. G.
1969-01-01
Full wave dc-to-dc converter, for an ion thrustor, uses energy storage transformers to provide a method of dc-to-dc conversion and regulation. The converter has a high degree of physical simplicity, is lightweight and has high efficiency.
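Converters built around energy storage transformers (the flyback class) have a simple ideal steady-state conversion ratio; the sketch below shows the textbook relation for that converter class, not the specific NASA design, whose details are not given in the abstract:

```python
def flyback_output_voltage(v_in, turns_ratio, duty):
    """Ideal continuous-conduction flyback relation:
    V_out = V_in * (N2/N1) * D / (1 - D), with duty cycle 0 < D < 1."""
    return v_in * turns_ratio * duty / (1.0 - duty)
```

Regulation in such converters amounts to adjusting the duty cycle D: raising D above 0.5 boosts the output beyond the turns-ratio-scaled input.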
The Augmented Lagrangian Method Applied to Unsolvable Power Flows
NASA Astrophysics Data System (ADS)
Zambaldi, Mario C.; Francisco, Juliano B.; Barboza, Luciano V.
2011-11-01
This work presents and discusses an approach to restore the solvability of the network electric equations. The unsolvable power flow is modeled as a constrained optimization problem. The cost function is the sum of squares of the real and reactive power mismatches. The equality constraints are the real and reactive power mismatches at null-injection buses and/or at those buses whose power demands must be totally supplied for technical or economic reasons. The mathematical model is solved by an algorithm based on the Augmented Lagrangian method that takes into account the particular structure of the problem. Numerical results for a real equivalent system from the Brazilian South-Southeast region are presented in order to assess the performance of the proposed approach.
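The Augmented Lagrangian iteration itself can be sketched on a toy least-squares problem with one equality constraint. The quadratics, step sizes, and penalty parameter below are assumptions for illustration; the actual power-flow model is far larger:

```python
import numpy as np

# Toy stand-in for the restoration model:
# minimize f(x) = sum of squared mismatches, subject to c(x) = 0.
def f_grad(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def c(x):
    return x[0] + x[1] - 2.0              # one "null-injection" style constraint

def aug_lagrangian_solve(x, mu=10.0, outer=20, inner=500, lr=0.01):
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):            # inexact inner minimization of L_A
            grad = f_grad(x) + (lam + mu * c(x)) * np.array([1.0, 1.0])
            x = x - lr * grad
        lam += mu * c(x)                  # first-order multiplier update
    return x
```

For this toy problem the constrained minimizer is (0.5, 1.5), which the multiplier updates reach without driving the penalty parameter to infinity, the usual argument for the Augmented Lagrangian over a pure penalty method.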
Meshless Petrov-Galerkin Method Applied to Axisymmetric Problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Chen, T.
2001-01-01
An axisymmetric Meshless Local Petrov-Galerkin (MLPG) algorithm is presented for potential and elasticity problems. In this algorithm the trial and test functions are chosen from different spaces. By a judicious choice of these functions, the integrals involved in the weak form can be restricted to a local neighborhood. This makes the method truly meshless. The MLPG algorithm is used to study various potential and elasticity problems for which exact solutions are available. The sensitivity and effectiveness of the MLPG algorithm with respect to various parameters such as the weight functions, basis functions, and support domain radius were studied. The MLPG algorithm yielded accurate solutions for all weight functions, basis functions, and support domain radii considered for all of the problems studied.
THE ADJOINT METHOD APPLIED TO TIME-DISTANCE HELIOSEISMOLOGY
Hanasoge, Shravan M.; Gizon, Laurent; Birch, Aaron; Tromp, Jeroen
2011-09-01
For a given misfit function (a specified optimality measure of a model), its gradient describes the manner in which one may alter properties of the system to march toward a stationary point. The adjoint method, arising from partial-differential-equation-constrained optimization, describes a means of extracting derivatives of a misfit function with respect to model parameters through finite computation. It relies on the accurate calculation of wavefields that are driven by two types of sources: the average wave-excitation spectrum, resulting in the forward wavefield, and differences between predictions and observations, resulting in an adjoint wavefield. All sensitivity kernels relevant to a given measurement emerge directly from the evaluation of an interaction integral involving these wavefields. The technique facilitates computation of sensitivity kernels (Fréchet derivatives) relative to three-dimensional heterogeneous background models, thereby paving the way for nonlinear iterative inversions. An algorithm to perform such inversions using as many observations as desired is discussed.
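The adjoint bookkeeping (forward solve, residual-driven adjoint solve, then an interaction product giving the gradient) can be sketched on a toy linear model A(m)u = f. The 2x2 system below is an illustrative stand-in, not a wave equation:

```python
import numpy as np

# Toy forward model A(m) u = f with misfit J(m) = ||u(m) - d||^2.
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.eye(2)                        # dA/dm for this one-parameter model
f = np.array([1.0, 1.0])              # "wave-excitation" source
d = np.array([0.3, 0.2])              # observations

def misfit_and_gradient(m):
    A = A0 + m * A1
    u = np.linalg.solve(A, f)                 # forward field (source-driven)
    res = u - d
    lam = np.linalg.solve(A.T, 2.0 * res)     # adjoint field (residual-driven)
    return res @ res, -lam @ (A1 @ u)         # misfit, interaction product
```

One forward and one adjoint solve yield the exact derivative, which is the point of the method: the cost does not grow with the number of model parameters.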
Data Reduction Methods Applied to the Fastrac Engine
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1999-01-01
The Fastrac rocket engine is currently being developed for the X-34 technology demonstrator vehicle. The engine performance model must be calibrated to support accurate performance prediction. Data reduction is the process of estimating hardware characteristics from available test data, and is essential for effective performance model calibration and prediction. A new data reduction procedure was developed, implemented, and tested using data from Fastrac engine tests. The procedure selects hardware and test measurements to use in the reduction process based on examination of the model influence matrix condition number. Predicted hardware characteristics are recovered from the solution of a quadratic programming problem. Computational tests indicate that the new procedure provides a significant improvement in test data reduction capability. Enhancements include improved test data utilization and time history data reduction capability. The new method is generically applicable to other systems.
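The measurement-selection idea described above, examining the condition number of the model influence matrix, can be sketched as a greedy subset search. This is a simplification of the procedure, and the influence matrix here is a toy:

```python
import numpy as np

def best_conditioned_subset(influence, k):
    """Greedily pick k rows (test measurements) of an influence matrix
    that keep the condition number of the reduced system small."""
    chosen = []
    remaining = list(range(influence.shape[0]))
    for _ in range(k):
        best = min(remaining,
                   key=lambda r: np.linalg.cond(influence[chosen + [r], :]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Nearly redundant measurements inflate the condition number of the reduced matrix, so the greedy pass naturally avoids them in favor of complementary rows.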
Applying Human-Centered Design Methods to Scientific Communication Products
NASA Astrophysics Data System (ADS)
Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.
2016-12-01
Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.
Atomistic Method Applied to Computational Modeling of Surface Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Abel, Phillip B.
2000-01-01
The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
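An f(4)-style metric, the expected return conditional on landing in the lower tail of the return distribution, can be sketched as below. The partition level alpha and any input data are hypothetical choices, not the article's calibration:

```python
import numpy as np

def lower_tail_conditional_expectation(returns, alpha=0.05):
    """Expected return conditional on falling in the lower alpha-tail,
    a sketch of the f(4)-style extreme-risk measure."""
    r = np.sort(np.asarray(returns, dtype=float))
    cutoff = np.quantile(r, alpha)        # partition point for the tail
    return r[r <= cutoff].mean()
```

Unlike volatility, which averages over the whole distribution, this statistic is driven entirely by the worst outcomes, which is why it remains informative under aberrant market fluctuations.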
Random particle methods applied to broadband fan interaction noise
NASA Astrophysics Data System (ADS)
Dieste, M.; Gabard, G.
2012-10-01
Predicting broadband fan noise is key to reducing noise emissions from aircraft and wind turbines. Complete CFD simulations of broadband fan noise generation remain too expensive to be used routinely for engineering design. A more efficient approach consists of synthesizing a turbulent velocity field that captures the main features of the exact solution. This synthetic turbulence is then used in a noise source model. This paper concentrates on predicting broadband fan interaction noise (also called leading-edge noise) and demonstrates that a random particle mesh method (RPM) is well suited for simulating this source mechanism. The linearized Euler equations are used to describe sound generation and propagation. In this work, the definition of the filter kernel is generalized to include non-Gaussian filters that can directly follow more realistic energy spectra such as the ones developed by Liepmann and von Kármán. The velocity correlation and energy spectrum of the turbulence are found to be well captured by the RPM. The acoustic predictions are successfully validated against Amiet's analytical solution for a flat plate in a turbulent stream. A standard Langevin equation is used to model temporal decorrelation, but the presence of numerical issues leads to the introduction and validation of a second-order Langevin model.
Scripted Finite Element Methods Applied to Global Geomagnetic Induction
NASA Astrophysics Data System (ADS)
Ribaudo, J.; Constable, C.
2007-12-01
Magnetic field observations from CHAMP, Ørsted and SAC-C and improved techniques for comprehensive geomagnetic field modeling have generated renewed interest in using satellite and observatory data to study global scale electromagnetic induction in Earth's crust and mantle. The primary external source field derives from variations in the magnetospheric ring current, and recent studies show that over-simplified assumptions about its spatial structure lead to biased estimates of the frequency-dependent electromagnetic response functions generally used in inversions for mantle conductivity. The bias takes the form of local time dependence in the C-response estimates and highlights the need for flexible forward modeling tools for the global induction problem to accommodate 3D time-varying structure in both primary and induced fields. We are developing such tools using FlexPDE, a commercially available script-based finite element method (FEM) package for partial differential equations. Our strategy is to model the vector potential \mathbf{A}, where \mathbf{B} = \nabla \times \mathbf{A}.
Fourier transform methods applied to an optical heterodyne profilometer
NASA Astrophysics Data System (ADS)
Beltrán-González, A.; García-Torales, G.; Martínez-Ponce, G.
2013-11-01
In this work, theory and experiments describing the performance of a surface profile measurement device based on optical heterodyne interferometry are presented. The object and reference beams propagating through the interferometer are obtained by a single pass through an acousto-optic modulator. The diffraction order 0 and the Doppler-shifted order +1 (object and reference beams, respectively) are manipulated to propagate collinearly towards the interferometer output, where a fast photodetector is placed to collect the irradiance. The modulated optical signal is Fourier transformed using a data acquisition card and RF communications software. The peak centered at the acousto-optic frequency in the power spectrum is filtered and averaged. The irregularities on the surface of the reflective sample are proportional to the height of this peak. The profile of a reflective blazed grating has been sketched by translating the sample laterally using a nanopositioning system. Experimental results are compared to measurements made with a scanning electron microscope. Good agreement was found between the two methods.
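The signal chain described above (Fourier transforming the photodetector output, then reading off the peak at the acousto-optic frequency) can be sketched with synthetic samples. The sampling rate, drive frequency, signal amplitude, and noise level below are all assumed values, not the paper's hardware parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1.0e6                   # sampling rate, Hz (assumed)
f_ao = 78.125e3              # acousto-optic frequency, Hz (assumed, bin-aligned)
t = np.arange(4096) / fs

# Synthetic heterodyne beat of amplitude 0.7 plus detector noise
signal = 0.7 * np.cos(2 * np.pi * f_ao * t) + 0.05 * rng.standard_normal(t.size)

# One-sided amplitude spectrum; the bin at f_ao tracks the surface response
amplitude = np.abs(np.fft.rfft(signal)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = amplitude[np.argmin(np.abs(freqs - f_ao))]
```

Choosing a drive frequency aligned with an FFT bin avoids spectral leakage, so the recovered peak height equals the beat amplitude up to the noise floor.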
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
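The first performance criterion (the actuated tip angle bounded at a specified reliability) can be sketched with plain Monte Carlo sampling. The distributions, the linear response model, and the bounds below are assumptions for illustration, not the paper's wing model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed scatter in two design variables (illustrative distributions)
stiffness = rng.normal(1.0, 0.05, n)    # fabrication/material variability
actuation = rng.normal(5.0, 0.50, n)    # control-related variability
tip_angle = actuation / stiffness       # toy response model

# Reliability: probability the actuated tip angle stays within its bounds
within = (tip_angle > 3.5) & (tip_angle < 6.5)
reliability = within.mean()
```

Repeating the experiment with the scatter of one variable reduced shows directly how shrinking the most sensitive random variable raises the reliability, which is the design lever the abstract describes.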
What health care managers do: applying Mintzberg's structured observation method.
Arman, Rebecka; Dellve, Lotta; Wikström, Ewa; Törnström, Linda
2009-09-01
Aim: The aim of the present study was to explore and describe what characterizes first- and second-line health care managers' use of time. Background: Many Swedish health care managers experience difficulties managing their time. Methods: Structured and unstructured observations were used. Ten first- and second-line managers in different health care settings were each studied in detail for 3.5 or 4 days. The duration and frequency of different types of work activities were analysed. Results: The individual variation was considerable. The managers' days consisted to a large degree of short activities (<9 minutes). On average, nearly half of the managers' time was spent in meetings. Most of the managers' time was spent with subordinates and <1% was spent alone with their superiors. Sixteen per cent of their time was spent on administration and only a small fraction on explicit strategic work. Conclusions: The individual variations in time use patterns suggest the possibility of interventions to support changes in time use patterns. Implications for nursing management: A reliable description of what managers do paves the way for analyses of what they should do to be effective.
NASA Astrophysics Data System (ADS)
Urabe, Keiichiro; Shirai, Naoki; Tomita, Kentaro; Akiyama, Tsuyoshi; Murakami, Tomoyuki
2016-08-01
The density and temperature of electrons and key heavy particles were measured in an atmospheric-pressure pulsed-dc helium discharge plasma with a nitrogen molecular impurity, generated using a system with a liquid or metal anode and a metal cathode. To obtain these parameters, we conducted experiments using several laser-aided methods: Thomson scattering spectroscopy to obtain the spatial profiles of electron density and temperature, Raman scattering spectroscopy to obtain the neutral molecular nitrogen rotational temperature, phase-modulated dispersion interferometry to determine the temporal variation of the electron density, and time-resolved laser absorption spectroscopy to analyze the temporal variation of the helium metastable atom density. The electron density and temperature measured by Thomson scattering varied from 2.4 × 10^14 cm^-3 and 1.8 eV at the center of the discharge to 0.8 × 10^14 cm^-3 and 1.5 eV near the outer edge of the plasma in the case of the metal anode, respectively. The electron density obtained with the liquid anode was approximately 20% smaller than that obtained with the metal anode, while the electron temperature was not significantly affected by the anode material. The molecular nitrogen rotational temperatures were 1200 K with the metal anode and 1650 K with the liquid anode at the outer edge of the plasma column. The density of helium metastable atoms decreased by a factor of two when using the liquid anode.
Complexity methods applied to turbulence in plasma astrophysics
NASA Astrophysics Data System (ADS)
Vlahos, L.; Isliker, H.
2016-09-01
In this review, many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of Magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) The size distribution of the Active Regions (AR) on the solar surface, (2) The fractal and multifractal characteristics of the observed magnetograms, (3) The self-organized characteristics of the explosive magnetic energy release and (4) The very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Nonlinear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the redistribution of the magnetic field locally, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
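The SOC rules invoked in step (c), slow driving plus threshold-triggered avalanches, can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile automaton. This is a standard stand-in for the idea, not the authors' specific cellular automaton:

```python
import numpy as np

def sandpile_avalanche_sizes(n=20, grains=2000, z_c=4, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: drive one site at a time, then relax
    every site whose local value exceeds the critical threshold z_c."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(n, size=2)
        z[i, j] += 1                          # slow external driving
        size = 0
        while (z >= z_c).any():               # avalanche: topple unstable sites
            for a, b in np.argwhere(z >= z_c):
                z[a, b] -= z_c
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= a + da < n and 0 <= b + db < n:
                        z[a + da, b + db] += 1   # open boundaries lose grains
        sizes.append(size)
    return sizes
```

After a transient, the avalanche-size statistics develop the broad, scale-free distribution that SOC models use to reproduce flare energy-release statistics.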
Radiation-Tolerant DC-DC Converters
NASA Technical Reports Server (NTRS)
Skutt, Glenn; Sable, Dan; Leslie, Leonard; Graham, Shawn
2012-01-01
A document discusses power converters suitable for space use that meet the DSCC MIL-PRF-38534 Appendix G radiation hardness level P classification. A method for qualifying commercially produced electronic parts for DC-DC converters per the Defense Supply Center Columbus (DSCC) radiation hardened assurance requirements was developed. Development and compliance testing of standard hybrid converters suitable for space use were completed for missions with total dose radiation requirements of up to 30 kRad. This innovation provides the same overall performance as standard hybrid converters, but includes assurance of radiation- tolerant design through components and design compliance testing. This availability of design-certified radiation-tolerant converters can significantly reduce total cost and delivery time for power converters for space applications that fit the appropriate DSCC classification (30 kRad).
Ruiz, Monica S; O'Rourke, Allison; Allen, Sean T
2016-02-01
No current estimates exist for the size of the population of people who inject drugs (PWID) in the District of Columbia (DC). The WHO/UNAIDS Guidelines on Estimating the Size of Populations Most at Risk to HIV were used as the methodological framework to estimate the DC PWID population. The capture phase recruited harm reduction agency clients; the recapture phase recruited community-based PWID. The 951 participants were predominantly Black (83.9 %), male (69.8 %), and 40+ years of age (68.2 %). Approximately 50.3 % reported injecting drugs in the past 30 days. We estimate there are approximately 8829 (95 % CI 4899 to 12,759) PWID in DC. When adjusted for possibly missed sub-populations of PWID, the estimate increases to 12,000; thus, the original estimate of approximately 9000 should be viewed in the context of the 95 % confidence interval. These evidence-based estimates should be used to determine program delivery needs and resource allocation for PWID in Washington, DC.
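The capture-recapture arithmetic behind estimates like this can be illustrated with the Chapman-corrected Lincoln-Petersen estimator. The counts below are hypothetical, since the abstract does not report the phase-level overlap; this is a sketch of the estimator's form, not the study's calculation.

```python
def lincoln_petersen(n_first, n_second, n_both):
    """Chapman-corrected Lincoln-Petersen population size estimate.

    n_first  -- number identified in the capture phase
    n_second -- number sampled in the recapture phase
    n_both   -- overlap: recapture-phase members also seen in the capture phase
    """
    return (n_first + 1) * (n_second + 1) / (n_both + 1) - 1

# Hypothetical counts for illustration only
est = lincoln_petersen(500, 451, 25)
```

The smaller the overlap between the two phases, the larger the implied hidden population, which is why the confidence interval in such studies is typically wide.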
Near-infrared radiation curable multilayer coating systems and methods for applying same
Bowman, Mark P; Verdun, Shelley D; Post, Gordon L
2015-04-28
Multilayer coating systems, methods of applying them, and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber, and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating, and curing the coating with near-infrared radiation.
NASA Technical Reports Server (NTRS)
Hamilton, H. B.; Strangas, E.
1980-01-01
The time-dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of electrical machines. A time-dependent finite element method was used in combination with a likewise time-dependent construction of a grid for the air gap region. The Maxwell stress tensor was used to calculate the air gap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper-controlled DC series motor used for electric vehicle drive, and to a salient-pole synchronous motor with damper bars. Simulation results were compared with experimentally obtained ones.
NASA Astrophysics Data System (ADS)
Kimura, Akira
In inverter-converter driving systems for AC electric cars, the DC input voltage of an inverter contains a ripple component with a frequency that is twice as high as the line voltage frequency, because of a single-phase converter. The ripple component of the inverter input voltage causes pulsations on torques and currents of driving motors. To decrease the pulsations, a beat-less control method, which modifies a slip frequency depending on the ripple component, is applied to the inverter control. In the present paper, the beat-less control method was analyzed in the frequency domain. In the first step of the analysis, transfer functions, which revealed the relationship among the ripple component of the inverter input voltage, the slip frequency, the motor torque pulsation and the current pulsation, were derived with a synchronous rotating model of induction motors. An analysis model of the beat-less control method was then constructed using the transfer functions. The optimal setting of the control method was obtained according to the analysis model. The transfer functions and the analysis model were verified through simulations.
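The core of the beat-less idea described above is to modulate the slip frequency command in proportion to the measured ripple on the DC input voltage. The compensation law, gain, and sinusoidal ripple model below are illustrative assumptions, not taken from the paper:

```python
import math

def slip_with_beatless(v_dc, v_dc_nominal, slip_nominal, k=1.0):
    """Modify the slip frequency command in proportion to DC-link ripple.

    k is an assumed compensation gain; k = 0 disables the correction.
    """
    ripple = (v_dc - v_dc_nominal) / v_dc_nominal   # relative ripple
    return slip_nominal * (1.0 - k * ripple)

# 100 Hz ripple (twice a 50 Hz line frequency) riding on a 1500 V DC link
t = 0.0025  # seconds: sampled at the ripple peak
v_dc = 1500.0 + 75.0 * math.sin(2.0 * math.pi * 100.0 * t)
slip_cmd = slip_with_beatless(v_dc, 1500.0, 2.0)
```

At the ripple peak the slip command is reduced, counteracting the momentary excess of inverter output voltage that would otherwise pulse the motor torque.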
NASA Astrophysics Data System (ADS)
Canal, G. P.; Ferraro, N. M.; Evans, T. E.; Osborne, T. H.; Menard, J. E.; Ahn, J.-W.; Maingi, R.; Wingen, A.; Ciro, D.; Frerichs, H.; Schmitz, O.; Soukhanoviskii, V.; Waters, I.
2016-10-01
Single- and two-fluid resistive magnetohydrodynamic simulations, performed with the code M3D-C1, are used to investigate the effect of n = 3 magnetic perturbations on the SF divertor configuration. The calculations are based on simulated NSTX-U plasmas and the results show that additional and longer magnetic lobes are created in the null-point region of the SF configuration, compared to those in the conventional single-null. The intersection of these additional and longer lobes with the divertor plates is expected to cause more striations in the particle and heat flux target profiles. In addition, the results indicate that the size of the magnetic lobes, in both single-null and SF configurations, is more sensitive to resonant than to non-resonant magnetic perturbations. The results also suggest that lower values of current in non-axisymmetric control coils close enough to the primary x-point would be required to suppress edge localized modes in plasmas with the SF configuration. This work has been supported by the US Department of Energy, Office of Science, Office of Fusion Energy Science under DOE Award DE-SC0012706.
ERIC Educational Resources Information Center
Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti
2010-01-01
In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
Eom, Ji Mi; Oh, Hyun Gon; Cho, Il Hwan; Kwon, Sang Jik; Cho, Eou Sik
2013-11-01
Niobium oxide (Nb2O5) films were deposited on p-type Si wafers and soda-lime glasses at room temperature using an in-line pulsed-DC magnetron sputtering system with various duty ratios. The different duty ratios were obtained by varying the reverse voltage time of the pulsed DC power from 0.5 to 2.0 μs at a fixed frequency of 200 kHz. From the structural and optical characteristics of the sputtered NbOx films, it was possible to obtain more uniform and coherent NbOx films at higher reverse voltage times as a result of the cleaning effect on the Nb2O5 target surface. The electrical characteristics of metal-insulator-semiconductor (MIS) devices fabricated with the NbOx films show that the leakage currents are influenced by the reverse voltage time, as are the Schottky barrier diode characteristics.
Xiao, Peng; Dong, Ting; Lan, Linfeng; Lin, Zhenguo; Song, Wei; Luo, Dongxiang; Xu, Miao; Peng, Junbiao
2016-04-27
Thin-film transistors (TFTs) with a zirconium-doped indium oxide (ZrInO) semiconductor were successfully fabricated by an all-DC-sputtering method at room temperature. The ZrInO TFT, without any intentional annealing steps, exhibited a high saturation mobility of 25.1 cm(2)V(-1)s(-1). The threshold voltage shift was only 0.35 V for the ZrInO TFT under positive gate bias stress for 1 hour. Detailed studies showed that the room-temperature ZrInO thin film was in the amorphous state with low carrier density because of the strong bonding strength of Zr-O. The room-temperature process is attractive for its compatibility with almost all kinds of flexible substrates, and the DC sputtering process improves production efficiency and reduces fabrication cost.
Efficient Design in a DC to DC Converter Unit
NASA Technical Reports Server (NTRS)
Bruemmer, Joel E.; Williams, Fitch R.; Schmitz, Gregory V.
2002-01-01
Space flight hardware requires high power conversion efficiencies due to limited power availability and the weight penalties of cooling systems. The International Space Station (ISS) Electric Power System (EPS) DC-DC Converter Unit (DDCU) power converter is no exception. This paper explores the design methods and tradeoffs that were utilized to accomplish high efficiency in the DDCU. An isolating DC-to-DC converter was selected for the ISS power system because of requirements for separate primary and secondary grounds and for a well-regulated secondary output voltage derived from a widely varying input voltage. A flyback-current-fed push-pull topology, or improved Weinberg circuit, was chosen for this converter because of its potential for high efficiency and reliability. To enhance efficiency, a non-dissipative snubber circuit for the very-low-Rds-on field effect transistors (FETs) was utilized, redistributing the energy that would otherwise be wasted during the switching cycle of the power FETs. A unique, low-impedance connection system was utilized to improve contact resistance over a bolted connection. For improved consistency in performance and to lower internal wiring inductance and losses, a planar bus system is employed. All of these choices contributed to the design of a 6.25 kW regulated DC-to-DC converter that is 95 percent efficient. The methodology used in the design of this DC-to-DC Converter Unit may be directly applicable to other systems that require a conservative approach to efficient power conversion and distribution.
Early Oscillation Detection Technique for Hybrid DC/DC Converters
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2011-01-01
normal operation. This technique eliminates the probing problem of a gain/phase margin method by connecting the power input to a spectrum analyzer. Therefore, it is able to evaluate stability for all kinds of hybrid DC/DC converters with or without remote sense pins, and is suitable for real-time and in-circuit testing. This frequency-domain technique is more sensitive in detecting oscillation at an early stage than the time-domain method using an oscilloscope.
Development of toroid-type HTS DC reactor series for HVDC system
NASA Astrophysics Data System (ADS)
Kim, Kwangmin; Go, Byeong-Soo; Park, Hea-chul; Kim, Sung-kyu; Kim, Seokho; Lee, Sangjin; Oh, Yunsang; Park, Minwon; Yu, In-Keun
2015-11-01
This paper describes the design specifications and performance of a toroid-type high-temperature superconducting (HTS) DC reactor. The first-phase operation targets of the HTS DC reactor were 400 mH and 400 A. The authors have developed a real HTS DC reactor system during the last three years. The HTS DC reactor was designed using 2G GdBCO HTS wires. The HTS coils of the toroid-type DC reactor magnet were made in the form of a D-shape. The electromagnetic performance of the toroid-type HTS DC reactor magnet was analyzed using a finite element method program. A conduction cooling method was adopted for reactor magnet cooling. The total system has been successfully developed and tested in connection with an LCC-type HVDC system. Now, the authors are studying a 400 mH, kA-class toroid-type HTS DC reactor for the next research phase. The 1500 A class DC reactor system was designed using layered 13 mm GdBCO 2G HTS wire. The expected operating temperature is under 30 K. The fundamental data obtained through both works will be useful in the design of a real toroid-type HTS DC reactor for grid application.
Optimum Design of CMOS DC-DC Converter for Mobile Applications
NASA Astrophysics Data System (ADS)
Katayama, Yasushi; Edo, Masaharu; Denta, Toshio; Kawashima, Tetsuya; Ninomiya, Tamotsu
In recent years, low-output-power CMOS DC-DC converters, which integrate power-stage MOSFETs and a PWM controller using a CMOS process, have been used in many mobile applications. In this paper, we propose a method for calculating CMOS DC-DC converter efficiency and report an optimum design of CMOS DC-DC converters based on this method. With this method, converter efficiencies are calculated directly from converter specifications, dimensions of the power-stage MOSFETs, and device parameters. Therefore, the method can be used to optimize CMOS DC-DC converter design choices such as the dimensions of the power-stage MOSFETs and the switching frequency. The efficiency calculated by the proposed method agrees well with experimental results.
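An efficiency calculation in the spirit described above sums a loss budget over the power stage. The simple loss model and every device parameter below are illustrative assumptions, not the paper's values or equations:

```python
def buck_efficiency(v_in, v_out, i_out, f_sw, r_on, q_g, v_drive=3.3,
                    t_sw=2e-9):
    """Efficiency of a synchronous buck stage from a simple loss budget.

    r_on  -- combined on-resistance of the power-stage MOSFETs (ohms)
    q_g   -- total gate charge per switching cycle (coulombs)
    t_sw  -- assumed voltage/current overlap time per transition (seconds)
    """
    p_out = v_out * i_out
    p_cond = i_out ** 2 * r_on                 # conduction loss
    p_sw = 0.5 * v_in * i_out * f_sw * t_sw    # switching (overlap) loss
    p_gate = q_g * v_drive * f_sw              # gate-drive loss
    return p_out / (p_out + p_cond + p_sw + p_gate)

# Hypothetical mobile-class operating point
eff = buck_efficiency(v_in=3.6, v_out=1.8, i_out=0.3,
                      f_sw=1e6, r_on=0.1, q_g=1e-9)
```

Because r_on falls and q_g rises as MOSFET width grows, sweeping the device dimensions through such a model is what makes the width/frequency optimization the abstract describes possible.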
Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M A
2014-01-01
This paper presents an evaluation of an optimal DC bus voltage regulation strategy for a grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of the DC bus voltage of the PV/BES system can be enhanced compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid-side VSC (G-VSC), two control methods, namely, voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied to the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters is obtained for each of the power converters for comparison purposes. PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods.
Combination of DC Vaccine and Conventional Chemotherapeutics.
Dong, Wei; Wei, Ran; Shen, Hongchang; Ni, Yang; Meng, Long; Du, Jiajun
2016-01-01
Recently, the mutual interactions of chemotherapy and immunotherapy have become widely accepted, and several synergistic mechanisms have been elucidated as well. Although much attention has focused on the combination of DC vaccines and chemotherapy, many problems remain to be resolved, including the optimal treatment schedule for the novel strategy. In this article, we methodically examined the literature on the combination of DC vaccines and conventional chemotherapy. Based on the published preclinical and clinical trials, treatment schedules of the combinational strategy can be classified into three modalities: chemotherapy with subsequent DC vaccine (post-DC therapy); DC vaccine followed by chemotherapy (pre-DC therapy); and concurrent DC vaccine with chemotherapy (con-DC therapy). The safety and efficacy of this combinatorial immunotherapy strategy and its potential mechanisms are discussed. Although we could not draw conclusions on the optimal treatment schedule, we summarize some tips which may be beneficial to trial design in the future.
GaN Microwave DC-DC Converters
NASA Astrophysics Data System (ADS)
Ramos Franco, Ignacio
Increasing the operating frequency of switching converters can have a direct impact on the miniaturization and integration of power converters. The size of energy-storage passive components and the difficulty of integrating them with the rest of the circuitry are major challenges in the development of a fully integrated power supply on a chip. The work presented in this thesis attempts to address some of the difficulties encountered in the design of high-frequency converters by applying concepts and techniques usually used in the design of high-efficiency power amplifiers and high-efficiency rectifiers at microwave frequencies. The main focus is on the analysis, design, and characterization of dc-dc converters operating at microwave frequencies in the low gigahertz range. The concept of PA-rectifier duality, in which a high-efficiency power amplifier operates as a high-efficiency rectifier, is investigated through non-linear simulations and experimentally validated. Additionally, the concept of a self-synchronous rectifier, in which a transistor rectifier operates synchronously without the need for an RF source or driver, is demonstrated. A theoretical analysis of a class-E self-synchronous rectifier is presented and validated through non-linear simulations and experiments. Two GaN class-E2 dc-dc converters operating at switching frequencies of 1 and 1.2 GHz are demonstrated. The converters achieve 80 % and 75 % dc-dc efficiency, respectively, and are among the highest-frequency and highest-efficiency reported in the literature. The application of the concepts established in the analysis of a self-synchronous rectifier to a power amplifier culminated in the development of an oscillating, self-synchronous class-E2 dc-dc converter. Finally, a proof-of-concept fully integrated GaN MMIC class-E2 dc-dc converter switching at 4.6 GHz is demonstrated for the first time to the best of our knowledge. The 3.8 mm x 2.6 mm chip contains distributed inductors and does not require any
DC-Compensated Current Transformer.
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-20
Instrument current transformers (CTs) measure AC currents. A DC component in the measured current can saturate the transformer and cause gross errors. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurement of the DC current component.
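The feedback compensation idea can be sketched as a discrete integral loop that nulls the fluxgate-detected DC flux. The one-parameter core model and the gain below are assumptions for illustration only, not the paper's controller:

```python
def compensate_dc_flux(i_dc, k_i=0.5, n_steps=200):
    """Integral feedback that drives the fluxgate-detected DC flux to zero.

    i_dc -- DC component of the measured current (arbitrary units)
    k_i  -- integral gain of the digital feedback loop (assumed value)
    """
    i_comp = 0.0
    for _ in range(n_steps):
        flux_error = i_dc - i_comp   # residual DC flux seen by the fluxgate
        i_comp += k_i * flux_error   # add more compensation current
    return i_comp

# A 2 A DC component is nulled by the compensation winding current
i_comp = compensate_dc_flux(2.0)
```

Once the loop converges, the compensation current itself is a measurement of the DC component, which is how the method provides the simultaneous DC readout mentioned above.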
NASA Astrophysics Data System (ADS)
Kim, Kwangmin; Go, Byeong-Soo; Sung, Hae-Jin; Park, Hea-chul; Kim, Seokho; Lee, Sangjin; Jin, Yoon-Su; Oh, Yunsang; Park, Minwon; Yu, In-Keun
2014-09-01
This paper describes the design specifications and performance of a real toroid-type high temperature superconducting (HTS) DC reactor. The HTS DC reactor was designed using 2G HTS wires. The HTS coils of the toroid-type DC reactor magnet were made in the form of a D-shape. The target inductance of the HTS DC reactor was 400 mH. The expected operating temperature was under 20 K. The electromagnetic performance of the toroid-type HTS DC reactor magnet was analyzed using the finite element method program. A conduction cooling method was adopted for reactor magnet cooling. Performances of the toroid-type HTS DC reactor were analyzed through experiments conducted under the steady-state and charge conditions. The fundamental design specifications and the data obtained from this research will be applied to the design of a commercial-type HTS DC reactor.
Method and device for ion mobility separations
Ibrahim, Yehia M.; Garimella, Sandilya V. B.; Smith, Richard D.
2017-07-11
Methods and devices for ion separations or manipulations in the gas phase are disclosed. The device includes a single non-planar surface. Arrays of electrodes are coupled to the surface. A combination of RF and DC voltages is applied to the arrays of electrodes to create confining and driving fields that move ions through the device. The DC voltages are static DC voltages or time-dependent DC potentials or waveforms.
NASA Astrophysics Data System (ADS)
Tobari, Kazuaki; Sakamoto, Kiyoshi; Iwaji, Yoshitaka; Kaneko, Daigo; Uematsu, Hajime; Okubo, Tomofumi
We propose a new beatless control mechanism for permanent-magnet synchronous motor (PMSM) drives. In drive systems, the three-phase voltage source induces a ripple component in the DC voltage whose frequency is six times the voltage source frequency. Therefore, if the motor frequency becomes six times the voltage frequency, a beat phenomenon, which causes an increase in the motor current ripple, is observed. We analyze the beat phenomenon that causes current ripples and propose a method based on periodic disturbance current regulation, i.e., beatless control. We carry out time-domain simulations and various experiments and demonstrate the effectiveness of the proposed controller.
An Integrated Programmable Wide-range PLL for Switching Synchronization in Isolated DC-DC Converters
NASA Astrophysics Data System (ADS)
Fard, Miad
In this thesis, two Phase-Locked-Loop (PLL) based synchronization schemes are introduced and applied to a bi-directional Dual-Active-Bridge (DAB) dc-dc converter with an input voltage up to 80 V switching in the range of 250 kHz to 1 MHz. The two schemes synchronize gating signals across an isolated boundary without the need for an isolator per transistor. The Power Transformer Sensing (PTS) method utilizes the DAB power transformer to indirectly sense switching on the secondary side of the boundary, while the Digital Isolator Sensing (DIS) method utilizes a miniature transformer for synchronization and communication at up to 100 MHz. The PLL is implemented on-chip, and is used to control an external DAB power-stage. This work will lead to lower cost, high-frequency isolated dc-dc converters needed for a wide variety of emerging low power applications where isolator cost is relatively high and there is a demand for the reduction of parts.
DC artifact correction for arbitrary phase-cycling sequence.
Han, Paul Kyu; Park, HyunWook; Park, Sung-Hong
2017-05-01
In magnetic resonance imaging (MRI), a non-zero offset in the receiver baseline signal during acquisition results in a bright spot or a line artifact in the center of the image, known as a direct current (DC) artifact. Several methods have been suggested in the past for the removal or correction of DC artifacts in MR images; however, these methods cannot be applied directly when a specific phase-cycling technique is used in the imaging sequence. In this work, we proposed a new, simple technique that enables correction of DC artifacts for any arbitrary phase-cycling imaging sequence. The technique is composed of phase unification, DC offset estimation and correction, and phase restoration. The feasibility of the proposed method was demonstrated via phantom and in vivo experiments with a multiple phase-cycling balanced steady-state free precession (bSSFP) imaging sequence. Results showed successful removal of the DC artifacts in images acquired using bSSFP with phase-cycling angles of 0°, 90°, 180°, and 270°, indicating potential feasibility of the proposed method for any imaging sequence with arbitrary phase-cycling angles.
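The central offset-estimation-and-subtraction step can be sketched for a single readout as follows. The paper's phase unification and restoration steps generalize this to combined phase-cycled acquisitions; the edge-of-k-space estimator and the synthetic data here are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def correct_dc(kspace, n_edge=8):
    # Estimate the receiver DC offset from the (assumed signal-free) outer
    # samples of the readout, then subtract it from the whole k-space line.
    edges = np.concatenate([kspace[:n_edge], kspace[-n_edge:]])
    return kspace - edges.mean()

# Synthetic readout: signal in the center of k-space, a constant receiver
# offset of 0.2 added everywhere, signal phase-cycled by 90 degrees.
n = 64
true = np.zeros(n, dtype=complex)
true[28:36] = 1.0
phase = np.pi / 2
measured = true * np.exp(1j * phase) + 0.2
corrected = correct_dc(measured)
```

Because the receiver offset is constant while the signal carries the cycling phase, unifying the phase first lets a single offset estimate serve all phase-cycle repetitions, which is the point of the three-step structure.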
An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems.
1981-03-01
Technical Note BN-962: An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, Institute for Physical Science and Technology, University of Maryland, College Park, March 1981.
Algebraic parameters identification of DC motors: methodology and analysis
NASA Astrophysics Data System (ADS)
Becedas, J.; Mamani, G.; Feliu, V.
2010-10-01
A fast, non-asymptotic, algebraic parameter identification method is applied to an uncertain DC motor to estimate the uncertain parameters: the viscous friction coefficient and the inertia. In this work, the methodology is developed and analysed, its convergence is studied, a comparative study between the traditional recursive least squares method and the algebraic identification method is carried out, and an analysis of the estimator in a noisy system is presented. Computer simulations were carried out to validate the suitability of the identification algorithm.
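The recursive least squares baseline referred to above can be sketched on a discretized first-order DC motor model w[k+1] = a*w[k] + b*i[k], from which friction and inertia follow; the model, gains, and data below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rls_identify(w, i, lam=0.99):
    """Recursive least squares for the model w[k+1] = a*w[k] + b*i[k]."""
    theta = np.zeros(2)              # [a, b] estimates
    P = np.eye(2) * 1e3              # covariance (large = uninformative prior)
    for k in range(len(w) - 1):
        phi = np.array([w[k], i[k]])            # regressor
        err = w[k + 1] - phi @ theta            # prediction error
        K = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + K * err
        P = (P - np.outer(K, phi @ P)) / lam    # covariance update
    return theta

# Simulate a noiseless motor with a = 0.95, b = 0.5, then identify it
rng = np.random.default_rng(0)
i = rng.uniform(-1.0, 1.0, 500)
w = np.zeros(501)
for k in range(500):
    w[k + 1] = 0.95 * w[k] + 0.5 * i[k]
a_hat, b_hat = rls_identify(w, i)
```

RLS converges asymptotically as data accumulates, which is exactly the behavior the algebraic method is proposed to improve on with its non-asymptotic estimates.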
Czosnek, Cezary; Bućko, Mirosław M.; Janik, Jerzy F.; Olejniczak, Zbigniew; Bystrzejewski, Michał; Łabędź, Olga; Huczko, Andrzej
2015-03-15
Highlights: • Make-up of the SiC-based nanopowders is a function of the C:Si:O ratio in the precursor. • Two-stage aerosol-assisted synthesis offers conditions close to equilibrium. • DC thermal plasma synthesis yields kinetically controlled SiC products. - Abstract: Nanosized SiC-based powders were prepared from selected liquid-phase organosilicon precursors by aerosol-assisted synthesis, DC thermal plasma synthesis, and a combination of the two methods. The two-stage aerosol-assisted synthesis method ultimately provides conditions close to thermodynamic equilibrium. The single-stage thermal plasma method is characterized by short particle residence times in the reaction zone, which can lead to kinetically controlled products. The by-products and final nanopowders were characterized by powder XRD, FT-IR infrared spectroscopy, SEM scanning electron microscopy, and 29Si MAS NMR spectroscopy. BET specific surface areas of the products were determined by standard physical adsorption of nitrogen at 77 K. The major component in all synthesis routes was found to be cubic silicon carbide (β-SiC) with average crystallite sizes ranging from a few to tens of nanometers. In some cases, it was accompanied by free carbon, elemental silicon, or silica nanoparticles. The final mesoporous β-SiC-based nanopowders have potential as affordable catalyst supports.
1985-02-01
Methods for Analyzing the Mechanical Properties of Nonlinear Two-Phase Composite Materials, John W. Hutchinson, Harvard University, Cambridge, Massachusetts; Mathematical Properties of Computational Sets, Charles R. Leake, US Army Concepts Analysis Agency, Bethesda, Maryland; Normal Solutions of Large
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
ERIC Educational Resources Information Center
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
1984-02-01
SPACECRAFT PROBLEM: Levinson [10] has described in detail an application of the symbolic language FORMAC to formulate the spacecraft problem shown in Figure 3, consisting of two rigid bodies with a common axis of rotation b. The equations are given in Ref. [10] in complete detail, and are ... demonstrate the feasibility of recoding the LINPACK routines in the Ada language. METHODS OF CONVERSION: There are three approaches to
Hybrid Immersion-Polarization Method for Measuring Birefringence Applied to Spider Silks
2011-10-15
Hybrid immersion-polarization method for measuring birefringence applied to spider silks. ABSTRACT: A technique ... optic coefficients. The first measurement of the strain-optic coefficients of spider silk is presented. The technique is more
NASA Astrophysics Data System (ADS)
Shiraz, Farzin Amirkhani; Ardejani, Faramarz Doulati; Moradzadeh, Ali; Arab-Amiri, Ali Reza
2013-01-01
Coal washing factories may create serious environmental problems due to pyrite oxidation and acid mine drainage generation from coal waste piles on nearby land. Infiltration of pyrite oxidation products through the porous materials of the coal waste pile by rainwater causes changes in the conductivity of underground materials and groundwater downstream of the pile. Electromagnetic and electrical methods are effective for investigating and monitoring the contaminated plumes caused by coal waste piles and tailings impoundments. In order to investigate the environmental impact of a coal waste pile at the Alborz Sharghi coal washing plant, an EM34 ground conductivity meter was used on seven parallel lines in an E-W direction, downstream of the waste pile. Two-dimensional resistivity models obtained by inversion of the EM34 conductivity data identified conductive leachate plumes. In addition, quasi-3D inversion of the EM34 data confirmed the decreasing resistivity at depth due to the contaminated plumes. Comparison between the EM34, VLF and DC-resistivity datasets, which were acquired along similar survey lines, shows good agreement in identifying changes in the resistivity trend. The EM34 and DC-resistivity sections show greater similarity and better smoothness than those of the VLF model. Two-dimensional inversion models of these methods have shown some contaminated plumes with low resistivity.
Petigara, Bhakti R; Scher, Alan L
2007-01-01
A reversed-phase liquid chromatographic method was developed to determine parts-per-million and higher levels of Sudan I, 1-(phenylazo)-2-naphthalenol, in the disulfo monoazo color additive FD&C Yellow No. 6 and in a related monosulfo monoazo color additive, D&C Orange No. 4. Sudan I, the corresponding unsulfonated monoazo dye, is a known impurity in these color additives. The color additives are dissolved in water and methanol, and the filtered solutions are directly chromatographed, without extraction or concentration, using gradient elution at 0.25 mL/min. Calibrations from peak areas at 485 nm were linear. At a 99% confidence level, the limits of determination were 0.008 microg Sudan I/mL (0.4 ppm) in FD&C Yellow No. 6 and 0.011 microg Sudan I/mL (0.00011%) in D&C Orange No. 4. The confidence intervals were 0.202 +/- 0.002 microg Sudan I/mL (10.1 +/- 0.1 ppm) near the specification level for Sudan I in FD&C Yellow No. 6 and 20.0 +/- 0.2 microg Sudan I/mL (0.200 +/- 0.002%) near the highest concentration of Sudan I found in D&C Orange No. 4. A survey was conducted to determine Sudan I in 28 samples of FD&C Yellow No. 6 from 17 international manufacturers over 3 years, and in a pharmacology-tested sample. These samples were found to contain undetected levels (16 samples), 0.5-9.7 ppm Sudan I (0.01-0.194 microg Sudan I/mL in analyzed solutions; 11 samples including the pharmacology sample), and > or =10 ppm Sudan I (> or = 0.2 microg Sudan I/mL; 2 samples). Analyses of 21 samples of D&C Orange No. 4 from 8 international manufacturers over 4 years found Sudan I at undetected levels (8 samples), 0.0005 to < 0.005% Sudan I (0.05 to < 0.5 microg Sudan I/mL in analyzed solutions; 3 samples, including a pharmacology batch), 0.005 to <0.05% Sudan I (0.5 to <5 microg Sudan I/mL; 9 samples), and 0.18% Sudan I (18 microg Sudan I/mL; 1 sample).
NASA Technical Reports Server (NTRS)
Black, J. M.
1978-01-01
Circuit consists of chopper section which converts input dc to square wave, followed by bridge-rectifier stage. Chopper gives nearly-ideal switching characteristics, and bridge uses series of full-wave stages rather than less-efficient half-wave rectifiers found in previous circuits. Special features of full-wave circuit allow redundant components to be eliminated, lowering parts count. Circuit can also be adapted for use as dc-to-dc converter or as combination dc-and-ac source.
NASA Astrophysics Data System (ADS)
Walker, I. R.
1993-03-01
In the field of condensed matter physics, it is frequently necessary to produce motion at low temperatures. While this can often be done using a mechanical linkage which connects the cryogenic and ambient environments, space constraints sometimes render such a solution impractical. This paper describes a miniature dc electric motor which can be used to produce motion under these conditions, and also presents a novel scheme for monitoring its position. The motor is a skew-wound ironless device with a coaxial gearhead, and is capable of operating at a temperature of 4 K, in a vacuum, and in a magnetic field of several hundred gauss. The position monitoring arrangement requires no modifications to the motor or the addition of extra hardware, such as rotary encoders or potentiometers. Based on the angular dependence of the rotor inductance, it has been found to work with a number of different motors of the skew-wound ironless type, both at room temperature and at 4 K. Provisions have been made to allow the motor and position monitor to be operated by computer control. The author anticipates that they will find applications in other areas where motion is needed and space is at a premium.
NASA Astrophysics Data System (ADS)
Yuval; Oldenburg, Douglas W.
1996-04-01
Oxidation of sulfide minerals in the mine tailings impoundments at Copper Cliff, Ontario generates acidic conditions and elevated concentrations of dissolved metals and sulfates in the pore water. The pore water migrates away from the tailings, posing a potential environmental hazard if it should reach nearby water systems. There is a need to characterize this potential environmental problem and to assess the future hazards. A combined DC resistivity and induced polarization (IP) survey was carried out along one of the major flowpaths in the tailings, and the data were inverted to produce detailed electrical conductivity and chargeability structures of the cross-section below the survey line. The conductivity distributions are directly translated, through theoretical and empirical relations, to a map of the concentration of total dissolved solids (TDS) along the cross-section and thereby provide insight into the in-situ pore water quality. The sulfide minerals are the source of the IP response; thus, when combined with borehole data, the chargeability model can be used to estimate the amount and distribution of the sulfides.
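The translation from inverted conductivity to TDS can be sketched with generic petrophysical relations. Archie's law and the TDS/conductivity factor k below are illustrative assumptions; the paper's actual relations are site-specific:

```python
# Empirical conversion from inverted bulk conductivity to pore-water total
# dissolved solids (TDS). Archie's law and the factor k are illustrative
# assumptions, not the paper's site-specific relations.

def bulk_to_pore_conductivity(sigma_bulk, porosity, m=2.0):
    """Archie's law without surface conduction: sigma_bulk = porosity**m * sigma_w."""
    return sigma_bulk / porosity ** m

def tds_from_conductivity(sigma_w, k=0.65):
    """TDS [mg/L] ~ k * EC [uS/cm]; 1 S/m = 1e4 uS/cm."""
    return k * sigma_w * 1e4

sigma_bulk = 0.05   # S/m, a moderately conductive tailings zone (hypothetical)
sigma_w = bulk_to_pore_conductivity(sigma_bulk, porosity=0.4)
tds = tds_from_conductivity(sigma_w)
print(round(tds))   # mg/L
```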
NASA Astrophysics Data System (ADS)
Radeva, Veselka S.
Several interactive methods applied in astronomy education during the creation of a project about a colony in space are presented. The Pyramid, Brainstorm, Snow-slip (Snowball), and Aquarium methods give schoolchildren the opportunity to understand and absorb a large body of astronomical knowledge.
Calculation of accurate channel spacing of an AWG optical demultiplexer applying proportional method
NASA Astrophysics Data System (ADS)
Seyringer, D.; Hodzic, E.
2015-06-01
We present the proportional method to correct the channel spacing between the transmitted output channels of an AWG. The developed proportional method was applied to a 64-channel, 50 GHz AWG, and the achieved results confirm very good agreement between the designed channel spacing (50 GHz) and the channel spacing calculated from the simulated AWG transmission characteristics.
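A minimal sketch of a proportional correction, assuming the output channel spacing scales linearly with the output waveguide pitch: if simulation shows the actual spacing deviates from the 50 GHz target, scale the pitch by the ratio target/simulated. The linear-scaling assumption and all numbers are illustrative; the paper's exact procedure may differ:

```python
# Proportional correction of a spacing-setting design parameter (assumed
# here to be the output waveguide pitch; hypothetical values).

def proportional_correction(pitch, simulated_spacing_ghz, target_spacing_ghz=50.0):
    return pitch * target_spacing_ghz / simulated_spacing_ghz

pitch = 2.0      # micrometres between adjacent output waveguides (hypothetical)
new_pitch = proportional_correction(pitch, simulated_spacing_ghz=47.3)
print(round(new_pitch, 3))
```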
An Empirical Study of Applying Associative Method in College English Vocabulary Learning
ERIC Educational Resources Information Center
Zhang, Min
2014-01-01
Vocabulary is the basis of any language learning. To many Chinese non-English majors it is difficult to memorize English words. This paper applied associative method in presenting new words to them. It is found that associative method did receive a better result both in short-term and long-term retention of English words. Compared with the…
On a Moving Mesh Method Applied to the Shallow Water Equations
NASA Astrophysics Data System (ADS)
Felcman, J.; Kadrnka, L.
2010-09-01
The moving mesh method is applied to the numerical solution of the shallow water equations. The original numerical flux of the Vijayasundaram type is used in the finite volume method. The mesh adaptation procedure is described. The relevant numerical examples are presented.
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturer's datasheet, for performing MPPT simulations is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
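One of the classic MPPT methods such a benchmark would include is perturb-and-observe (P&O). The sketch below runs P&O against a toy power-voltage curve; the quadratic P(V) is an illustrative stand-in for a real single-diode panel model:

```python
# Perturb-and-observe MPPT on a toy P-V curve (assumed maximum: 200 W at 30 V).

def panel_power(v):
    """Toy P-V curve, not a real panel model."""
    return max(0.0, 200.0 - 0.5 * (v - 30.0) ** 2)

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    v, p = v0, panel_power(v0)
    direction = +1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(round(v_mpp, 1), round(p_mpp, 1))
```

As expected for P&O, the operating point climbs to the maximum and then oscillates within one step of it.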
The application of standardized control and interface circuits to three dc to dc power converters.
NASA Technical Reports Server (NTRS)
Yu, Y.; Biess, J. J.; Schoenfeld, A. D.; Lalli, V. R.
1973-01-01
Standardized control and interface circuits were applied to the three most commonly used dc to dc converters: the buck-boost converter, the series-switching buck regulator, and the pulse-modulated parallel inverter. The two-loop ASDTIC regulation control concept was implemented by using a common analog control signal processor and a novel digital control signal processor. This resulted in control circuit standardization and superior static and dynamic performance of the three dc-to-dc converters. Power components stress control, through active peak current limiting and recovery of switching losses, was applied to enhance reliability and converter efficiency.
Results and Potentials of Applying a BRD Method of Lecture Proposed by Uda to Scientific Lectures
NASA Astrophysics Data System (ADS)
Washino, Shoichi
This paper discusses both the results and the potential of applying a BRD method of lecturing proposed by Uda to scientific lectures. The results are very successful, and evaluations by students are also satisfactory. Fifty percent of the students say that they can concentrate on lectures under the method. This effect can be explained using the spare-capacity model from psychology, which is often used to explain the human ability to do two jobs simultaneously. Eighty percent of the students also hope to continue with the BRD method. One of the issues to be addressed later is developing a new assessment method suitable for the BRD method; one hopeful candidate would be a portfolio method.
Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods
ERIC Educational Resources Information Center
Maase, Eric L.; High, Karen A.
2008-01-01
"Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…
NASA Astrophysics Data System (ADS)
Wu, Dan; Wang, Bing; Han, Zhixue; Xiao, Changchun
2009-12-01
A novel feature link quality analysis method (FLQAM) applied in satellite adaptive frequency hopping communication is presented in this paper. The method uses a frequency-domain feature to tell whether the tested frequency area is jammed or not, which is much simpler and more efficient than traditional methods. Simulations performed in the Matlab environment show that the successful detection rate is above 95% when the INR is over -1 dB.
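The abstract does not specify FLQAM's frequency-domain feature; a simple band-energy threshold detector, sketched below, illustrates the general idea of deciding per band whether a jammer is present:

```python
# Minimal frequency-domain jamming detector: a hop band is declared jammed
# when its FFT band energy exceeds a threshold relative to the average band
# energy. This energy feature is an illustrative stand-in for FLQAM.
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * i * k / n) for k in range(n))
            for i in range(n)]

def band_energies(x, n_bands):
    spec = dft(x)
    half = len(spec) // 2          # keep the positive-frequency half
    width = half // n_bands
    return [sum(abs(spec[b * width + j]) ** 2 for j in range(width))
            for b in range(n_bands)]

random.seed(1)
n = 256
# White noise plus a strong narrowband jammer at bin 40 (falls in band 1 of 4)
x = [random.gauss(0, 1) + 10 * math.sin(2 * math.pi * 40 * k / n)
     for k in range(n)]
e = band_energies(x, n_bands=4)
mean_e = sum(e) / len(e)
jammed = [i for i, ei in enumerate(e) if ei > 3 * mean_e]
print(jammed)
```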
NASA Astrophysics Data System (ADS)
Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto
In this research, we proposed and evaluated a management method for college mechatronics education, applying project management to a college mechatronics course. We practiced our management method in the seminar "Microcomputer Seminar" for third-year students in the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in managing the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.
Simultaneous distribution of AC and DC power
Polese, Luigi Gentile
2015-09-15
A system and method for the transport and distribution of both AC (alternating current) power and DC (direct current) power over wiring infrastructure normally used for distributing AC power only, for example, residential and/or commercial buildings' electrical wires is disclosed and taught. The system and method permits the combining of AC and DC power sources and the simultaneous distribution of the resulting power over the same wiring. At the utilization site a complementary device permits the separation of the DC power from the AC power and their reconstruction, for use in conventional AC-only and DC-only devices.
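The combining and separation described above rest on superposition: the line voltage is the sum of a DC level and an AC waveform. The sketch below recovers the DC component as a cycle average, an idealized stand-in for the patent's separation hardware; the voltage levels are hypothetical:

```python
# Superposition sketch of combined AC+DC distribution on one pair of wires.
import math

VDC = 48.0                       # DC level riding on the line (hypothetical)
VAC_PEAK = 120.0 * math.sqrt(2)  # 120 V rms mains-style AC component
N = 1000                         # samples over exactly one AC cycle

samples = [VDC + VAC_PEAK * math.sin(2 * math.pi * k / N) for k in range(N)]
dc_recovered = sum(samples) / N                      # averaging removes the AC
ac_rms = (sum((s - dc_recovered) ** 2 for s in samples) / N) ** 0.5
print(round(dc_recovered, 3), round(ac_rms, 3))
```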
NASA Technical Reports Server (NTRS)
Atkins, H. L.; Shu, Chi-Wang
2001-01-01
The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator decreases dramatically as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time step.
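A scalar analogue of the relaxation being preconditioned can be sketched with damped point-Jacobi on the 1-D finite-difference diffusion operator. This is purely illustrative: in the DG setting each block couples all modes of one element, whereas here each "block" is a scalar:

```python
# Damped point-Jacobi relaxation for -u'' = 1 with u(0) = u(1) = 0.

def jacobi_solve(n=30, iters=6000, omega=0.8):
    h = 1.0 / (n + 1)
    f = [1.0] * n
    u = [0.0] * n
    for _ in range(iters):
        u_new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            val = (h * h * f[i] + left + right) / 2.0   # Jacobi update
            u_new[i] = (1.0 - omega) * u[i] + omega * val
        u = u_new
    return u

u = jacobi_solve()
mid = u[len(u) // 2]      # exact solution u(x) = x(1-x)/2 peaks at 0.125
print(round(mid, 4))
```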
Small, Joshua; Fruehling, Adam; Garg, Anurag; Liu, Xiaoguang; Peroulis, Dimitrios
2014-01-01
Mechanically underdamped electrostatic fringing-field MEMS actuators are well known for their fast switching operation in response to a unit step input bias voltage. However, the tradeoff for the improved switching performance is a relatively long settling time to reach each gap height in response to various applied voltages. Transient applied bias waveforms are employed to facilitate reduced switching times for electrostatic fringing-field MEMS actuators with high mechanical quality factors. Removing the underlying substrate of the fringing-field actuator creates the low mechanical damping environment necessary to effectively test the concept. The removal of the underlying substrate also has a substantial positive effect on the reliability of the device with regard to failure due to stiction. Although DC-dynamic biasing is useful in improving settling time, the required slew rates for typical MEMS devices may place aggressive requirements on the charge pumps for fully integrated on-chip designs. Additionally, there may be challenges integrating the substrate removal step into the back-end-of-line commercial CMOS processing steps. Experimental validation of fabricated actuators demonstrates a 50x improvement in switching time compared with conventional step biasing results. The experimental results are in good agreement with theoretical calculations. PMID:25145811
An uncertainty analysis of the PVT gauging method applied to sub-critical cryogenic propellant tanks
NASA Astrophysics Data System (ADS)
Van Dresar, Neil T.
2004-06-01
The PVT (pressure, volume, temperature) method of liquid quantity gauging in low-gravity is based on gas law calculations assuming conservation of pressurant gas within the propellant tank and the pressurant supply bottle. There is interest in applying this method to cryogenic propellant tanks since the method requires minimal additional hardware or instrumentation. To use PVT with cryogenic fluids, a non-condensable pressurant gas (helium) is required. With cryogens, there will be a significant amount of propellant vapor mixed with the pressurant gas in the tank ullage. This condition, along with the high sensitivity of propellant vapor pressure to temperature, makes the PVT method susceptible to substantially greater measurement uncertainty than is the case with less volatile propellants. A conventional uncertainty analysis is applied to example cases of liquid hydrogen and liquid oxygen tanks. It appears that the PVT method may be feasible for liquid oxygen. Acceptable accuracy will be more difficult to obtain with liquid hydrogen.
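The core gas-law calculation can be sketched directly: helium is conserved, so the ullage volume follows from the helium partial pressure, i.e. total tank pressure minus propellant vapor pressure at the measured temperature. Ideal-gas behaviour and all numbers below are illustrative assumptions, not the paper's data:

```python
# PVT gauging sketch for a hypothetical liquid-oxygen tank.

R = 8.314  # J/(mol K)

def ullage_volume(n_he_ullage, p_tank, p_vap, t_ullage):
    """V = n_He * R * T / p_He with p_He = p_tank - p_vap."""
    return n_he_ullage * R * t_ullage / (p_tank - p_vap)

# 90 K ullage, 300 kPa total pressure; O2 vapor pressure is roughly
# 100 kPa near the normal boiling point.
n_he_ullage = 500.0 - 200.0   # mol: helium loaded minus helium left in bottle
v_ull = ullage_volume(n_he_ullage, p_tank=300e3, p_vap=99.9e3, t_ullage=90.0)
v_tank = 2.0                  # m^3
liquid_fraction = 1.0 - v_ull / v_tank
print(round(v_ull, 3), round(liquid_fraction, 3))
```

The high sensitivity mentioned in the abstract is visible here: a small temperature error shifts p_vap, which enters the denominator as a difference of comparable pressures.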
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
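The Monte Carlo side of the comparison can be sketched in a few lines: each launch loses the booster with a fixed attrition probability, and a surviving booster is retired after 20 launches. The attrition rate and trial count are illustrative, not the study's values:

```python
# Monte Carlo sketch of the booster-expenditure model.
import random

def boosters_needed(n_launches=440, p_loss=0.02, max_uses=20, seed=0):
    rng = random.Random(seed)
    boosters, uses = 1, 0
    for _ in range(n_launches):
        uses += 1
        if rng.random() < p_loss or uses >= max_uses:
            boosters += 1     # current booster lost or retired; bring a new one
            uses = 0
    return boosters

avg = sum(boosters_needed(seed=s) for s in range(200)) / 200.0
print(round(avg, 1))          # average fleet size over 200 trials
```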
de Souza, Jeanette Beber; Daniel, Luiz Antonio
2011-01-01
Water disinfection assays were carried out using ozone and chlorine in non-sequential steps (the individual method) and in sequential steps (the combined ozone/chlorine method). Escherichia coli strain ATCC 11229 was used as the indicator microorganism. For the assays using the individual method, the applied dosages were 2.0, 3.0 and 5.0 mg/L of ozone, and 2.0 and 5.0 mg/L of chlorine. For the assays applying the combined method, the dosage combinations were, in mg/L: 2.0 O3 + 2.0 Cl2, 3.0 O3 + 2.0 Cl2, 5.0 O3 + 2.0 Cl2 and 2.0 O3 + 5.0 Cl2. The applied contact times were 5, 10, 15 and 20 minutes for both the individual and the combined methods. For all dosages and contact times used, E. coli inactivation with the combined method was superior to the inactivation obtained with the individual method, indicating the occurrence of synergism in E. coli inactivation in the combined method.
NASA Astrophysics Data System (ADS)
Rahimzadegan, Majid; Sadeghi, Behnam
2016-07-01
This paper aims to implement an iterative fuzzy edge detection (IFED) method on blurred satellite images. Degradation effects such as atmospheric effects, clouds and their shadows, atmospheric aerosols, and fog markedly degrade the quality of satellite images; hence, processes such as enhancement and edge detection in satellite images are challenging. One group of methods that can deal with these effects is fuzzy logic methods. Therefore, the IFED method was applied in this work to subimages of Ikonos, Landsat 7, and SPOT 5 satellite images contaminated by the aforementioned effects. As in most fuzzy edge detection methods, IFED has two components: enhancement and edge detection. In this context, a six-step iterative method using the if-then-else mechanism was implemented on the images to perform fuzzy enhancement, and subsequently edge detection was done. To evaluate the merit of the enhancement and select the best number of iterations, the edge gray-value rate criterion was applied. The peak signal-to-noise ratio (PSNR) was applied for the quantitative evaluation of the IFED method. The results of IFED, in comparison with some prior edge detection methods, showed higher PSNR values and high performance in the edge detection of earth features in blurred satellite images.
Kannan, R; Ramakrishna, T V; Rajagopalan, S R
1985-05-01
A method is described for the sequential determination of phosphorus, arsenic and silicon at ng/ml levels by d.c. polarography. These elements are converted into their heteropolymolybdates and separated by selective solvent extraction. Determination of the molybdenum in the extract gives an enhancement factor of 12 for determination of the hetero-atom. A further enhancement by a factor of 40 is achieved by determining the molybdenum by catalytic polarography in nitrate medium. Charging-current compensation is employed to improve precision and the detection limit. The detection limits for phosphorus, arsenic and silicon are 0.5, 4.7 and 3.1 mu/gl., respectively and the relative standard deviation is 2-2.5%.
Wu, Tianliang; Zang, Hongcheng
2016-01-01
The ultrasound probe and advancement of the needle during real-time ultrasound-assisted guidance of catheterization of the right internal jugular vein (RIJV) tend to collapse the vein, which reduces the success rate of the procedure. We have developed a novel puncture point-traction method (PPTM) to facilitate RIJV cannulation. The present study examined whether this method facilitated the performance of RIJV catheterization in anesthetized patients. In this study, 120 patients were randomly assigned to a group in which PPTM was performed (PPTM group, n=60) or a group in which it was not performed (non-PPTM group, n=60). One patient was excluded because of internal carotid artery puncture and 119 patients remained for analysis. The cross-sectional area (CSA), anteroposterior diameter (AD) and transverse diameter (TD) of the RIJV at the cricoid cartilage level following the induction of anesthesia and during catheterization were measured, and the number with obvious loss of resistance (NOLR), the number with easy aspiration of blood into syringe (NEABS) during advancement of the needle, and the number of first-pass punctures (NFPP) during catheterization were determined. In the non-PPTM group, the CSA was smaller during catheterization compared with that following the induction of anesthesia (P<0.01). In the PPTM group compared with the non-PPTM group during catheterization, the CSA was larger (P<0.01) and the AD (P<0.01) and TD (P<0.05) were wider; NOLR (P<0.01), NEABS (P<0.01) and NFPP (P<0.01) increased significantly. The findings from this study confirmed that the PPTM facilitated catheterization of the RIJV and improved the success rate of RIJV catheterization in anesthetized patients in the supine position. PMID:27347054
Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
Rajabi, A; Dabiri, A
2012-01-01
Background: Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990's. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to diagnostic and operational departments based on the cost driver. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method significantly differs from the tariff method. In addition, the high level of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
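The driver-based allocation step can be sketched numerically: overhead is distributed to activity centers in proportion to a cost driver, and unit cost price is total allocated cost over service volume. All figures below are hypothetical, not the hospital's data:

```python
# Activity-based costing sketch: allocate administrative overhead by a cost
# driver (e.g. staff hours), then compute unit cost price per service.

def allocate(admin_cost, drivers):
    total = sum(drivers.values())
    return {k: admin_cost * v / total for k, v in drivers.items()}

admin_cost = 120_000.0                                        # overhead to distribute
drivers = {"radiology": 30.0, "lab": 50.0, "ward": 120.0}     # staff hours (hypothetical)
direct = {"radiology": 80_000.0, "lab": 60_000.0, "ward": 300_000.0}
volume = {"radiology": 4_000, "lab": 10_000, "ward": 2_500}   # services per year

alloc = allocate(admin_cost, drivers)
cost_price = {k: (direct[k] + alloc[k]) / volume[k] for k in direct}
print({k: round(v, 2) for k, v in cost_price.items()})
```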
Michałowska-Kaczmarczyk, Anna Maria; Asuero, Agustin G; Martin, Julia; Alonso, Esteban; Jurado, Jose Marcos; Michałowski, Tadeusz
2014-12-01
Rational functions of the Padé type are used for the purposes of the calibration curve method (CCM) and the standard addition method (SAM). In this paper, the related functions were applied to results obtained from the analyses of (a) nickel by FAAS, (b) potassium by FAES, and (c) salicylic acid by HPLC-MS/MS. A uniform, integral criterion of nonlinearity of the curves obtained according to CCM and SAM is suggested. This uniformity is based on normalization of the approximating functions within the frame of a unit area. Copyright © 2014 Elsevier B.V. All rights reserved.
The Wolf method applied to the liquid-vapor interface of water.
Noé Mendoza, Francisco; López-Lemus, Jorge; Chapela, Gustavo A; Alejandre, José
2008-07-14
The Wolf method for the calculation of electrostatic interactions is applied in a liquid phase and at the liquid-vapor interface of water and its results are compared with those from the Ewald sums method. Molecular dynamics simulations are performed to calculate the radial distribution functions at room temperature. The interface simulations are used to obtain the coexisting densities and surface tension along the coexistence curve. The water model is a flexible version of the extended simple point charge model. The Wolf method gives good structural results, fair coexistence densities, and poor surface tensions as compared with those obtained using the Ewald sums method.
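The Wolf scheme's pair term is a damped, shifted Coulomb interaction that goes continuously to zero at the cutoff. A dimensionless sketch of that pair term follows (the full Wolf method also carries a self-energy term, omitted here):

```python
# Wolf-style damped, shifted Coulomb pair energy (dimensionless units):
#   q_i q_j [ erfc(alpha r)/r - erfc(alpha Rc)/Rc ]   for r < Rc, else 0.
import math

def wolf_pair_energy(qi, qj, r, alpha=0.3, r_cut=10.0):
    if r >= r_cut:
        return 0.0
    shift = math.erfc(alpha * r_cut) / r_cut
    return qi * qj * (math.erfc(alpha * r) / r - shift)

e_close = wolf_pair_energy(+1.0, -1.0, 2.0)     # attractive at short range
e_at_cut = wolf_pair_energy(+1.0, -1.0, 9.999)  # ~0 just inside the cutoff
print(round(e_close, 4), round(e_at_cut, 6))
```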
NASA Technical Reports Server (NTRS)
Mark, W. D.
1982-01-01
A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.
Applying systems-centered theory (SCT) and methods in organizational contexts: putting SCT to work.
Gantt, Susan P
2013-04-01
Though initially applied in psychotherapy, a theory of living human systems (TLHS) and its systems-centered practice (SCT) offer a comprehensive conceptual framework replete with operational definitions and methods that is applicable in a wide range of contexts. This article elaborates the application of SCT in organizations by first summarizing systems-centered theory, its constructs and methods, and then using case examples to illustrate how SCT has been used in organizational and coaching contexts.
Applying Item Response Theory Methods to Design a Learning Progression-Based Science Assessment
ERIC Educational Resources Information Center
Chen, Jing
2012-01-01
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…
Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)
Earl B. Anderson; R. Stanton Hales
1986-01-01
The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
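Steps (a)-(c) above can be sketched as a forward/backward pass over a small hypothetical activity network (durations and precedences are invented for illustration, not FEES data):

```python
# Critical path method: earliest/latest finish times, total float, and the
# critical activities (zero float) of a tiny acyclic activity network.

def cpm(activities):
    """activities: {name: (duration, [predecessors])}, assumed acyclic."""
    order, seen = [], set()
    def visit(a):                       # topological order via DFS
        if a in seen:
            return
        seen.add(a)
        for p in activities[a][1]:
            visit(p)
        order.append(a)
    for a in activities:
        visit(a)

    early = {}                          # earliest finish times (forward pass)
    for a in order:
        dur, preds = activities[a]
        early[a] = dur + max((early[p] for p in preds), default=0.0)
    project_end = max(early.values())

    late = {a: project_end for a in activities}   # latest finish (backward pass)
    for a in reversed(order):
        dur, _ = activities[a]
        for p in activities[a][1]:
            late[p] = min(late[p], late[a] - dur)

    total_float = {a: late[a] - early[a] for a in activities}
    critical = sorted(a for a, f in total_float.items() if f == 0)
    return project_end, critical

acts = {"A": (3, []), "B": (2, []), "C": (4, ["A"]),
        "D": (1, ["A", "B"]), "E": (2, ["C", "D"])}
end, critical = cpm(acts)
print(end, critical)
```

Here the chain A → C → E carries zero float and sets the 9-unit project duration.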
A Method of Measuring the Costs and Benefits of Applied Research.
ERIC Educational Resources Information Center
Sprague, John W.
The Bureau of Mines studied the application of the concepts and methods of cost-benefit analysis to the problem of ranking alternative applied research projects. Procedures for measuring the different classes of project costs and benefits, both private and public, are outlined, and cost-benefit calculations are presented, based on the criteria of…
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 6 2011-01-01 2011-01-01 false Methods of applying planned land use and treatment. 632.16 Section 632.16 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE LONG TERM CONTRACTING RURAL ABANDONED MINE PROGRAM...
7 CFR 632.16 - Methods of applying planned land use and treatment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 6 2012-01-01 2012-01-01 false Methods of applying planned land use and treatment. 632.16 Section 632.16 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE LONG TERM CONTRACTING RURAL ABANDONED MINE PROGRAM...
Trends in Research Methods in Applied Linguistics: China and the West.
ERIC Educational Resources Information Center
Yihong, Gao; Lichun, Li; Jun, Lu
2001-01-01
Examines and compares current trends in applied linguistics (AL) research methods in China and the West. Reviews AL articles in four Chinese journals, from 1978-1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…
21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to laboratory methods for testing and examination? 111.320 Section 111.320 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING...
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
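The contrast between random and systematic sampling can be sketched on a 2-D integral. The product midpoint grid below is a simple illustrative stand-in for Conroy's closed symmetric patterns, not the actual Conroy point set:

```python
# Monte Carlo vs. a systematic (midpoint-grid) point pattern for
# integrating sin(pi x) sin(pi y) over the unit square (exact value 4/pi^2).
import math, random

def f(x, y):
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def monte_carlo(n, seed=0):
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

def systematic(m):
    # m*m midpoint grid: points fill the square in a regular pattern
    return sum(f((i + 0.5) / m, (j + 0.5) / m)
               for i in range(m) for j in range(m)) / (m * m)

exact = 4.0 / math.pi ** 2
err_mc = abs(monte_carlo(45 * 45) - exact)   # same point budget for both
err_sys = abs(systematic(45) - exact)
print(round(err_mc, 5), round(err_sys, 5))
```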
Hawke, Ian; Loeffler, Frank; Nerozzi, Andrea
2005-05-15
We present a simple method for applying excision boundary conditions for the relativistic Euler equations. This method depends on the use of reconstruction-evolution methods, a standard class of high-resolution shock-capturing methods. We test three different reconstruction schemes, namely total variation diminishing (TVD), piecewise parabolic method (PPM), and essentially nonoscillatory (ENO). The method does not require that the coordinate system be adapted to the excision boundary. We demonstrate the effectiveness of our method using tests containing discontinuities, static test fluid solutions with black holes, and full dynamical collapse of a neutron star to a black hole. A modified PPM scheme is introduced because of problems that arose when matching excision with the original PPM reconstruction scheme.
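The first of the three reconstruction schemes, TVD with a minmod limiter, can be sketched for a single cell of a 1-D grid (a generic illustration, not the paper's full excision machinery):

```python
# Minmod-limited (TVD) linear reconstruction in one cell: the slope is the
# smaller of the two one-sided differences, and zero at a local extremum.

def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct(u_left, u_center, u_right, dx=1.0):
    """Left/right interface values from a limited linear profile."""
    slope = minmod((u_center - u_left) / dx, (u_right - u_center) / dx)
    return u_center - 0.5 * dx * slope, u_center + 0.5 * dx * slope

print(reconstruct(1.0, 2.0, 3.0))   # smooth data: slope kept
print(reconstruct(1.0, 2.0, 1.0))   # extremum: slope zeroed (no new extrema)
```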
Rajabi, A; Dabiri, A
2012-01-01
Activity Based Costing (ABC) is one of the costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. The cost price obtained by the ABC method differs significantly from that of the tariff method. In addition, the high amount of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services under the tariff method is not properly calculated when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services.
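As a hedged sketch of the allocation step described above, the snippet below distributes an administrative activity center's cost to downstream departments in proportion to a cost driver and then derives a unit cost price per service; all figures, department names, and the driver choice (staff hours) are hypothetical, not taken from the study.

```python
# Hypothetical figures for illustrating ABC-style cost allocation.
admin_cost = 120_000.0
driver = {"diagnostic": 3_000, "hospitalized": 9_000}  # staff hours (cost driver)
total_driver = sum(driver.values())

# Allocate the administrative cost in proportion to the driver.
allocated = {dept: admin_cost * h / total_driver for dept, h in driver.items()}

direct_cost = {"diagnostic": 80_000.0, "hospitalized": 400_000.0}
services_delivered = {"diagnostic": 10_000, "hospitalized": 4_000}

# Unit cost price = (direct + allocated indirect) / number of services.
unit_cost = {dept: (direct_cost[dept] + allocated[dept]) / services_delivered[dept]
             for dept in direct_cost}
```

With these numbers, the diagnostic department absorbs a quarter of the administrative cost, which is how ABC surfaces indirect costs that a fixed tariff would hide.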
Controlling DC permeability in cast steels
NASA Astrophysics Data System (ADS)
Sumner, Aaran; Gerada, Chris; Brown, Neil; Clare, Adam
2017-05-01
Annealing (at multiple cooling rates) and quenching (with tempering) were performed on specimens of cast steel of varying composition. The aim was to devise a method for selecting the steel with the highest permeability from any given range of steels, and then increasing the permeability by heat treatment. Metallographic samples were imaged using optical microscopy to show the effect of the applied heat treatments on the microstructure. Commonly cast steels can have their DC permeability altered by the careful selection of a heat treatment. Increases of up to 381% were achieved by annealing using a cooling rate of 6.0 °C/min. Annealing was found to cause the carbon present in the steel to migrate from grain boundaries and from within ferrite crystals into adjacent pearlite crystals. This migration left less carbon at grain boundaries and within ferrite crystals, reducing the number of pinning sites between magnetic domains and giving rise to a higher permeability. Quenching followed by tempering was found to cause the formation of small ferrite crystals, with the carbon content of the steel predominantly held in martensitic crystal structures. The results show that within any given range of steel compositions, the highest baseline DC permeability will be found in the steel with the highest iron content and the lowest carbon content. For the samples tested in this paper, a cooling rate of 4.5 °C/min increased the relative permeability of the sample with the highest baseline permeability, AS4, from 783 to 1479 at 0.5 T. This paper shows how heat treatments commonly applied to hypoeutectoid cast steels to improve their mechanical performance can also be used to enhance the electromagnetic properties of these alloys. The use of cast steels allows the creation of DC components for electrical machines that are not possible with the widely used method of stacking electrical-grade sheet steels.
Campbell, Jeremy B; Newson, Steve
2013-02-26
Embodiments of DC source assemblies of power inverter systems of the type suitable for deployment in a vehicle having an electrically grounded chassis are provided. An embodiment of a DC source assembly comprises a housing, a DC source disposed within the housing, a first terminal, and a second terminal. The DC source also comprises a first capacitor having a first electrode electrically coupled to the housing, and a second electrode electrically coupled to the first terminal. The DC source assembly further comprises a second capacitor having a first electrode electrically coupled to the housing, and a second electrode electrically coupled to the second terminal.
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time domain input/output data was utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed to be known. Continuous-time system transfer function matrix parameters were estimated in real-time by the least-squares method. Simulation results of experimentally determined system transfer function matrix compare very well with the experimental results. For comparison and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
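A minimal sketch of the least-squares step, under simplifying assumptions: the paper estimates continuous-time transfer function matrix parameters in real time, whereas this example fits a scalar discrete-time ARX model y[k] = a·y[k-1] + b·u[k-1] by solving the 2×2 normal equations, just to show the machinery. The system (a=0.9, b=0.5) and input signal are invented for illustration.

```python
def estimate(u, y):
    # Solve the 2x2 normal equations sum(phi*phi^T) theta = sum(phi*y)
    # for regressor phi = [y[k-1], u[k-1]] and theta = [a, b].
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(1, len(y)):
        p1, p2 = y[k - 1], u[k - 1]
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        r1 += p1 * y[k]; r2 += p2 * y[k]
    det = s11 * s22 - s12 * s12
    a = (s22 * r1 - s12 * r2) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# Noise-free data from a known system is recovered exactly.
u = [1.0 if k % 7 < 3 else 0.0 for k in range(50)]  # persistently exciting input
y = [0.0]
for k in range(1, 50):
    y.append(0.9 * y[k - 1] + 0.5 * u[k - 1])
a_hat, b_hat = estimate(u, y)
```

With measurement noise, the same normal-equation solve yields the least-squares estimate rather than an exact recovery, and in practice a recursive variant is used for real-time operation.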
NASA Astrophysics Data System (ADS)
Li, Jinliang; Wang, Zhigang; Lei, Dongyun; Yang, Miao; Xia, Zengming; Xiang, Yun; Yan, Yu; Liu, Zhiguo
2017-07-01
A ±800 kV ultra-high-voltage DC (UHVDC) transmission line is hard to repair in the event of a power outage once it has been put into operation, so live working is essential to its security and stability. Duplex L-type insulator strings are widely used in UHVDC small-angle towers, and both the tower structure and the arrangement of the insulator strings differ from the V-type. To ensure the security and stability of the transmission line, this paper presents a method for the live replacement of duplex L-type insulators based on the actual parameters of the UHVDC L-type tower, and develops complementary supporting apparatus through mechanical analysis. To ensure the safety of the working staff, the safe distance and combined gap of the hanging-basket method are checked. Safety precautions are also proposed based on finite-element calculations of the body-surface field strength of persons performing live working. Field application shows that the task of live replacing the L-type insulator strings can be accomplished successfully by the method put forward in this paper, which provides a reference for subsequent live working on UHV transmission lines.
NASA Astrophysics Data System (ADS)
Nurmukhanbetova, A. K.; Goldberg, V. Z.; Nauruzbayev, D. K.; Rogachev, G. V.; Golovkov, M. S.; Mynbayev, N. A.; Artemov, S.; Karakhodjaev, A.; Kuterbekov, K.; Rakhymzhanov, A.; Berdibek, Zh.; Ivanov, I.; Tikhonov, A.; Zherebchevsky, V. I.; Torilov, S. Yu.; Tribble, R. E.
2017-03-01
To study resonance reactions of heavy ions at low energy, we have combined the Thick Target Inverse Kinematics method (TTIK) with the Time-of-Flight method (TF). We used an extended target and TF to resolve the identification problems of the various possible nuclear processes inherent to the simplest popular version of TTIK. Investigations of the 15N interaction with hydrogen and helium gas targets using this new approach are presented.
Is the 'LEMON' method an easily applied emergency airway assessment tool?
Reed, Matthew J; Rennie, Louise M; Dunn, Mark J G; Gray, Alasdair J; Robertson, Colin E; McKeown, Dermot W
2004-06-01
To assess whether the 'LEMON' method, devised by the developers of the US National Emergency Airway Management Course, is an easily applied airway assessment tool in patients undergoing treatment in the emergency department resuscitation room. One hundred patients treated in the resuscitation room of a UK teaching hospital between June 2002 and January 2003 were assessed on criteria based on the 'LEMON' method. All seven criteria of the 'Look' section of the method could be adequately assessed. Data for the 'Evaluate' section could not be obtained in 10 patients, with inter-incisor distance being the most problematical item. The 'Mallampati' score was unavailable in 43 patients, and had to be assessed in the supine position in 32 of the remaining 57 patients. Assessment for airway 'Obstruction' and 'Neck mobility' could be performed in all patients. The 'Look', 'Obstruction' and 'Neck mobility' components of the 'LEMON' method are the easiest to assess in patients undergoing treatment in the emergency department resuscitation room. The 'Evaluate' and 'Mallampati' components are less easily applied to the population that presents to the resuscitation room, and assessment of these is more problematical and prone to inaccuracy. We suggest that the 'LEMON' airway assessment method may not be easily applied in its entirety to unselected resuscitation room patients, and that information on the 'Evaluate' and 'Mallampati' parameters may not always be available.
An applied study using systems engineering methods to prioritize green systems options
Lee, Sonya M; Macdonald, John M
2009-01-01
For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform best over time? All of this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
Unique method of treatment for exotropia applying low-energy helium-neon laser
NASA Astrophysics Data System (ADS)
Bessmertnaya, Valentina
1995-01-01
Orthoptic treatment for exotropia applying spheroprismatic correction and low-energy helium-neon laser stimulation possesses a series of advantages over surgical treatment. Prismatic correction has already been applied for exotropia and has proven quite effective in treating the disease and its minor complications. But in more severe cases, when exotropia is accompanied by hyperphoria exceeding 3 pr Dptr and cyclotropia, the prismatic correction method is not helpful enough. To cure the most complicated cases of exotropia, a low-energy helium-neon laser was successfully used for the first time as the only means capable of eliminating hypertropia and cyclotropia. The novelty and high efficiency of the method enable ophthalmologists to approach concomitant squint not as a muscular deterioration of the eye but as a physiological reaction of the visual analyzer to suppress diplopia. Thus the method eliminates the cause of squint.
Method of applying coatings to substrates and the novel coatings produced thereby
Hendricks, C.D.
1987-09-15
A method for applying novel coatings to substrates is provided. The ends of a multiplicity of rods of different materials are melted by focused beams of laser light. Individual electric fields are applied to each of the molten rod ends, thereby ejecting charged particles that include droplets, atomic clusters, molecules, and atoms. The charged particles are separately transported, by the accelerations provided by electric potentials produced by an electrode structure, to substrates where they combine and form the coatings. Layered and thickness-graded coatings, comprised of hitherto unavailable compositions, are provided. 2 figs.
Method of applying a cerium diffusion coating to a metallic alloy
Jablonski, Paul D [Salem, OR; Alman, David E [Benton, OR
2009-06-30
A method of applying a cerium diffusion coating to a preferred nickel base alloy substrate has been discovered. A cerium oxide paste containing a halide activator is applied to the polished substrate and then dried. The workpiece is heated in a non-oxidizing atmosphere to diffuse cerium into the substrate. After cooling, any remaining cerium oxide is removed. The resulting cerium diffusion coating on the nickel base substrate demonstrates improved resistance to oxidation. Cerium coated alloys are particularly useful as components in a solid oxide fuel cell (SOFC).
NASA Astrophysics Data System (ADS)
Watanabe, T.; Seto, K.; Toyoda, H.; Takano, T.
2016-09-01
Connected Control Method (CCM) is a well-known mechanism in the field of civil structural vibration control that utilizes the mutual reaction forces between plural buildings connected by dampers as damping forces. However, the fact that CCM requires at least two buildings to obtain a reaction force has prevented its further development. In this paper, a novel idea for applying CCM to a single building by splitting the building into four substructures is presented. An experimental model structure split into four parts is built, and CCM is applied using four magnetic dampers. Experimental analysis is carried out, and the basic performance and effectiveness of the presented idea are confirmed.
NASA Technical Reports Server (NTRS)
Malevsky, A. V.; Yuen, D. A.
1991-01-01
Characteristics-based methods for the advection-diffusion equation are presented and directly applied to study thermal convection with extremely large Rayleigh number (Ra). It is shown that the operator-splitting method for advection-diffusion problems is very accurate for determining the advected field at extremely high Peclet number (Pe). The technique presented is considered to have great potential for solving advection-dominated problems, while the Lagrangian method is more accurate for lower Pe. It is noted that the accuracy of these characteristics-based methods strongly depends on the quality of interpolation. The computational time for the operator-splitting method grows with the number of time steps employed. The Lagrangian method was used for simulations of convection at very high Ra, up to 3 × 10^9, and time-dependent thermal convection solutions were obtained for infinite Prandtl number.
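One split step of the operator-splitting approach can be sketched for a 1D advection-diffusion equation u_t + c·u_x = D·u_xx: a semi-Lagrangian advection substep that interpolates at departure points (the abstract stresses that interpolation quality governs accuracy), followed by an explicit diffusion substep. This is an illustrative scheme on a periodic grid, not the authors' implementation.

```python
def step(u, c, D, dt, dx):
    # One operator-split time step on a periodic 1D grid.
    n = len(u)
    # Advection: trace each characteristic back to its departure point
    # and linearly interpolate the field there.
    adv = [0.0] * n
    for i in range(n):
        x = i - c * dt / dx          # departure point in index units
        j = int(x // 1)              # lower grid index (floor)
        w = x - j                    # interpolation weight
        adv[i] = (1 - w) * u[j % n] + w * u[(j + 1) % n]
    # Diffusion: explicit centered differences on the advected field.
    return [adv[i] + D * dt / dx**2 * (adv[(i + 1) % n] - 2 * adv[i] + adv[(i - 1) % n])
            for i in range(n)]
```

When c·dt/dx is an integer and D = 0, the semi-Lagrangian step reduces to an exact shift, and the periodic diffusion substep conserves the total of u, which makes the pieces easy to check in isolation.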
The multi-configuration electron-nuclear dynamics method applied to LiH.
Ulusoy, Inga S; Nest, Mathias
2012-02-07
The multi-configuration electron-nuclear dynamics (MCEND) method is a nonadiabatic quantum dynamics approach to the description of molecular processes. MCEND is a combination of the multi-configuration time-dependent Hartree (MCTDH) method for atoms and its antisymmetrized equivalent MCTDHF for electrons. The purpose of this method is to simultaneously describe nuclear and electronic wave packets in a quantum dynamical way, without the need to calculate potential energy surfaces and diabatic coupling functions. In this paper we present first exemplary calculations of MCEND applied to the LiH molecule, and discuss computational and numerical details of our implementation.
Non-invasive imaging methods applied to neo- and paleontological cephalopod research
NASA Astrophysics Data System (ADS)
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2013-11-01
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to and contribute to current topics in cephalopod (paleo-)biology. The different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology, and biology of extinct and extant cephalopods.
Smart, JC
2016-01-01
Background The National HIV/AIDS Strategy calls for active surveillance programs for human immunodeficiency virus (HIV) to more accurately measure access to and retention in care across the HIV care continuum for persons living with HIV within their jurisdictions and to identify persons who may need public health services. However, traditional public health surveillance methods face substantial technological and privacy-related barriers to data sharing. Objective This study developed a novel data-sharing approach to improve the timeliness and quality of HIV surveillance data in three jurisdictions where persons may often travel across the borders of the District of Columbia, Maryland, and Virginia. Methods A deterministic algorithm of approximately 1000 lines was developed, including a person-matching system with Enhanced HIV/AIDS Reporting System (eHARS) variables. Person matching was defined in categories (from strongest to weakest): exact, very high, high, medium high, medium, medium low, low, and very low. The algorithm was verified using conventional component testing methods, manual code inspection, and comprehensive output file examination. Results were validated by jurisdictions using internal review processes. Results Of 161,343 uploaded eHARS records from District of Columbia (N=49,326), Maryland (N=66,200), and Virginia (N=45,817), a total of 21,472 persons were matched across jurisdictions over various strengths in a matching process totaling 21 minutes and 58 seconds in the privacy device, leaving 139,871 uniquely identified with only one jurisdiction. No records matched as medium low or low. Over 80% of the matches were identified as either exact or very high matches. Three separate validation methods were conducted for this study, and they all found ≥90% accuracy between records matched by this novel method and traditional matching methods. Conclusions This study illustrated a novel data-sharing approach that may facilitate timelier and better
Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem
Alchalabi, R.M.; Turinsky, P.J.
1996-12-31
The work presented in this paper is concerned with the development of an efficient MG algorithm for the solution of an elliptic, generalized eigenvalue problem. The application is specifically applied to the multigroup neutron diffusion equation which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the (Outer-Inner Method). The inner iterations are completed using Multi-color Line SOR, and the outer iterations are accelerated using Chebyshev Semi-iterative Method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production type benchmark problems.
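The outer (power) iteration for the generalized eigenvalue problem can be sketched as below, solving M·φ = (1/k)·F·φ on a tiny 2×2 example. This is a hedged illustration: the direct 2×2 solve stands in for the paper's inner multi-color line SOR iterations, Chebyshev acceleration and the multigrid cycle are omitted, and the matrices are invented.

```python
def solve2(M, b):
    # Direct 2x2 solve standing in for the inner iterations.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * b[0] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def power_iteration(M, F, iters=100):
    # Outer iteration: phi <- M^{-1} (F phi / k), k updated from source ratios.
    phi, k = [1.0, 1.0], 1.0
    for _ in range(iters):
        src = [F[i][0] * phi[0] + F[i][1] * phi[1] for i in range(2)]
        new = solve2(M, [s / k for s in src])
        new_src = [F[i][0] * new[0] + F[i][1] * new[1] for i in range(2)]
        k *= sum(new_src) / sum(src)       # eigenvalue update
        norm = max(abs(v) for v in new)
        phi = [v / norm for v in new]      # normalize the flux estimate
    return k, phi
```

For M = diag(2, 1) and F all-ones, M⁻¹F has eigenvalues 1.5 and 0, so the iteration converges to k = 1.5 with eigenvector proportional to (0.5, 1).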
Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma
Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.
2015-01-15
An optimization method based on a physical analysis of the temperature profile and the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% with respect to an exact solution, whereas the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.
A Study on Railway Signal System to Apply CDMA-QAM Method
NASA Astrophysics Data System (ADS)
Mochizuki, Hiroshi; Asano, Akira; Sano, Minoru; Takahashi, Sei; Nakamura, Hideo
Recently, QAM (Quadrature Amplitude Modulation) has attracted attention among digital modulation methods and has been used for wireless LAN and digital broadcasting. QAM is a modulation method that carries information on the carrier's amplitude and phase, so it can achieve high-capacity data transmission with good frequency-band efficiency. However, QAM is vulnerable to noise and interference because the distance between symbols is short. We therefore propose a method in which each QAM symbol carries not the raw transmission data but a CDMA signal, obtained by modulating the transmission data with spreading codes and multiplexing. We studied a railway signalling system on a track circuit as an example application of this method. Computer simulations verified that an improvement in signal-to-noise ratio can be achieved by optimizing the QAM constellation map. We also report a method of synchronization acquisition.
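A hedged sketch of the proposed idea: data bits are spread by orthogonal codes, the chips are summed (CDMA multiplexing), and the resulting multilevel chip values would then drive the amplitudes of the transmitted symbols. Length-4 Walsh codes are used here as an example spreading-code set; the actual codes and the optimized constellation mapping from the paper are not reproduced.

```python
# Length-4 Walsh codes: mutually orthogonal spreading sequences.
walsh = [[1, 1, 1, 1],
         [1, -1, 1, -1],
         [1, 1, -1, -1],
         [1, -1, -1, 1]]

def multiplex(bits):
    # One bipolar data bit per user, spread by that user's Walsh code
    # and summed chip-by-chip (CDMA multiplexing).
    symbols = [1 if b else -1 for b in bits]
    return [sum(s * walsh[u][c] for u, s in enumerate(symbols))
            for c in range(4)]

def despread(chips, user):
    # Correlate with the user's code to recover that user's bit.
    corr = sum(ch * w for ch, w in zip(chips, walsh[user]))
    return corr > 0
```

The multilevel chip sequence is exactly the kind of signal the paper maps onto QAM symbol amplitudes; orthogonality of the codes is what lets each bit be recovered by correlation at the receiver.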
Maji, Amal K.; Maity, Niladri; Banerji, Pratim; Banerjee, Debdulal
2012-01-01
Background: Pueraria tuberosa (Fabaceae) is a well-known medicinal herb used in Indian traditional medicines. Puerarin is one of the most important bioactive constituents found in the tubers of this plant. Quantitative estimation of bioactive molecules is essential for the quality control and dose determination of herbal medicines. The study was designed to develop a validated reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of puerarin in the tuber extract of P. tuberosa. Materials and Methods: An RP-HPLC system with a Luna C18 (2) 100 Å, 250 × 4.6 mm column was used in this study. The analysis was performed using a mobile phase of 0.1% acetic acid in acetonitrile and 0.1% acetic acid in water (90:10, v/v) at a column temperature of 25°C. The detection wavelength was set at 254 nm with a flow rate of 1 ml/min. Method validation was performed according to the guidelines of the International Conference on Harmonization. Results: The puerarin content of the P. tuberosa extract was found to be 9.28 ± 0.09%. The calibration curve showed a good linear relationship in the range of 200-1000 μg/ml (r² > 0.99). The LOD and LOQ were 57.12 and 181.26 μg/ml, respectively, and the average recovery of puerarin was 99.73 ± 1.02%. The evaluation of system suitability, precision, robustness, and ruggedness parameters also produced satisfactory results. Conclusions: The developed method is simple and rapid, with excellent specificity, accuracy, and precision, and can be useful for routine analysis and quantitative estimation of puerarin in plant extracts and formulations. PMID:23781483
Ocampo, Joanne Michelle F; Smart, J C; Allston, Adam; Bhattacharjee, Reshma; Boggavarapu, Sahithi; Carter, Sharon; Castel, Amanda D; Collmann, Jeff; Flynn, Colin; Hamp, Auntré; Jordan, Diana; Kassaye, Seble; Kharfen, Michael; Lum, Garret; Pemmaraju, Raghu; Rhodes, Anne; Stover, Jeff; Young, Mary A
2016-01-01
The National HIV/AIDS Strategy calls for active surveillance programs for human immunodeficiency virus (HIV) to more accurately measure access to and retention in care across the HIV care continuum for persons living with HIV within their jurisdictions and to identify persons who may need public health services. However, traditional public health surveillance methods face substantial technological and privacy-related barriers to data sharing. This study developed a novel data-sharing approach to improve the timeliness and quality of HIV surveillance data in three jurisdictions where persons may often travel across the borders of the District of Columbia, Maryland, and Virginia. A deterministic algorithm of approximately 1000 lines was developed, including a person-matching system with Enhanced HIV/AIDS Reporting System (eHARS) variables. Person matching was defined in categories (from strongest to weakest): exact, very high, high, medium high, medium, medium low, low, and very low. The algorithm was verified using conventional component testing methods, manual code inspection, and comprehensive output file examination. Results were validated by jurisdictions using internal review processes. Of 161,343 uploaded eHARS records from District of Columbia (N=49,326), Maryland (N=66,200), and Virginia (N=45,817), a total of 21,472 persons were matched across jurisdictions over various strengths in a matching process totaling 21 minutes and 58 seconds in the privacy device, leaving 139,871 uniquely identified with only one jurisdiction. No records matched as medium low or low. Over 80% of the matches were identified as either exact or very high matches. Three separate validation methods were conducted for this study, and they all found ≥90% accuracy between records matched by this novel method and traditional matching methods. This study illustrated a novel data-sharing approach that may facilitate timelier and better quality HIV surveillance data for public health
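The deterministic matching algorithm itself (roughly 1000 lines) is not reproduced here, so the sketch below only illustrates the shape of tiered person matching on eHARS-like fields; the field names and the rules assigning a few of the strength categories are hypothetical, not the study's actual criteria.

```python
def match_strength(a, b):
    # Compare two person records field-by-field and return a tiered
    # match-strength category, from strongest to weakest.
    fields = ("last_name", "first_name", "dob", "ssn_last4")
    agree = [f for f in fields if a.get(f) and a.get(f) == b.get(f)]
    if len(agree) == len(fields):
        return "exact"
    if "ssn_last4" in agree and "dob" in agree:
        return "very high"
    if "dob" in agree and ("last_name" in agree or "first_name" in agree):
        return "high"
    if len(agree) >= 1:
        return "medium"
    return "no match"
```

A real deterministic algorithm of this kind layers many more rules (phonetic name codes, transposed dates, partial identifiers) to populate the full set of categories from "exact" down to "very low".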
NASA Astrophysics Data System (ADS)
Langer, Stefan
2014-11-01
For unstructured finite volume methods an agglomeration multigrid with an implicit multistage Runge-Kutta method as a smoother is developed for solving the compressible Reynolds averaged Navier-Stokes (RANS) equations. The implicit Runge-Kutta method is interpreted as a preconditioned explicit Runge-Kutta method. The construction of the preconditioner is based on an approximate derivative. The linear systems are solved approximately with a symmetric Gauss-Seidel method. To significantly improve this solution method grid anisotropy is treated within the Gauss-Seidel iteration in such a way that the strong couplings in the linear system are resolved by tridiagonal systems constructed along these directions of strong coupling. The agglomeration strategy is adapted to this procedure by taking into account exactly these anisotropies in such a way that a directional coarsening is applied along these directions of strong coupling. Turbulence effects are included by a Spalart-Allmaras model, and the additional transport-type equation is approximately solved in a loosely coupled manner with the same method. For two-dimensional and three-dimensional numerical examples and a variety of differently generated meshes we show the wide range of applicability of the solution method. Finally, we exploit the GMRES method to determine approximate spectral information of the linearized RANS equations. This approximate spectral information is used to discuss and compare characteristics of multistage Runge-Kutta methods.
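The symmetric Gauss-Seidel solve used inside the smoother can be sketched in a few lines. This minimal dense-matrix version omits the tridiagonal line treatment of anisotropic couplings and is an illustration of the basic sweep, not the paper's implementation.

```python
def sgs_sweep(A, b, x):
    # One symmetric Gauss-Seidel sweep for A x = b:
    # a forward sweep over the unknowns followed by a backward sweep.
    n = len(b)
    for order in (range(n), range(n - 1, -1, -1)):
        for i in order:
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

On a diagonally dominant system the iteration converges quickly; in the paper's setting a few such sweeps per stage approximate the preconditioner solve rather than solving the linear system exactly.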
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
The role of applied epidemiology methods in the disaster management cycle.
Malilay, Josephine; Heumann, Michael; Perrotta, Dennis; Wolkin, Amy F; Schnall, Amy H; Podgornik, Michelle N; Cruz, Miguel A; Horney, Jennifer A; Zane, David; Roisman, Rachel; Greenspan, Joel R; Thoroughman, Doug; Anderson, Henry A; Wells, Eden V; Simms, Erin F
2014-11-01
Disaster epidemiology (i.e., applied epidemiology in disaster settings) presents a source of reliable and actionable information for decision-makers and stakeholders in the disaster management cycle. However, epidemiological methods have yet to be routinely integrated into disaster response and fully communicated to response leaders. We present a framework consisting of rapid needs assessments, health surveillance, tracking and registries, and epidemiological investigations, including risk factor and health outcome studies and evaluation of interventions, which can be practiced throughout the cycle. Applying each method can result in actionable information for planners and decision-makers responsible for preparedness, response, and recovery. Disaster epidemiology, once integrated into the disaster management cycle, can provide the evidence base to inform and enhance response capability within the public health infrastructure.
NASA Astrophysics Data System (ADS)
Romagnoli, Francesco; Blumberga, Dagnija
2010-01-01
Geophysical methods are applied to determine a spatial model of the underground, to locate fault zones, to investigate the regional groundwater system, or to derive lithological parameters. In other words, applied geophysics provides the distribution of physical parameters of the subsurface through surveys at the earth's surface without destroying soil formations. In Latvia there are few companies with expertise in geophysical surveys. Thus, it is important that environmental experts have knowledge in this area, and that Latvia, as a country, has access to international expertise. In the study introduced here, we compared the geophysics course at Riga Technical University (RTU) with similar courses at eight European universities and one Latvian university, gathering information through websites and/or personal contacts. We collected information about the duration of the courses, learning objectives, topics, teaching methods, credit points, etc. As a result, proposals on how to improve the course at RTU were elaborated.
The Role of Applied Epidemiology Methods in the Disaster Management Cycle
Heumann, Michael; Perrotta, Dennis; Wolkin, Amy F.; Schnall, Amy H.; Podgornik, Michelle N.; Cruz, Miguel A.; Horney, Jennifer A.; Zane, David; Roisman, Rachel; Greenspan, Joel R.; Thoroughman, Doug; Anderson, Henry A.; Wells, Eden V.; Simms, Erin F.
2014-01-01
Disaster epidemiology (i.e., applied epidemiology in disaster settings) presents a source of reliable and actionable information for decision-makers and stakeholders in the disaster management cycle. However, epidemiological methods have yet to be routinely integrated into disaster response and fully communicated to response leaders. We present a framework consisting of rapid needs assessments, health surveillance, tracking and registries, and epidemiological investigations, including risk factor and health outcome studies and evaluation of interventions, which can be practiced throughout the cycle. Applying each method can result in actionable information for planners and decision-makers responsible for preparedness, response, and recovery. Disaster epidemiology, once integrated into the disaster management cycle, can provide the evidence base to inform and enhance response capability within the public health infrastructure. PMID:25211748
Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.
Lestelle, Lawrence C.; Mobrand, Lars E.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.
REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.
UMEDA, T.; MATSUFURU, H.
2005-07-25
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests that one should carry out to ensure the reliability of the results, and we apply them using mock and lattice QCD data.
Maji, Amal K; Maity, Niladri; Banerji, Pratim; Banerjee, Debdulal
2012-07-01
Pueraria tuberosa (Fabaceae) is a well-known medicinal herb used in Indian traditional medicines. Puerarin is one of the most important bioactive constituents found in the tubers of this plant. Quantitative estimation of bioactive molecules is essential for quality control and dose determination of herbal medicines. This study was designed to develop a validated reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of puerarin in the tuber extract of P. tuberosa. An RP-HPLC system with a Luna C18 (2) 100 Å, 250 × 4.6 mm column was used. The analysis was performed using a mobile phase of 0.1% acetic acid in acetonitrile and 0.1% acetic acid in water (90:10, v/v) at a column temperature of 25°C. The detection wavelength was set at 254 nm with a flow rate of 1 ml/min. Method validation was performed according to the guidelines of the International Conference on Harmonization. The puerarin content of the P. tuberosa extract was found to be 9.28 ± 0.09%. The calibration curve showed good linearity in the range of 200-1000 µg/ml (r² > 0.99). The LOD and LOQ were 57.12 and 181.26 µg/ml, respectively, and the average recovery of puerarin was 99.73 ± 1.02%. The evaluations of system suitability, precision, robustness, and ruggedness also produced satisfactory results. The developed method is simple and rapid, with excellent specificity, accuracy, and precision, and can be useful for routine analysis and quantitative estimation of puerarin in plant extracts and formulations.
NASA Astrophysics Data System (ADS)
Nayan, Niraj; Narayana Murty, S. V. S.; Govind; Mittal, M. C.; Sinha, P. P.
2010-09-01
Heat treatment modes for homogenizing the cast structure and improving the processibility of alloy AA2014 are determined by empirical methods with the use of light microscopy. The annealing time and temperature that minimize dendritic segregation and dissolve particles of the secondary phase are chosen for a typical commercial ingot. Homogenization at 500°C for 26 h is chosen for alloy AA2014 (the Russian counterpart is AK8) with a mean grain size of 200 μm.
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology, and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to constraints. The method has been applied to the optimization design of an axisymmetric diverging duct with three design variables, including one qualitative variable and two quantitative variables. The modeling and optimization method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
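The final stage described above, running a genetic algorithm over a fitted surrogate model, can be sketched compactly. This is an illustrative toy, not the authors' implementation: the one-variable quadratic `surrogate` stands in for a fitted response-surface model, and all population sizes and rates are arbitrary assumptions.

```python
import random

def surrogate(x):
    # hypothetical fitted response-surface model (stand-in for an RSM fit)
    return (x - 2.0) ** 2 + 1.0

def genetic_minimize(f, lo, hi, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # tournament selection: keep the fitter of two random individuals
            a, b = rng.choice(pop), rng.choice(pop)
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            w = rng.random()                          # blend crossover
            child = w * p1 + (1.0 - w) * p2
            if rng.random() < 0.1:                    # mutation
                child += rng.gauss(0.0, 0.3)
            children.append(min(max(child, lo), hi))  # enforce bound constraints
        pop = children
    return min(pop, key=f)

best = genetic_minimize(surrogate, -5.0, 5.0)
```

With a multivariate surrogate the same loop applies, with the scalar blend crossover replaced by a per-component one.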
[New methods of treatment applied in the hospital of Sochi during the Great Patriotic War].
Artiukhov, S A
2013-05-01
During the Great Patriotic War of 1941-1945, Sochi was turned into the largest hospital base in the south of the USSR. All told, 335 thousand wounded and seriously ill soldiers were treated in the hospitals of Sochi. During the war, physicians applied many new, including previously unknown, medical methods of treatment. Poor provision with medical equipment, instruments, bandages, and medicines was compensated for by the use of local resources. The adoption of new treatment methods based on the use of local medicines allowed Sochi's physicians to save many lives during the war.
Comparison of Computational Methods Applied to Oxazole, Thiazole, and Other Heterocyclic Compounds
1993-01-01
Computational methods, including 6-31G* and MP2/6-31G* ab initio calculations, were performed for the oxazole and thiazole heterocycles. The results indicate a scatter among the methods sensitive to the nature of the heterocycle. This was particularly evident in the oxazole molecule, where AM1 gave a singularly high value of...
Machine Learning Method Applied in Readout System of Superheated Droplet Detector
NASA Astrophysics Data System (ADS)
Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco
2017-07-01
Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Exploiting this characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep-learning neural network and support vector machine algorithms were applied and compared with the commonly used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.
Single-cell volume estimation by applying three-dimensional reconstruction methods
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
We have studied three-dimensional reconstruction methods to estimate the cell volume of astroglial cells in primary culture. The studies are based on fluorescence imaging and optical sectioning. An automated image-acquisition system was developed to collect two-dimensional microscopic images. Images were reconstructed by the linear Maximum a Posteriori method and the nonlinear Maximum Likelihood Expectation Maximization (ML-EM) method. In addition, because of the high computational demand of the ML-EM algorithm, we developed a fast variant of this method. Advanced image analysis techniques were applied for accurate and automated cell volume determination, and the sensitivity and accuracy of the reconstruction methods were evaluated using fluorescent micro-beads of known diameter. The algorithms were applied to fura-2-labeled astroglial cells in primary culture exposed to hypo- or hyper-osmotic stress. The results showed that the ML-EM reconstructed images are adequate for the determination of volume changes in cells or parts thereof.
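The nonlinear ML-EM reconstruction mentioned above is, in one dimension, the multiplicative Richardson-Lucy update. The sketch below is a minimal illustration, not the authors' code: a hypothetical point source (a "bead") is blurred by a known PSF and then recovered by iterating the ML-EM correction.

```python
def convolve(signal, kernel):
    # 'same'-size convolution with zero padding
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                s += signal[k] * kernel[j]
        out.append(s)
    return out

def ml_em(observed, psf, iterations=200):
    # Richardson-Lucy / ML-EM iteration for nonnegative deconvolution:
    # x <- x * H^T(y / Hx), starting from a flat estimate
    x = [1.0] * len(observed)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(x, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_flipped)
        x = [xi * c for xi, c in zip(x, correction)]
    return x

# simulate a single bright "bead" blurred by a normalized PSF
truth = [0.0] * 15
truth[7] = 10.0
psf = [0.25, 0.5, 0.25]
observed = convolve(truth, psf)
estimate = ml_em(observed, psf)
```

The multiplicative update keeps the estimate nonnegative and, with a normalized PSF, approximately conserves total intensity, which is why it suits photon-counting data.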
Applying simulation model to uniform field space charge distribution measurements by the PEA method
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method, because surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is proposed. This paper presents the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation, the charge distribution originated by the simulation model is compared with that obtained by the direct method for a set of simulated signals.
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines, as decision support systems (DSS), attempt to increase task performance and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
Lessons learned applying CASE methods/tools to Ada software development projects
NASA Technical Reports Server (NTRS)
Blumberg, Maurice H.; Randall, Richard L.
1993-01-01
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
Xie, Zhaoheng; Li, Suying; Yang, Kun; Xu, Baixuan; Ren, Qiushi
2016-01-01
In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors, using information of related images. We build up an automated device that realizes the wobbling correction for small animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method is applied to various constellations of defective pixels. The corrected images are compared with the results of conventional interpolation method, and the correction effectiveness is evaluated quantitatively using the factor of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides a better image quality for correcting defective pixels, which could be used for all pixelated detectors for molecular imaging. PMID:27240368
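The PSNR figure of merit used above for evaluating bad-pixel correction has a compact definition: 10·log10(MAX²/MSE). The following sketch is illustrative only; the five-pixel "image" and the neighbor-interpolation correction are invented for the example (SSIM, the other metric mentioned, is more involved and not shown).

```python
import math

def psnr(reference, test_image, max_value=255.0):
    # peak signal-to-noise ratio (dB) between two equal-size images,
    # given here as flat lists of pixel values
    mse = sum((r - t) ** 2 for r, t in zip(reference, test_image)) / len(reference)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_value ** 2 / mse)

# toy 1-D "image" with one dead pixel, corrected by neighbor interpolation
reference = [10.0, 20.0, 35.0, 40.0, 50.0]
defective = [10.0, 20.0, 0.0, 40.0, 50.0]
corrected = [10.0, 20.0, (20.0 + 40.0) / 2.0, 40.0, 50.0]
```

A higher PSNR for the corrected image than for the defective one quantifies the benefit of the correction, which is how such metrics are typically used in comparisons like the one above.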
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-07
... Corporation Model DC-8-31, DC-8-32, DC-8-33, DC-8-41, DC-8-42, and DC-8-43 Airplanes; Model DC-8-50 Series Airplanes; Model DC-8F-54 and DC-8F-55 Airplanes; Model DC-8-60 Series Airplanes; Model DC-8-60F Series Airplanes; Model DC-8-70 Series Airplanes; and Model DC-8-70F Series Airplanes AGENCY:......
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O
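A minimal one-dimensional sketch of volume penalization for a Dirichlet condition (the Brinkman-type case discussed above) adds a forcing term -(χ/η)(u - u_target) to a diffusion equation; inside the obstacle (χ = 1) the solution is driven toward the boundary value, with the error controlled by the penalization parameter η. All numbers below are illustrative assumptions, not from the paper.

```python
def penalized_diffusion(n=50, eta=1e-4, t_end=0.1, u_obstacle=1.0):
    # 1-D heat equation u_t = u_xx on [0, 1] with a Brinkman-type penalty
    # -(chi/eta)(u - u_obstacle) forcing u toward the Dirichlet value inside
    # the obstacle region x > 0.7 (chi is the obstacle indicator function)
    dx = 1.0 / (n - 1)
    dt = 0.2 * dx * dx              # explicit-Euler diffusion stability
    xs = [i * dx for i in range(n)]
    u = [0.0] * n                   # initial condition; left boundary u = 0
    t = 0.0
    while t < t_end:
        step = min(dt, eta)         # penalty term is stiff: cap step by eta
        new = u[:]
        for i in range(1, n - 1):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
            chi = 1.0 if xs[i] > 0.7 else 0.0
            new[i] = u[i] + step * (lap - chi / eta * (u[i] - u_obstacle))
        new[-1] = u_obstacle        # right boundary lies inside the obstacle
        u = new
        t += step
    return xs, u
```

After a transient, the solution inside the obstacle sits within O(η) of the target value, illustrating the a priori error control the abstract describes for the penalization parameter.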
NASA Astrophysics Data System (ADS)
Ngkoimani, Laode; Saleh Isa, Laode; Jahidin; Syamsurizal; Asfar, Suryawan; Makkawaru, Andi
2017-05-01
The purpose of this study is to determine the coal distribution and layering in Tawanga Village, Eastern Kolaka, Southeast Sulawesi. Measurements were performed along three lines of 32 m, 64 m, and 80 m in length using the geoelectrical method. The data were then analyzed using the Res2Dinv software to determine the resistivity and rock layer structure. The resistivities ranged between 0.0913 and 438 Ωm, 0.552 and 344 Ωm, and 0.898 and 271 Ωm for the first, second, and third lines, respectively. Based on these results, the coal distribution was estimated: coal corresponds to resistivities of about 130-438 Ωm on the first line, 137-344 Ωm on the second line, and 120-271 Ωm on the third line, distributed at depths of 0.5-12.4 m from northwest to southeast.
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
Nonlinear Phenomena and Resonant Parametric Perturbation Control in QR-ZCS Buck DC-DC Converters
NASA Astrophysics Data System (ADS)
Hsieh, Fei-Hu; Liu, Feng-Shao; Hsieh, Hui-Chang
The purpose of this study is to investigate chaotic phenomena in current-mode controlled quasi-resonant zero-current-switching (QR-ZCS) DC-DC buck converters and to control the chaos by the resonant parametric perturbation method. First, MATLAB/SIMULINK is used to derive a mathematical model of the QR-ZCS DC-DC buck converter and to simulate it, observing the waveforms of the output voltage, the inductor current, and the phase-plane portraits from the period-doubling bifurcation to chaos as the load resistance is varied. Second, applying resonant parametric perturbation control to the QR-ZCS DC-DC buck converter, the simulation results show that the chaotic converter is brought from the chaotic state to a stable period-1 state and that the ripple amplitudes of the converter under chaos are improved, verifying the validity of the proposed method.
NASA Technical Reports Server (NTRS)
Cuk, Slobodan M. (Inventor); Middlebrook, Robert D. (Inventor)
1980-01-01
A dc-to-dc converter having nonpulsating input and output current uses two inductances, one in series with the input source, the other in series with the output load. An electrical energy transferring device with storage, namely a storage capacitance, is used with suitable switching means between the inductances to effect DC level conversion. For isolation between the source and load, the capacitance may be divided into two capacitors coupled by a transformer, and for reducing ripple, the inductances may be coupled. With proper design of the coupling between the inductances, the current ripple can be reduced to zero at either the input or the output, or the reduction achievable in that way may be divided between the input and output.
Simple way to apply nonlocal van der Waals functionals within all-electron methods
NASA Astrophysics Data System (ADS)
Tran, Fabien; Stelzl, Julia; Koller, David; Ruh, Thomas; Blaha, Peter
2017-08-01
The method based on fast Fourier transforms proposed by G. Román-Pérez and J. M. Soler [Phys. Rev. Lett. 103, 096102 (2009), 10.1103/PhysRevLett.103.096102], which allows for a computationally fast implementation of the nonlocal van der Waals (vdW) functionals, has significantly contributed to making the vdW functionals popular in solid-state physics. However, the Román-Pérez-Soler method relies on a plane-wave expansion of the electron density; therefore it cannot be applied readily to all-electron densities for which an unaffordable number of plane waves would be required for an accurate expansion. In this work, we present the results for the lattice constant and binding energy of solids that were obtained by applying a smoothing procedure to the all-electron density calculated with the linearized augmented plane-wave method. The smoothing procedure has the advantages of being very simple to implement, basis-set independent, and allowing the calculation of the potential. It is also shown that the results agree very well with those from the literature that were obtained with the projector augmented wave method.
Regulation of a lightweight high efficiency capacitor diode voltage multiplier dc-dc converter
NASA Technical Reports Server (NTRS)
Harrigill, W. T., Jr.; Myers, I. T.
1976-01-01
A method for the regulation of a capacitor diode voltage multiplier dc-dc converter has been developed which has only minor penalties in weight and efficiency. An auxiliary inductor is used, which only handles a fraction of the total power, to control the output voltage through a pulse width modulation method in a buck boost circuit.
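The abstract does not give the converter equations; as background, the ideal continuous-conduction-mode buck-boost relation that pulse-width modulation exploits is |Vout| = Vin · D/(1 − D), where D is the duty cycle. A minimal sketch of that textbook relation (an assumption here, not taken from the paper):

```python
def buck_boost_vout(v_in, duty):
    # ideal CCM buck-boost conversion ratio: |Vout| = Vin * D / (1 - D)
    return v_in * duty / (1.0 - duty)

def duty_for_target(v_in, v_out):
    # inverse relation: D = Vout / (Vin + Vout)
    return v_out / (v_in + v_out)
```

Because D/(1 − D) sweeps from 0 to infinity as D goes from 0 to 1, modulating the duty cycle lets one circuit both step the voltage down (D < 0.5) and up (D > 0.5).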
NASA Astrophysics Data System (ADS)
Eppeldauer, George P.; Yoon, Howard W.; Jarrett, Dean G.; Larason, Thomas C.
2013-10-01
For photocurrent measurements with low uncertainties, wide-dynamic-range reference current-to-voltage converters and a new converter calibration method have been developed at the National Institute of Standards and Technology (NIST). The high feedback resistors of a reference converter were calibrated in situ on a high-resistivity printed circuit board placed in an electrically shielded box and electrically isolated from the operational amplifier using jumpers. The feedback resistors, prior to their installation, were characterized, selected, and heat treated. The circuit board was cleaned with solvents, and the in situ resistors were calibrated using measurement systems for 10 kΩ to 10 GΩ standard resistors. We demonstrate that dc currents from 1 nA to 100 µA can be measured with uncertainties of 55 × 10⁻⁶ (k = 2) or lower, 10 to 30 times lower than those of any commercial device at the same current setting. The internal (NIST) validations of the reference converter are described.
Contextual filtering method applied to sub-bands of interferometric image decomposition
NASA Astrophysics Data System (ADS)
Belhadj-Aissa, S.; Hocine, F.; Boughacha, M. S.; Belhadj-Aissa, M.
2016-10-01
The precision and accuracy of digital elevation models and deformation measurements from SAR interferometry (InSAR/DInSAR) depend mainly on the quality of the interferogram. However, the phase noise, mainly due to decorrelation between the images and to speckle, makes the phase unwrapping step particularly delicate. In this paper, we propose a filtering method that combines sub-band decomposition techniques with nonlinear local weights. The spectral/contextual filter that we propose, inspired by the Goldstein filter, is applied to the sub-bands of the wavelet decomposition. To validate the results, we applied it to interferometric products of an ERS-1/ERS-2 tandem pair acquired over the region of Algiers, Algeria.
Validation of the Deterministic Realistic Method Applied to CATHARE on LB LOCA Experiments
Sauvage, Jean-Yves; Laroche, Stephane
2002-07-01
Framatome-ANP and EDF have defined a generic approach for using a best-estimate code in design basis accident studies, called the Deterministic Realistic Method (DRM). It has been applied to elaborate a LB LOCA ECCS evaluation model based on the CATHARE code. From a prior statistical analysis of uncertainties, the DRM derives a conservative deterministic model, preserving the realistic nature of the simulation, to be used in further applications. The conservatism of the penalized model is demonstrated by comparing penalized calculations with relevant experimental data. The DRM proved to be a highly flexible tool and has been applied successfully to meet the specific French and Belgian requirements of the Safety Authorities. (authors)
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
NASA Astrophysics Data System (ADS)
Hatten, Noble; Russell, Ryan P.
2016-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
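A serial, fixed-step sketch of one 2-stage Gauss-Legendre implicit Runge-Kutta step (order 4) illustrates the implicit stage equations that the parallel implementation above distributes across threads. This is an assumption-laden toy: the fixed-point stage solver below is adequate only for non-stiff problems, and none of the paper's variable-step or variable-fidelity machinery is reproduced.

```python
import math

# 2-stage Gauss-Legendre Butcher tableau (order 4)
S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0],
     [0.25 + S3 / 6.0, 0.25]]
B = [0.5, 0.5]
C = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]

def glirk_step(f, t, y, h):
    # solve the implicit stage equations
    #   k_i = f(t + c_i h, y + h * sum_j a_ij k_j)
    # by fixed-point iteration; the two stage evaluations per sweep are the
    # work a parallel implementation would assign to separate threads
    k = [f(t, y), f(t, y)]
    for _ in range(50):
        k = [f(t + C[i] * h, y + h * (A[i][0] * k[0] + A[i][1] * k[1]))
             for i in range(2)]
    return y + h * (B[0] * k[0] + B[1] * k[1])

def integrate(f, y0, t0, t1, steps):
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = glirk_step(f, t, y, h)
        t += h
    return y
```

For the linear test problem y' = -y the scheme reproduces e^{-t} to roughly machine-level accuracy at modest step counts, reflecting the fourth-order convergence of the 2-stage Gauss-Legendre method.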
A note on the accuracy of spectral method applied to nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Wong, Peter S.
1994-01-01
The Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by post-processing based on Gegenbauer polynomials.
Relativistic convergent close-coupling method applied to electron scattering from mercury
Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor
2010-08-15
We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds of the (6s6p) ³P₀,₁,₂ states. These cross sections are associated with the formation of negative-ion (Hg⁻) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and other relativistic theories.
Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei
2015-01-01
Research on brain-machine interfaces (BMI) has developed very rapidly in recent years. Numerous feature extraction methods have been successfully applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG-based BMI systems addressing the cognition of familiar human faces. In this work, we implemented and compared the classification performances of four common feature extraction methods, namely, common spatial patterns, principal component analysis, wavelet transform, and interval features. High-resolution EEG signals were collected from fifteen healthy subjects stimulated by equal numbers of familiar and novel faces. Principal component analysis outperforms the other methods, with an average classification accuracy reaching 94.2%, suggesting possible real-life applications. Our findings may thereby contribute to BMI systems for face recognition.
A reflective lens: applying critical systems thinking and visual methods to ecohealth research.
Cleland, Deborah; Wyborn, Carina
2010-12-01
Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Katzgraber, Helmut G.
2014-03-01
We study the thermodynamic properties of the two-dimensional Edwards-Anderson Ising spin-glass model on a square lattice using the tensor renormalization group method based on a higher-order singular-value decomposition. Our estimates of the internal energy per spin agree very well with high-precision parallel tempering Monte Carlo studies, thus illustrating that the method can, in principle, be applied to frustrated magnetic systems. In particular, we discuss the necessary tuning of parameters for convergence, memory requirements, efficiency for different types of disorder, as well as advantages and limitations in comparison to conventional multicanonical and Monte Carlo methods. Extensions to higher space dimensions, as well as applications to spin glasses in a field are explored.
Improved first Rayleigh-Sommerfeld method applied to metallic cylindrical focusing micro mirrors.
Ye, Jia-Sheng; Zhang, Yan; Hane, Kazuhiro
2009-04-27
An improved first Rayleigh-Sommerfeld method (IRSM1) is intensively applied to analyzing the focal properties of metallic cylindrical focusing micro mirrors. A variety of metallic cylindrical focusing mirrors with different f-numbers, different polarization of incidence, or different types of profiles are investigated. The focal properties include the focal spot size, the diffraction efficiency, the real focal length, the total reflected power, and the normalized sidelobe power. Numerical results calculated by the IRSM1, the original first Rayleigh-Sommerfeld method (ORSM1), and the rigorous boundary element method (BEM) are presented for quantitative comparison. It is found that the IRSM1 is much more accurate than the ORSM1 in performance analysis of metallic cylindrical focusing mirrors, especially for cylindrical refractive focusing mirrors with small f-numbers. Moreover, the IRSM1 saves great amounts of computational time and computer memory in calculations, in comparison with the vectorial BEM.
Preconditioning Method Applied to Near-Critical Carbon-Dioxide Flows in Micro-Channel
NASA Astrophysics Data System (ADS)
Yamamoto, Satoru; Toratani, Masayuki; Sasao, Yasuhiro
A numerical method for simulating near-critical carbon-dioxide flows in a micro-channel is presented. This method is based on the preconditioning method applied to the compressible Navier-Stokes equations. The Peng-Robinson equation of state is introduced to evaluate the properties of near-critical fluids. As numerical examples, near-critical carbon-dioxide flows in a square cavity and in a micro-channel are calculated, and the calculated results are compared with experimental data and theoretical results. Finally, we demonstrate that compressibility dominates near-critical carbon-dioxide flows in a micro-channel even if the flow is very slow and the Reynolds number is very low.
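The Peng-Robinson equation of state mentioned above has a closed form that is straightforward to evaluate. A minimal sketch for CO2 follows, using standard textbook critical constants; it is an illustration only, not the property evaluation used in the paper.

```python
import math

# Assumed critical constants for CO2 from standard tables:
# Tc in K, Pc in Pa, acentric factor omega (dimensionless).
R = 8.314                       # J/(mol K)
TC, PC, OMEGA = 304.13, 7.377e6, 0.225

def peng_robinson_pressure(T, v):
    """Pressure (Pa) of CO2 at temperature T (K) and molar volume v (m^3/mol)."""
    a = 0.45724 * R**2 * TC**2 / PC          # attraction parameter
    b = 0.07780 * R * TC / PC                # covolume
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2
    alpha = (1 + kappa * (1 - math.sqrt(T / TC)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2 * b * v - b**2)
```

Evaluating at the critical temperature and a near-critical molar volume returns a pressure close to the critical pressure, as expected for this cubic equation of state.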
Finite volume and finite element methods applied to 3D laminar and turbulent channel flows
Louda, Petr; Příhoda, Jaromír; Sváček, Petr; Kozel, Karel
2014-12-10
The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to the development of various secondary flow patterns, where the accuracy of the simulation is influenced by the numerical viscosity of the scheme and by turbulence modeling. In this work some developments of the stabilized finite element method are presented, and its results are compared with those of an implicit finite volume method, also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of the same magnitude as different turbulence models. The finite volume method is also applied to 3D turbulent flow around a backward-facing step, and good agreement with 3D experimental results is obtained.
2011-01-01
Background Statistical methods for ranking differentially expressed genes (DEGs) from gene expression data should be evaluated with regard to high sensitivity, specificity, and reproducibility. In our previous studies, we evaluated eight gene ranking methods applied to only Affymetrix GeneChip data. A more general evaluation that also includes other microarray platforms, such as the Agilent or Illumina systems, is desirable for determining which methods are suitable for each platform and which method has better inter-platform reproducibility. Results We compared the eight gene ranking methods using the MicroArray Quality Control (MAQC) datasets produced by five manufacturers: Affymetrix, Applied Biosystems, Agilent, GE Healthcare, and Illumina. The area under the curve (AUC) was used as a measure for both sensitivity and specificity. Although the highest AUC values can vary with the definition of "true" DEGs, the best methods were, in most cases, either the weighted average difference (WAD), rank products (RP), or intensity-based moderated t statistic (ibmT). The percentages of overlapping genes (POGs) across different test sites were mainly evaluated as a measure for both intra- and inter-platform reproducibility. The POG values for WAD were the highest overall, irrespective of the choice of microarray platform. The high intra- and inter-platform reproducibility of WAD was also observed at a higher biological function level. Conclusion These results for the five microarray platforms were consistent with our previous ones based on 36 real experimental datasets measured using the Affymetrix platform. Thus, recommendations made using the MAQC benchmark data might be universally applicable. PMID:21639945
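The AUC measure used above is equivalent to the normalized Mann-Whitney statistic: the probability that a randomly chosen true DEG is ranked above a randomly chosen non-DEG, with ties counting one half. A minimal sketch follows; it is illustrative only, not the MAQC evaluation code.

```python
def auc(pos_scores, neg_scores):
    # AUC = probability that a randomly chosen positive (true DEG)
    # outranks a randomly chosen negative (non-DEG); ties count 0.5.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation of the two groups gives AUC 1.0, reversed ranking gives 0.0, and identical score distributions give 0.5.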
77 FR 31341 - Application To Export Electric Energy; DC Energy, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-25
... Application To Export Electric Energy; DC Energy, LLC AGENCY: Office of Electricity Delivery and Energy Reliability, DOE. ACTION: Notice of Application. SUMMARY: DC Energy, LLC (DC Energy) has applied to renew its... of Energy, 1000 Independence Avenue SW., Washington, DC 20585-0350. Because of delays in handling...
River basin soil-vegetation condition assessment applying mathematical simulation methods
NASA Astrophysics Data System (ADS)
Mishchenko, Natalia; Trifonova, Tatiana; Shirkin, Leonid
2013-04-01
The productivity of vegetation cover is currently receiving close attention, partly in connection with global climate change. At the same time, the anthropogenic transformation of ecosystems, driven mainly by changes in land-use structure and by human impact on soil fertility, develops largely independently of climatic processes and can strongly influence vegetation productivity not only at local and regional levels but also globally. This research presents an analysis of how land-use structure and soil cover condition affect the productive potential of river basin ecosystems. The analysis applies integrated characteristics of ecosystem functioning, processed satellite imagery, and mathematical simulation methods. It demonstrates the possibility of building a permanent functional simulator, based on the basin approach, that links the macroparameters of the "phytocenosis-soil" system. Catchment ecosystems of various orders located in the European part of Russia were chosen as research objects. Two integrated characteristics were used to assess soil and vegetation condition: 1. Soil-productional potential, which characterizes the capacity of a natural or natural-anthropogenic ecosystem for long-term reproduction under given soil-bioclimatic conditions. This indicator accounts for phytomass characteristics, ecosystem production, soil humus content and bioclimatic parameters. 2. The normalized difference vegetation index (NDVI), applied as an efficient, remotely sensed monitoring indicator of the spatio-temporal variability of the soil-productional potential. The mathematical simulator was designed using functional simulation methods based on regression, correlation and factor analysis. Coefficient values for the resulting static model of phytoproductivity distribution have been…
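NDVI, the remotely sensed monitoring indicator referred to above, is computed from the red and near-infrared reflectance bands. A minimal sketch:

```python
def ndvi(nir, red):
    # Normalized difference vegetation index from near-infrared and
    # red reflectances; returns 0.0 where both bands are zero to
    # avoid division by zero over no-data pixels.
    return (nir - red) / (nir + red) if (nir + red) else 0.0
```

Dense green vegetation reflects strongly in the near-infrared and weakly in the red, so healthy canopies yield NDVI values approaching 1, while bare soil and water yield values near or below zero.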
Methods for evaluating the biological impact of potentially toxic waste applied to soils
Neuhauser, E.F.; Loehr, R.C.; Malecki, M.R.
1985-12-01
The study was designed to evaluate two methods that can be used to estimate the biological impact of organics and inorganics that may be in wastes applied to land for treatment and disposal. The two methods were the contact test and the artificial soil test. The contact test is a 48 hr test using an adult worm, a small glass vial, and filter paper to which the test chemical or waste is applied. The test is designed to provide close contact between the worm and a chemical, similar to the situation in soils. The method provides a rapid estimate of the relative toxicity of chemicals and industrial wastes. The artificial soil test uses a mixture of sand, kaolin, peat, and calcium carbonate as a representative soil. Different concentrations of the test material are added to the artificial soil, adult worms are added, and worm survival is evaluated after two weeks. These studies have shown that earthworms can distinguish between a wide variety of chemicals with a high degree of accuracy.
Characteristics of an Extrusion Panel Made by Applying a Modified Curing Method
Kim, Haseog; Park, Sangki; Lee, Seahyun
2016-01-01
CO2 emissions from building materials and the construction materials industry have reached about 67 million tons. Controlling fossil fuel consumption and reducing emission gases are essential for cutting CO2 in the construction sector; one route is to reduce the secondary and tertiary curing processes that emit CO2 in the construction materials industry. In this study, a new curing method using a low-energy curing admixture (LA) was developed in order to exclude autoclave curing. The new curing method was applied to make panels, and their physical properties were observed depending on the mixed amount of fiber, the type of fiber and the mixing ratio of fiber. The type of fiber did not appear to be a main factor affecting strength, while the LA mixing ratio and the amount of fiber appeared to be major factors. Applying the proposed new curing method can reduce carbon emissions and restrain the use of fossil fuels by reducing the secondary and tertiary curing processes that emit CO2 in the construction materials industry. Therefore, it will be helpful in reducing global warming. PMID:28773470
Research methods used in developing and applying quality indicators in primary care
Campbell, S; Braspenning, J; Hutchinson, A; Marshall, M
2002-01-01
Quality indicators have been developed throughout Europe primarily for use in hospitals, but also increasingly for primary care. Both development and application are important, but there has been less research on the application of indicators. Three issues are important when developing or applying indicators: (1) which stakeholder perspective(s) are the indicators intended to reflect; (2) what aspects of health care are being measured; and (3) what evidence is available? The information required to develop quality indicators can be derived using systematic or non-systematic methods. Non-systematic methods such as case studies play an important role but they do not tap into available evidence. Systematic methods can be based directly on scientific evidence by combining available evidence with expert opinion, or they can be based on clinical guidelines. While it may never be possible to produce an error free measure of quality, measures should adhere, as far as possible, to some fundamental a priori characteristics (acceptability, feasibility, reliability, sensitivity to change, and validity). Adherence to these characteristics will help maximise the effectiveness of quality indicators in quality improvement strategies. It is also necessary to consider what the results of applying indicators tell us about quality of care. PMID:12468698
A new method to improve network topological similarity search: applied to fold recognition
Lhota, John; Hauptman, Ruth; Hart, Thomas; Ng, Clara; Xie, Lei
2015-01-01
Motivation: Similarity search is the foundation of bioinformatics. It plays a key role in establishing structural, functional and evolutionary relationships between biological sequences. Although the power of the similarity search has increased steadily in recent years, a high percentage of sequences remain uncharacterized in the protein universe. Thus, new similarity search strategies are needed to efficiently and reliably infer the structure and function of new sequences. The existing paradigm for studying protein sequence, structure, function and evolution has been established based on the assumption that the protein universe is discrete and hierarchical. Cumulative evidence suggests that the protein universe is continuous. As a result, conventional sequence homology search methods may not be able to detect novel structural, functional and evolutionary relationships between proteins from weak and noisy sequence signals. To overcome the limitations in existing similarity search methods, we propose a new algorithmic framework—Enrichment of Network Topological Similarity (ENTS)—to improve the performance of large scale similarity searches in bioinformatics. Results: We apply ENTS to a challenging unsolved problem: protein fold recognition. Our rigorous benchmark studies demonstrate that ENTS considerably outperforms state-of-the-art methods. As the concept of ENTS can be applied to any similarity metric, it may provide a general framework for similarity search on any set of biological entities, given their representation as a network. Availability and implementation: Source code freely available upon request. Contact: lxie@iscb.org PMID:25717198
The Fractional Step Method Applied to Simulations of Natural Convective Flows
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)
2002-01-01
This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. It also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations over point clouds. Despite their high accuracy, these methods may yield no solution when the matrices involved are ill-conditioned. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide through a series of rotations and translations, and the transformation matrix is obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method achieves the same high accuracy while being more convenient and flexible to operate. A multi-sensor combined measurement system is also presented that improves the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
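The rotations and translations underlying such a coordinate transformation compose into a rigid-body map p' = Rp + t. The following is a generic illustration of that composition, not the characteristic-line algorithm of the paper; the function names are assumptions.

```python
import math

def rot_z(theta):
    # 3x3 rotation matrix about the z axis.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transform(point, rotation, translation):
    # Rigid-body coordinate transformation: p' = R p + t.
    return [a + b for a, b in zip(mat_vec(rotation, point), translation)]
```

For example, rotating the point (1, 0, 0) by 90 degrees about z and lifting it by one unit lands it at (0, 1, 1).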
Nguyen, Vinh Hao; Suh, Young Soo
2007-01-01
This paper is concerned with improving the performance of state estimation over a network in which a send-on-delta (SOD) transmission method is used. The SOD method requires that a sensor node transmit data to the estimator node only if its measurement value changes by more than a specified threshold δ. This method has been explored and applied by researchers because it uses network bandwidth efficiently. However, under this method the estimator node is not guaranteed to receive data from the sensor nodes at every estimation period. Therefore, we propose a method to reduce estimation error when no sensor data are received. When the estimator node does not receive data from a sensor node, the sensor value is known to lie in a (−δi,+δi) interval around the last transmitted sensor value. This implicit information has been used to improve estimation performance in previous studies. The main contribution of this paper is an algorithm in which the sensor value interval is reduced to (−δi/2,+δi/2) in certain situations. The proposed algorithm thus improves the overall estimation performance without any changes to the send-on-delta algorithms of the sensor nodes. Through numerical simulations, we demonstrate the feasibility and the usefulness of the proposed method.
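The send-on-delta transmission rule itself can be sketched in a few lines. The sensor transmits only when its reading has moved more than δ from the last transmitted value, and a naive estimator simply holds the last received value; the names below and the hold-last estimator are illustrative assumptions, not the paper's improved algorithm.

```python
def send_on_delta(samples, delta):
    # Sensor side: transmit (index, value) only when the reading
    # moves more than delta away from the last transmitted value.
    sent, last = [], None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            last = x
            sent.append((i, x))
    return sent

def hold_last(n, sent):
    # Estimator side: hold the last received value. When no packet
    # arrives, the true value is known to lie within +/- delta of
    # the estimate, the implicit bound the paper exploits.
    received, est, out = dict(sent), None, []
    for i in range(n):
        est = received.get(i, est)
        out.append(est)
    return out
```

Even this naive estimator is never more than δ away from the true signal between transmissions, which is exactly the interval information the paper's algorithm tightens further.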
Evaluation of Methods for Sampling, Recovery, and Enumeration of Bacteria Applied to the Phylloplane
Donegan, Katherine; Matyac, Carl; Seidler, Ramon; Porteous, Arlene
1991-01-01
Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery methods that are commonly used or that have potential applications. In these experiments, Erwinia herbicola and Enterobacter cloacae were applied in greenhouses to Blue Lake bush beans (Phaseolus vulgaris) and Cayuse oats (Avena sativa). Sampling indicated that the variance in bacterial counts among leaves increased over time and that this increase caused an overestimation of the mean population size by bulk leaf samples relative to single leaf samples. An increase in the number of leaves in a bulk sample, above a minimum number, did not significantly reduce the variance between samples. Experiments evaluating recovery methods demonstrated that recovery of bacteria from leaves was significantly better with stomacher blending than with blending, sonication, or washing, and that the recovery efficiency was constant over a range of sample inoculum densities. Delayed processing of leaf samples, by storage in a freezer, did not significantly lower survival and recovery of microorganisms when storage was short term and leaves were not stored in buffer. The drop plate technique for enumeration of bacteria did not significantly differ from the spread plate method. Results of these sampling, recovery, and enumeration experiments indicate a need for increased development and standardization of methods used by researchers, as there are significant differences among, and also important limitations to, some of the methods used. PMID:16348404
Cork-resin ablative insulation for complex surfaces and method for applying the same
NASA Technical Reports Server (NTRS)
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Method for applying a photoresist layer to a substrate having a preexisting topology
Morales, Alfredo M.; Gonzales, Marcela
2004-01-20
The present invention describes a method for preventing a photoresist layer from delaminating, or peeling away from, the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent media used to formulate the photoresist layer, above the surface of the coated substrate, as the applied photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer. By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.
Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca
2016-01-01
Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency
Applying the case method for teaching within the health professions--teaching the students.
Stjernquist, M; Crang Svalenius, E
2007-05-01
When using the Case Method in teaching situations, problem-solving is emphasized and taught, in order to acquire the skills and later be able to apply them in new situations. The basis of the learning process is the students' own activity in the situation and is built on critical appraisal and discussion. To explain what the Case Method is, what it is not and to describe when and where to use the Case Method. The objective is also to describe how to write a 'case', how to lead a 'case' discussion and how to deal with problems. Why one should use the Case Method is also highlighted. The case used should be founded on a real life situation, containing a problem that must be handled. The structure and use of the white board plays a central part. It is important that the setting allows the teacher to interact with all the students. Groups of up to 30 students can be handled with ease, though larger groups are feasible in the right physical setting. Within the health professions, the Case Method can be used at all levels of training and to a certain extent the same case can be used--the depth with which it is addressed depends on the student's prior knowledge. Different professions and specialists can take part. A whole curriculum can be built up around the Case Method, but more often it is used together with other pedagogic methods. The Case Method is a well-structured, student-activating way of teaching, well-suited to hone problem-solving skills within health education programmes.
NASA Astrophysics Data System (ADS)
Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin
2017-02-01
Genomic selection is increasingly popular in animal and plant breeding industries around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and of Zhikong scallop (Chlamys farreri) field data, and were compared with the widely used genomic best linear unbiased prediction (GBLUP) method. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby be applied to the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of GEBV ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimates made by MixP and gsbay are expected to be more reliable than those from GBLUP. Predictions made by gsbay were more robust, while MixP computes much faster, especially on large-scale data. These results suggest that the algorithms implemented by MixP and gsbay are both feasible for genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values of higher accuracy for the industry.
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
NASA Astrophysics Data System (ADS)
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (the Forward Method and the Emission Reciprocity Method) employed to solve the radiative transfer equation (RTE) are compared in a one-dimensional flame test case on three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. The results obtained using two different RTE solvers (the Reciprocity Monte Carlo method and the Discrete Ordinate Method), applied to a three-dimensional flame holder set-up with a correlated-k distribution model describing the spectral radiative properties of the real gas medium, are then compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
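The forward Monte Carlo formulation can be illustrated on the simplest radiative problem: photons launched into a purely absorbing, homogeneous slab, where sampled exponential free paths reproduce the Beer-Lambert transmittance exp(-κL). This is a hedged sketch of the sampling idea only, not the coupled LES radiation solver of the paper.

```python
import math
import random

def transmittance_mc(kappa, length, n=20000, seed=1):
    # Forward Monte Carlo: each photon travels a free path drawn
    # from an exponential distribution with mean 1/kappa before
    # absorption; it escapes if that path exceeds the slab length.
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n)
                  if -math.log(rng.random()) / kappa > length)
    return escaped / n
```

With enough samples the escaped fraction converges to the analytic transmittance exp(-κL), which is the usual sanity check before moving to emitting and scattering media.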
A multiple flux boundary element method applied to the description of surface water waves
NASA Astrophysics Data System (ADS)
Hague, C. H.; Swan, C.
2009-08-01
This paper concerns a two dimensional numerical model based on a high-order boundary element method with fully nonlinear free surface boundary conditions. Multiple fluxes are applied as a method of removing the so-called “corner problem”, whereby the direction of the outward normal at geometric discontinuities is ill-defined. In the present method, both fluxes associated with differing directions of the outward normal at a corner are considered, allowing a single node to be placed at that position. This prevents any loss of information at what can be an important part of the boundary, especially if considering simulations of wave reflection and wave run-up. The method is compared to both the double node approach and the use of discontinuous elements and is shown to be a more accurate technique. The success of the method is further demonstrated by its ability to accurately simulate various problems involving wave transmission and wave-structure interactions at domain corners; the results being achieved without the need for filtering, smoothing or re-gridding of any kind.
NASA Astrophysics Data System (ADS)
Brezina, Tadej; Graser, Anita; Leth, Ulrich
2017-02-01
Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
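The maximum-inscribing-circle idea behind the second method can be approximated by brute force: sample a grid of points inside the pedestrian-area polygon and take the largest distance from an interior point to the boundary; twice that radius is a representative width. A simplified sketch under that assumption (the paper's GIS implementation is not reproduced here):

```python
import math

def _dist_to_segment(p, a, b):
    # Distance from point p to the segment a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def _inside(p, poly):
    # Ray-casting point-in-polygon test.
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def representative_width(poly, step=0.05):
    # Representative width ~ diameter of the maximum inscribed circle,
    # found by grid search over interior points.
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    n, best = len(poly), 0.0
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if _inside((x, y), poly):
                d = min(_dist_to_segment((x, y), poly[i], poly[(i + 1) % n])
                        for i in range(n))
                best = max(best, d)
            y += step
        x += step
    return 2 * best
```

For a 4 m by 1 m rectangular sidewalk polygon, the grid search recovers a representative width close to 1 m, the exact maximum inscribed diameter.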
A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies
Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.
2012-01-01
Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571
A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale.
Mircioiu, Constantin; Atkinson, Jeffrey
2017-05-10
A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple "yes" or "no" but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with "large" (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 "very important" and 4 "essential"), a "score analysis" based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restraining the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses.
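To make the parametric/non-parametric contrast concrete, here is a hedged pure-Python sketch computing both a Welch t statistic and a Mann-Whitney U statistic; the two subgroups of Likert responses below are invented, not the pharmacists' survey data:

```python
from statistics import mean, variance

def t_statistic(a, b):
    """Welch's t statistic (parametric comparison of two groups)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

def mann_whitney_u(a, b):
    """U statistic (non-parametric rank comparison): number of pairs
    where an observation in a exceeds one in b, ties counted 1/2."""
    u = 0.0
    for x in a:
        for y in b:
            u += 1.0 if x > y else 0.5 if x == y else 0.0
    return u

likert_a = [4, 4, 3, 4, 2, 4, 3, 4]   # ranks: 4 = "essential"
likert_b = [2, 3, 1, 2, 3, 2, 2, 3]
print(t_statistic(likert_a, likert_b))      # parametric view
print(mann_whitney_u(likert_a, likert_b))   # → 56.0, non-parametric view
```

Both statistics point the same way here, mirroring the paper's finding that the two families of methods usually agree on such data.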
Applying the method of fundamental solutions to harmonic problems with singular boundary conditions
NASA Astrophysics Data System (ADS)
Valtchev, Svilen S.; Alves, Carlos J. S.
2017-07-01
The method of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVP) with smooth boundary conditions, posed in analytic domains. However, due to the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS which can be applied for the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. The accuracy of the proposed method is compared with the results from the classical MFS.
Validation of Inverse Methods Applied to Forensic Analysis of Spent Fuel
Broadhead, Bryan L; Weber, Charles F
2010-01-01
Inverse depletion/decay methods are useful tools for application to nuclear forensics. Previously, inverse methods were applied to the generic case of predicting the burnup, initial enrichment, and cooling time for selected spent nuclear fuels based on measured actinide and fission product concentrations. These existing measurements were not developed or optimized for use by these inverse techniques, and hence previous work demonstrated the prediction of only the fuel burnup, initial enrichment, and cooling time. Previously, nine spent fuel samples from an online data compilation were randomly selected for study. This work set out to demonstrate the full prediction capabilities using measured isotopic data, but with a more deliberate selection of fuel samples. The current approach is to evaluate nuclides within the same element to see if complementary information can be obtained in addition to the reactor burnup, enrichment, and cooling. Specifically, the reactor power and the fuel irradiation time values are desired to achieve the maximum prediction capabilities of these techniques.
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to occur at order 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
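Super-convergence claims of this kind are checked numerically with the usual observed-order formula; a small sketch (the error values below are hypothetical, chosen to illustrate an observed order of 2p+1 = 5 for p = 2):

```python
import math

def observed_order(err_coarse, err_fine, refine=2.0):
    """Estimate the observed convergence order from errors on two grids
    related by the refinement factor `refine`."""
    return math.log(err_coarse / err_fine) / math.log(refine)

# hypothetical period errors for a p = 2 DG run on grids h and h/2
e_h, e_h2 = 3.2e-4, 1.0e-5
print(observed_order(e_h, e_h2))  # → 5.0 (i.e., 2p+1)
```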
NASA Astrophysics Data System (ADS)
Islam, M. T.; Trevorah, R. M.; Appadoo, D. R. T.; Best, S. P.; Chantler, C. T.
2017-04-01
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr² analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K.
Proposal of a checking parameter in the simulated annealing method applied to the spin glass model
NASA Astrophysics Data System (ADS)
Yamaguchi, Chiaki
2016-02-01
We propose a checking parameter, based on the breaking of the Jarzynski equality, for the Monte Carlo simulated annealing method. This parameter makes it possible to detect whether the system has reached the global minimum of the free energy under gradual temperature reduction, and thus to investigate the efficiency of annealing schedules. We apply this parameter to the ± J Ising spin glass model. The application to the Gaussian Ising spin glass model is also mentioned. We argue that the breaking of the Jarzynski equality is induced by the system being trapped in local minima of the free energy. By performing Monte Carlo simulations of the ± J Ising spin glass model and a glassy spin model proposed by Newman and Moore, we demonstrate the usefulness of this parameter.
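A minimal sketch of the Jarzynski equality such a checking parameter builds on, for a single Ising spin with a stepwise field ramp at fixed temperature (a drastic simplification of the ±J spin-glass setting; a systematic shortfall of the estimate below exp(−βΔF) would play the role of the "breaking" signal):

```python
import math, random

def jarzynski_single_spin(beta=1.0, h_final=1.0, n_steps=200,
                          n_traj=2000, seed=1):
    """Estimate <exp(-beta*W)> for one Ising spin (E = -h*s) whose field
    is ramped 0 -> h_final; the Jarzynski equality says this equals
    exp(-beta*dF) = cosh(beta*h_final) for this system."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_traj):
        s = rng.choice((-1, 1))       # equilibrium at h = 0 is 50/50
        w = 0.0
        for k in range(1, n_steps + 1):
            h_old = h_final * (k - 1) / n_steps
            h_new = h_final * k / n_steps
            w += -(h_new - h_old) * s          # work done by the ramp
            dE = 2 * h_new * s                 # cost of flipping s
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s = -s                         # Metropolis relaxation
        acc += math.exp(-beta * w)
    return acc / n_traj

est = jarzynski_single_spin()
print(est, math.cosh(1.0))   # estimate vs exact exp(-beta*dF)
```

With too-fast ramps or too little relaxation, the estimate falls short of the exact value; that deviation is the kind of diagnostic the abstract describes.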
NASA Astrophysics Data System (ADS)
García-Allende, P. B.; Conde, O. M.; Mirapeix, J.; Cubillas, A. M.; López-Higuera, J. M.
2007-07-01
A data processing method for hyperspectral images is presented. Each image contains the whole diffuse reflectance spectra of the analyzed material for all the spatial positions along a specific line of vision. This data processing method is composed of two blocks: a data compression unit and a classification unit. Data compression is performed by means of Principal Component Analysis (PCA) and the spectral interpretation algorithm for classification is the Spectral Angle Mapper (SAM). This classification strategy applying PCA and SAM has been successfully tested on on-line raw material characterization in the tobacco industry. In this application the desired raw material (tobacco leaves) should be discriminated from other unwanted spurious materials, such as plastic, cardboard, leather, candy paper, etc. Hyperspectral images are recorded by a spectroscopic sensor consisting of a monochromatic camera and a passive Prism-Grating-Prism device. Performance results are compared with a spectral interpretation algorithm based on Artificial Neural Networks (ANN).
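The SAM stage reduces to an angle between spectra viewed as vectors; a hedged sketch with invented four-band spectra (not the tobacco data):

```python
import math

def spectral_angle(a, b):
    """SAM: angle between two spectra; a small angle means similar
    spectral shape regardless of overall illumination scale."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

tobacco_ref = [0.2, 0.5, 0.9, 0.4]    # hypothetical reference spectrum
pixel_bright = [0.4, 1.0, 1.8, 0.8]   # same shape, twice the illumination
plastic = [0.9, 0.3, 0.2, 0.8]        # different shape
print(spectral_angle(tobacco_ref, pixel_bright))  # ~0: same class
print(spectral_angle(tobacco_ref, plastic))       # large: rejected
```

Insensitivity to the illumination scale is the reason SAM pairs well with reflectance data from a line-scan sensor.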
Islam, M T; Trevorah, R M; Appadoo, D R T; Best, S P; Chantler, C T
2017-04-15
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr² analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K.
Double sweep preconditioner for optimized Schwarz methods applied to the Helmholtz problem
Vion, A.; Geuzaine, C.
2014-06-01
This paper presents a preconditioner for non-overlapping Schwarz methods applied to the Helmholtz problem. Starting from a simple analytic example, we show how such a preconditioner can be designed by approximating the inverse of the iteration operator for a layered partitioning of the domain. The preconditioner works by propagating information globally by concurrently sweeping in both directions over the subdomains, and can be interpreted as a coarse grid for the domain decomposition method. The resulting algorithm is shown to converge very fast, independently of the number of subdomains and frequency. The preconditioner has the advantage that, like the original Schwarz algorithm, it can be implemented as a matrix-free routine, with no additional preprocessing.
Vortex methods with immersed lifting lines applied to LES of wind turbine wakes
NASA Astrophysics Data System (ADS)
Chatelain, Philippe; Bricteux, Laurent; Winckelmans, Gregoire; Koumoutsakos, Petros
2010-11-01
We present the coupling of a vortex particle-mesh method with immersed lifting lines. The method relies on the Lagrangian discretization of the Navier-Stokes equations in vorticity-velocity formulation. Advection is handled by the particles while the mesh allows the evaluation of the differential operators and the use of fast Poisson solvers. We use a Fourier-based fast Poisson solver which simultaneously allows unbounded directions and inlet/outlet boundaries. A lifting line approach models the vorticity sources in the flow. Its immersed treatment efficiently captures the development of vorticity from thin sheets into a three-dimensional field. We apply this approach to the simulation of a wind turbine wake at very high Reynolds number. The combined use of particles and multiscale subgrid models allows the capture of wake dynamics with minimal spurious diffusion and dispersion.
Ruggieri, Alexander P; Pakhomov, Serguei V; Chute, Christopher G
2004-01-01
In an effort to unearth semantic models that could prove fruitful to functional-status terminology development we applied the "frame semantic" method, derived from the linguistic theory of thematic roles currently exemplified in the Berkeley "FrameNet" Project. Full descriptive sentences with functional-status conceptual meaning were derived from structured content within a corpus of questionnaire assessment instruments commonly used in clinical practice for functional-status assessment. Syntactic components in those sentences were delineated through manual annotation and mark-up. The annotated syntactic constituents were tagged as frame elements according to their semantic role within the context of the derived functional-status expression. Through this process generalizable "semantic frames" were elaborated with recurring "frame elements". The "frame semantic" method as an approach to rendering semantic models for functional-status terminology development and its use as a basis for machine recognition of functional status data in clinical narratives are discussed.
NASA Astrophysics Data System (ADS)
Bianchi, F.; Pocci, P.; Prina-Ricotti, L.
1981-02-01
The assumptions used to formulate the processing method, the proposed algorithm, and phoneme recognition test results of a homotopic signal processing method are presented. The hearing system is considered as a box with one input, to which a signal with an information content of 500 Kbit/sec is applied, and many thousand outputs, the nerve fibers, each having a transmission rate varying between 30 and 400 bit/sec. The signal transmitted by any one fiber is a series of equal impulses. The homotopic representation of a phoneme is available in steady state after 2 to 3 msec. The phoneme patterns are very different, although patterns for the same phoneme from different speakers are similar. Transition patterns between phonemes change rapidly. The recognition rate on a minicomputer, over all possible combinations of 'a', 'e', 'r' and 'm', is 95.2%.
Castello, Charles C; New, Joshua Ryan
2012-01-01
Autonomous detection and correction of potentially missing or corrupt sensor data is an essential concern in building technologies, since data availability and correctness are necessary to develop accurate software models for instrumented experiments. This paper therefore addresses the problem using statistical processing methods including: (1) least squares; (2) maximum likelihood estimation; (3) segmentation averaging; and (4) threshold-based techniques. These validation schemes are applied to a subset of data collected from Oak Ridge National Laboratory's (ORNL) ZEBRAlliance research project, which comprises four single-family homes in Oak Ridge, TN outfitted with a total of 1,218 sensors. The focus of this paper is on three types of sensor data: (1) temperature; (2) humidity; and (3) energy consumption. Simulations illustrate that the threshold-based statistical processing method performed best in predicting temperature, humidity, and energy data.
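A hedged sketch of the threshold-based idea reported to perform best, combined with simple linear interpolation to fill the flagged gaps (the temperature readings are invented; this is not the ZEBRAlliance pipeline):

```python
def validate_and_fill(series, lo, hi):
    """Flag readings outside [lo, hi] as invalid, then replace each
    invalid or missing reading (None) by linear interpolation between
    the nearest valid neighbours (endpoint gaps copy the nearest value)."""
    vals = [v if v is not None and lo <= v <= hi else None for v in series]
    valid = [i for i, v in enumerate(vals) if v is not None]
    out = []
    for i, v in enumerate(vals):
        if v is not None:
            out.append(v)
            continue
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is None:
            out.append(vals[right])
        elif right is None:
            out.append(vals[left])
        else:
            t = (i - left) / (right - left)
            out.append(vals[left] + t * (vals[right] - vals[left]))
    return out

temps = [21.0, 21.5, None, 22.5, 999.0, 23.5]   # a dropout and a spike
print(validate_and_fill(temps, -20.0, 60.0))
# → [21.0, 21.5, 22.0, 22.5, 23.0, 23.5]
```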
Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on
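The core SCM step, choosing convex donor weights that reproduce the treated unit's pre-intervention outcome path, can be sketched for the two-donor case (the deforestation numbers below are invented for illustration, not the Paragominas data):

```python
def scm_weight(treated_pre, donor_a, donor_b, step=0.01):
    """Grid-search w in [0, 1] minimising squared pre-intervention
    mismatch between the treated unit and the synthetic control
    w*donor_a + (1-w)*donor_b."""
    best_w, best_err = 0.0, float("inf")
    k = 0
    while k * step <= 1.0 + 1e-9:
        w = k * step
        err = sum((t - (w * a + (1 - w) * b)) ** 2
                  for t, a, b in zip(treated_pre, donor_a, donor_b))
        if err < best_err:
            best_w, best_err = w, err
        k += 1
    return best_w

# hypothetical pre-2008 deforestation rates (% of forest per year)
paragominas = [1.9, 1.6, 1.3, 1.1]
muni_a      = [2.2, 1.8, 1.4, 1.2]   # similar trajectory
muni_b      = [0.4, 0.5, 0.4, 0.5]   # dissimilar trajectory
print(scm_weight(paragominas, muni_a, muni_b))   # leans on the similar donor
```

Real SCM implementations optimise over many donors and predictor variables, but the logic is the same: the weights are chosen on pre-period fit alone, then held fixed to project the counterfactual.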
Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
Paim, Clésio S; Führ, Fernanda; Barth, Aline B; Gonçalves, Carlos E I; Nardi, Nance; Steppe, Martin; Schapoval, Elfrides E S
2011-02-15
The validation of a microbiological assay applying the cylinder-plate method to determine the quinolone gemifloxacin mesylate (GFM) content is described. Using a strain of Staphylococcus epidermidis ATCC 12228 as the test organism, the GFM content in tablets at concentrations ranging from 0.5 to 4.5 μg mL⁻¹ could be determined. A standard curve was obtained by plotting three values derived from the diameters of the growth inhibition zone. A prospective validation showed that the method developed is linear (r=0.9966), precise (repeatability and intermediate precision), accurate (100.63%), specific and robust. GFM solutions (from the drug product) exposed to direct UVA radiation (352 nm), alkaline hydrolysis, acid hydrolysis, thermal stress, hydrogen peroxide causing oxidation, and a synthetic impurity were used to evaluate the specificity of the bioassay. The bioassay and the previously validated high performance liquid chromatographic (HPLC) method were compared using Student's t test, which indicated that there was no statistically significant difference between these two validated methods. These studies demonstrate the validity of the proposed bioassay, which allows reliable quantification of GFM in tablets and can be used as a useful alternative methodology for GFM analysis in stability studies and routine quality control. The GFM reference standard (RS), photodegraded GFM RS, and synthetic impurity samples were also studied in order to determine the preliminary in vitro cytotoxicity to peripheral blood mononuclear cells. The results indicated that the GFM RS and photodegraded GFM RS were potentially more cytotoxic than the synthetic impurity under the conditions of analysis applied. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
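Cylinder-plate assays are usually read off a log-linear standard curve (zone diameter versus the logarithm of concentration); a hedged sketch with invented zone diameters, not the validated GFM data:

```python
import math

def fit_log_linear(concs, zones):
    """Least-squares fit zone = a + b*ln(conc), the usual linearisation
    for agar-diffusion (cylinder-plate) standard curves."""
    xs = [math.log(c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(zones) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, zones))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict_conc(zone, a, b):
    """Invert the standard curve to read a sample concentration."""
    return math.exp((zone - a) / b)

# hypothetical standards (ug/mL) and inhibition-zone diameters (mm)
concs, zones = [0.5, 1.5, 4.5], [14.0, 17.0, 20.0]
a, b = fit_log_linear(concs, zones)
print(predict_conc(17.0, a, b))   # reads back the middle standard, 1.5
```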
Conformable Tile Method of Applying CryoCoat™ UL79 Insulation to Cryogenic Tanks
Francis, Will; Tupper, Mike; Harrison, Stephen
2004-06-23
A procedure for fabricating, forming, and bonding thin tiles of CryoCoat™ UL79 cryogenic insulation, a syntactic foam material, to a large curved surface was developed and its performance was verified. This effort was undertaken because of safety concerns for the Alpha Magnetic Spectrometer 02 (AMS-02) experiment, a space-based particle physics detector designed to search for antimatter, dark matter and the origin of cosmic rays in space. The key component of the detector is a large superconducting magnet, cooled to 1.8 K by superfluid helium. From ground safety and flight safety considerations, the system must be safe in the event of a sudden catastrophic loss of insulating vacuum. Previous testing showed that a thin layer of CryoCoat™ UL79 reduces the heat flux in the event of vacuum loss by nearly a factor of eight, which satisfies the safety concern. A practical method of applying a uniform 3 mm layer of CryoCoat™ UL79 insulation material to the helium vessel containing the superconducting magnet was developed for this requirement. The fabrication procedure was validated through application of CryoCoat™ UL79 on a prototype helium tank, which was subsequently tested at cryogenic temperatures. Because of these successful tests, NASA has accepted the conformable tile method for applying CryoCoat™ UL79 and has agreed that AMS-02 is safe to fly.
Neural net applied to anthropological material: a methodical study on the human nasal skeleton.
Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich
2005-07-01
A new information processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15×15 nodes was trained by 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were applied to the same data set. These parallel applications allowed comparison of the new approach to the more traditional ones. In general the classification by the neural net is in correspondence with cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected, but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The gained classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.
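The net described here is a self-organising map; a minimal 1-D sketch of the training rule (think of one row of the 15×15 map, with an invented single-parameter data set rather than the 17 skull parameters):

```python
import random

def train_som(data, n_units=15, epochs=50, seed=0):
    """1-D self-organising map: each sample pulls its best-matching
    unit (BMU) and the BMU's neighbours toward it, with learning rate
    and neighbourhood radius decaying over the epochs."""
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for e in range(epochs):
        lr = 0.5 * (1 - e / epochs)                     # decaying rate
        radius = max(1, int(n_units / 2 * (1 - e / epochs)))
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            lo, hi = max(0, bmu - radius), min(n_units, bmu + radius + 1)
            for i in range(lo, hi):
                w[i] += lr * (x - w[i])
    return w

# two hypothetical "types" of a single nasal measurement (mm)
data = [24.0, 25.0, 24.5, 34.0, 35.0, 34.5]
weights = train_som(data)
print(min(weights), max(weights))   # units stay within the data range
```

In the 2-D case the neighbourhood is a patch of the map rather than an interval, which is what produces the locally separated "nose types" in the trained map.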
Sun oils: effect of the applied dose on SPF determined by using in vitro method.
Couteau, C; Paparis, E; Coiffard, L J M
2014-05-01
The relationship between skin cancer and exposure to the sun is now clearly established. It is therefore necessary to ensure that consumers have effective sun protection. The effectiveness of anti-solar products is quantified using a universal indicator, the SPF (sun protection factor). This value can be determined in two different ways: by the in vivo method (standard ISO 24444:2010) or by in vitro methods. The in vitro method was adopted for this study, for ethical reasons. The protective effect of sun products against non-melanoma cancers has been proven; consumers nevertheless need to be made aware of correct usage, because the quantity of sun protection product actually applied by the consumer is clearly lower than the recommended amount. Under these conditions, the following question can be asked: what level of protection is attained if half or even a quarter of the recommended dose of product is applied? In order to answer this question, 20 oils available on the market were tested in vitro at five different doses (5, 7.5, 10, 12.5 and 15.0 mg over a surface of 25 cm²). We showed that the SPF is related to the final mass of product by a ratio of two polynomial functions. The factors reducing the efficacy when the dose is divided by 2 are very variable, ranging from 2 to 4 according to which product is studied.
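The reported dose-response form, SPF as a ratio of two polynomials in the applied mass, can be sketched as follows; the coefficients are invented for illustration and not fitted to any tested oil:

```python
def spf_model(dose_mg, p_num, p_den):
    """Hypothetical SPF(dose) as a ratio of two polynomials, the
    functional form reported in the study; coefficients are made up."""
    num = sum(c * dose_mg ** i for i, c in enumerate(p_num))
    den = sum(c * dose_mg ** i for i, c in enumerate(p_den))
    return num / den

# illustrative coefficients only (low-order polynomials)
p_num, p_den = [0.0, 1.2, 0.9], [6.0, 0.4]
full = spf_model(15.0, p_num, p_den)   # recommended dose on 25 cm^2
half = spf_model(7.5, p_num, p_den)    # consumer applying half the dose
print(full / half)   # efficacy reduction factor when dose is halved
```

With these toy coefficients, halving the dose cuts the SPF by roughly a factor of 2.8, inside the 2-to-4 range the study reports across products.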
Variational method applied to two-component Ginzburg-Landau theory
NASA Astrophysics Data System (ADS)
Romaguera, Antonio R. de C.; Silva, K. J. S.
2013-09-01
In this paper, we apply a variational method to two-component superconductors, as in the MgB2 materials, using the two-component Ginzburg-Landau (GL) theory. We expand the order parameter in a series of eigenfunctions containing one or two terms in each component. We also assume azimuthal symmetry for the set of eigenfunctions used in the mathematical procedure. The extension of the GL theory to two components leads to the quantization of the magnetic flux in fractions of ϕ₀. We consider two kinds of component interaction potentials: Γ₁|Ψ_I|²|Ψ_II|² and Γ₂(Ψ_I*Ψ_II + Ψ_I Ψ_II*). The simplicity of the method allows one to implement it in a broad range of physical systems, such as hybrid magnetic-superconducting mesoscopic systems, texturized thin films, metallic hydrogen superfluid, and mesoscopic superconductors near inhomogeneous magnetic fields, simply by replacing the vector potential by its corresponding expression. As an example, we apply our results to a disk of radius R and thickness t.
Tester periodically registers dc amplifier characteristics
NASA Technical Reports Server (NTRS)
Cree, D.; Wenzel, G. E.
1966-01-01
Motor-driven switcher-recorder periodically registers the zero drift and gain drift signals of a dc amplifier subjected to changes in environment. A time coding method is used since several measurements are shared on a single recorder trace.
NASA Astrophysics Data System (ADS)
Hayashi, K.
2014-12-01
Engineers need more quantitative information. In order to apply geophysical methods to engineering design works, quantitative interpretation is very important. The presentation introduces several case studies from different countries around the world (Fig. 2) from the integrated and quantitative points of view.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
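One ingredient of such IMU-to-segment calibration is rotating the statically measured gravity direction onto the segment's vertical axis; a hedged sketch using Rodrigues' rotation formula (a simplification of the paper's full procedure, which also resolves the remaining heading axis):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def rotation_to_vertical(accel):
    """Rotation matrix taking the measured static gravity direction
    (sensor frame) onto the segment vertical [0, 0, 1] -- one axis of
    a simple sensor-to-segment alignment."""
    g = normalize(accel)
    z = [0.0, 0.0, 1.0]
    v = cross(g, z)                          # rotation axis (unscaled)
    c = sum(a * b for a, b in zip(g, z))     # cos of rotation angle
    s2 = sum(x * x for x in v)               # sin^2 of rotation angle
    if s2 < 1e-12:                           # already aligned
        return [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    # Rodrigues: R = I + K + K^2 (1 - c) / s^2, K = skew matrix of v
    K = [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[(1.0 if i == j else 0.0) + K[i][j] + K2[i][j] * (1 - c) / s2
             for j in range(3)] for i in range(3)]

R = rotation_to_vertical([0.3, 0.1, 9.7])   # tilted sensor at rest (m/s^2)
g = normalize([0.3, 0.1, 9.7])
aligned = [sum(R[i][j] * g[j] for j in range(3)) for i in range(3)]
print(aligned)   # gravity mapped onto the segment vertical
```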
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
A ``local observables'' method for wave mechanics applied to atomic hydrogen
NASA Astrophysics Data System (ADS)
Bowman, Peter J.
2008-12-01
An alternative method of deriving the values of the observables of atomic systems is presented. Rather than using operators and eigenvalues, the local-observables method uses the continuity equation together with current densities derived from wave functions that are solutions of the Dirac or Pauli equation. The method is applied to atomic hydrogen using the usual language of quantum mechanics rather than that of geometric algebra with which the method is often associated. The picture of the atom that emerges is one in which the electron density as a whole is rotating about a central axis. The results challenge some assumptions of conventional quantum mechanics. Electron spin is shown to be a property of the dynamical motion of the electron and not an intrinsic property of the electron, the ground state of hydrogen is shown to have an orbital angular momentum of ℏ, and excited states are shown to have angular momenta that are different from the eigenvalues of the usual quantum mechanical operators. The uncertainty relations are found not to be applicable to the orthogonal components of the angular momentum. No double electron spin gyromagnetic ratio is required to account for the observed magnetic moments, and the behavior of the atom in a magnetic field is described entirely in kinetic terms.
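A minimal sketch of the starting point this abstract describes, in one common convention (signs and prefactors vary between references; the spin term is the one obtained from the Gordon decomposition of the Dirac current):

```latex
\partial_t \rho + \nabla \cdot \mathbf{j} = 0, \qquad \rho = \psi^{\dagger}\psi,
```

```latex
\mathbf{j} = \frac{\hbar}{2mi}\bigl(\psi^{\dagger}\nabla\psi - (\nabla\psi^{\dagger})\psi\bigr)
           - \frac{q}{m}\,\mathbf{A}\,\psi^{\dagger}\psi
           + \frac{\hbar}{2m}\,\nabla \times \bigl(\psi^{\dagger}\boldsymbol{\sigma}\psi\bigr).
```

The last (spin) term is what allows spin to appear as a property of the local flow of the electron density rather than as a separate intrinsic observable.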
KRON's Method Applied to the Study of Electromagnetic Interference Occurring in Aerospace Systems
NASA Astrophysics Data System (ADS)
Leman, S.; Reineix, A.; Hoeppe, F.; Poiré, Y.; Mahoudi, M.; Démoulin, B.; Üstüner, F.; Rodriquez, V. P.
2012-05-01
In this paper, spacecraft and aircraft mock-ups are used to simulate the performance of KRON-based tools applied to the simulation of large EMC systems. These tools aim to assist engineers in the design phase of complex systems by effectively evaluating the EM disturbances between antennas, electronic equipment, and Portable Electronic Devices (PEDs) found in large systems. We use a topological analysis of the system to model independent sub-volumes such as antennas, cables, equipment, PEDs and cavity walls. Each of these sub-volumes is modelled by an appropriate method which can be based on, for example, analytical expressions, transmission line theory or other numerical tools such as the full wave FDFD method. This representation, associated with the electrical tensorial method of G. Kron, leads to reasonable simulation times (typically a few minutes) and accurate results. Because equivalent sub-models are built separately, the main originality of this method is that each sub-volume can be easily replaced by another one without rebuilding the entire system. Comparisons between measurements and simulations will also be presented.
Martin, Jennifer A.; Smith, Joshua E.; Warren, Mercedes; Chávez, Jorge L.; Hagen, Joshua A.; Kelley-Loughnane, Nancy
2015-01-01
Small molecules provide rich targets for biosensing applications due to their physiological implications as biomarkers of various aspects of human health and performance. Nucleic acid aptamers have been increasingly applied as recognition elements on biosensor platforms, but selecting aptamers toward small molecule targets requires special design considerations. This work describes modification and critical steps of a method designed to select structure-switching aptamers to small molecule targets. Binding sequences from a DNA library hybridized to complementary DNA capture probes on magnetic beads are separated from nonbinders via a target-induced change in conformation. This method is advantageous because sequences binding the support matrix (beads) will not be further amplified, and it does not require immobilization of the target molecule. However, the melting temperature of the capture probe and library is kept at or slightly above RT, such that sequences that dehybridize based on thermodynamics will also be present in the supernatant solution. This effectively limits the partitioning efficiency (ability to separate target binding sequences from nonbinders), and therefore many selection rounds will be required to remove background sequences. The reported method differs from previous structure-switching aptamer selections due to implementation of negative selection steps, simplified enrichment monitoring, and extension of the length of the capture probe following selection enrichment to provide enhanced stringency. The selected structure-switching aptamers are advantageous in a gold nanoparticle assay platform that reports the presence of a target molecule by the conformational change of the aptamer. The gold nanoparticle assay was applied because it provides a simple, rapid colorimetric readout that is beneficial in a clinical or deployed environment. Design and optimization considerations are presented for the assay as proof-of-principle work in buffer to
NASA Astrophysics Data System (ADS)
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition" (TFM-SVD). In this new method, we use statistical features of the time series as well as the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are computed. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that obtained using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method together with artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering, to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
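The matrix construction at the heart of TFM-SVD can be sketched as follows. The particular moments used here (mean, standard deviation, skewness, kurtosis) and the 2-by-4 matrix layout are illustrative assumptions; the paper's exact fixed structure may differ.

```python
import numpy as np

def _moments(x):
    """First four statistical moments of a 1-D series."""
    m, s = x.mean(), x.std()
    z = (x - m) / s
    return np.array([m, s, (z**3).mean(), (z**4).mean()])

def tfm_svd(signal):
    """TFM-SVD sketch: stack time-domain and frequency-domain moment
    vectors into a fixed 2x4 matrix and return its singular values
    (several values, unlike the single SV an n-by-1 array would give)."""
    x = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(x))
    M = np.vstack([_moments(x), _moments(spectrum)])
    return np.linalg.svd(M, compute_uv=False)
```

Feeding these singular values to a classifier (e.g. an ANN, as in the paper) replaces the raw BCG waveform with a compact feature vector that is largely insensitive to waveform latency.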
Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp
2016-06-01
Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows Applied Quality Training (FAQT) curriculum for cardiology fellows using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which was significantly increased to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment was conducted after the fellows completed didactic training only; median scores were not different from the baseline (median, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.
NASA Astrophysics Data System (ADS)
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the frame of research conducted to develop efficient strategies for the investigation of rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provides improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities have been characterized as a source of seismic signal and used in our research as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for use in the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wave field generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one per receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflective wave field was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission
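The central interferometric step, turning passive tunnelling noise into virtual shot records by cross-correlating each geophone trace with a pilot trace, can be sketched as below. This is a toy illustration of the principle, not the authors' processing chain.

```python
import numpy as np

def virtual_shot_gather(traces, pilot):
    """Cross-correlate every receiver trace with the pilot signal.
    Each correlation approximates one trace of a virtual shot record
    whose source sits at the pilot-recording position."""
    return np.array([np.correlate(tr, pilot, mode="full") for tr in traces])

def peak_lag(xcorr, pilot_len):
    """Lag (in samples) of the correlation maximum, i.e. the
    apparent travel time relative to the pilot."""
    return int(np.argmax(xcorr)) - (pilot_len - 1)
```

Gathering these correlations over all receivers yields pseudo shot records that can then be processed like an active-source reflection survey.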
Niida, Yo; Kuroda, Mondo; Mitani, Yusuke; Okumura, Akiko; Yokoi, Ayano
2012-11-01
Establishing a simple and effective mutation screening method is one of the most pressing problems in applying genetic diagnosis to clinical use. Because there is no reliable and inexpensive screening system, amplifying every coding exon by PCR and performing direct sequencing is the gold standard strategy even today. However, this approach is expensive and time consuming, especially when the gene size or sample number is large. Previously, we developed CEL nuclease mediated heteroduplex incision with polyacrylamide gel electrophoresis and silver staining (CHIPS) as an ideal simple mutation screening system constructed with only conventional apparatus and commercially available reagents. In this study, we evaluated the utility of CHIPS technology for genetic diagnosis in clinical practice by applying this system to screening for COL2A1, WRN and RPS6KA3 mutations in newly diagnosed patients with Stickler syndrome (autosomal dominant inheritance), Werner syndrome (autosomal recessive inheritance) and Coffin-Lowry syndrome (X-linked inheritance), respectively. In all three genes, CHIPS detected all DNA variations, including disease-causative mutations, within a day. Direct sequencing of all coding exons of these genes confirmed 100% sensitivity and specificity. We demonstrate the high sensitivity, cost effectiveness and reliability of this simple system, which is compatible with all modes of inheritance. Because it requires only low-technology equipment, CHIPS is ready for use and can potentially be disseminated to any laboratory in the world.
Lopes, Fernanda Cristina Rezende; Tannous, Katia; Rueda-Ordóñez, Yesid Javier
2016-11-01
This work studies the decomposition kinetics of guarana seed residue using a thermogravimetric analyzer under a synthetic air atmosphere, applying heating rates of 5, 10, and 15°C/min from room temperature to 900°C. Three thermal decomposition stages were identified: dehydration (25.1-160°C), oxidative pyrolysis (240-370°C), and combustion (350-650°C). The activation energies, reaction model, and pre-exponential factor were determined through four isoconversional methods, master plots, and linearization of the conversion rate equation, respectively. A scheme of two consecutive reactions was applied, validating the kinetic parameters of the first-order reaction and two-dimensional diffusion models for the oxidative pyrolysis stage (149.57 kJ/mol, 6.97×10^10 1/s) and the combustion stage (77.98 kJ/mol, 98.61 1/s), respectively. The comparison between theoretical and experimental conversion and conversion rate showed good agreement, with average deviation lower than 2%, indicating that these results could be used for modeling of guarana seed residue. Copyright © 2016 Elsevier Ltd. All rights reserved.
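As a rough illustration, the first-order model reported for the oxidative-pyrolysis stage can be integrated under a constant heating rate. The activation energy and pre-exponential factor below are the values quoted in the abstract; the choice of 10 °C/min (one of the reported rates), the temperature grid, and the simple Euler integration are illustrative assumptions.

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
E = 149.57e3     # activation energy, J/mol (oxidative pyrolysis stage)
A = 6.97e10      # pre-exponential factor, 1/s
beta = 10 / 60   # heating rate: 10 C/min expressed in K/s

# First-order model under linear heating:
#   d(alpha)/dT = (A / beta) * exp(-E / (R T)) * (1 - alpha)
T = np.linspace(300.0, 1173.0, 100_000)   # K, up to ~900 C
alpha = np.zeros_like(T)
dT = T[1] - T[0]
for i in range(1, T.size):
    k = (A / beta) * np.exp(-E / (R * T[i - 1]))
    alpha[i] = min(1.0, alpha[i - 1] + k * (1.0 - alpha[i - 1]) * dT)
```

With these parameters the predicted conversion is negligible at low temperature and completes in a window broadly consistent with the oxidative-pyrolysis stage identified by thermogravimetry.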
Bamberger, Katharine T
2016-03-01
The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. Particularly when done as part of a multiple-timescale design (in bursts over macro timescales), ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members, beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.
A general optimization method applied to a vdW-DF functional for water
NASA Astrophysics Data System (ADS)
Fritz, Michelle; Soler, Jose M.; Fernandez-Serra, Marivi
In particularly delicate systems, like liquid water, ab initio exchange and correlation functionals are simply not accurate enough for many practical applications. In these cases, fitting the functional to reference data is a sensible alternative to empirical interatomic potentials. However, a global optimization requires functional forms that depend on many parameters, and the usual trial-and-error strategy becomes cumbersome and suboptimal. We have developed a general and powerful optimization scheme called data projection onto parameter space (DPPS) and applied it to the optimization of a van der Waals density functional (vdW-DF) for water. In an arbitrarily large parameter space, DPPS solves for a vector of unknown parameters given a set of known data, while poorly sampled subspaces are constrained, via Bayes' theorem, by the physically motivated functional shape of ab initio functionals. We present a new GGA exchange functional that has been optimized with the DPPS method for 1-body, 2-body, and 3-body energies of water systems, together with results from testing the performance of the optimized functional when applied to the calculation of ice cohesion energies and ab initio liquid water simulations. We found that our optimized functional improves the description of both liquid water and ice when compared to other versions of GGA exchange.
NASA Astrophysics Data System (ADS)
Albelda, J.; Denia, F. D.; Torres, M. I.; Fuenmayor, F. J.
2007-06-01
To carry out the acoustic analysis of dissipative silencers with uniform cross-section, the application of the mode matching method at the geometrical discontinuities is an attractive option from a computational point of view. The consideration of this methodology assumes, in general, that the modes associated with the transversal geometry of each element with uniform cross-section are known for the excitation frequencies considered in the analysis. The calculation of the transversal modes is not, however, a simple task when the acoustic system involves perforated elements and absorbent materials. The current work presents a modal approach to calculate the transversal modes and the corresponding axial wavenumbers for dissipative mufflers of uniform (but arbitrary) cross-section. The proposed technique is based on the division of the transversal section into subdomains and the subsequent use of a substructuring procedure with two sets of modes to improve the convergence. The former set of modes fulfils the condition of zero pressure at the common boundary between transversal subdomains while the latter satisfies the condition of zero derivative in the direction normal to the boundary. The approach leads to a versatile methodology with a moderate computational effort that can be applied to mufflers commonly found in real applications. To validate the procedure presented in this work, comparisons are provided with finite element predictions and results available in the literature, showing a good agreement. In addition, the procedure is applied to an example of practical interest.
Moreno-Romero, Jordi; Santos-González, Juan; Hennig, Lars; Köhler, Claudia
2017-02-01
The early endosperm tissue of dicot species is very difficult to isolate by manual dissection. This protocol details how to apply the INTACT (isolation of nuclei tagged in specific cell types) system for isolating early endosperm nuclei of Arabidopsis at high purity and how to generate parental-specific epigenome profiles. As a Protocol Extension, this article describes an adaptation of an existing Nature Protocol that details the use of the INTACT method for purification of root nuclei. We address how to obtain the INTACT lines, generate the starting material and purify the nuclei. We describe a method that allows purity assessment, which has not been previously addressed. The purified nuclei can be used for ChIP and DNA bisulfite treatment followed by next-generation sequencing (seq) to study histone modifications and DNA methylation profiles, respectively. By using two different Arabidopsis accessions as parents that differ by a large number of single-nucleotide polymorphisms (SNPs), we were able to distinguish the parental origin of epigenetic modifications. Our protocol describes the only working method to our knowledge for generating parental-specific epigenome profiles of the early Arabidopsis endosperm. The complete protocol, from silique collection to finished libraries, can be completed in 2 d for bisulfite-seq (BS-seq) and 3 to 4 d for ChIP-seq experiments.This protocol is an extension to: Nat. Protoc. 6, 56-68 (2011); doi:10.1038/nprot.2010.175; published online 16 December 2010.
Quantum Monte Carlo method applied to non-Markovian barrier transmission
NASA Astrophysics Data System (ADS)
Hupin, Guillaume; Lacroix, Denis
2010-01-01
In nuclear fusion and fission, fluctuation and dissipation arise because of the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are expected to be important. In this work, a new approach, based on quantum Monte Carlo, that addresses this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte Carlo method is applied to systems with quadratic potentials. In all ranges of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as Nakajima-Zwanzig or time-convolutionless, shows that only the latter can be competitive if the expansion in terms of the coupling constant is made at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated by different approaches, including the Markovian limit. Large differences from the exact result are seen in the latter case, or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling or the quantum Monte Carlo method is used, perfect agreement is obtained.
An interface tracking method applied to morphological evolution during phase change
NASA Technical Reports Server (NTRS)
Shyy, W.; Udaykumar, H. S.; Liang, S.-J.
1992-01-01
The focus of this work is the numerical simulation of interface motion during solidification of pure materials. First, the applicability of the oft-used quasi-stationary approximation for interface motion is assessed. It is seen that such an approximation results in poor accuracy for nontrivial Stefan numbers. Solution of the full set of equations, including grid movement terms, yields close agreement with analytical results. Next, a generic interface tracking procedure is designed, which overcomes the restriction of single-valuedness of the interface imposed by commonly used mapping methods. This method easily incorporates interface phenomena involving curvature, which become important at the smaller scales of a deformed interface. The method is then applied to study the development of a morphologically unstable phase interface. The issue of appropriate scaling has been addressed. The Gibbs-Thomson effect for curved interfaces has been included. The evolution of the interface, with the competing mechanisms of undercooling and surface tension, is found to culminate in tip-splitting, cusp formation and persistent cellular development.
Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui
2011-11-01
Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences on legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called 'gene patents' by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing.
Xin, Li; Wenxue, Hong; Jialin, Song; Jiannan, Kang
2005-01-01
We endeavor to provide a novel tool to evaluate the environmental comfort level in a Health Smart Home (HSH). The HSH is regarded as a good alternative for supporting the independent life of elderly people and people with disabilities. Numerous intelligent devices, installed within a home environment, can provide the resident with continuous monitoring and a comfortable environment. In this paper, a novel method of evaluating the environmental comfort level is provided. The intelligent sensor is a fuzzy comfort sensor that can measure and fuse environmental parameters. Based upon the results, it further gives a linguistic description of the environmental comfort level, in the manner of an expert system. The core of the sensor is multi-parameter information fusion. Similar to human behavior, the sensor evaluates the comfort level of the surrounding environment based on symbolic measurement theory. We applied chart representation theory from multivariate analysis in the biomedical engineering field to create the comfort sensor's linguistic concepts. We achieved better performance when using this method to perform multi-parameter fusion and fuzzification. It is our belief that this method can be used both in intelligent biological sensing and in many other areas where quantitative-to-qualitative information transformation is needed.
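The fusion step described above, from numeric environmental readings to a linguistic comfort label, can be sketched with simple fuzzy memberships. The triangular membership ranges, the min-rule fusion, and the label thresholds below are invented for illustration and are not the authors' calibrated design.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_label(temp_c, rel_humidity):
    """Toy fuzzy comfort sensor: fuse temperature and relative-humidity
    memberships with a min rule and report a linguistic label,
    in the manner of an expert system."""
    comfort = min(tri(temp_c, 15, 22, 29), tri(rel_humidity, 20, 45, 70))
    if comfort > 0.7:
        return "comfortable"
    return "acceptable" if comfort > 0.3 else "uncomfortable"
```

Adding further parameters (light, noise, air quality) only extends the min over additional membership values, which is what makes this style of fusion attractive for multi-sensor homes.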
Photonic simulation method applied to the study of structural color in Myxomycetes.
Dolinko, Andrés; Skigin, Diana; Inchaussandague, Marina; Carmaran, Cecilia
2012-07-02
We present a novel simulation method to investigate the multicolored effect of Diachea leucopoda (Physarales order, Myxomycetes class), a microorganism that has a characteristic pointillistic iridescent appearance. It was shown that this appearance is of structural origin and is produced within the peridium (the protective layer that encloses the mass of spores), which is basically a corrugated sheet of a transparent material. The main characteristics of the observed color were explained in terms of interference effects using a simple model of a homogeneous planar slab. In this paper we apply a novel simulation method to investigate the electromagnetic response of such a structure in more detail, i.e., taking into account the inhomogeneities of the biological material within the peridium and its curvature. We show that both features, which could not be considered within the simplified model, affect the observed color. The proposed method has great potential for the study of biological structures, which present a high degree of complexity in the geometrical shapes as well as in the materials involved.
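The homogeneous-planar-slab interference model mentioned above can be sketched with the standard Airy reflectance formula at normal incidence. The refractive index and thickness below are illustrative placeholders, not measured peridium values.

```python
import numpy as np

def slab_reflectance(wavelength_nm, n=1.45, d_nm=500.0):
    """Airy reflectance of a transparent slab (index n, thickness d)
    in air at normal incidence. Interference between the front- and
    back-surface reflections makes R wavelength-selective, which is
    the mechanism behind the structural color."""
    r = (1.0 - n) / (1.0 + n)                 # single-interface amplitude
    phase = np.exp(2j * np.pi * 2 * n * d_nm / wavelength_nm)  # round trip
    amp = r * (1 - phase) / (1 - r**2 * phase)
    return np.abs(amp) ** 2
```

Reflectance vanishes when the optical round trip 2nd equals an integer number of wavelengths and peaks at half-integer multiples, so sweeping the wavelength over the visible range reproduces the interference fringes of the slab model.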
Resampling method for applying density-dependent habitat selection theory to wildlife surveys.
Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel
2015-01-01
Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
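The block-resampling step can be sketched as follows. The square blocks, the left/right split, and the uniform random placement are illustrative choices consistent with, but not necessarily identical to, the published procedure.

```python
import numpy as np

def isodar_counts(x, y, extent, block, n_rep=100, seed=None):
    """Drop n_rep random square blocks on a survey area of size
    extent=(w, h), split each block into two adjacent sub-blocks of
    equal size, and count animal locations (x, y) falling in each half."""
    rng = np.random.default_rng(seed)
    w, h = extent
    out = np.empty((n_rep, 2), dtype=int)
    for i in range(n_rep):
        bx = rng.uniform(0, w - block)
        by = rng.uniform(0, h - block)
        in_y = (y >= by) & (y < by + block)
        left = in_y & (x >= bx) & (x < bx + block / 2)
        right = in_y & (x >= bx + block / 2) & (x < bx + block)
        out[i] = left.sum(), right.sum()
    return out
```

Regressing the paired counts, optionally together with habitat covariates differenced between sub-blocks, then yields the isodar without predefined habitat types.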
Resampling Method for Applying Density-Dependent Habitat Selection Theory to Wildlife Surveys
Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel
2015-01-01
Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
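The block-resampling procedure this abstract describes can be sketched in a few lines; the survey grid, block size, and Poisson-distributed counts below are illustrative assumptions, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wildlife survey: a 100x100 grid of animal counts.
survey = rng.poisson(lam=3.0, size=(100, 100))

def resample_abundance(grid, block=10, n_iter=100, rng=rng):
    """Randomly place blocks over the survey area, divide each block into
    two adjacent sub-blocks of equal size, and estimate animal abundance
    within the two sub-blocks (repeated n_iter times, as in the paper)."""
    pairs = []
    for _ in range(n_iter):
        r = rng.integers(0, grid.shape[0] - block + 1)
        c = rng.integers(0, grid.shape[1] - 2 * block + 1)
        left = grid[r:r + block, c:c + block].sum()
        right = grid[r:r + block, c + block:c + 2 * block].sum()
        pairs.append((left, right))
    return np.array(pairs)

pairs = resample_abundance(survey)
# An isodar analysis would then regress one abundance column on the other,
# together with the difference in habitat covariates between sub-blocks.
```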
A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale
Mircioiu, Constantin; Atkinson, Jeffrey
2017-01-01
A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple “yes” or “no” but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with “large” (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 “very important” and 4 “essential”), a “score analysis” based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restraining the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses. PMID:28970438
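The paper's head-to-head comparison can be reproduced in miniature with SciPy; the subgroup sizes and response distributions below are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 4-point Likert responses from two professional subgroups.
group_a = rng.choice([1, 2, 3, 4], size=40, p=[0.05, 0.15, 0.40, 0.40])
group_b = rng.choice([1, 2, 3, 4], size=40, p=[0.10, 0.30, 0.40, 0.20])

# Parametric analysis: two-sample t-test on the raw ordinal codes.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric analysis: Mann-Whitney U test on the same data.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# "Score analysis": collapse to binary scores on the upper part of the
# scale (ranks 3 "very important" and 4 "essential").
share_a = (group_a >= 3).mean()
share_b = (group_b >= 3).mean()
```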
Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system
NASA Astrophysics Data System (ADS)
Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew
2016-05-01
Many mechanical and electrical systems have utilized the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing significant growth in popularity. Because of the advantages of PID controllers, UAVs use them for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and a successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two gain-tuning algorithms, based on the method of steepest descent and on Newton's minimization of an objective function, and compares the results of applying them in conjunction with a PD controller on a quadrotor system.
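A minimal sketch of gradient-based gain tuning, assuming a double-integrator plant as a crude stand-in for one quadrotor axis; the objective (integrated squared tracking error), numerical gradient, and step size are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def step_error(gains, dt=0.01, T=2.0):
    """Integrated squared tracking error of a PD-controlled double
    integrator following a unit step reference."""
    kp, kd = gains
    x, v, err = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        u = kp * (1.0 - x) - kd * v      # PD control law
        v += u * dt                      # explicit Euler integration
        x += v * dt
        err += (1.0 - x) ** 2 * dt
    return err

def num_grad(f, g, h=1e-4):
    """Central-difference gradient of f at g."""
    g = np.asarray(g, dtype=float)
    return np.array([(f(g + h * e) - f(g - h * e)) / (2 * h)
                     for e in np.eye(len(g))])

# Steepest descent on the PD gains (Newton's method would instead scale
# the step by the inverse Hessian of the objective).
gains = np.array([2.0, 1.0])
for _ in range(200):
    gains = gains - 0.05 * num_grad(step_error, gains)

tuned_cost = step_error(gains)
```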
Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N
2012-01-01
The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. PMID:22647689
An implicit LU scheme for the Euler equations applied to arbitrary cascades [new method of factoring]
NASA Technical Reports Server (NTRS)
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
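The attraction of the LU factorization — each triangular factor is inverted by a single forward or backward sweep — can be illustrated on a small model problem; the tridiagonal matrix below is a scalar stand-in for the block operators of the Euler scheme, and SciPy supplies the factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Tridiagonal implicit operator for a 1D model problem.
n = 50
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)

# A = L U: the lower factor is inverted by a forward sweep, the upper
# factor by a backward sweep -- no full matrix inversion is needed.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)
```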
Practical methods of tracking of nonstationary time series applied to real-world data
NASA Astrophysics Data System (ADS)
Nabney, Ian T.; McLachlan, Alan; Lowe, David
1996-03-01
In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Two real world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters, to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real world non-stationary data through the use of more complex adaptive models.
NASA Technical Reports Server (NTRS)
Lukemire, Alan T. (Inventor)
1993-01-01
A pulse-width modulated DC-to-DC power converter is described, including a first inductor (i.e., a transformer, or an equivalent fixed inductor equal to the inductance of the transformer's secondary winding) coupled across a source of DC input voltage via a transistor switch that is rendered alternately conductive (ON) and nonconductive (OFF) in accordance with a signal from a feedback control circuit. A first capacitor capacitively couples one side of the first inductor to a second inductor, which is connected to a second capacitor coupled to the other side of the first inductor. A circuit load shunts the second capacitor. A semiconductor diode is additionally coupled from a common circuit connection between the first capacitor and the second inductor to the other side of the first inductor. A current sense transformer generating a current feedback signal for the switch control circuit is directly coupled in series with the other side of the first inductor, so that the first capacitor, the second inductor, and the current sense transformer are connected in series through the first inductor. The inductance values of the first and second inductors, moreover, are made identical. This converter topology results in simultaneous volt-second balance in the first inductance and ampere-second balance in the current sense transformer.
NASA Technical Reports Server (NTRS)
Lukemire, Alan T. (Inventor)
1995-01-01
A pulse-width modulated DC-to-DC power converter is described, including a first inductor (i.e., a transformer, or an equivalent fixed inductor equal to the inductance of the transformer's secondary winding) coupled across a source of DC input voltage via a transistor switch that is rendered alternately conductive (ON) and nonconductive (OFF) in accordance with a signal from a feedback control circuit. A first capacitor capacitively couples one side of the first inductor to a second inductor, which is connected to a second capacitor coupled to the other side of the first inductor. A circuit load shunts the second capacitor. A semiconductor diode is additionally coupled from a common circuit connection between the first capacitor and the second inductor to the other side of the first inductor. A current sense transformer generating a current feedback signal for the switch control circuit is directly coupled in series with the other side of the first inductor, so that the first capacitor, the second inductor, and the current sense transformer are connected in series through the first inductor. The inductance values of the first and second inductors, moreover, are made identical. This converter topology results in simultaneous volt-second balance in the first inductance and ampere-second balance in the current sense transformer.
Experimental design applied to spin coating of 2D colloidal crystal masks: a relevant method?
Colson, Pierre; Cloots, Rudi; Henrist, Catherine
2011-11-01
Monolayers of colloidal spheres are used as masks in nanosphere lithography (NSL) for the selective deposition of nanostructured layers. Several methods exist for the formation of self-organized particle monolayers, among which spin coating appears to be very promising. However, a spin coating process is defined by several parameters (ramps, rotation speeds, and durations), all of which influence the spreading and drying of the droplet containing the particles. Moreover, scientists are confronted with the formation of numerous defects in spin coated layers, limiting well-ordered areas to a few square micrometers. So far, empiricism has mainly ruled the world of nanoparticle self-organization by spin coating, and much of the literature is experimentally based. Therefore, the development of experimental protocols to control the ordering of particles is a major goal for further progress in NSL. We applied experimental design to spin coating to evaluate the efficiency of this method for extracting and modeling the relationships between the experimental parameters and the degree of ordering in the particle monolayers. A set of experiments was generated by the MODDE software and applied to the spin coating of a latex suspension (diameter 490 nm). We calculated the ordering with a homemade image analysis tool. The results of partial least squares (PLS) modeling show that the proposed mathematical model only fits data from strict monolayers and is not predictive for new sets of parameters. We submitted the data to principal component analysis (PCA), which was able to explain 91% of the results when based on strictly monolayered samples. PCA shows that the ordering was positively correlated with the ramp time and negatively correlated with the first rotation speed. We obtained large defect-free domains with the best set of parameters tested in this study. This protocol leads to areas of 200 μm², which has not been reported before.
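The PCA step can be sketched with plain NumPy; the process parameters, the toy correlation between ramp time / rotation speed and ordering, and the sample size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical spin-coating runs: ramp time, first rotation speed, and a
# toy ordering score positively tied to ramp and negatively to speed.
n_runs = 30
ramp = rng.uniform(1.0, 10.0, n_runs)
speed = rng.uniform(500.0, 3000.0, n_runs)
order = 0.5 * ramp - 0.002 * speed + rng.normal(0.0, 0.5, n_runs)

X = np.column_stack([ramp, speed, order])
Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable

# PCA via SVD of the standardized data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)             # variance explained per component
loadings = Vt                               # rows: components; cols: variables
```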
NASA Astrophysics Data System (ADS)
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. Several uncertainty analysis and/or prediction methods are available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals that occur in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without any such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from the uniform interval and quantile regression methods. An important part of the basis on which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable methodological characteristic, i.e. applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
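Two of the validation measures named above (PICP and MPI) are simple to compute; the observation and interval values below are invented toy numbers:

```python
import numpy as np

def picp(obs, lower, upper):
    """Prediction Interval Coverage Probability: fraction of observations
    that fall inside their predicted interval."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    return float(np.mean((obs >= lower) & (obs <= upper)))

def mpi(lower, upper):
    """Mean Prediction Interval width."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))

obs = [1.0, 2.0, 3.0, 4.0]
lo = [0.5, 1.8, 3.2, 3.0]
hi = [1.5, 2.5, 3.8, 5.0]
# picp(obs, lo, hi) → 0.75 (the third observation falls below its interval)
```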
DC-Compensated Current Transformer †
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-01
Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830
Output feedback trajectory stabilization of the uncertainty DC servomechanism system.
Aguilar-Ibañez, Carlos; Garrido-Moctezuma, Ruben; Davila, Jorge
2012-11-01
This work proposes a solution for the output feedback trajectory-tracking problem in the case of an uncertain DC servomechanism system. The system consists of a pendulum actuated by a DC motor and subject to a time-varying bounded disturbance. The control law consists of a Proportional Derivative controller and an uncertainty estimator that allows compensating for the effects of the unknown bounded perturbation. Because the motor velocity state is not available from measurements, a second-order sliding-mode observer permits the estimation of this variable in finite time. This last feature allows applying the Separation Principle. The convergence analysis is carried out by means of the Lyapunov method. Results obtained from numerical simulations and experiments on a laboratory prototype show the performance of the closed-loop system.
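A much-simplified numerical sketch of the control problem, assuming a unit-inertia pendulum model and replacing the paper's second-order sliding-mode observer with a crude first-difference velocity estimate; all gains and disturbance parameters are invented:

```python
import numpy as np

# PD trajectory tracking for a motor-driven pendulum with a bounded
# time-varying disturbance; velocity is reconstructed from position.
dt, T = 0.001, 5.0
kp, kd = 40.0, 10.0
theta, omega = 0.0, 0.0
theta_prev = 0.0
errs = []
for k in range(int(T / dt)):
    t = k * dt
    ref = 0.5 * np.sin(t)                    # desired trajectory
    omega_hat = (theta - theta_prev) / dt    # crude velocity estimate
    u = kp * (ref - theta) - kd * omega_hat  # PD law using the estimate
    theta_prev = theta
    # Unit-inertia pendulum dynamics with gravity and a bounded disturbance.
    alpha = u - 9.81 * np.sin(theta) + 0.2 * np.sin(3.0 * t)
    omega += alpha * dt
    theta += omega * dt
    errs.append(ref - theta)

# Steady-state tracking error over the final second of simulation.
rms_err = float(np.sqrt(np.mean(np.square(errs[-1000:]))))
```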
Intelligent dc-dc Converter Technology Developed and Tested
NASA Technical Reports Server (NTRS)
Button, Robert M.
2001-01-01
The NASA Glenn Research Center and the Cleveland State University have developed a digitally controlled dc-dc converter to research the benefits of flexible, digital control on power electronics and systems. Initial research and testing has shown that conventional dc-dc converters can benefit from improved performance by using digital-signal processors and nonlinear control algorithms.
A novel wireless power and data transmission AC to DC converter for an implantable device.
Liu, Jhao-Yan; Tang, Kea-Tiong
2013-01-01
This article presents a novel AC-to-DC converter implemented in a standard CMOS technology, applied to wireless power transmission. This circuit combines the functions of the rectifier and the DC-to-DC converter, rather than using a rectifier to convert AC to DC and then supplying the required voltage with a regulator as in the traditional method. This modification reduces the power consumption and the area of the circuit. The circuit also transfers the loading condition back to the external circuit by load shift keying (LSK), indicating whether the input power is insufficient or excessive, which increases the efficiency of the total system. The AC-to-DC converter is fabricated in the TSMC 90 nm CMOS process. The circuit area is 0.071 mm². The circuit can produce a 1 V DC voltage with a maximum output current of 10 mA from an AC input ranging from 1.5 V to 2 V, at 1 MHz to 10 MHz.
Das, B; Meirovitch, H; Navon, I M
2003-07-30
Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1222-1231, 2003
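The comparison can be mimicked with SciPy's optimizers; the Rosenbrock function stands in for the rugged molecular energy surface, and Newton-CG is used as a (line-search) truncated Newton variant — these are illustrative substitutes, not the paper's implementations:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# 20-dimensional Rosenbrock function as a stand-in for a rugged
# potential energy surface; start slightly off the minimum at (1, ..., 1).
x0 = np.full(20, 1.2)

res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
res_tn = minimize(rosen, x0, jac=rosen_der, method="Newton-CG")

# Compare function evaluations, the efficiency metric used in the paper
# (together with CPU time).
counts = {"L-BFGS": res_lbfgs.nfev, "TN": res_tn.nfev}
```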
Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method
Bhagwat, Nikhil V.
2005-01-01
In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Besides, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires the assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be significant for industries in which the cost associated with the creation of a new station is not the same; for such industries, the results obtained using the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of these can be effectively used as alternate objective functions.
The generalized cross-validation method applied to geophysical linear traveltime tomography
NASA Astrophysics Data System (ADS)
Bassrei, A.; Oliveira, N. P.
2009-12-01
The oil industry is the major user of applied geophysics methods for subsurface imaging. Among the different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced into exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers (a kinematic approach), and those that use the wave amplitude itself (a dynamic approach). Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the matrix involved is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. A crucial problem in regularization is the selection of the regularization parameter lambda. We use generalized cross-validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is applied here to traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, such as a fault and a reservoir. The results using GCV are very good, including those contaminated with noise and those using different regularization orders, attesting to the feasibility of this technique.
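A minimal GCV sketch for Tikhonov regularization; for brevity this uses an SVD and zeroth-order (identity) regularization rather than the derivative matrices of the paper, and the ill-conditioned test matrix and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ill-conditioned linear system G m = d standing in for a
# traveltime tomography operator (Vandermonde matrices are notoriously
# ill-conditioned).
n = 30
G = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)
m_true = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n))
d = G @ m_true + 0.01 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(G)

def gcv(lam):
    """GCV function for zeroth-order Tikhonov regularization."""
    f = s**2 / (s**2 + lam)          # filter factors of the influence matrix
    beta = U.T @ d
    resid = np.sum(((1.0 - f) * beta) ** 2)
    return n * resid / (n - np.sum(f)) ** 2

# Pick lambda at the minimizer of the GCV function.
lams = np.logspace(-8, 2, 50)
vals = [gcv(l) for l in lams]
best_lam = lams[int(np.argmin(vals))]

# Regularized solution at the chosen lambda.
m_est = Vt.T @ ((s / (s**2 + best_lam)) * (U.T @ d))
```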
NASA Astrophysics Data System (ADS)
Kim, A.; Dreger, D. S.; Taira, T.
2009-12-01
In this study, we developed a finite-source inversion method using the waveforms of small earthquakes as empirical Green's functions (eGf) to study the rupture process of micro-earthquakes on the San Andreas fault. This method differs from the ordinary eGf deconvolution method, which deconvolves the seismogram of the smaller, simpler-source event from the seismogram of the larger event, recovering the moment rate function of the larger, more complex-source event. In the eGf deconvolution method, spectral-domain deconvolution is commonly used, in which the small-earthquake spectrum is divided from the larger target event spectrum, and low spectral values are replaced by a water-level value to damp the effect of division by small numbers (e.g. Clayton and Wiggins, 1976). The water-level is chosen by trial and error. Such rough regularization of the spectral ratio can result in the solution having unrealistic negative values and short-period oscillations. The amplitude and duration of the moment rate functions can also be influenced by the adopted water-level value. In this study we propose to use the eGf waveform directly in the inversion, rather than the moment rate function obtained from spectral division. In this approach the eGf is treated as the Green's function from each subfault and, contrary to the deconvolution approach, can make use of multiple eGfs distributed over the fault plane. The method can therefore be applied to short source-receiver distances, since the variation in radiation pattern due to source-receiver geometry is better accounted for. Numerical tests of the waveform eGf inversion method indicate that, in the case where the large slip asperity is not located at the hypocenter, an eGf located near the asperity recovers the prescribed model better than an eGf co-located with the main shock hypocenter. Synthetic analyses also show that using multiple eGfs can better constrain the slip model than using only one eGf in the
Symmetry analysis for nonlinear time reversal methods applied to nonlinear acoustic imaging
NASA Astrophysics Data System (ADS)
Dos Santos, Serge; Chaline, Jennifer
2015-10-01
Using symmetry invariance, nonlinear Time Reversal (TR) and reciprocity properties, the classical NEWS methods are supplemented and improved by new excitations having the intrinsic property of enlarging the frequency analysis bandwidth and time-domain scales, now with both medical acoustics and electromagnetic applications. The analysis of invariant quantities is a well-known tool often used in nonlinear acoustics to simplify complex equations. Based on a fundamental physical principle known as symmetry analysis, this approach consists in finding judicious variables, intrinsically scale-dependent, that are able to describe all stages of behaviour on the same theoretical foundation. Building on previously published results in the nonlinear acoustics area, some practical implementations are proposed as a new way to define TR-NEWS-based methods applied to NDT and medical bubble-based non-destructive imaging. This paper aims to show how symmetry analysis can help us define new methodologies and new experimental set-ups involving modern signal processing tools. Some examples of practical realizations are proposed in the context of biomedical non-destructive imaging using Ultrasound Contrast Agents (UCAs), where symmetry and invariance properties allow us to define a microscopic scale-invariant experimental set-up describing the intrinsic symmetries of the microscopic complex system.
A new feature extraction method for signal classification applied to cord dorsum potential detection
NASA Astrophysics Data System (ADS)
Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.
2012-10-01
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
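The classification stage can be sketched with scikit-learn's gradient boosting trees; the per-maximum coefficients, the label rule, and the sample size below are invented stand-ins for the recorded CDP features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)

# Hypothetical feature extraction: for each recorded signal, every main
# local maximum gets a coefficient built from its amplitude and its
# distance to the dominant maximum (as the abstract describes).
n_signals = 200
amps = rng.uniform(0.0, 1.0, size=(n_signals, 5))
dists = rng.uniform(0.0, 1.0, size=(n_signals, 5))
X = amps / (1.0 + dists)                 # toy per-maximum coefficients

# Toy labels: "CDP of interest" when some coefficient is large enough.
y = (X.max(axis=1) > 0.55).astype(int)

# Gradient boosting classification trees, as used in the paper.
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```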
NASA Astrophysics Data System (ADS)
Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana
2016-12-01
Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly related to patients with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes techniques for designing and fabricating a facial prosthesis using computer-aided design and manufacturing (CAD/CAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. A normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to preview the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for use in facial rehabilitation to provide a better quality of life.
Applying community-oriented primary care methods in British general practice: a case study.
Iliffe, Steve; Lenihan, Penny; Wallace, Paul; Drennan, Vari; Blanchard, Martin; Harris, Andrew
2002-01-01
BACKGROUND: The '75 and over' assessments built into the 1990 contract for general practice have failed to enthuse primary care teams or make a significant impact on the health of older people. Alternative methods for improving the health of older people living at home are being sought. AIM: To test the feasibility of applying community-oriented primary care methodology to a relatively deprived sub-population of older people in a relatively deprived area. DESIGN OF STUDY: A combination of developmental and triangulation approaches to data analysis. SETTING: Four general practices in an inner London borough. METHOD: A community-oriented primary care approach was used to initiate innovative care for older people, supported financially by the health authority and practically by primary care academics. RESULTS: All four practices identified problems needing attention in the older population, developed different projects focused on particular needs among older people, and tested them in practice. Patient and public involvement were central to the design and implementation processes in only one practice. Innovations were sustained in only one practice, but some were adopted by a primary care group and others extended to a wider group of practices by the health authority. CONCLUSION: A modified community-oriented primary care approach can be used in British general practice, and changes can be promoted that are perceived as valuable by planning bodies. However, this methodology may have more impact at primary care trust level than at practice level. PMID:12171223
Goal oriented soil mapping: applying modern methods supported by local knowledge: A review
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr
2017-04-01
In recent years the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, which are important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge remains extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted from generation to generation is priceless and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement new advances in soil analysis. In addition, farmers have the greatest interest in participating and in having their knowledge incorporated into the models, since they are the end users of the studies that soil scientists produce. Integrating local communities' vision and understanding of nature is assumed to be an important step in the implementation of decision makers' policies. Despite this, many challenges arise in integrating local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil mapping methods have incorporated local knowledge into their models. References Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal Oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006
Kinetic energy partition method applied to ground state helium-like atoms
NASA Astrophysics Data System (ADS)
Chen, Yu-Hsin; Chao, Sheng D.
2017-03-01
We have used the recently developed kinetic energy partition (KEP) method to solve the quantum eigenvalue problems for helium-like atoms and obtain precise ground state energies and wave-functions. The key to properly treating the electron-electron (repulsive) Coulomb potential energies when applying the KEP method is to introduce a "negative mass" term into the partitioned kinetic energy. A Hartree-like product wave-function built from the subsystem wave-functions is used to form the initial trial function, and a variational search for the optimized adiabatic parameters leads to a precise ground state energy. This new approach sheds new light on the all-important problem of solving many-electron Schrödinger equations and hopefully opens a new way to predictive quantum chemistry. The results presented here give very promising evidence that an effective one-electron model can be used to represent a many-electron system, in the spirit of density functional theory.
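In schematic form (our notation, not necessarily the paper's exact working equations), the KEP idea splits the kinetic energy among subsystem Hamiltonians whose partial masses sum harmonically to the physical mass:

```latex
\[
\hat{H} \;=\; \sum_i \hat{h}_i,
\qquad
\hat{h}_i \;=\; \frac{\hat{p}^{\,2}}{2 m_i} + V_i(\mathbf{r}),
\qquad
\sum_i \frac{1}{m_i} \;=\; \frac{1}{m},
\]
```

with the repulsive electron-electron Coulomb term assigned a negative partial mass \(m_i < 0\), which is what allows its subsystem eigenvalue problem to remain well posed.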
NASA Astrophysics Data System (ADS)
Lanman, Douglas; Wetzstein, Gordon; Hirsch, Matthew; Heidrich, Wolfgang; Raskar, Ramesh
2012-03-01
This paper focuses on resolving long-standing limitations of parallax barriers by applying formal optimization methods. We consider two generalizations of conventional parallax barriers. First, we consider general two-layer architectures, supporting high-speed temporal variation with arbitrary opacities on each layer. Second, we consider general multi-layer architectures containing three or more light-attenuating layers. This line of research has led to two new attenuation-based displays. The High-Rank 3D (HR3D) display contains a stacked pair of LCD panels; rather than using heuristically defined parallax barriers, both layers are jointly optimized using low-rank light field factorization, resulting in increased brightness, refresh rate, and battery life for mobile applications. The Layered 3D display extends this approach to multi-layered displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field when illuminated by a uniform backlight. We further introduce Polarization Fields as an optically and computationally efficient extension of Layered 3D to multi-layer LCDs. Together, these projects reveal new generalizations to parallax barrier concepts, enabled by the application of formal optimization methods to multi-layer attenuation-based designs in a manner that uniquely leverages the compressive nature of 3D scenes for display applications.
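The low-rank factorization step can be illustrated with plain Lee-Seung multiplicative updates. This is a minimal sketch of rank-constrained nonnegative factorization, not the constrained solver used for the actual HR3D displays:

```python
import numpy as np

def low_rank_factorization(L, rank, iters=200, eps=1e-9):
    """Illustrative rank-constrained nonnegative factorization L ≈ F @ G
    via Lee-Seung multiplicative updates (the displays solve a more
    elaborate constrained problem over physical layer opacities)."""
    rng = np.random.default_rng(0)
    F = rng.random((L.shape[0], rank))
    G = rng.random((rank, L.shape[1]))
    for _ in range(iters):
        # Multiplicative updates keep F and G nonnegative throughout.
        F *= (L @ G.T) / (F @ G @ G.T + eps)
        G *= (F.T @ L) / (F.T @ F @ G + eps)
    return F, G
```

Here `L` would be the light field arranged as a views-by-pixels matrix, and the factors correspond to the two attenuating layers over the display's time-multiplexed frames.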
Kinetic energy partition method applied to ground state helium-like atoms.
Chen, Yu-Hsin; Chao, Sheng D
2017-03-28
We have used the recently developed kinetic energy partition (KEP) method to solve the quantum eigenvalue problems for helium-like atoms and obtain precise ground state energies and wave-functions. The key to treating properly the electron-electron (repulsive) Coulomb potential energies for the KEP method to be applied is to introduce a "negative mass" term into the partitioned kinetic energy. A Hartree-like product wave-function from the subsystem wave-functions is used to form the initial trial function, and the variational search for the optimized adiabatic parameters leads to a precise ground state energy. This new approach sheds new light on the all-important problem of solving many-electron Schrödinger equations and hopefully opens a new way to predictive quantum chemistry. The results presented here give very promising evidence that an effective one-electron model can be used to represent a many-electron system, in the spirit of density functional theory.
Karol, Jane
2007-01-01
This article describes a unique, innovative, and effective method of psychotherapy using horses to aid in the therapeutic process (Equine-facilitated Psychotherapy or EFP). The remarkable elements of the horse--power, grace, vulnerability, and a willingness to bear another--combine to form a fertile stage for psychotherapeutic exploration. Therapeutic programs using horses to work with various psychiatric presentations in children and adolescents have begun to receive attention over the past 10 years. However, few EFP programs utilize the expertise of masters and doctoral-level psychologists, clinical social workers, or psychiatrists. In contrast, the psychological practice described in this article, written and practiced by a doctoral-level clinician, applies the breadth and depth of psychological theory and practice developed over the last century to a distinctly compelling milieu. The method relies not only on the therapeutic relationship with the clinician, but is also fueled by the client's compelling attachment to the therapeutic horse. As both of these relationships progress, the child's inner world and interpersonal style come to the forefront and the EFP theater allows the clinician to explore the client's intrapersonal and interpersonal worlds on preverbal, nonverbal and verbal levels of experience.
Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.
2012-01-01
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
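A minimal sketch of the described feature extraction, with illustrative parameter choices (kernel width, number of retained maxima) rather than the paper's: denoise by convolution, find local maxima, and encode each by its amplitude and its distance to the dominant maximum. The resulting vectors would then feed a gradient boosting classifier:

```python
import numpy as np

def cdp_features(signal, kernel_width=5, n_peaks=3):
    """Feature vector for one recorded CDP segment (illustrative sketch)."""
    kernel = np.ones(kernel_width) / kernel_width
    s = np.convolve(signal, kernel, mode="same")   # denoise by convolution
    interior = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])
    peaks = np.where(interior)[0] + 1              # indices of local maxima
    if peaks.size == 0:
        return np.zeros(2 * n_peaks)
    main = peaks[np.argmax(s[peaks])]              # most important maximum
    order = np.argsort(s[peaks])[::-1][:n_peaks]   # largest maxima first
    feats = []
    for p in peaks[order]:
        feats += [s[p], float(abs(p - main))]      # amplitude, distance to main
    feats += [0.0] * (2 * n_peaks - len(feats))    # pad if fewer maxima found
    return np.array(feats)

# Stacking such vectors into X with labels y, the classification stage would be
# e.g. sklearn.ensemble.GradientBoostingClassifier().fit(X, y).
```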
Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P
2012-10-01
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
Revising the spectral method as applied to the mantle dynamics modeling.
NASA Astrophysics Data System (ADS)
Petrunin, A. G.; Kaban, M. K.; Rogozhina, I.; Trubytsyn, V. P.
2012-04-01
The spectral method is widely used for modeling instantaneous flow and stress field distributions in a spherical shell. It provides a highly accurate semi-analytical solution of the Navier-Stokes and Poisson equations when the viscosity depends only on depth (radius). However, the viscosity distribution in the real Earth is essentially three-dimensional. In this case, non-linear coupling of different spherical harmonic modes no longer permits a straightforward semi-analytical solution. In this study, we present a numerical approach built on a substantially revised version of the method originally proposed by Zhang and Christensen (1993) for solving the Navier-Stokes equation in the spectral domain when lateral variations of viscosity (LVV) are present. We demonstrate a number of numerical algorithms that efficiently calculate instantaneous Stokes flow in a sphere while accounting for the effects of LVV, self-gravitation, and compressibility. In particular, a Newton-Raphson procedure applied to the shooting method can solve the boundary value problem needed to match solutions across spherical surfaces. In contrast to the traditionally used propagator method, our approach allows continuous integration over depth without introducing internal interfaces. Clenshaw-based recursion algorithms for computing associated Legendre functions and Horner's scheme for computing partial sums avoid the problems near the poles typical of spherical harmonic methods and yield a fast and robust solution on a sphere for high degree and order. Since benchmarking techniques for 3-D spherical codes are not well developed, we employ several approaches to test the proposed numerical algorithm. First, we show that the algorithm produces correct results for a radially symmetric viscosity distribution. Second, an iterative scheme for the LVV case is validated by comparing the solution for the tetrahedral symmetric (l=3,m
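The Clenshaw-style backward recursion mentioned above can be illustrated on the ordinary Legendre polynomials (the paper works with associated Legendre functions, but the idea of summing a spectral series without explicitly generating each basis function is the same):

```python
def legendre_clenshaw(coeffs, x):
    """Evaluate S(x) = sum_k a_k P_k(x) by Clenshaw's backward recurrence,
    using the Legendre three-term recursion
        P_{k+1}(x) = ((2k+1)/(k+1)) x P_k(x) - (k/(k+1)) P_{k-1}(x).
    Illustrative sketch for ordinary Legendre polynomials only."""
    n = len(coeffs) - 1
    b1 = b2 = 0.0
    for k in range(n, 0, -1):
        alpha_k = (2 * k + 1) / (k + 1) * x      # coefficient of P_k
        beta_k1 = -(k + 1) / (k + 2)             # coefficient beta_{k+1}
        b1, b2 = coeffs[k] + alpha_k * b1 + beta_k1 * b2, b1
    # S = a_0 P_0 + P_1 b_1 + beta_1 P_0 b_2, with P_0 = 1, P_1 = x, beta_1 = -1/2
    return coeffs[0] + x * b1 - 0.5 * b2
```

Running the recursion backward in this way is numerically stable at high degree, which is what makes it attractive near the poles where direct evaluation of high-order harmonics misbehaves.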
Efficient Nonnegative Matrix Factorization by DC Programming and DCA.
Le Thi, Hoai An; Vo, Xuan Thanh; Dinh, Tao Pham
2016-06-01
In this letter, we consider the nonnegative matrix factorization (NMF) problem and several NMF variants. Two approaches based on DC (difference of convex functions) programming and DCA (DC algorithm) are developed. The first approach follows the alternating framework that requires solving, at each iteration, two nonnegativity-constrained least squares subproblems, for which DCA-based schemes are investigated. The convergence property of the proposed algorithm is carefully studied. We show that with suitable DC decompositions, our algorithm generates most of the standard methods for the NMF problem. The second approach applies DCA directly to the whole NMF problem. Two algorithms are proposed: one computing all variables and one deploying a variable selection strategy. The proposed methods are then adapted to solve various NMF variants, including the nonnegative factorization, the smooth regularization NMF, the sparse regularization NMF, the multilayer NMF, the convex/convex-hull NMF, and the symmetric NMF. We also show that our algorithms include several existing methods for these NMF variants as special versions. The efficiency of the proposed approaches is empirically demonstrated on both real-world and synthetic data sets. It turns out that our algorithms compete favorably with five state-of-the-art alternating nonnegative least squares algorithms.
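The alternating framework of the first approach can be sketched with a plain projected-gradient NNLS solver standing in for the DCA-based subproblem schemes that the letter actually develops:

```python
import numpy as np

def nnls_pg(A, B, X, iters=50):
    """Projected-gradient solver for min_X (1/2)||A X - B||_F^2, X >= 0.
    Illustrative stand-in for the letter's DCA-based subproblem schemes."""
    L = np.linalg.norm(A.T @ A, 2)          # Lipschitz constant of the gradient
    for _ in range(iters):
        X = np.maximum(0.0, X - (A.T @ (A @ X - B)) / L)
    return X

def nmf_alternating(V, rank, outer=30):
    """Alternating framework for V ≈ W H with W, H >= 0: each outer step
    solves the two nonnegativity-constrained least squares subproblems."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(outer):
        H = nnls_pg(W, V, H)                # fix W, solve for H
        W = nnls_pg(H.T, V.T, W.T).T        # fix H, solve for W (transposed form)
    return W, H
```

Swapping `nnls_pg` for a DCA scheme with a suitable DC decomposition recovers the structure of the letter's first approach; the second approach instead applies DCA to the full nonconvex problem in (W, H) jointly.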
Applying Sequential Analytic Methods to Self-Reported Information to Anticipate Care Needs
Bayliss, Elizabeth A.; Powers, J. David; Ellis, Jennifer L.; Barrow, Jennifer C.; Strobel, MaryJo; Beck, Arne
2016-01-01
Purpose: Identifying care needs for newly enrolled or newly insured individuals is important under the Affordable Care Act. Systematically collected patient-reported information can potentially identify subgroups with specific care needs prior to service use. Methods: We conducted a retrospective cohort investigation of 6,047 individuals who completed a 10-question needs assessment upon initial enrollment in Kaiser Permanente Colorado (KPCO), a not-for-profit integrated delivery system, through the Colorado State Individual Exchange. We used responses from the Brief Health Questionnaire (BHQ) to develop a predictive model for membership in the top 25 percent of cost, then applied cluster analytic techniques to identify distinct high-cost subpopulations. Per-member, per-month cost was measured from 6 to 12 months following BHQ response. Results: BHQ responses significantly predictive of high-cost care included self-reported health status, functional limitations, medication use, presence of 0–4 chronic conditions, self-reported emergency department (ED) use during the prior year, and lack of prior insurance. Age, gender, and deductible-based insurance product were also predictive. The predicted probabilities of being in the top 25 percent of cost ranged from 3.5 percent to 96.4 percent. Within the top cost quartile, examples of potentially actionable clusters included patients with high morbidity, prior utilization, depression risk, and financial constraints; previously uninsured individuals with high morbidity but few financial constraints; and relatively healthy, previously insured individuals with medication needs. Conclusions: Applying sequential predictive modeling and cluster analytic techniques to patient-reported information can identify subgroups of individuals within heterogeneous populations who may benefit from specific interventions to optimize initial care delivery. PMID:27563684
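A sketch of the two-stage analysis, with illustrative model choices (logistic regression and k-means) rather than KPCO's actual specification:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def high_cost_subgroups(X, top_quartile, n_clusters=3):
    """Stage 1: predict the probability of top-quartile cost from
    questionnaire responses. Stage 2: cluster the predicted high-cost
    members to surface potentially actionable subgroups.
    Illustrative sketch; the study's models and features differ."""
    model = LogisticRegression(max_iter=1000).fit(X, top_quartile)
    p = model.predict_proba(X)[:, 1]
    high = X[p >= np.quantile(p, 0.75)]        # predicted high-cost members
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(high)
    return p, labels
```

In practice `X` would hold the BHQ responses plus demographics, and the cluster profiles (means per feature) would be inspected to characterize each subgroup.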
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
NASA Astrophysics Data System (ADS)
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the location and delineation of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through visual inspection and field verification. Based on the geometry of the depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing the random forest in another area, however, yielded only moderate success, with an average accuracy of 73.96%. This study suggests that an automatic sinkhole extraction procedure such as the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
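A minimal sketch of a class-weighted random forest for the imbalanced sinkhole/non-sinkhole problem; the weight value and hyperparameters are placeholders, not the study's configuration:

```python
from sklearn.ensemble import RandomForestClassifier

def train_sinkhole_classifier(X, y, sinkhole_weight=10.0):
    """Fit a class-weighted random forest on depression features.
    X: rows of predictor variables per LiDAR-derived depression
       (the study used 11 geometric/natural/human factors);
    y: 1 for sinkhole, 0 for other depression."""
    clf = RandomForestClassifier(
        n_estimators=200,
        class_weight={0: 1.0, 1: sinkhole_weight},  # up-weight rare sinkholes
        random_state=0,
    )
    return clf.fit(X, y)
```

Weighting the minority class in this way penalizes misclassified sinkholes more heavily during tree growth, which is one common realization of a "weighted random forests" scheme.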
NASA Astrophysics Data System (ADS)
Shin, Y.; Lee, E.
2015-12-01
Under the influence of recent climate change, abnormal weather conditions such as floods and droughts have occurred frequently all over the world. Abnormal weather in major crop production areas leads to soaring world grain prices because it reduces crop yields. Developing crop yield estimation methods is therefore an important means of addressing the global food crisis caused by abnormal weather. However, due to problems with the reliability of seasonal climate predictions, applied research on agricultural productivity has not progressed much yet. The objective of this study is to develop a long-term crop yield estimation method for major crop production countries worldwide using multi-model seasonal climate prediction data collected by the APEC Climate Center: 6-month lead seasonal predictions produced by six state-of-the-art global coupled ocean-atmosphere models (MSC_CANCM3, MSC_CANCM4, NASA, NCEP, PNU, POAMA). First, we produce customized climate data through temporal and spatial downscaling for use as climatic input to the global-scale crop model. Next, we evaluate the uncertainty of the climate predictions by applying the multi-model seasonal predictions in the crop model. Because rice is the most important staple food crop in the Asia-Pacific region, we assess the reliability of the rice yields estimated using seasonal climate predictions for the main rice production countries. RMSE (Root Mean Square Error) and TCC (Temporal Correlation Coefficient) analyses are performed for 14 major rice production countries in the Asia-Pacific region to evaluate the reliability of the rice yields according to the climate prediction models. We compare rice yield data obtained from FAOSTAT with yields estimated using the seasonal climate prediction data in Asia-Pacific countries. In addition, we show the reliability of the seasonal climate predictions according to the climate models in Asia-Pacific countries where rice cultivation is carried out.
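The two skill scores can be computed straightforwardly. TCC is taken here as the Pearson correlation of the two yield time series, a common definition that may differ in detail from the study's:

```python
import numpy as np

def rmse(pred, obs):
    """Root Mean Square Error between predicted and observed yields."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def tcc(pred, obs):
    """Temporal Correlation Coefficient: Pearson correlation of the
    predicted and observed yield time series."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.corrcoef(pred, obs)[0, 1])
```

Per country, `pred` would be the model-driven yield series and `obs` the corresponding FAOSTAT series over the hindcast years.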
Smailienė, Dalia; Kavaliauskienė, Aistė; Pacauskienė, Ingrida
2013-01-01
BACKGROUND AND OBJECTIVE. There is considerable debate over the choice of surgical technique for the treatment of palatally impacted maxillary canines. The aim of the study was to evaluate the posttreatment status of palatally impacted canines treated with 2 different surgical methods, i.e., an open technique with free eruption and a closed flap technique, and to compare it with the status of naturally erupted canines. MATERIAL AND METHODS. In total, 43 patients treated for unilateral palatally impacted maxillary canines were examined at a mean follow-up of 4.19 months (SD, 1.44; range, 3-6) after removal of a fixed appliance. The patients were divided into 2 groups: the open technique with free eruption (group 1, n=22) and the closed technique (group 2, n=21). The posttreatment examination consisted of an intraoral and a radiological examination. RESULTS. The findings on tooth position, inclination, color, shape, and function did not differ between the groups. There was no significant difference in periodontal pocket depth or bone support between the groups: the mean periodontal pocket depth was 2.14 mm (SD, 0.38) in group 1 and 2.28 mm (SD, 0.69) in group 2; the mean bone support was 91.51% (SD, 5.78%) and 89.9% (SD, 5%), respectively. However, differences were found when comparing the measurements in the quadrant of the impacted canines with those in the quadrant of the contralateral normally erupted canines. The distal contact point of the lateral incisor and the medial contact point of the canine showed significant bone loss in comparison with the contralateral corresponding teeth. CONCLUSIONS. The posttreatment status of palatally impacted canines and adjacent teeth after surgical-orthodontic treatment did not differ significantly between the open and closed surgical method groups.
The expanding photosphere method applied to SN 1992am AT cz = 14 600 km/s
NASA Technical Reports Server (NTRS)
Schmidt, Brian P.; Kirshner, Robert P.; Eastman, Ronald G.; Hamuy, Mario; Phillips, Mark M.; Suntzeff, Nicholas B.; Maza, Jose; Filippenko, Alexei V.; Ho, Luis C.; Matheson, Thomas
1994-01-01
We present photometry and spectroscopy of Supernova (SN) 1992am for five months following its discovery by the Calan/Cerro-Tololo Inter-American Observatory (CTIO) SN search. These data show SN 1992am to be a type II-P supernova, displaying hydrogen in its spectrum and the typical shoulder in its light curve. The photometric data and the distance from our own analysis are used to construct the supernova's bolometric light curve, from which we estimate that SN 1992am ejected approximately 0.30 solar mass of Ni-56, an amount four times larger than that of other well-studied SNe II. SN 1992am's host galaxy lies at a redshift of cz = 14 600 km/s, making it one of the most distant SNe II discovered and an important application of the Expanding Photosphere Method. Since z = 0.05 is large enough for redshift-dependent effects to matter, we develop the technique to derive luminosity distances with the Expanding Photosphere Method at any redshift and apply this method to SN 1992am. The derived distance, D = 180 (+30/-25) Mpc, is independent of all other rungs in the extragalactic distance ladder. The redshift of SN 1992am's host galaxy is sufficiently large that uncertainties due to perturbations in the smooth Hubble flow should be smaller than 10%. The Hubble ratio derived from the distance and redshift of this single object is H_0 = 81 (+17/-15) km/s/Mpc. In the future, with more of these distant objects, we hope to establish an independent and statistically robust estimate of H_0 based solely on type II supernovae.
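The quoted Hubble ratio follows directly from the EPM distance and the host-galaxy recession velocity:

```python
def hubble_ratio(cz_km_s, distance_mpc):
    """H0 = cz / D for a galaxy assumed to sit in the smooth Hubble flow."""
    return cz_km_s / distance_mpc

# SN 1992am: cz = 14 600 km/s and D = 180 Mpc give H0 of about 81 km/s/Mpc,
# with the quoted +17/-15 uncertainty dominated by the distance errors.
```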