Sample records for "current methods require"

  1. 46 CFR 11.713 - Requirements for maintaining current knowledge of waters to be navigated.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    § 11.713 Requirements for maintaining current knowledge of waters to be navigated. (a) If a first class... current knowledge of the route. Persons using this method of re-familiarization shall certify, when...

  2. Defining Support Requirements During Conceptual Design of Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; White, N. H.; Davis, W. T.; Ebeling, C. E.

    1995-01-01

    Current methods for defining the operational support requirements of new systems are data intensive and require significant design information. Methods are being developed to aid in the analysis process of defining support requirements for new launch vehicles during their conceptual design phase that work with the level of information available during this phase. These methods will provide support assessments based on the vehicle design and the operating scenarios. The results can be used both to define expected support requirements for new launch vehicle designs and to help evaluate the benefits of using new technologies. This paper describes the models, their current status, and provides examples of their use.

  3. Debating Curricular Strategies for Teaching Statistics and Research Methods: What Does the Current Evidence Suggest?

    ERIC Educational Resources Information Center

    Barron, Kenneth E.; Apple, Kevin J.

    2014-01-01

    Coursework in statistics and research methods is a core requirement in most undergraduate psychology programs. However, is there an optimal way to structure and sequence methodology courses to facilitate student learning? For example, should statistics be required before research methods, should research methods be required before statistics, or…

  4. A Formal Approach to Requirements-Based Programming

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.

  5. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
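    The kind of transformation this abstract describes, from a finite set of scenarios to a formal behavioral model, can be sketched in miniature. The prefix-tree construction below is our own illustrative assumption, not the authors' actual procedure or notation:

```python
# Illustrative sketch (not the authors' method): build a state-transition
# model from a finite set of scenarios, each an ordered list of events.
# States are the event histories seen so far, so scenarios that share a
# prefix share states (a prefix-tree automaton).

def scenarios_to_model(scenarios):
    """Return a transition table: state -> {event: next_state}."""
    transitions = {}
    for scenario in scenarios:
        state = ()  # initial state: empty history
        for event in scenario:
            next_state = state + (event,)
            transitions.setdefault(state, {})[event] = next_state
            state = next_state
    return transitions

# Two hypothetical requirement scenarios that share a common prefix:
model = scenarios_to_model([
    ["power_on", "self_test", "ready"],
    ["power_on", "self_test", "fault", "safe_mode"],
])
```

    From the shared state ("power_on", "self_test") the model branches on "ready" versus "fault"; a provably equivalent formal model in the paper's sense would of course carry far more structure than this table.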

  6. Calculating Electrical Requirements for Direct Current Electric Actuators

    DTIC Science & Technology

    2017-11-29

    These requirements lead to the determination of multiple design decisions such as: operating voltage, regenerative energy capture/dissipation, and... Subject terms: electro-mechanical actuation, regenerative energy, electrical power, servo control, direct current (DC).

  7. Efficacy Evaluation of Current and Future Naval Mine Warfare Neutralization Method

    DTIC Science & Technology

    2016-12-01

    Distribution is unlimited. Authored by Team MIW, Systems Engineering Cohort SE311-152O; submitted in partial fulfillment of the requirements for the degrees of...

  8. 76 FR 9495 - Airworthiness Directives; Air Tractor, Inc. Models AT-802 and AT-802A Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-18

    ...-18, which requires repetitive inspection (using the eddy current method) of the two outboard... For serial numbers ... through 0101 and AT-802A-0092 through 0101: perform, using the eddy current method, two inspections at...; for serial numbers ... through 0178 and AT-802A-0102 through 0178: perform, using the eddy current method, two inspections at 5...

  9. Input current shaped ac-to-dc converters

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Input current shaping techniques for ac-to-dc converters were investigated, with emphasis on input frequencies much higher than normal, up to 20 kHz. Several methods of shaping the input current waveform in ac-to-dc converters were reviewed. The simplest method is the LC filter following the rectifier. The next simplest is the resistor emulation approach, in which the inductor size is determined by the converter switching frequency and not by the line input frequency. Other methods require complicated switch drive algorithms to construct the input current waveshape. For a high-frequency line input, on the order of 20 kHz, the simple LC filter cannot be dismissed so readily, since its inductor size is comparable to that for the resistor emulation method. In fact, since a dc regulator will normally be required after the filter anyway, the total component count is almost the same as for the resistor emulation method, in which the filter is effectively incorporated into the regulator.
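    The resistor emulation idea, controlling the converter so the input current tracks the input voltage divided by an emulated resistance, can be checked numerically. This is a generic sketch with invented numbers, not taken from the report:

```python
import math

# Sketch of "resistor emulation": the input current is made to track
# v_in / R_e, so the line sees a pure resistance (unity power factor).
# R_e and the waveform below are invented for illustration.

def power_factor(v, i):
    """Real power divided by apparent power, for sampled waveforms."""
    n = len(v)
    real_power = sum(vk * ik for vk, ik in zip(v, i)) / n
    v_rms = math.sqrt(sum(vk * vk for vk in v) / n)
    i_rms = math.sqrt(sum(ik * ik for ik in i) / n)
    return real_power / (v_rms * i_rms)

R_e = 10.0  # emulated input resistance, ohms
v = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]  # one line cycle
i_emulated = [vk / R_e for vk in v]  # current proportional to voltage
pf = power_factor(v, i_emulated)
```

    Because the current is exactly proportional to the voltage, the computed power factor is unity; a rectifier followed by a plain LC filter would draw a distorted current and score lower on the same metric.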

  10. Optimization of Advanced ACTPol Transition Edge Sensor Bolometer Operation Using R(T,I) Transition Measurements

    NASA Astrophysics Data System (ADS)

    Salatino, Maria

    2017-06-01

    In current submm and mm cosmology experiments, the focal planes are populated by kilopixel arrays of transition edge sensors (TESes). Varying incoming power loads require frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays, and the resulting transient heating of the bath, reduces sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. It exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current, together with knowledge of the shape of the detector transition R(T,I). The method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method: the estimate of the TES %Rn (operating resistance as a fraction of the normal resistance).

  11. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  13. Microscale Concentration Measurements Using Laser Light Scattering Methods

    NASA Technical Reports Server (NTRS)

    Niederhaus, Charles; Miller, Fletcher

    2004-01-01

    The development of lab-on-a-chip devices for microscale biochemical assays has led to the need for microscale concentration measurements of specific analytes. While fluorescence methods are the current choice, they require developing a fluorophore-tagged conjugate for each analyte of interest. In addition, fluorescent imaging is a volume-based method and can be limiting as smaller detection regions are required.

  14. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and at another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points, and it lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method frequently diverges if the starting point is far from the root, whereas in these regions the current method merely reverts to steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both converge from starting points far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations; the current method, which does not require the solution of linear equations, instead requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
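    The fallback behavior described here, steepest descent on the squared residual with an adaptive (backtracking) step size that keeps working where plain Newton iteration diverges, can be sketched as follows. The paper's eigenvector-based acceleration near the root is not reproduced, and the test system is invented:

```python
# Sketch of the fallback behavior described: steepest descent on
# g(x) = 0.5 * ||F(x)||^2 with a backtracking (adaptive) step size.
# The paper's eigenvector-based acceleration step is omitted.

def solve(F, grad_g, x0, tol=1e-10, max_iter=10_000):
    x = list(x0)
    for _ in range(max_iter):
        g = 0.5 * sum(fi * fi for fi in F(x))
        if g < tol:
            return x
        d = grad_g(x)                       # gradient of g at x
        step = 1.0
        while step > 1e-12:                 # backtracking line search
            trial = [xi - step * di for xi, di in zip(x, d)]
            if 0.5 * sum(fi * fi for fi in F(trial)) < g:
                x = trial
                break
            step *= 0.5
        else:
            return x                        # no improving step found
    return x

# Invented example: F(x, y) = (x^2 + y - 2, y - 1), roots at (+/-1, 1).
F = lambda v: [v[0] ** 2 + v[1] - 2, v[1] - 1]
grad_g = lambda v: [2 * v[0] * (v[0] ** 2 + v[1] - 2),       # dg/dx
                    (v[0] ** 2 + v[1] - 2) + (v[1] - 1)]     # dg/dy
root = solve(F, grad_g, [3.0, 3.0])  # converges even from far away
```

    From (3, 3), a starting point where an undamped Newton step overshoots badly, the backtracking search still drives the residual to zero.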

  15. Survey of NASA V and V Processes/Methods

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles; Nelson, Stacy

    2002-01-01

    The purpose of this report is to describe current NASA Verification and Validation (V&V) techniques and to explain how these techniques are applicable to 2nd Generation RLV Integrated Vehicle Health Management (IVHM) software. It also contains recommendations for special V&V requirements for IVHM. This report is divided into the following three sections: 1) Survey - Current NASA V&V Processes/Methods; 2) Applicability of NASA V&V to 2nd Generation RLV IVHM; and 3) Special 2nd Generation RLV IVHM V&V Requirements.

  16. Fast tomographic methods for the tokamak ISTTOK

    NASA Astrophysics Data System (ADS)

    Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.

    2008-04-01

    The achievement of long-duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. Plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion, due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.

  17. Constant-current control method of multi-function electromagnetic transmitter.

    PubMed

    Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun

    2015-02-01

    Based on the requirements of controlled-source audio-frequency magnetotellurics, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the current waveforms required in prospecting as a standard, the causes of current waveform distortion, and its effects on prospecting, are analyzed. A cascaded topology is adopted to achieve a 40 kW constant-current transmitter. The response speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which ensures the response speed of the transmitting system, and that the phase margin is 45°, which ensures its stability. Experimental results verify that the proposed constant-current control method can keep the control error below 4% and can effectively suppress load changes caused by the capacitance of the earth load.
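    The closed-loop idea here, a controller holding the transmitter current at a setpoint despite the load dynamics, can be illustrated with a toy discrete-time PI loop. All parameter values below are invented for illustration, not taken from the paper's 40 kW design:

```python
# Toy illustration (invented parameters, not the paper's design): a
# discrete PI controller regulating load current toward a setpoint,
# the basic idea behind a PWM constant-current controller.

def simulate(i_set=10.0, r=4.0, l=0.01, dt=1e-4, steps=2000,
             kp=5.0, ki=2000.0):
    """Simulate di/dt = (v - r*i) / l with PI control of the voltage v."""
    i, integral = 0.0, 0.0
    for _ in range(steps):
        error = i_set - i
        integral += error * dt
        v = kp * error + ki * integral   # PI control law
        i += dt * (v - r * i) / l        # first-order RL load
    return i

i_final = simulate()  # settles at the 10 A setpoint
```

    The integral term drives the steady-state error to zero, which is why the settled current sits well within the 4% control error the paper reports for the real transmitter.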

  19. An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling

    PubMed Central

    Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.

    2014-01-01

    Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. Automation reduced several weeks of manual labor to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp, and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144

  20. Issues that Drive Waste Management Technology Development for Space Missions

    NASA Technical Reports Server (NTRS)

    Fisher, John W.; Levri, Julie A.; Hogan, John A.; Wignarajah, Kanapathipillai

    2005-01-01

    Waste management technologies for space life support systems are currently at low development levels. Manual compaction of waste in plastic bags and overboard disposal to earth return vehicles are the primary current waste management methods. Particularly on future missions, continuing current waste management methods would tend to expose the crew to waste hazards, forfeit recoverable resources such as water, consume valuable crew time, contaminate planetary surfaces, and risk returning extraterrestrial life to Earth. Improved waste management capabilities are needed to manage wastes adequately, including recovery of water and other resources, conversion of waste to states harmless to humans, long-term containment of wastes, and disposal of waste. Current NASA requirements documents on waste management are generally not highly detailed; more detailed requirements are needed to guide the development of technologies that will adequately manage waste. In addition to satisfying requirements, waste management technologies must also recover resources: recovery of resources such as water and habitat volume can reduce mission cost. This paper explores the drivers for waste management technology development, including requirements and resource recovery.

  1. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness.

  2. Investigation of Cleanliness Verification Techniques for Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Fritzemeier, Marilyn L.; Skowronski, Raymund P.

    1994-01-01

    Oxidizer propellant systems for liquid-fueled rocket engines must meet stringent cleanliness requirements for particulate and nonvolatile residue. These requirements were established to limit residual contaminants which could block small orifices or ignite in the oxidizer system during engine operation. Limiting organic residues in high pressure oxygen systems, such as in the Space Shuttle Main Engine (SSME), is particularly important. The current method of cleanliness verification for the SSME uses an organic solvent flush of the critical hardware surfaces. The solvent is filtered and analyzed for particulate matter followed by gravimetric determination of the nonvolatile residue (NVR) content of the filtered solvent. The organic solvents currently specified for use (1,1,1-trichloroethane and CFC-113) are ozone-depleting chemicals slated for elimination by December 1995. A test program is in progress to evaluate alternative methods for cleanliness verification that do not require the use of ozone-depleting chemicals and that minimize or eliminate the use of solvents regulated as hazardous air pollutants or smog precursors. Initial results from the laboratory test program to evaluate aqueous-based methods and organic solvent flush methods for NVR verification are provided and compared with results obtained using the current method. Evaluation of the alternative methods was conducted using a range of contaminants encountered in the manufacture of rocket engine hardware.

  3. Space station contamination control study: Internal combustion, phase 1

    NASA Technical Reports Server (NTRS)

    Ruggeri, Robert T.

    1987-01-01

    Contamination inside Space Station modules was studied to determine the best methods of controlling contamination. The work was conducted in five tasks that identified existing contamination control requirements, analyzed contamination levels, developed an outgassing specification for materials, wrote a contamination control plan, and evaluated the materials offgassing tests currently used by NASA. It is concluded that current contamination control methods can be made to function on the Space Station for up to 1000 days, but that they are deficient for periods longer than about 1000 days.

  4. Fitting methods to paradigms: are ergonomics methods fit for systems thinking?

    PubMed

    Salmon, Paul M; Walker, Guy H; M Read, Gemma J; Goode, Natassia; Stanton, Neville A

    2017-02-01

    The issues being tackled within ergonomics problem spaces are shifting. Although existing paradigms appear relevant for modern day systems, it is worth questioning whether our methods are. This paper asks whether the complexities of systems thinking, a currently ubiquitous ergonomics paradigm, are outpacing the capabilities of our methodological toolkit. This is achieved through examining the contemporary ergonomics problem space and the extent to which ergonomics methods can meet the challenges posed. Specifically, five key areas within the ergonomics paradigm of systems thinking are focused on: normal performance as a cause of accidents, accident prediction, system migration, systems concepts and ergonomics in design. The methods available for pursuing each line of inquiry are discussed, along with their ability to respond to key requirements. In doing so, a series of new methodological requirements and capabilities are identified. It is argued that further methodological development is required to provide researchers and practitioners with appropriate tools to explore both contemporary and future problems. Practitioner Summary: Ergonomics methods are the cornerstone of our discipline. This paper examines whether our current methodological toolkit is fit for purpose given the changing nature of ergonomics problems. The findings provide key research and practice requirements for methodological development.

  5. Patient Accounting Systems: Are They Fit with the Users' Requirements?

    PubMed Central

    Ayatollahi, Haleh; Nazemi, Zahra

    2016-01-01

    Objectives A patient accounting system is a subsystem of a hospital information system. Like other information systems, it should be carefully designed to be able to meet users' requirements. The main aim of this research was to investigate users' requirements and to determine whether current patient accounting systems meet users' needs or not. Methods This was a survey study, and the participants were the users of six patient accounting systems used in 24 teaching hospitals. A stratified sampling method was used to select the participants (n = 216). The research instruments were a questionnaire and a checklist. A mean value of ≥3 indicated that a data element was important or that the system provided a given capability. Results Generally, the findings showed that the current patient accounting systems had some weaknesses and were able to meet between 70% and 80% of users' requirements. Conclusions The current patient accounting systems need to be improved to be able to meet users' requirements. This approach can also help to provide hospitals with more usable and reliable financial information. PMID:26893945
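    The scoring rule in the Methods, where a data element or system capability counts once its mean rating reaches 3, amounts to a one-line check. The ratings below are invented for illustration:

```python
# The abstract's decision rule: a mean rating of >= 3 marks a data element
# as important (or a system capability as present). Ratings are invented.

def meets_threshold(ratings, cutoff=3.0):
    return sum(ratings) / len(ratings) >= cutoff

important = meets_threshold([4, 3, 5, 2])    # mean 3.5 -> meets the cutoff
unsupported = meets_threshold([2, 2, 3, 1])  # mean 2.0 -> falls short
```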

  6. Systems and context modeling approach to requirements analysis

    NASA Astrophysics Data System (ADS)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fertig, Fabian, E-mail: fabian.fertig@ise.fraunhofer.de; Greulich, Johannes; Rein, Stefan

    We present a spatially resolved method to determine the short-circuit current density of crystalline silicon solar cells by means of lock-in thermography. The method utilizes the property of crystalline silicon solar cells that the short-circuit current does not differ significantly from the illuminated current under moderate reverse bias. Since lock-in thermography images locally dissipated power density, this information is exploited to extract values of spatially resolved current density under short-circuit conditions. In order to obtain an accurate result, one or two illuminated lock-in thermography images and one dark lock-in thermography image need to be recorded. The method can be simplified in a way that only one image is required to generate a meaningful short-circuit current density map. The proposed method is theoretically motivated and experimentally validated for monochromatic illumination in comparison to the reference method of light-beam induced current.
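    Since lock-in thermography images locally dissipated power density, one way to picture the record's idea is to subtract a dark image from an illuminated one and rescale the remainder to a globally measured short-circuit current density. This sketch is our own simplification, not the authors' actual evaluation formulas:

```python
import numpy as np

# Simplified sketch (not the paper's actual evaluation): the difference
# between an illuminated and a dark lock-in thermography image is taken
# as proportional to local short-circuit current density, then scaled so
# the map's mean matches a globally measured J_sc. All numbers invented.

def jsc_map(p_illum, p_dark, jsc_global):
    """Return a J_sc map with the same shape as the LIT images."""
    delta = p_illum - p_dark              # illumination-induced dissipation
    return delta * (jsc_global / delta.mean())

p_dark = np.full((4, 4), 0.2)             # dark-image power density
weights = np.array([[1.0] * 4] * 2 + [[0.8] * 4] * 2)  # weaker lower half
p_illum = p_dark + weights
jmap = jsc_map(p_illum, p_dark, jsc_global=38.0)  # mA/cm^2, invented value
```

    The map preserves relative contrast between regions while its mean reproduces the global measurement, which is the spirit of the one-image simplification mentioned in the record.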

  8. Formal Requirements-Based Programming for Complex Systems

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis

    2005-01-01

    Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step toward dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.

  9. Resource requirements of inclusive urban development in India: insights from ten cities

    NASA Astrophysics Data System (ADS)

    Singh Nagpure, Ajay; Reiner, Mark; Ramaswami, Anu

    2018-02-01

    This paper develops a methodology to assess the resource requirements of inclusive urban development in India and compares those requirements to current community-wide material and energy flows. Methods include: (a) identifying minimum service level benchmarks for the provision of infrastructure services including housing, electricity and clean cooking fuels; (b) assessing the percentage of homes that lack access to infrastructure or that consume infrastructure services below the identified benchmarks; (c) quantifying the material requirements to provide basic infrastructure services using India-specific design data; and (d) computing material and energy requirements for inclusive development and comparing them with current community-wide material and energy flows. Applying the method to ten Indian cities, we find that: 1%-6% of households do not have electricity; 14%-71% use electricity below the benchmark of 25 kWh capita-month-1; 4%-16% lack structurally sound housing; 50%-75% live in floor area less than the benchmark of 8.75 m2/capita; 10%-65% lack clean cooking fuel; and 6%-60% lack connection to a sewerage system. Across the ten cities examined, providing basic electricity (25 kWh capita-month-1) to all will require an addition of only 1%-10% to current community-wide electricity use. Providing basic clean LPG fuel (1.2 kg capita-month-1) to all requires an increase of 5%-40% in current community-wide LPG use. Providing permanent shelter (implemented over a ten-year period) to populations living in non-permanent housing in Delhi and Chandigarh would require a 6%-14% increase over current annual community-wide cement use. Conversely, providing permanent housing to all people living in structurally unsound housing and those living in overcrowded housing (<5 m2/capita) would require 32%-115% of current community-wide cement flows. Except for the last scenario, these results suggest that social policies that seek to provide basic infrastructure provisioning for all residents would not dramatically increase current community-wide resource flows.
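
Step (d) of the methodology reduces to simple gap arithmetic: the additional resource needed to lift every under-served resident up to the benchmark, expressed as a share of the community-wide flow. The sketch below uses purely illustrative inputs; the paper's city-level data are not reproduced here.

```python
def inclusion_gap_pct(population, frac_below, avg_use_below,
                      benchmark, community_total):
    """Additional monthly resource needed to bring everyone below a
    minimum service benchmark up to it, as a percentage of current
    community-wide use.  All input values here are hypothetical."""
    people_below = population * frac_below
    extra = people_below * max(benchmark - avg_use_below, 0.0)
    return 100.0 * extra / community_total
```

For example, if 20% of a million residents average 10 kWh/month against a 25 kWh benchmark in a city using 30 GWh/month, closing the gap adds 10% to community-wide use.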

  10. Modeling Requirements for Cohort and Register IT.

    PubMed

    Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred

    2016-01-01

    The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks like cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The objective was to make transparent the complex relationships between requirements, which are described in use cases in a given text catalog. By analyzing and modeling the requirements, a better understanding and optimization of the catalog are intended. There are two subgoals: a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; b) to analyze the current state models and to find simplifications within the generic catalog. Processing the generic catalog was performed by means of text extraction, conceptualization and concept mapping. Then methods of enterprise architecture planning (EAP) were used to model the extracted information. To work on objective a), questionnaires were developed by utilizing the model. They were used for semi-structured interviews, whose results were evaluated via qualitative content analysis. Afterwards the current state was modeled. Objective b) was addressed by model analysis. A given generic text catalog of requirements was transferred into a model. As the result of objective a), current state models of one existing cohort study and two registers were created and analyzed. An optimized model called the KoReg-reference-model is the result of objective b). It is possible to use methods of EAP to model requirements. This enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg-reference-model. Modeling the current state and generating reports from the model, which could be used as a requirements specification for bids, are supported as well.

  11. Motor current signature analysis method for diagnosing motor operated devices

    DOEpatents

    Haynes, Howard D.; Eissenberg, David M.

    1990-01-01

    A motor current noise signature analysis method and apparatus for remotely monitoring the operating characteristics of an electric motor-operated device such as a motor-operated valve. Frequency domain signal analysis techniques are applied to a conditioned motor current signal to distinctly identify various operating parameters of the motor driven device from the motor current signature. The signature may be recorded and compared with subsequent signatures to detect operating abnormalities and degradation of the device. This diagnostic method does not require special equipment to be installed on the motor-operated device, and the current sensing may be performed at remote control locations, e.g., where the motor-operated devices are used in inaccessible or hostile environments.
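
The core of any motor-current signature analysis is a frequency-domain transform of the conditioned current followed by comparison against a stored baseline. The sketch below is a generic illustration of that pattern, not the patented ORNL processing chain; the deviation metric and windowing choice are assumptions.

```python
import numpy as np

def current_signature(i_signal, fs):
    """One-sided amplitude spectrum of a conditioned motor-current
    signal (a generic sketch of motor-current signature analysis)."""
    n = len(i_signal)
    spec = np.abs(np.fft.rfft(i_signal * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

def signature_deviation(baseline, current):
    """Relative change between a stored baseline spectrum and a new
    one; a large deviation would flag degradation of the driven
    device (threshold choice is application-specific)."""
    return float(np.max(np.abs(current - baseline) / (np.max(baseline) + 1e-12)))
```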

  12. Remote sensing of surface currents with single shipborne high-frequency surface wave radar

    NASA Astrophysics Data System (ADS)

    Wang, Zhongbao; Xie, Junhao; Ji, Zhenyuan; Quan, Taifan

    2016-01-01

    High-frequency surface wave radar (HFSWR) is a useful technology for remote sensing of surface currents. It usually requires two (or more) stations spaced apart to create a two-dimensional (2D) current vector field. However, this method can only obtain measurements within the overlapping coverage, which wastes most of the data from each single-radar observation. Furthermore, it increases observation costs significantly. To reduce the number of required radars and increase the ocean area that can be measured, this paper proposes an economical methodology for remote sensing of the 2D surface current vector field using a single shipborne HFSWR. The methodology contains two parts: (1) a real space-time multiple signal classification (MUSIC) method based on sparse representation and unitary transformation techniques is developed for measuring the radial currents from the spreading first-order spectra, and (2) the stream function method is introduced to obtain the 2D surface current vector field. Some important conclusions are drawn, and simulations are included to validate them.

  13. Management system to a photovoltaic panel based on the measurement of short-circuit currents

    NASA Astrophysics Data System (ADS)

    Dordescu, M.

    2016-12-01

    This article is devoted to fundamental issues arising from operating a photovoltaic (PV) panel for increased energy efficiency. By measuring the short-circuit current of the operating panel, the method determines the prescribed current value corresponding to the maximum power point. Results obtained by load testing with this method show that the maximum possible energy is extracted, justifying the usefulness of this process, which is very simple and inexpensive to implement in practice. The proposed adjustment method is much simpler and more economical than conventional methods that rely on power measurement.

  14. Requirements controlled design: A method for discovery of discontinuous system boundaries in the requirements hyperspace

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Peter Michael

    The drive toward robust systems design, especially with respect to system affordability throughout the system life-cycle, has led to the development of several advanced design methods. While these methods have been extremely successful in satisfying the needs for which they were developed, they inherently leave a critical area unaddressed. None of them fully considers the effect of requirements on the selection of solution systems. The goal of all current design methodologies is to bring knowledge forward in the design process to the regions where more design freedom is available and design changes cost less. Therefore, it seems reasonable to consider the point in the design process where the greatest restrictions are placed on the final design: the point at which the system-level requirements are set. Historically, the requirements have been treated as something handed down from above. However, neither the customer nor the solution provider has completely understood all of the options available in the broader requirements space. If a method were developed that provided the ability to understand the full scope of the requirements space, it would allow for a better comparison of potential solution systems with respect to both the current and potential future requirements. The key to a requirements-conscious method is to treat requirements differently from the traditional approach. The method proposed herein is known as Requirements Controlled Design (RCD). By treating the requirements as a set of variables that control the behavior of the system, instead of variables that only define the response of the system, it is possible to determine a priori what portions of the requirements space any given system is capable of satisfying.
Additionally, it should be possible to identify which systems can satisfy a given set of requirements and the locations where a small change in one or more requirements poses a significant risk to a design program. This thesis puts forth the theory and methodology to enable RCD, and details and validates a specific method called the Modified Strength Pareto Evolutionary Algorithm (MSPEA).

  15. The JPL functional requirements tool

    NASA Technical Reports Server (NTRS)

    Giffin, Geoff; Skinner, Judith; Stoller, Richard

    1987-01-01

    Planetary spacecraft are complex vehicles which are built according to many thousands of requirements. Problems encountered in documenting and maintaining these requirements led to the current attempt to reduce or eliminate these problems through a computer-automated database, the Functional Requirements Tool. The tool, developed at JPL and in use on several JPL projects, is described. The organization and functionality of the Tool, together with an explanation of the database inputs, their relationships, and use are presented. Methods of interfacing with external documents, representation of tables and figures, and methods of approval and change processing are discussed. The options available for disseminating information from the Tool are identified. The implementation of the Requirements Tool is outlined, and the operation is summarized. The conclusions drawn from this work are that the Requirements Tool represents a useful addition to the system engineer's toolkit, that it is not currently available elsewhere, and that a clear development path exists to expand the capabilities of the Tool to serve larger and more complex projects.

  16. High current superconductors for tokamak toroidal field coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fietz, W.A.

    1976-01-01

    Conductors rated at 10,000 A for 8 T and 4.2 K are being purchased for the first large coil segment tests at ORNL. Requirements for these conductors, in addition to the high current rating, are low pulse losses, cryostatic stability, and acceptable mechanical properties. The conductors are required to have losses less than 0.4 W/m under pulsed fields of 0.5 T with a rise time of 1 sec in an ambient 8-T field. Methods of calculating these losses and techniques for verifying the performance by direct measurement are discussed. Conductors stabilized by two different cooling methods, pool boiling and forced helium flow, have been proposed. Analysis of these conductors is presented and a proposed definition and test of stability is discussed. Mechanical property requirements, tensile and compressive, are defined and test methods are discussed.

  17. Research on environmental impact of water-based fire extinguishing agents

    NASA Astrophysics Data System (ADS)

    Wang, Shuai

    2018-02-01

    This paper presents the current status of application of water-based fire extinguishing agents and the environmental considerations that motivate the study of their toxicity. It also offers a systematic review of currently available test methods for the toxicity and environmental impact of water-based fire extinguishing agents, illustrates the main requirements and relevant test methods, and offers findings for future research considerations. The paper also discusses the limitations of current studies.

  18. Nondestructive test determines overload destruction characteristics of current limiter fuses

    NASA Technical Reports Server (NTRS)

    Swartz, G. A.

    1968-01-01

    Nondestructive test predicts the time required for current limiters to blow (open the circuit) when subjected to a given overload. The test method is based on an empirical relationship between the voltage rise across a current limiter for a fixed time interval and the time to blow.
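
An empirical relationship of this kind is typically captured by fitting destructive test data once and then predicting time-to-blow from the nondestructive voltage-rise measurement. The power-law form below is an assumption for illustration only; the NASA report states only that an empirical relationship exists, not its functional form.

```python
import numpy as np

def fit_blow_time_model(dv, t_blow):
    """Fit an assumed power law t_blow = a * dv**b relating the voltage
    rise dv across a current limiter (over a fixed interval at a given
    overload) to its time to blow, via a log-log linear fit."""
    b, log_a = np.polyfit(np.log(dv), np.log(t_blow), 1)
    return np.exp(log_a), b

def predict_blow_time(a, b, dv):
    """Predict time to blow from a nondestructive voltage-rise reading."""
    return a * dv ** b
```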

  19. Effects of aggregate angularity on mix design characteristics and pavement performance.

    DOT National Transportation Integrated Search

    2009-12-01

    This research targeted two primary purposes: to evaluate current aggregate angularity test methods and to evaluate current aggregate angularity requirements in the Nebraska asphalt mixture/pavement specification. To meet the first research object...

  20. The holy grail of soil metal contamination site assessment: reducing risk and increasing confidence of decision making using infield portable X-ray Fluorescence (pXRF) technology

    NASA Astrophysics Data System (ADS)

    Rouillon, M.; Taylor, M. P.; Dong, C.

    2016-12-01

    This research assesses the advantages of integrating field portable X-ray Fluorescence (pXRF) technology for reducing the risk and increasing the confidence of decision making in metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. Current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guidelines. The method's low 'minimum' sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating infield pXRF analysis with the established sampling method to overcome sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. Infield pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of the costs associated with both the current and proposed methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow for faster, more cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
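
The statistical leverage of extra pXRF samples is straightforward: under the normal approximation, the 95% confidence-interval half-width of the site mean shrinks as 1/sqrt(n), which is why raising sampling density tightens the interval used by the NSW notification trigger. A minimal sketch (the 1.96 normal multiplier is a textbook simplification; a t-multiplier would apply at small n):

```python
import math

def ci95_halfwidth(std_dev, n):
    """95% confidence-interval half-width of a site mean under the
    normal approximation: 1.96 * s / sqrt(n)."""
    return 1.96 * std_dev / math.sqrt(n)
```

Quadrupling the sample count, for example, halves the half-width.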

  1. Estimating Logistics Support of Reusable Launch Vehicles During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; White, N. H.; Davies, W. T.; Ebeling, C. E.

    1997-01-01

    Methods exist to define the logistics support requirements for new aircraft concepts but are not directly applicable to new launch vehicle concepts. In order to define the support requirements and to discriminate among new technologies and processing choices for these systems, NASA Langley Research Center (LaRC) is developing new analysis methods. This paper describes several methods under development, gives their current status, and discusses the benefits and limitations associated with their use.

  2. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Different from most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization.
By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
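
The core idea of a PCA-based domain adaptation can be sketched as follows: extract the principal subspace of the large historical data, then express the centred ultra-small current-session sample set in that subspace so a decoder trained on historical data can be reused. This is a plausible reading of the abstract, not the paper's exact algorithm; the function names and alignment choice are assumptions.

```python
import numpy as np

def pda_align(hist_X, curr_X, n_components):
    """Project a small current-session sample set (trials x features)
    into the principal subspace of a large historical data set.
    Illustrative sketch of PCA-based domain adaptation only."""
    hist_mean = hist_X.mean(axis=0)
    # principal axes of the large historical data set
    _, _, vt = np.linalg.svd(hist_X - hist_mean, full_matrices=False)
    basis = vt[:n_components]                  # (n_components, n_features)
    # centre the ultra-small current set on its own mean, then express
    # it in the historical principal subspace
    curr_centred = curr_X - curr_X.mean(axis=0)
    return curr_centred @ basis.T              # (n_trials, n_components)
```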

  3. Neural network based automatic limit prediction and avoidance system and method

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J. (Inventor); Prasad, Jonnalagadda V. R. (Inventor); Horn, Joseph F. (Inventor)

    2001-01-01

    A method for performance envelope boundary cueing for a vehicle control system comprises the steps of formulating a prediction system for a neural network and training the neural network to predict values of limited parameters as a function of current control positions and current vehicle operating conditions. The method further comprises the steps of applying the neural network to the control system of the vehicle, where the vehicle has capability for measuring current control positions and current vehicle operating conditions. The neural network generates a map of current control positions and vehicle operating conditions versus the limited parameters in a pre-determined vehicle operating condition. The method estimates critical control deflections from the current control positions required to drive the vehicle to a performance envelope boundary. Finally, the method comprises the steps of communicating the critical control deflection to the vehicle control system; and driving the vehicle control system to provide a tactile cue to an operator of the vehicle as the control positions approach the critical control deflections.

  4. Recommendations for Developing Alternative Test Methods for Developmental Neurotoxicity

    EPA Science Inventory

    There is great interest in developing alternative methods for developmental neurotoxicity testing (DNT) that are cost-efficient, use fewer animals and are based on current scientific knowledge of the developing nervous system. Alternative methods will require demonstration of the...

  5. Prediction techniques for jet-induced effects in hover on STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Wardwell, Douglas A.; Kuhn, Richard E.

    1991-01-01

    Prediction techniques for jet induced lift effects during hover are available, relatively easy to use, and produce adequate results for preliminary design work. Although deficiencies of the current method were found, it is still the best way to estimate jet induced lift effects short of using computational fluid dynamics. Its use is summarized. The new method summarized here represents the first step toward the use of surface pressure data in an empirical method, as opposed to just balance data in the current method, for calculating jet induced effects. Although the new method is currently limited to flat plate configurations having two circular jets of equal thrust, it has the potential of more accurately predicting jet induced effects, including a means for estimating the pitching moment in hover. As this method was developed from a very limited amount of data, broader applications of the method require the inclusion of new data on additional configurations. However, within this small data base, the new method does a better job of predicting jet induced effects in hover than the current method.

  6. Automatic method of measuring silicon-controlled-rectifier holding current

    NASA Technical Reports Server (NTRS)

    Maslowski, E. A.

    1972-01-01

    Development of automated silicon controlled rectifier circuit for measuring minimum anode current required to maintain rectifiers in conducting state is discussed. Components of circuit are described and principles of operation are explained. Illustration of circuit is provided.

  7. Survey on multisensory feedback virtual reality dental training systems.

    PubMed

    Wang, D; Li, T; Zhang, Y; Hou, J

    2016-11-01

    Compared with traditional dental training methods, virtual reality training systems integrated with multisensory feedback possess potential advantages. However, there exist many technical challenges in developing a satisfactory simulator. In this manuscript, we systematically survey several current dental training systems to identify the gaps between the capabilities of these systems and the clinical training requirements. After briefly summarising the components, functions and unique features of each system, we discuss the technical challenges behind these systems, including the software, hardware and user evaluation methods. Finally, the clinical requirements of an ideal dental training system are proposed. Future research/development areas are identified based on an analysis of the gaps between current systems and clinical training requirements. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. A novel concept of fault current limiter based on saturable core in high voltage DC transmission system

    NASA Astrophysics Data System (ADS)

    Yuan, Jiaxin; Zhou, Hang; Gan, Pengcheng; Zhong, Yongheng; Gao, Yanhui; Muramatsu, Kazuhiro; Du, Zhiye; Chen, Baichao

    2018-05-01

    To develop a mechanical circuit breaker in a high voltage direct current (HVDC) system, a fault current limiter is required. Traditional methods to limit DC fault current use superconducting technology or power electronic devices, which are quite difficult to bring to practical use under high voltage circumstances. In this paper, a novel concept of a high voltage DC transmission system fault current limiter (DCSFCL) based on a saturable core is proposed. In the DCSFCL, permanent magnets (PM) are added on both the upper and lower sides of the core to generate reverse magnetic flux that offsets the magnetic flux generated by the DC current and makes the DC winding present a variable inductance to the DC system. In the normal state, the DCSFCL works as a smoothing reactor and its inductance is within the scope of the design requirements. When a fault occurs, the inductance of the DCSFCL rises immediately and limits the steepness of the fault current. Magnetic field simulations were carried out, showing that compared with a conventional smoothing reactor, the DCSFCL can decrease the high steepness of the DC fault current by 17% in less than 10 ms, which verifies the feasibility and effectiveness of this method.
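
The limiting mechanism rests on the basic inductor relation di/dt = V/L: when the limiter's inductance jumps on desaturation, the fault-current rise rate drops proportionally. A one-line sketch (the numbers in the usage note are hypothetical, not the paper's design figures):

```python
def fault_di_dt(v_dc, inductance):
    """Fault-current steepness di/dt = V / L across a DC-link
    inductance; illustrates why raising the limiter's inductance on a
    fault reduces the steepness of the fault current."""
    return v_dc / inductance
```

For a hypothetical 100 kV link, raising L from 0.1 H to 0.5 H cuts di/dt from 1 MA/s to 0.2 MA/s.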

  9. Estimating psychiatric manpower requirements based on patients' needs.

    PubMed

    Faulkner, L R; Goldman, C R

    1997-05-01

    To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
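
The available-hours figure in the abstract is a ratio of total clinical capacity to patients in treatment, so the sensitivity to assumptions is easy to demonstrate. The function below is a sketch of that final step only; the input values in the test are hypothetical, not the paper's data.

```python
def psychiatrist_hours_per_patient(n_psychiatrists, clinical_hours_per_year,
                                   patients_in_treatment):
    """Available hours of psychiatric service per patient per year
    under a given set of workforce assumptions (illustrative of the
    needs-based estimation step, with hypothetical inputs)."""
    return n_psychiatrists * clinical_hours_per_year / patients_in_treatment
```

Halving assumed clinical hours or doubling the treated population halves the result, which is why small differences in assumptions produce large differences in estimates.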

  10. Inflight Microbial Monitoring - An Alternative Method to Culture Based Detection Currently Used on the International Space Station

    NASA Technical Reports Server (NTRS)

    Khodadad, Christina L.; Birmele, Michele N.; Hummerick, Mary E.; Roman, Monsi; Smith, David J.

    2015-01-01

    Microorganisms including potential human pathogens have been detected on the International Space Station (ISS). The potential to introduce new microorganisms occurs with every exchange of crew or addition of equipment or supplies. Current microbial monitoring methods require enrichment of microorganisms and a 48-hour incubation time resulting in an increase in microbial load, detecting a limited number of unidentified microorganisms. An expedient, low-cost, in-flight method of microbial detection, identification, and enumeration is warranted.

  11. Recent progress in inverse methods in France

    NASA Technical Reports Server (NTRS)

    Bry, Pierre-Francois; Jacquotte, Olivier-Pierre; Lepape, Marie-Claire

    1991-01-01

    Given the current level of jet engine performance, improvement of the various turbomachinery components requires the use of advanced methods in aerodynamics, heat transfer, and aeromechanics. In particular, successful blade design can only be achieved via numerical design methods which make it possible to reach optimized solutions in a much shorter time than ever before. Two design methods which are currently being used throughout the French turbomachinery industry to obtain optimized blade geometries are presented. Examples are presented for compressor and turbine applications. The status of these methods as far as improvement and extension to new fields of applications is also reported.

  12. Limitations of the Conventional Phase Advance Method for Constant Power Operation of the Brushless DC Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawler, J.S.

    2001-10-29

    The brushless dc motor (BDCM) has high-power density and efficiency relative to other motor types. These properties make the BDCM well suited for applications in electric vehicles provided a method can be developed for driving the motor over the 4 to 6:1 constant power speed range (CPSR) required by such applications. The present state of the art for constant power operation of the BDCM is conventional phase advance (CPA) [1]. In this paper, we identify key limitations of CPA. It is shown that the CPA has effective control over the developed power but that the current magnitude is relatively insensitive to power output and is inversely proportional to motor inductance. If the motor inductance is low, then the rms current at rated power and high speed may be several times larger than the current rating. The inductance required to maintain rms current within rating is derived analytically and is found to be large relative to that of BDCM designs using high-strength rare earth magnets. Thus, the CPA requires a BDCM with a large equivalent inductance.
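
The inverse dependence of current on inductance can be sketched from the rough high-speed relation I ≈ V/(ωL), which gives a lower bound on the inductance needed to keep rms current within rating at the top of the CPSR. This is a simplified stand-in for the paper's analytical derivation, and all quantities below are hypothetical.

```python
import math

def min_inductance_for_rated_current(v_dc, base_speed_rad, cpsr, i_rated_rms):
    """Rough lower bound on motor inductance so that the rms current
    under conventional phase advance stays within rating at top speed:
    at high speed I ~ V / (omega * L), so L >= V / (omega_max * I_rated).
    A simplified sketch, not the paper's derivation."""
    omega_max = base_speed_rad * cpsr
    return v_dc / (omega_max * i_rated_rms)
```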

  13. Experience report: Using formal methods for requirements analysis of critical spacecraft software

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Ampo, Yoko

    1994-01-01

    Formal specification and analysis of requirements continues to gain support as a method for producing more reliable software. However, the introduction of formal methods to a large software project is difficult, due in part to the unfamiliarity of the specification languages and the lack of graphics. This paper reports results of an investigation into the effectiveness of formal methods as an aid to the requirements analysis of critical, system-level fault-protection software on a spacecraft currently under development. Our experience indicates that formal specification and analysis can enhance the accuracy of the requirements and add assurance prior to design development in this domain. The work described here is part of a larger, NASA-funded research project whose purpose is to use formal-methods techniques to improve the quality of software in space applications. The demonstration project described here is part of the effort to evaluate experimentally the effectiveness of supplementing traditional engineering approaches to requirements specification with the more rigorous specification and analysis available with formal methods.

  14. HOLDING TIME STUDY FOR FECALS/SALMONELLA & CONNECTING LANGUAGE FOR 503 REGULATIONS

    EPA Science Inventory

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella have been developed and are currently in use for quantification of these organisms. Recently c...

  15. Method for Estimating Patronage of Demand Responsive Transportation Systems

    DOT National Transportation Integrated Search

    1977-12-01

    This study has developed a method for estimating patronage of demand responsive transportation (DRT) systems. This procedure requires as inputs a description of the intended service area, current work trip patterns, characteristics of the served popu...

  16. 21 CFR 212.2 - What is current good manufacturing practice for PET drugs?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... PET drugs? 212.2 Section 212.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... TOMOGRAPHY DRUGS General Provisions § 212.2 What is current good manufacturing practice for PET drugs? Current good manufacturing practice for PET drugs is the minimum requirements for the methods to be used...

  17. 21 CFR 212.2 - What is current good manufacturing practice for PET drugs?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... PET drugs? 212.2 Section 212.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... TOMOGRAPHY DRUGS General Provisions § 212.2 What is current good manufacturing practice for PET drugs? Current good manufacturing practice for PET drugs is the minimum requirements for the methods to be used...

  18. 21 CFR 212.2 - What is current good manufacturing practice for PET drugs?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... PET drugs? 212.2 Section 212.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... TOMOGRAPHY DRUGS General Provisions § 212.2 What is current good manufacturing practice for PET drugs? Current good manufacturing practice for PET drugs is the minimum requirements for the methods to be used...

  19. Cathodic Protection Measurement Through Inline Inspection Technology Uses and Observations

    NASA Astrophysics Data System (ADS)

    Ferguson, Briana Ley

    This research supports the evaluation of an impressed current cathodic protection (CP) system on a buried, coated steel pipeline through an alternative technology and method, an inline inspection device (ILI, CP ILI tool, or tool), in order to prevent and mitigate external corrosion. This thesis investigates the ability to measure the current density of a pipeline's CP system from inside the pipeline, rather than manually from outside, and then to convert that CP ILI tool reading into a pipe-to-soil potential as required by regulations and standards. This was demonstrated through a mathematical model that applies Ohm's law, circuit concepts, and attenuation principles to match the ILI sample data by varying the model parameters (i.e., values for overpotential and coating resistivity). No previous research had determined whether the protected potential range can be achieved with respect to the current density predicted from the CP ILI device. Kirchhoff's method was explored, but certain principles could not be used in the model because manual measurements would have been required. The research was based on circuit concepts that indirectly reflect the underlying electrochemical processes. Through Ohm's law, the results show that a constant current density is achievable in the protected potential range; this indicates polarization of the pipeline, which leads to the development of calcareous deposits. Calcareous deposits are desirable in industry because they increase the resistance of the pipeline coating and lower the current, thus slowing the oxygen diffusion process. This research shows that an alternative method of CP evaluation from inside the pipeline is possible, in which the pipe-to-soil potential can be estimated (as required by regulations) from the ILI tool's current density measurement.
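    The Ohm's-law conversion at the heart of this approach can be sketched as follows. This is an illustrative sketch only: the coating resistance, current density, and native potential values below are assumptions for demonstration, not field data or parameters from the thesis.

```python
# Hypothetical sketch: converting a CP ILI current-density reading into an
# estimated pipe-to-soil potential via Ohm's law. All parameter values are
# illustrative assumptions, not measurements from the thesis.

def pipe_to_soil_potential(current_density_a_m2, coating_resistance_ohm_m2,
                           native_potential_v=-0.55):
    """Estimate pipe-to-soil potential (V vs. Cu/CuSO4 reference).

    Ohm's law across the coating: the polarization shift equals the
    current density (A/m^2) times the effective coating resistance
    (ohm*m^2), added in the cathodic (negative) direction.
    """
    polarization_shift_v = current_density_a_m2 * coating_resistance_ohm_m2
    return native_potential_v - polarization_shift_v

# Example: 0.02 A/m^2 through a coating of 15 ohm*m^2 shifts an assumed
# -0.55 V native potential to -0.85 V, a common protection criterion.
estimate = pipe_to_soil_potential(0.02, 15.0)
print(f"estimated pipe-to-soil potential: {estimate:.2f} V")
```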

  20. Microbial Monitoring of Common Opportunistic Pathogens by Comparing Multiple Real-time PCR Platforms for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Roman, Monserrate C.; Jones, Kathy U.; Oubre, Cherie M.; Castro, Victoria; Ott, Mark C.; Birmele, Michele; Venkateswaran, Kasthuri J.; Vaishampayan, Parag A.

    2013-01-01

    Current methods for microbial detection: a) labor- and time-intensive cultivation-based approaches that can fail to detect or characterize all cells present; b) collection of samples on orbit and transportation back to the ground for analysis. Disadvantages of current detection methods: a) unable to perform quick and reliable detection on orbit; b) lengthy sampling intervals; c) no microbe identification.

  1. Contraceptive needs of the adolescent.

    PubMed

    Steyn, Petrus S; Goldstuck, Norman D

    2014-08-01

    The provision of contraception to adolescents requires specific attention. Adolescents require contraceptive methods which are safe, effective and simple to use. While long-acting reversible contraceptive methods are preferable, adolescents should have a choice and not be forced or mandated to use them, especially in situations where this may compromise safety. After counselling, they should have the ability to choose any method of contraception. Under the appropriate circumstances, each method of contraception may have a place. This chapter is devoted to evaluating the most current scientific rationale for the indications for use of each method of contraception in adolescents. Copyright © 2014. Published by Elsevier Ltd.

  2. Use of Membrane Filtration as an Alternative Method in the Extraction of Microcystins Produced by Harmful Algal Blooms

    EPA Science Inventory

    Current methods invariably require sample concentration, typically solid-phase extraction, so as to be amenable to measurement at ambient concentration levels. Such methods (i.e. EPA Method 544) are only validated for a limited number of the known variants where standards are ...

  3. Design and analysis of an automatic method of measuring silicon-controlled-rectifier holding current

    NASA Technical Reports Server (NTRS)

    Maslowski, E. A.

    1971-01-01

    The design of an automated SCR holding-current measurement system is described. The circuits used in the measurement system were designed to meet the major requirements of automatic data acquisition, reliability, and repeatability. Performance data are presented and compared with calibration data. The data verified the accuracy of the measurement system. Data taken over a 48-hr period showed that the measurement system operated satisfactorily and met all the design requirements.

  4. Modeling of shock wave propagation in large amplitude ultrasound.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E

    2008-01-01

    The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
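    The shock-capturing property discussed above can be illustrated on the standard model problem for the Rankine-Hugoniot relation, the inviscid Burgers equation, where a shock between states uL and uR must propagate at speed s = (uL + uR)/2. This is a minimal sketch of a first-order Godunov scheme on a toy problem, not the paper's KZK solver:

```python
import numpy as np

# Godunov finite-volume scheme for the inviscid Burgers equation
# u_t + (u^2/2)_x = 0, the classic test of the Rankine-Hugoniot shock speed.

def godunov_flux(ul, ur):
    """Exact Riemann flux for Burgers' equation, f(u) = u^2/2."""
    if ul > ur:                        # shock: pick the side by shock-speed sign
        s = 0.5 * (ul + ur)
        return 0.5 * ul ** 2 if s > 0 else 0.5 * ur ** 2
    if ul >= 0:                        # right-moving rarefaction
        return 0.5 * ul ** 2
    if ur <= 0:                        # left-moving rarefaction
        return 0.5 * ur ** 2
    return 0.0                         # sonic point inside the fan

def step(u, dx, dt):
    """One conservative update of the interior cells."""
    f = np.array([godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
    u[1:-1] -= dt / dx * (f[1:] - f[:-1])
    return u

# Shock tube: uL = 2, uR = 0, so Rankine-Hugoniot predicts speed s = 1.
n, dx = 200, 0.01
u = np.where(np.arange(n) * dx < 0.5, 2.0, 0.0)
t, dt = 0.0, 0.002                     # CFL number = 2*dt/dx = 0.4
while t < 0.5:
    u = step(u, dx, dt)
    t += dt

front = np.argmax(u < 1.0) * dx        # numerical shock location at t = 0.5
print(f"shock near x = {front:.2f} (Rankine-Hugoniot predicts ~1.0)")
```

The shock starts at x = 0.5 and, moving at unit speed, should sit near x = 1.0 at t = 0.5; the Godunov scheme recovers this on a coarse grid, which is the property the paper exploits.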

  5. Software compensation of eddy current fields in multislice high order dynamic shimming.

    PubMed

    Sengupta, Saikat; Avison, Malcolm J; Gore, John C; Brian Welch, E

    2011-06-01

    Dynamic B(0) shimming (DS) can produce better field homogeneity than static global shimming by dynamically updating slicewise shim values in a multislice acquisition. The performance of DS however is limited by eddy current fields produced by the switching of 2nd and 3rd order unshielded shims. In this work, we present a novel method of eddy field compensation (EFC) applied to higher order shim induced eddy current fields in multislice DS. This method does not require shim shielding, extra hardware for eddy current compensation or subject specific prescanning. The interactions between shim harmonics are modeled assuming steady state of the medium and long time constant, cross and self term eddy fields in a DS experiment and 'correction factors' characterizing the entire set of shim interactions are derived. The correction factors for a given time between shim switches are shown to be invariable with object scanned, shim switching pattern and actual shim values, allowing for their generalized prospective use. Phantom and human head, 2nd and 3rd order DS experiments performed without any hardware eddy current compensation using the technique show large reductions in field gradients and offsets leading to significant improvements in image quality. This method holds promise as an alternative to expensive hardware based eddy current compensation required in 2nd and 3rd order DS. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Transitioning Human, Social, Cultural Behavior (HSCB) Models and Simulations to the Operational User1

    DTIC Science & Technology

    2009-10-01

    current M&S covering support to operations, representation of human behavior, asymmetric warfare, defense against terrorism and...methods, tools, data, intellectual capital, and processes to address these capability requirements. Fourth, there is a need to compare capability...requirements to current capabilities to identify gaps that may be addressed with DoD HSCB methods, tools, data, intellectual capital, and processes

  7. The impacts of climate change on global irrigation water requirements

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cai, X.

    2011-12-01

    Climate change tends to affect the irrigation water requirement of current irrigated agricultural land, and also changes the water availability for current rain-fed land by the end of this century. We use the most up-to-date climatic and crop datasets (e.g., global irrigated/rain-fed crop areas and grid-level crop growing calendar (Portmann, Siebert and Döll, 2010, Global Biogeochemical Cycles 24)) to evaluate the irrigation requirements of currently irrigated land and the water deficit for rain-fed land for all major crops under current and projected climate. Six general circulation models (GCMs) under two emission scenarios, A1B & B1, are assembled using two methods, the Simple Average Method (SAM) and the Root Mean Square Error Ensemble Method (RMSEMM), to deal with GCM regional variability. It is found that the global irrigation requirement and the water deficit are both going to increase significantly under all scenarios, particularly under the A1B emission scenario. For example, the projected irrigation requirement is expected to increase by about 2500 million m3 for wheat, 3200 million m3 for maize and another 3300 million m3 for rice. At the same time, the water deficit for current rain-fed cropland will be widened by around 3000, 4000, and 2100 million m3 for wheat, maize and rice respectively. Regional analysis is conducted for Africa, China, Europe, India, South America and the United States. It is found that the U.S. may expect the greatest rise in irrigation requirements for wheat and maize, while South America may suffer the greatest increase for rice. In addition, Africa and the U.S. may face a larger water deficit for both wheat and maize on rain-fed land, and South America for rice. In summary, climate change is likely to bring severe challenges for irrigation systems and make global water shortage even worse by the end of this century. These pressures will call for extensive adaptation measures.
The change in crop water requirements and availability will lead to changes in regional food production, demand and trade, and will affect global food markets. It is also likely that the network and paths of the so-called global virtual water flow will be altered due to the impact of climate change on food production at the regional level.

  8. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification

    DOT National Transportation Integrated Search

    2012-03-31

    This report evaluates the performance of Continuous Risk Profile (CRP) compared with the Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network screening methods all require the same inputs: traffic collision data and Sa...

  9. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification.

    DOT National Transportation Integrated Search

    2012-03-01

    This report evaluates the performance of Continuous Risk Profile (CRP) compared with the Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network screening methods all require the same inputs: traffic collision data and Sa...

  10. A FASTER METHOD OF MEASURING RECREATIONAL WATER QUALITY FOR BETTER PROTECTION OF SWIMMER'S HEALTH

    EPA Science Inventory

    Introduction

    Fecal indicator bacteria (FIB) are used to monitor recreational water quality worldwide. Current methods of measuring FIB require at least 24 hours for visible bacterial colonies to grow. We previously reported that a faster method (< 2 hours) of measuring FI...

  11. Use of Non-invasive Uterine Electromyography in the Diagnosis of Preterm Labour

    PubMed Central

    Lucovnik, M.; Novak-Antolic, Z.; Garfield, R.E.

    2012-01-01

    Predictive values of methods currently used in the clinics to diagnose preterm labour are low. This leads to missed opportunities to improve neonatal outcomes and, on the other hand, to unnecessary hospitalizations and treatments. In addition, research of new and potentially more effective preterm labour treatments is hindered by the inability to include only patients in true preterm labour into studies. Uterine electromyography (EMG) detects changes in cell excitability and coupling required for labour and has higher predictive values for preterm delivery than currently available methods. This methodology could also provide a better means to evaluate various therapeutic interventions for preterm labour. Our manuscript presents a review of uterine EMG studies examining the potential clinical value that this technology possesses over what is available to physicians currently. We also evaluated the impact that uterine EMG could have on investigation of preterm labour treatments by calculating sample sizes for studies using EMG vs. current methods to enrol women. Besides helping clinicians to make safer and more cost-effective decisions when managing patients with preterm contractions, implementation of uterine EMG for diagnosis of preterm labour would also greatly reduce sample sizes required for studies of treatments. PMID:24753891

  12. Force analysis of magnetic bearings with power-saving controls

    NASA Technical Reports Server (NTRS)

    Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.

    1992-01-01

    Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
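    The linearization-versus-power trade-off described above can be sketched numerically. For an opposed pair of electromagnets, the net force k[(ib+ic)² − (ib−ic)²]/g² = 4k·ib·ic/g² is linear in the control current, but the bias current dissipates power continuously. The force constant, air gap, resistance, and current values below are illustrative assumptions, not values from the paper:

```python
# Illustrative comparison of a biased opposed-pair magnetic bearing with a
# zero-bias scheme energizing a single coil. All parameters are assumptions.

k, g, R = 1e-5, 1e-3, 2.0     # force constant, air gap (m), coil resistance (ohm)

def force_biased(i_bias, i_ctrl):
    """Net force of the opposed pair: k[(ib+ic)^2 - (ib-ic)^2]/g^2 = 4k*ib*ic/g^2."""
    return k * ((i_bias + i_ctrl) ** 2 - (i_bias - i_ctrl) ** 2) / g ** 2

def power_biased(i_bias, i_ctrl):
    """I^2*R dissipation in both coils: 2R(ib^2 + ic^2)."""
    return R * ((i_bias + i_ctrl) ** 2 + (i_bias - i_ctrl) ** 2)

def power_unbiased(force):
    """One coil alone delivering the same force needs i = g*sqrt(F/k)."""
    return R * (g * (force / k) ** 0.5) ** 2

ib, ic = 1.0, 0.2
F = force_biased(ib, ic)       # linear in ic thanks to the bias
print(f"force {F:.1f} N, biased power {power_biased(ib, ic):.2f} W, "
      f"unbiased power {power_unbiased(F):.2f} W")
```

In this toy case the biased scheme spends several times the power of the single-coil scheme for the same static force, which is the saving the alternative methods pursue, at the cost of the linear force-current relationship.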

  13. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  14. Recommendations for the treatment of aging in standard technical specifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, R.D.; Allen, R.P.

    1995-09-01

    As part of the US Nuclear Regulatory Commission's Nuclear Plant Aging Research Program, Pacific Northwest Laboratory (PNL) evaluated the standard technical specifications for nuclear power plants to determine whether the current surveillance requirements (SRs) were effective in detecting age-related degradation. Nuclear Plant Aging Research findings for selected systems and components were reviewed to identify the stressors and operative aging mechanisms and to evaluate the methods available to detect, differentiate, and trend the resulting aging degradation. Current surveillance and testing requirements for these systems and components were reviewed for their effectiveness in detecting degraded conditions and for potential contributions to premature degradation. When the current surveillance and testing requirements appeared ineffective in detecting aging degradation or potentially could contribute to premature degradation, a possible deficiency in the SRs was identified that could result in undetected degradation. Based on this evaluation, PNL developed recommendations for inspection, surveillance, trending, and condition monitoring methods to be incorporated in the SRs to better detect age-related degradation of these selected systems and components.

  15. Current drive at plasma densities required for thermonuclear reactors.

    PubMed

    Cesario, R; Amicucci, L; Cardinali, A; Castaldo, C; Marinucci, M; Panaccione, L; Santini, F; Tudisco, O; Apicella, M L; Calabrò, G; Cianfarani, C; Frigione, D; Galli, A; Mazzitelli, G; Mazzotta, C; Pericoli, V; Schettini, G; Tuccillo, A A

    2010-08-10

    Progress in thermonuclear fusion energy research based on deuterium plasmas magnetically confined in toroidal tokamak devices requires the development of efficient current drive methods. Previous experiments have shown that plasma current can be driven effectively by externally launched radio frequency power coupled to lower hybrid plasma waves. However, at the high plasma densities required for fusion power plants, the coupled radio frequency power does not penetrate into the plasma core, possibly because of strong wave interactions with the plasma edge. Here we show experiments performed on FTU (Frascati Tokamak Upgrade) based on theoretical predictions that nonlinear interactions diminish when the peripheral plasma electron temperature is high, allowing significant wave penetration at high density. The results show that the coupled radio frequency power can penetrate into high-density plasmas due to weaker plasma edge effects, thus extending the effective range of lower hybrid current drive towards the domain relevant for fusion reactors.

  16. Power Maximization Control of Variable Speed Wind Generation System Using Permanent Magnet Synchronous Generator

    NASA Astrophysics Data System (ADS)

    Morimoto, Shigeo; Nakamura, Tomohiko; Takeda, Yoji

    This paper proposes sensorless output power maximization control of a wind generation system. A permanent magnet synchronous generator (PMSG) is used as a variable speed generator in the proposed system. The generator torque is suitably controlled according to the generator speed, so that the power from the wind turbine settles at the maximum power point under the proposed MPPT control method, with no information about the wind velocity required. Moreover, the maximum available generated power is obtained by optimum current vector control. The current vector of the PMSG is optimally controlled according to the generator speed and the required torque in order to minimize the losses of the PMSG while respecting the voltage and current constraints. The proposed wind power generation system can be realized without mechanical sensors such as a wind velocity detector and a position sensor. Several experimental results show the effectiveness of the proposed control method.
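    The sensorless MPPT idea can be sketched as follows: commanding the generator torque as T* = k_opt·ω² makes the turbine settle at its optimal tip-speed ratio without any wind measurement. The turbine parameters (air density, blade radius, peak power coefficient, optimal tip-speed ratio, inertia) and the toy Cp curve below are illustrative assumptions, not the paper's experimental setup:

```python
# Sketch of optimal-torque MPPT: T* = k_opt * w^2, no wind-speed sensor.
# All turbine parameters and the Cp curve shape are illustrative assumptions.

rho, Rb = 1.225, 1.5                  # air density (kg/m^3), blade radius (m)
cp_max, lam_opt = 0.44, 7.0           # peak power coefficient, optimal tip-speed ratio
A = 3.14159 * Rb ** 2                 # swept area
J = 0.5                               # rotor inertia (kg*m^2)
k_opt = 0.5 * rho * A * Rb ** 3 * cp_max / lam_opt ** 3

def cp(lam):
    """Toy power-coefficient curve peaking at lam_opt (illustrative shape)."""
    return max(0.0, cp_max * (1 - ((lam - lam_opt) / lam_opt) ** 2))

def turbine_torque(v, w):
    """Aerodynamic torque = captured power / rotor speed."""
    lam = w * Rb / max(v, 1e-9)
    p = 0.5 * rho * A * cp(lam) * v ** 3
    return p / max(w, 1e-9)

# Integrate J*dw/dt = T_turbine - k_opt*w^2 at a fixed (unmeasured) wind speed.
v, w, dt = 8.0, 2.0, 0.01             # wind speed (m/s), initial rotor speed (rad/s)
for _ in range(20000):
    w += dt / J * (turbine_torque(v, w) - k_opt * w ** 2)

lam_final = w * Rb / v
print(f"settled tip-speed ratio {lam_final:.2f} (optimum {lam_opt})")
```

The torque law balances the aerodynamic torque exactly at the optimal tip-speed ratio, so the rotor converges there for any wind speed; this is why no wind velocity detector is needed.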

  17. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    NASA Astrophysics Data System (ADS)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, to overcome the shortcomings of using AD or TRIZ alone, and to solve the problems that currently exist in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration that combines QFD, AD, TRIZ, and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model, and a requirement plan optimization model. Finally, we construct the computer-aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  18. The Effects of Different Electrode Types for Obtaining Surface Machining Shape on Shape Memory Alloy Using Electrochemical Machining

    NASA Astrophysics Data System (ADS)

    Choi, S. G.; Kim, S. H.; Choi, W. K.; Moon, G. C.; Lee, E. S.

    2017-06-01

    Shape memory alloy (SMA) is an important material in the medical and aerospace industries because of the shape memory effect, in which a deformed alloy recovers its original state under applied temperature or stress. Modern applications demand highly stable parts, and electrochemical machining is one method of meeting these requirements. Some SMA parts require fine patterns, for which electrochemical machining is well suited. For precision electrochemical machining with differently shaped electrodes, the current density must be controlled precisely, and an appropriate electrode shape is required. Precise square holes can be obtained on the SMA if an insulation layer suppresses the unnecessary current between electrode and workpiece. Controlling this stray current to obtain the desired shape would be a significant contribution to the medical and aerospace industries, since it makes it possible to machine a desired shape into the shape memory alloy. With a square electrode lacking an insulation layer, the unnecessary current produces inexact square holes, whereas an electrode insulated only on its sides yields precise square holes. The removal rate also improved with the insulated electrode because the insulation layer concentrates the applied current in the machining zone.

  19. Core Training in Low Back Disorders: Role of the Pilates Method.

    PubMed

    Joyce, Andrew A; Kotler, Dana H

    The Pilates method is a system of exercises developed by Joseph Pilates, which emphasizes recruitment and strengthening of the core muscles, flexibility, and breathing, to promote stability and control of movement. Its focus bears similarity to current evidence-based exercise programs for low back disorders. Spinal stability is a function of three interdependent systems, osseoligamentous, muscular, and neural control; exercise addresses both the muscular and neural function. The "core" typically refers to the muscular control required to maintain functional stability. Prior research has highlighted the importance of muscular strength and recruitment, with debate over the importance of individual muscles in the wider context of core control. Though developed long before the current evidence, the Pilates method is relevant in this setting and clearly relates to current evidence-based exercise interventions. Current literature supports the Pilates method as a treatment for low back disorders, but its benefit when compared with other exercise is less clear.

  20. EXAMINATION OF THE ROLE OF PHYSICAL RESOLUTION AND SCALE ON SEDIMENT AND NUTRIENT YIELDS

    EPA Science Inventory

    Currently, watershed delineation and extraction of stream networks are accomplished with GIS databases of digital elevation models (DEMs). The most common method for extracting channel networks requires the a priori specification of a critical source area that is required for chann...

  1. Patient Accounting Systems: Are They Fit with the Users' Requirements?

    PubMed

    Ayatollahi, Haleh; Nazemi, Zahra; Haghani, Hamid

    2016-01-01

    A patient accounting system is a subsystem of a hospital information system. This system, like other information systems, should be carefully designed to be able to meet users' requirements. The main aim of this research was to investigate users' requirements and to determine whether current patient accounting systems meet users' needs or not. This was a survey study, and the participants were the users of six patient accounting systems used in 24 teaching hospitals. A stratified sampling method was used to select the participants (n = 216). The research instruments were a questionnaire and a checklist. A mean value of ≥3 indicated the importance of each data element and the capability of the system. Generally, the findings showed that the current patient accounting systems had some weaknesses and were able to meet between 70% and 80% of users' requirements. The current patient accounting systems need to be improved to be able to meet users' requirements. This approach can also help to provide hospitals with more usable and reliable financial information.

  2. Assessing User Needs and Requirements for Assistive Robots at Home.

    PubMed

    Werner, Katharina; Werner, Franz

    2015-01-01

    'Robots in healthcare' is a very trending topic. This paper gives an overview of currently and commonly used methods to gather user needs and requirements in research projects in the field of assistive robotics. Common strategies between authors are presented as well as examples of exceptions, which can help future researchers to find methods suitable for their own work. Typical problems of the field are discussed and partial solutions are proposed.

  3. Curing conditions to inactivate Trichinella spiralis muscle larvae in ready-to-eat pork sausage

    USDA-ARS?s Scientific Manuscript database

    Curing processes for ready to eat (RTE) pork products currently require individual validation of methods to demonstrate inactivation of Trichinella spiralis. This is a major undertaking for each process; currently no model of meat chemistry exists that can be correlated with inactivation of Trichin...

  4. A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays

    PubMed Central

    Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.

    2013-01-01

    Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without need for a-priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publically available microarray datasets. PMID:23990767
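    The blind-deconvolution idea, factoring a mixed expression matrix into cell-type signatures and per-sample proportions with no prior information, can be illustrated with plain non-negative matrix factorization. This is a hedged stand-in using Lee-Seung multiplicative updates on simulated data, not the authors' algorithm:

```python
import numpy as np

# Illustrative sketch: separate heterogeneous expression V (genes x samples)
# into signatures W and proportions H with no prior knowledge, via NMF.
# The simulated data and rank-3 mixture below are assumptions for demonstration.

rng = np.random.default_rng(0)

# Simulate 100 genes x 30 samples as a mixture of 3 hidden cell types.
true_sig = rng.gamma(2.0, 1.0, size=(100, 3))       # cell-type gene signatures
true_prop = rng.dirichlet(np.ones(3), size=30).T    # proportions per sample
V = true_sig @ true_prop                            # observed mixed expression

def nmf(V, k, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W@H||_F."""
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)        # update proportions
        W *= (V @ H.T) / (W @ H @ H.T + eps)        # update signatures
    return W, H

W, H = nmf(V, k=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {rel_err:.3f}")
```

A low reconstruction error shows that the composition can be recovered without a-priori signatures, which is what makes such methods applicable to public datasets lacking that information.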

  5. Current antiviral drugs and their analysis in biological materials - Part II: Antivirals against hepatitis and HIV viruses.

    PubMed

    Nováková, Lucie; Pavlík, Jakub; Chrenková, Lucia; Martinec, Ondřej; Červený, Lukáš

    2018-01-05

    This review is Part II of a series aiming to provide a comprehensive overview of currently used antiviral drugs and to show modern approaches to their analysis. While Part I addressed antivirals against herpes viruses and antivirals against respiratory viruses, this part concerns antivirals against hepatitis viruses (B and C) and human immunodeficiency virus (HIV). Many novel antivirals against hepatitis C virus (HCV) and HIV have been introduced into clinical practice over the last decade. The recently broadening portfolio of these groups of antivirals is reflected in an increasing number of analytical methods developed to meet the needs of the clinical setting. Part II summarizes the mechanisms of action of antivirals against hepatitis B virus (HBV), HCV, and HIV, their use in clinical practice, and analytical methods for individual classes. It also provides expert opinion on the state of the art in the field of bioanalysis of these drugs. Analytical methods reflect the novelty of these chemical structures and use by far the most current approaches, such as simple and high-throughput sample preparation and fast separation, often by means of UHPLC-MS/MS. Proper method validation based on the requirements of bioanalytical guidelines is an inherent part of the developed methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Innovative methods for calculation of freeway travel time using limited data : executive summary report.

    DOT National Transportation Integrated Search

    2008-08-01

    ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...

  7. Rapid extraction of virus-contaminated hemocytes from oysters

    USDA-ARS's Scientific Manuscript database

    Rapid viral detection methods are necessary to employ diagnostic testing for viral contamination in shellfish to prevent and control foodborne illness. Current shellfish viral RNA extraction methods, which are time-consuming and not applicable for routine monitoring, require the testing of whole or ...

  8. Mesoscopic modelling and simulation of soft matter.

    PubMed

    Schiller, Ulf D; Krüger, Timm; Henrich, Oliver

    2017-12-20

    The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
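Of the methods surveyed in this record, the lattice-Boltzmann method is perhaps the easiest to sketch. Below is a minimal one-dimensional (D1Q3) lattice-Boltzmann relaxation scheme for pure diffusion with a BGK collision operator; the relaxation time and lattice size are arbitrary choices, and a real soft-matter solver would be far more elaborate.

```python
import numpy as np

# Minimal D1Q3 lattice-Boltzmann sketch for pure diffusion (BGK collision).
# Discrete velocities: 0, +1, -1 lattice units; standard D1Q3 weights.
w = np.array([2/3, 1/6, 1/6])
tau = 0.8                      # relaxation time (assumed value)
n = 100                        # number of lattice sites

rho0 = np.ones(n)
rho0[n // 2] = 2.0             # small density bump in the middle
f = w[:, None] * rho0          # initialise populations at equilibrium

for _ in range(200):
    rho = f.sum(axis=0)
    feq = w[:, None] * rho     # zero-velocity (diffusive) equilibrium
    f += (feq - f) / tau       # BGK collision: relax toward equilibrium
    f[1] = np.roll(f[1], 1)    # stream the +1 population
    f[2] = np.roll(f[2], -1)   # stream the -1 population

rho = f.sum(axis=0)
print(f"total mass: {rho.sum():.6f}")  # conserved by collision and streaming
```

Both the collision step (the equilibrium carries the same density as the populations) and the periodic streaming conserve total mass exactly, while the initial bump spreads diffusively.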

  9. Space infrared telescope pointing control system. Automated star pattern recognition

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Vanbezooijen, R. W. H.

    1985-01-01

    The Space Infrared Telescope Facility (SIRTF) is a free-flying spacecraft carrying a 1-meter-class cryogenically cooled infrared telescope nearly three orders of magnitude more sensitive than the current generation of infrared telescopes. Three automatic target acquisition methods are presented that are based on the use of an imaging star tracker. The methods are distinguished by the number of guide stars required per target, the amount of computational capability necessary, and the time required for the complete acquisition process. Each method is described in detail.

  10. Ion source and beam guiding studies for an API neutron generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sy, A.; Ji, Q.; Persaud, A.

    2013-04-19

    Recently developed neutron imaging methods require high neutron yields for fast imaging times and small beam widths for good imaging resolution. For ion sources with low current density to be viable for these types of imaging methods, large extraction apertures and beam focusing must be used. We present recent work on the optimization of a Penning-type ion source for neutron generator applications. Two multi-cusp magnet configurations have been tested and are shown to increase the extracted ion current density over operation without multi-cusp magnetic fields. The use of multi-cusp magnetic confinement and gold electrode surfaces has resulted in increased ion current density, up to 2.2 mA/cm². Passive beam focusing using tapered dielectric capillaries has been explored due to its potential for beam compression without the cost and complexity issues associated with active focusing elements. Initial results from first experiments indicate the possibility of beam compression. Further work is required to evaluate the viability of such focusing methods for associated particle imaging (API) systems.

  11. The SEM description of interaction of a transient electromagnetic wave with an object

    NASA Technical Reports Server (NTRS)

    Pearson, L. W.; Wilton, D. R.

    1980-01-01

    The singularity expansion method (SEM), proposed as a means for determining and representing the transient surface current density induced on a scatterer by a transient electromagnetic wave is described. The resulting mathematical description of the transient surface current on the object is discussed. The data required to represent the electromagnetic scattering properties of a given object are examined. Experimental methods which were developed for the determination of the SEM description are discussed. The feasibility of characterizing the surface current induced on aircraft flying in proximity to a lightning stroke by way of SEM is examined.

  12. Medical Grade Water Generation for Intravenous Fluid Production on Exploration Missions

    NASA Technical Reports Server (NTRS)

    Niederhaus, Charles E.; Barlow, Karen L.; Griffin, DeVon W.; Miller, Fletcher J.

    2008-01-01

    This document describes the intravenous (IV) fluids requirements for medical care during NASA's future Exploration-class missions. It further discusses potential methods for generating such fluids and the challenges associated with different fluid generation technologies. The current Exploration baseline mission profiles are introduced, potential medical conditions are described and evaluated for fluidic needs, and operational issues are assessed. Conclusions on the fluid volume requirements are presented, and the feasibility of various fluid generation options is discussed. A separate report will document a more complete trade study on the options to provide the required fluids. At the time this document was developed, NASA had not yet determined requirements for medical care during Exploration missions. As a result, this study was based on the current requirements for care onboard the International Space Station (ISS). While we expect that medical requirements will be different for Exploration missions, this document will provide a useful baseline not only for developing hardware to generate medical water for injection (WFI), but also as a foundation for meeting future requirements. As a final note, we expect WFI requirements for Exploration will be higher than for ISS care, and system capacity may well need to be higher than currently specified.

  13. The Chimera Method of Simulation for Unsteady Three-Dimensional Viscous Flow

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    The Chimera overset grid method is reviewed and discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is discussed. A variety of recent applications of the method is presented. Current limitations of the approach are defined.

  14. An efficient 3-D eddy-current solver using an independent impedance method for transcranial magnetic stimulation.

    PubMed

    De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc

    2011-02-01

    In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques whereby the eddy currents interact with the biological system, such as the alteration of neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.

  15. Current-horn suppression for reduced coherent-synchrotron-radiation-induced emittance growth in strong bunch compression

    NASA Astrophysics Data System (ADS)

    Charles, T. K.; Paganin, D. M.; Latina, A.; Boland, M. J.; Dowd, R. T.

    2017-03-01

    Control of coherent synchrotron radiation (CSR)-induced emittance growth is essential in linear accelerators designed to deliver very high brightness electron beams. Extreme current values at the head and tail of the electron bunch, resulting from strong bunch compression, are responsible for large CSR production leading to significant transverse projected emittance growth. The Linac Coherent Light Source (LCLS) truncates the head and tail current spikes, which greatly improves free-electron laser (FEL) performance. Here we consider the underlying dynamics that lead to the formation of current spikes (also referred to as current horns), which have been identified as caustics forming in electron trajectories. We present a method to analytically determine the conditions required to avoid caustic formation and therefore prevent the current spikes from forming. These conditions can easily be met, without increasing the transverse slice emittance, through inclusion of an octupole magnet in the middle of a bunch compressor.

  16. A Four Channel Beam Current Monitor Data Acquisition System Using Embedded Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheat, Jr., Robert Mitchell; Dalmas, Dale A.; Dale, Gregory E.

    2015-08-11

    Data acquisition from multiple beam current monitors is required for electron accelerator production of Mo-99. A two-channel system capable of recording data from two beam current monitors has been developed, is currently in use, and is discussed below. The development of a cost-effective method of extending this system to more than two channels and integrating these measurements into an accelerator control system is the main focus of this report. Data from these current monitors are digitized, processed, and stored by a digital data acquisition system. Limitations and drawbacks of the currently deployed digital data acquisition system have been identified, as have potential solutions, or at least improvements, to these problems. This report will discuss and document the efforts we've made in improving the flexibility and lowering the cost of the data acquisition system while maintaining the minimum requirements.

  17. Interhemispheric currents in the ring current region as seen by the Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Tenfjord, P.; Ostgaard, N.; Haaland, S.; Laundal, K.; Reistad, J. P.

    2013-12-01

    The existence of interhemispheric currents has been predicted by several authors, but their extent in the ring current has, to our knowledge, never been studied systematically using in-situ measurements. These currents have been suggested to be associated with observed asymmetries of the aurora. We perform a statistical study of current density and direction during ring current crossings using the Cluster spacecraft. We analyse the extent of the interhemispheric field-aligned currents for a wide range of solar wind conditions. Direct estimates of equatorial current direction and density are obtained through the curlometer technique, which is based on Ampere's law and requires magnetic field measurements from all four spacecraft. The use of this method requires careful study of factors that limit its accuracy, such as tetrahedron shape and configuration. This significantly limits our dataset, but is a necessity for accurate current calculations. Our goal is to statistically investigate the occurrence of interhemispheric currents and to determine whether there are parameters or magnetospheric states on which the current magnitude and direction depend.
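The curlometer estimate mentioned above can be sketched in an idealized form: assuming the magnetic field varies linearly across the spacecraft tetrahedron, the gradient tensor is fitted from the four point measurements and the current density follows from Ampere's law, J = (∇ × B)/μ0. The geometry and field below are synthetic, invented to give a known curl.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def curlometer(positions, fields):
    """Estimate current density from four-point magnetic measurements.

    Assumes B varies linearly across the tetrahedron: fit the gradient
    tensor G by least squares, then J = curl(B) / mu0.
    """
    dr = positions - positions.mean(axis=0)      # 4 x 3, relative to barycentre
    db = fields - fields.mean(axis=0)            # 4 x 3
    G, *_ = np.linalg.lstsq(dr, db, rcond=None)  # db ≈ dr @ G, G[i, j] = dB_j/dx_i
    curl = np.array([G[1, 2] - G[2, 1],
                     G[2, 0] - G[0, 2],
                     G[0, 1] - G[1, 0]])
    return curl / MU0

# Synthetic check: B = (-a*y, a*x, 0) has curl (0, 0, 2a).
a = 1e-9
pos = np.array([[0.0, 0, 0], [100.0, 0, 0], [0, 100.0, 0], [0, 0, 100.0]])
B = np.array([[-a * y, a * x, 0.0] for x, y, z in pos])
print(curlometer(pos, B))  # ≈ [0, 0, 2a/mu0]
```

In practice, as the record notes, the quality of this estimate depends strongly on the tetrahedron's shape: a flattened or elongated configuration makes the least-squares fit of the gradient tensor ill-conditioned.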

  18. Light Scattering based detection of food pathogens

    USDA-ARS's Scientific Manuscript database

    The current methods for detecting foodborne pathogens are mostly destructive (i.e., samples need to be pretreated) and require time, personnel, and laboratories for analyses. Optical methods, including light scattering based techniques, have gained a lot of attention recently due to their rapid a...

  19. RAPID HEALTH-BASED METHOD FOR MEASURING MICROBIAL INDICATORS OF RECREATIONAL WATER QUALITY

    EPA Science Inventory

    Because the currently approved cultural methods for monitoring indicator bacteria in recreational water require 24 hours to produce results, the public may be exposed to potentially contaminated water before the water has been identified as hazardous. This project was initiated t...

  20. EVALUATION OF BIOSOLID SAMPLE PROCESSING TECHNIQUES TO MAXIMIZE RECOVERY OF BACTERIA

    EPA Science Inventory

    Current federal regulations (40 CFR 503) require enumeration of fecal coliform or Salmonella prior to land application of Class A biosolids. This regulation specifies use of enumeration methods included in "Standard Methods for the Examination of Water and Wastewater 18th Edition,...

  1. Imaging transcranial direct current stimulation (tDCS) of the prefrontal cortex-correlation or causality in stimulation-mediated effects?

    PubMed

    Wörsching, Jana; Padberg, Frank; Ertl-Wagner, Birgit; Kumpf, Ulrike; Kirsch, Beatrice; Keeser, Daniel

    2016-10-01

    Transcranial current stimulation approaches include neurophysiologically distinct non-invasive brain stimulation techniques widely applied in basic, translational and clinical research: transcranial direct current stimulation (tDCS), oscillating transcranial direct current stimulation (otDCS), transcranial alternating current stimulation (tACS) and transcranial random noise stimulation (tRNS). Prefrontal tDCS seems to be an especially promising tool for clinical practice. In order to effectively modulate relevant neural circuits, systematic research on prefrontal tDCS is needed that uses neuroimaging and neurophysiology measures to specifically target and adjust this method to physiological requirements. This review therefore analyses the various neuroimaging methods used in combination with prefrontal tDCS in healthy and psychiatric populations. First, we provide a systematic overview on applications, computational models and studies combining neuroimaging or neurophysiological measures with tDCS. Second, we categorise these studies in terms of their experimental designs and show that many studies do not vary the experimental conditions to the extent required to demonstrate specific relations between tDCS and its behavioural or neurophysiological effects. Finally, to support best-practice tDCS research we provide a methodological framework for orientation among experimental designs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Cumulative impacts: current research and current opinions at PSW

    Treesearch

    R. M. Rice

    1987-01-01

    Consideration of cumulative watershed effects (CWEs) has both political and physical aspects. Regardless of the practical usefulness of present methods of dealing with CWEs, the legal requirement to address them remains. Management of federal land is regulated by the National Environmental Policy Act (NEPA) and the Federal Water Pollution Control Act of 1972. The...

  3. Two dimensional distribution measurement of electric current generated in a polymer electrolyte fuel cell using 49 NMR surface coils.

    PubMed

    Ogawa, Kuniyasu; Sasaki, Tatsuyoshi; Yoneda, Shigeki; Tsujinaka, Kumiko; Asai, Ritsuko

    2018-05-17

    In order to increase the current density generated in a PEFC (polymer electrolyte fuel cell), a method for measuring the spatial distribution of both the current and the water content of the MEA (membrane electrode assembly) is necessary. Based on the frequency shifts of NMR (nuclear magnetic resonance) signals acquired from the water contained in the MEA using 49 NMR coils in a 7 × 7 arrangement inserted in the PEFC, a method for measuring the two-dimensional spatial distribution of electric current generated in a unit cell with a power generation area of 140 mm × 160 mm was devised. We also developed an inverse analysis method to determine the two-dimensional electric current distribution that can be applied to actual PEFC connections. Two analytical techniques, namely coarse graining of segments and stepwise search, were used to shorten the calculation time required for inverse analysis of the electric current map. Using this method and techniques, spatial distributions of electric current and water content in the MEA were obtained when the PEFC generated electric power at 100 A. Copyright © 2018 Elsevier Inc. All rights reserved.
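The inverse problem described in this record, recovering a current map from many coil measurements, can be illustrated generically as a regularized linear least-squares inversion. The sensitivity matrix, segment count, and noise level below are invented for illustration and are not the authors' actual forward model or their coarse-graining/stepwise-search techniques.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n_seg current segments, n_coil = 49 sensor coils.
n_seg, n_coil = 16, 49
A = rng.normal(size=(n_coil, n_seg))   # assumed sensitivity matrix (shift per amp)
i_true = rng.uniform(0, 1, n_seg)      # unknown segment currents
shifts = A @ i_true + rng.normal(scale=1e-3, size=n_coil)  # noisy measurements

# Tikhonov-regularised least squares: minimise |A i - shifts|^2 + lam |i|^2.
lam = 1e-6
i_est = np.linalg.solve(A.T @ A + lam * np.eye(n_seg), A.T @ shifts)
print(f"max current error: {np.abs(i_est - i_true).max():.4f}")
```

With more sensors than unknowns and modest noise, the regularized normal equations recover the segment currents accurately; the regularization weight `lam` is an assumed value that would need tuning against the real noise level.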

  4. Determination of eddy current response with magnetic measurements.

    PubMed

    Jiang, Y Z; Tan, Y; Gao, Z; Nakamura, K; Liu, W B; Wang, S Z; Zhong, H; Wang, B B

    2017-09-01

    Accurate mutual inductances between magnetic diagnostics and poloidal field coils are an essential requirement for determining the poloidal flux for plasma equilibrium reconstruction. The mutual inductance calibration of the flux loops and magnetic probes requires time-varying coil currents, which also simultaneously drive eddy currents in electrically conducting structures. The eddy current-induced field appearing in the magnetic measurements can substantially increase the calibration error in the model if the eddy currents are neglected. In this paper, an expression of the magnetic diagnostic response to the coil currents is used to calibrate the mutual inductances, estimate the conductor time constant, and predict the eddy currents response. It is found that the eddy current effects in magnetic signals can be well-explained by the eddy current response determination. A set of experiments using a specially shaped saddle coil diagnostic are conducted to measure the SUNIST-like eddy current response and to examine the accuracy of this method. In shots that include plasmas, this approach can more accurately determine the plasma-related response in the magnetic signals by eliminating the field due to the eddy currents produced by the external field.

  5. Visualization of medical data based on EHR standards.

    PubMed

    Kopanitsa, G; Hildebrand, C; Stausberg, J; Englmeier, K H

    2013-01-01

    To organize an efficient interaction between a doctor and an EHR, the data have to be presented in the most convenient way. Medical data presentation methods and models must be flexible in order to cover the needs of users with different backgrounds and requirements. Most visualization methods are doctor-oriented; however, there are indications that the involvement of patients can optimize healthcare. The research aims at specifying the state of the art of medical data visualization. The paper analyzes a number of projects and defines requirements for a generic ISO 13606-based data visualization method. To do so, it starts with a systematic search for studies on EHR user interfaces. In order to identify best practices, visualization methods were evaluated according to the following criteria: limits of application, customizability, and re-usability. The visualization methods were compared using the specified criteria. The review showed that the analyzed projects can contribute knowledge to the development of a generic visualization method. However, none of them proposed a model that meets all the necessary criteria for a re-usable, standard-based visualization method. The shortcomings were mostly related to the structure of current medical concept specifications. The analysis showed that medical data visualization methods use hardcoded GUIs, which give little flexibility. Medical data visualization therefore has to move from hardcoded user interfaces to generic methods. This requires a great effort because current standards are not suitable for organizing the management of visualization data. This contradiction between a generic method and a flexible, user-friendly data layout has to be overcome.

  6. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  7. Current Options for the Treatment of Food Allergy

    PubMed Central

    Lanser, Bruce J.; Wright, Benjamin L.; Orgel, Kelly A.; Vickery, Brian P.; Fleischer, David M.

    2016-01-01

    Food allergy is increasing in prevalence; as a result, there is intense focus on developing safe and effective therapies. Current methods of specific immunotherapy include oral, sublingual, and epicutaneous, while nonspecific methods that have been investigated include: Chinese herbal medicine, probiotics, and anti-IgE antibodies. Although some studies have demonstrated efficacy in inducing desensitization, questions regarding safety and the potential for achieving immune tolerance remain. Although some of these therapies demonstrate promise, further investigation is required before their incorporation into routine clinical practice. PMID:26456449

  8. Accurate measurement of transgene copy number in crop plants using droplet digital PCR

    USDA-ARS's Scientific Manuscript database

    Technical abstract: Genetic transformation is a powerful means for the improvement of crop plants, but requires labor and resource intensive methods. An efficient method for identifying single copy transgene insertion events from a population of independent transgenic lines is desirable. Currently ...

  9. Accurate measure of transgene copy number in crop plants using droplet digital PCR

    USDA-ARS's Scientific Manuscript database

    Genetic transformation is a powerful means for the improvement of crop plants, but requires labor- and resource-intensive methods. An efficient method for identifying single-copy transgene insertion events from a population of independent transgenic lines is desirable. Currently, transgene copy numb...
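The droplet digital PCR arithmetic underlying copy-number calls is standard: the mean copies per droplet follow the Poisson correction λ = −ln(fraction of negative droplets), and the transgene copy number per genome is the transgene-to-reference concentration ratio scaled by the reference ploidy. The droplet counts below are hypothetical, chosen so the transgene comes out at roughly one copy; this illustrates the standard calculation, not the manuscript's exact protocol.

```python
import math

def ddpcr_copies_per_droplet(positive, total):
    """Poisson-corrected mean copies per droplet: lambda = -ln(1 - p),
    where p is the fraction of positive droplets."""
    p = positive / total
    return -math.log(1.0 - p)

# Hypothetical droplet counts for a transgene and a single-copy reference gene.
lam_transgene = ddpcr_copies_per_droplet(2111, 20000)
lam_reference = ddpcr_copies_per_droplet(4000, 20000)

# Diploid genome: the reference gene is present at 2 copies per genome.
copy_number = 2.0 * lam_transgene / lam_reference
print(f"estimated transgene copies per genome: {copy_number:.2f}")  # ~1.00
```

The Poisson correction matters because a droplet can contain more than one template molecule; simply taking the positive fraction as the concentration would bias copy-number ratios at high loading.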

  10. 43 CFR 3922.20 - Application contents.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., and transportation methods, including: (1) A description of the mining, retorting, or in situ mining... applications must be filed in the proper BLM State Office. No specific form of application is required, but the... is substantially identical to a technology or method currently in use to produce marketable...

  11. 43 CFR 3922.20 - Application contents.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., and transportation methods, including: (1) A description of the mining, retorting, or in situ mining... applications must be filed in the proper BLM State Office. No specific form of application is required, but the... is substantially identical to a technology or method currently in use to produce marketable...

  12. 43 CFR 3922.20 - Application contents.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., and transportation methods, including: (1) A description of the mining, retorting, or in situ mining... applications must be filed in the proper BLM State Office. No specific form of application is required, but the... is substantially identical to a technology or method currently in use to produce marketable...

  13. 43 CFR 3922.20 - Application contents.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., and transportation methods, including: (1) A description of the mining, retorting, or in situ mining... applications must be filed in the proper BLM State Office. No specific form of application is required, but the... is substantially identical to a technology or method currently in use to produce marketable...

  14. USE OF A MOLECULAR PROBE ASSAY FOR MONITORING SALMONELLA SPP. IN BIOSOLIDS SAMPLES

    EPA Science Inventory

    Current federal regulations (40 CFR 503) require enumeration of fecal coliform or salmonellae prior to land application of biosolids. This regulation specifies use of enumeration methods included in "Standard methods for the Examination of Water and Wastewater 18th Edition," (SM)...

  15. STANDARDIZATION AND VALIDATION OF METHODS FOR ENUMERATION OF FECAL COLIFORM AND SALMONELLA IN BIOSOLIDS

    EPA Science Inventory

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then...

  16. STANDARDIZATION AND VALIDATION OF METHODS FOR ENUMERATION OF FECAL COLIFORM AND SALMONELLA IN BIOSOLIDS

    EPA Science Inventory

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then evaluated by testi...

  17. [Development method of healthcare information system integration based on business collaboration model].

    PubMed

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into an integration configuration. In addition, an integration case in a radiology scenario was used to verify the method.

  18. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.

  19. Cerebral blood flow and autoregulation: current measurement techniques and prospects for noninvasive optical methods

    PubMed Central

    Fantini, Sergio; Sassaroli, Angelo; Tgavalekos, Kristen T.; Kornbluth, Joshua

    2016-01-01

    Cerebral blood flow (CBF) and cerebral autoregulation (CA) are critically important to maintain proper brain perfusion and supply the brain with the necessary oxygen and energy substrates. Adequate brain perfusion is required to support normal brain function, to achieve successful aging, and to navigate acute and chronic medical conditions. We review the general principles of CBF measurements and the current techniques to measure CBF based on direct intravascular measurements, nuclear medicine, X-ray imaging, magnetic resonance imaging, ultrasound techniques, thermal diffusion, and optical methods. We also review techniques for arterial blood pressure measurements as well as theoretical and experimental methods for the assessment of CA, including recent approaches based on optical techniques. The assessment of cerebral perfusion in the clinical practice is also presented. The comprehensive description of principles, methods, and clinical requirements of CBF and CA measurements highlights the potentially important role that noninvasive optical methods can play in the assessment of neurovascular health. In fact, optical techniques have the ability to provide a noninvasive, quantitative, and continuous monitor of CBF and autoregulation. PMID:27403447

  20. Improved power control using optimal adjustable coefficients for three-phase photovoltaic inverter under unbalanced grid voltage.

    PubMed

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic generation (PV) converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method that selects coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required power-quality limits (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined so as to generate the minimum integrated amplitudes of the active and reactive power references, subject to constraints on the inverter current and DC voltage fluctuation. A dead-beat controller is used to track the optimal current reference within a short period. The method has been verified in PSCAD/EMTDC software.
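The coefficient-selection idea can be illustrated with a toy constrained optimization: minimize a ripple term over one tuning coefficient subject to a current-peak limit. The cost model below is a stand-in invented for illustration and is not the paper's actual power equations or its three-coefficient formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy trade-off under an unbalanced grid: k in [0, 1] trades power ripple
# (worse as k -> 0) against current peak (worse as k -> 1). Both cost
# functions are assumed, illustrative models only.
unbalance = 0.3        # negative- over positive-sequence voltage ratio (assumed)
i_max = 1.2            # per-unit inverter current limit

def ripple(k):
    return unbalance * (1.0 - k)

def current_peak(k):
    return 1.0 + unbalance * k

res = minimize(lambda k: ripple(k[0]) ** 2,
               x0=[0.5],
               bounds=[(0.0, 1.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda k: i_max - current_peak(k[0])}],
               method="SLSQP")
print(f"optimal k = {res.x[0]:.3f}, residual ripple = {ripple(res.x[0]):.3f}")
```

In this toy model the ripple falls as `k` grows, so the optimizer pushes `k` up until the current-peak constraint binds at `1 + 0.3k = 1.2`, i.e. `k = 2/3`; the paper's method performs an analogous constrained trade-off over three coefficients.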

  1. Improved Power Control Using Optimal Adjustable Coefficients for Three-Phase Photovoltaic Inverter under Unbalanced Grid Voltage

    PubMed Central

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic generation (PV) converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method that selects coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required power-quality limits (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined so as to generate the minimum integrated amplitudes of the active and reactive power references, subject to constraints on the inverter current and DC voltage fluctuation. A dead-beat controller is used to track the optimal current reference within a short period. The method has been verified in PSCAD/EMTDC software. PMID:25243215

  2. Overview of mycotoxin methods, present status and future needs.

    PubMed

    Gilbert, J

    1999-01-01

    This article reviews current requirements for the analysis of mycotoxins in foods and identifies legislative as well as other factors that are driving development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to only use validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance such as participation in proficiency testing and reference materials are identified.

  3. A general method for decomposing the causes of socioeconomic inequality in health.

    PubMed

    Heckley, Gawain; Gerdtham, Ulf-G; Kjellsson, Gustav

    2016-07-01

    We introduce a general decomposition method applicable to all forms of bivariate rank dependent indices of socioeconomic inequality in health, including the concentration index. The technique is based on recentered influence function regression and requires only the application of OLS to a transformed variable with similar interpretation. Our method requires few identifying assumptions to yield valid estimates in most common empirical applications, unlike current methods favoured in the literature. Using the Swedish Twin Registry and a within twin pair fixed effects identification strategy, our new method finds no evidence of a causal effect of education on income-related health inequality. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
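
    The record above concerns decomposing rank-dependent indices such as the concentration index. The exact recentered-influence-function regression is specific to the paper; the sketch below only computes the concentration index itself, C = 2·cov(h, R)/mean(h) with R the fractional income rank, which is the quantity being decomposed. Data and values are synthetic.

```python
import numpy as np

def concentration_index(health, income):
    """Bivariate rank-dependent index C = 2*cov(h, R) / mean(h), where R is
    the fractional rank of income (near 0 = poorest, near 1 = richest)."""
    n = len(health)
    R = np.empty(n)
    R[np.argsort(income)] = (np.arange(1, n + 1) - 0.5) / n
    h = np.asarray(health, dtype=float)
    return 2.0 * np.cov(h, R, bias=True)[0, 1] / h.mean()

income = np.arange(1, 101, dtype=float)
equal_health = np.full(100, 70.0)       # no income gradient in health
pro_rich = 50.0 + 0.3 * income          # health rises with income
c_equal = concentration_index(equal_health, income)   # ~0
c_rich = concentration_index(pro_rich, income)        # positive (pro-rich)
```

A positive index indicates health concentrated among the better-off; the decomposition method in the paper then regresses the index's recentered influence function on covariates by OLS.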

  4. Current Practices in Instruction in the Literary Braille Code University Personnel Preparation Programs

    ERIC Educational Resources Information Center

    Rosenblum, L. Penny; Lewis, Sandra; D'Andrea, Frances Mary

    2010-01-01

    University instructors were surveyed to determine the requirements for their literary braille courses. Twenty-one instructors provided information on the textbooks they used; how they determined errors; reading proficiency requirements; and other pertinent information, such as methods of assessing mastery of the production of braille using a…

  5. Optimization of the current potential for stellarator coils

    NASA Astrophysics Data System (ADS)

    Boozer, Allen H.

    2000-02-01

    Stellarator plasma confinement devices have no continuous symmetries, which makes the design of appropriate coils far more subtle than for axisymmetric devices such as tokamaks. The modern method for designing coils for stellarators was developed by Peter Merkel [P. Merkel, Nucl. Fusion 27, 867 (1987)]. Although his method has yielded a number of successful stellarator designs, Merkel's method has a systematic tendency to give coils with a larger current than that required to produce a stellarator plasma with certain properties. In addition, Merkel's method does not naturally lead to a coil set with the flexibility to produce a number of interesting plasma configurations. The issues of coil efficiency and flexibility are addressed in this paper by a new method of optimizing the current potential, the first step in Merkel's method. The new method also allows the coil design to be based on a freer choice for the plasma-coil separation and to be constrained so space is preserved for plasma access.
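
    The current-potential approach reduces coil design to a linear least-squares problem, and the efficiency concern above is commonly handled by regularization that trades field accuracy against coil current. The sketch below shows that generic trade-off with a Tikhonov penalty; the matrix, data, and penalty value are synthetic assumptions, not Merkel's or Boozer's actual formulation.

```python
import numpy as np

def solve_current_potential(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2,
    where x are current-potential amplitudes, A maps them to the normal
    field on the plasma boundary, and b is the field to be cancelled."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))       # synthetic field-response matrix
b = rng.standard_normal(40)             # synthetic target normal field
x_small = solve_current_potential(A, b, lam=1e-6)   # accurate, large currents
x_large = solve_current_potential(A, b, lam=10.0)   # smaller currents, worse fit
```

Increasing the penalty shrinks the solution norm (smaller coil currents) at the cost of a larger residual normal field, which is the efficiency-versus-accuracy trade-off the record describes.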

  6. Optimization of the current potential for stellarator coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boozer, Allen H.; Max-Planck-Institut fuer Plasmaphysik, EURATOM-Association, D-85748 Garching,

    2000-02-01

    Stellarator plasma confinement devices have no continuous symmetries, which makes the design of appropriate coils far more subtle than for axisymmetric devices such as tokamaks. The modern method for designing coils for stellarators was developed by Peter Merkel [P. Merkel, Nucl. Fusion 27, 867 (1987)]. Although his method has yielded a number of successful stellarator designs, Merkel's method has a systematic tendency to give coils with a larger current than that required to produce a stellarator plasma with certain properties. In addition, Merkel's method does not naturally lead to a coil set with the flexibility to produce a number of interesting plasma configurations. The issues of coil efficiency and flexibility are addressed in this paper by a new method of optimizing the current potential, the first step in Merkel's method. The new method also allows the coil design to be based on a freer choice for the plasma-coil separation and to be constrained so space is preserved for plasma access. (c) 2000 American Institute of Physics.

  7. Novel operation and control of an electric vehicle aluminum/air battery system

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Yang, Shao Hua; Knickle, Harold

    The objective of this paper is to create a method to size battery subsystems for an electric vehicle to optimize battery performance. Optimization of performance includes minimizing corrosion by operating at a constant current density. These subsystems will allow for easy mechanical recharging. A proper choice of battery subsystem will allow for longer battery life, greater range and performance. For longer life, the current density and reaction rate should be nearly constant. The control method requires control of power by controlling electrolyte flow in battery submodules. As power demand increases, more submodules come on line and more electrolyte is needed. Solenoid valves open in a sequence to provide the required power. Corrosion is limited because there is no electrolyte in the modules not being used.
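
    The sequencing logic described above lends itself to a short sketch: bring sub-modules on line one at a time as power demand rises, so each active module stays near its rated current density and dry modules do not corrode. All ratings and names below are illustrative assumptions.

```python
import math

MODULE_POWER_KW = 10.0   # hypothetical rating per sub-module
N_MODULES = 12           # hypothetical sub-module count

def submodules_needed(power_kw):
    """Sub-modules to fill with electrolyte so each active module runs
    near its rated (roughly constant) current density."""
    if power_kw <= 0:
        return 0
    return min(math.ceil(power_kw / MODULE_POWER_KW), N_MODULES)

def valve_states(power_kw):
    """Solenoid valves opened in sequence; modules left dry do not corrode."""
    k = submodules_needed(power_kw)
    return [i < k for i in range(N_MODULES)]

states = valve_states(25.0)   # 25 kW demand -> first three valves open
```

A real controller would also stagger which modules are used to even out wear, which this sketch omits.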

  8. A Manually Operated, Advance Off-Stylet Insertion Tool for Minimally Invasive Cochlear Implantation Surgery

    PubMed Central

    Kratchman, Louis B.; Schurzig, Daniel; McRackan, Theodore R.; Balachandran, Ramya; Noble, Jack H.; Webster, Robert J.; Labadie, Robert F.

    2014-01-01

    The current technique for cochlear implantation (CI) surgery requires a mastoidectomy to gain access to the cochlea for electrode array insertion. It has been shown that microstereotactic frames can enable an image-guided, minimally invasive approach to CI surgery called percutaneous cochlear implantation (PCI) that uses a single drill hole for electrode array insertion, avoiding a more invasive mastoidectomy. Current clinical methods for electrode array insertion are not compatible with PCI surgery because they require a mastoidectomy to access the cochlea; thus, we have developed a manually operated electrode array insertion tool that can be deployed through a PCI drill hole. The tool can be adjusted using a preoperative CT scan for accurate execution of the advance off-stylet (AOS) insertion technique and requires less skill to operate than is currently required to implant electrode arrays. We performed three cadaver insertion experiments using the AOS technique and determined that all insertions were successful using CT and microdissection. PMID:22851233

  9. Digital control of highly augmented combat rotorcraft

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.

    1987-01-01

    Proposed concepts for the next generation of combat helicopters are to be embodied in a complex, highly maneuverable, multirole vehicle with advanced avionics systems. Single pilot and nap-of-the-Earth operations require handling qualities which minimize the involvement of the pilot in basic stabilization tasks. Meeting these requirements will demand a full-authority, high-gain, multimode, multiply-redundant, digital flight-control system. The gap between these requirements and current low-authority, low-bandwidth operational rotorcraft flight-control technology is considerable. This research aims at smoothing the transition between current technology and advanced concept requirements. The state of the art of high-bandwidth digital flight-control systems is reviewed; areas of specific concern for the flight-control systems of modern combat rotorcraft are identified; and the important concepts are illustrated in design and analysis of high-gain, digital systems with a detailed case study involving a current rotorcraft system. Approximate and exact methods are explained and illustrated for treating the important concerns which are unique to digital systems.

  10. A Method for Selection of Appropriate Assistive Technology for Computer Access

    ERIC Educational Resources Information Center

    Jenko, Mojca

    2010-01-01

    Assistive technologies (ATs) for computer access enable people with disabilities to be included in the information society. Current methods for assessment and selection of the most appropriate AT for each individual are nonstandardized, lengthy, subjective, and require substantial clinical experience of a multidisciplinary team. This manuscript…

  11. Calculating the Financial Impact of Population Growth on Education.

    ERIC Educational Resources Information Center

    Cline, Daniel H.

    It is particularly difficult to make accurate enrollment projections for areas that are experiencing a rapid expansion in their population. The traditional method of calculating cohort survival ratios must be modified and supplemented with additional information to ensure accuracy; cost projection methods require detailed analyses of current costs…

  12. Genotypic Detection of Antibiotic Resistance in "Escherichia Coli.": A Classroom Exercise

    ERIC Educational Resources Information Center

    Longtin, Sarah; Guilfoile, Patrick; Asper, Andrea

    2004-01-01

    Bacterial antibiotic resistance remains a problem of clinical importance. Current microbiological methods for determining antibiotic resistance are based on culturing bacteria, and may require up to 48 hours to complete. Molecular methods are increasingly being developed to speed the identification of antibiotic resistance and to determine its…

  13. Comparative efficacy of multimodal digital methods in assessing trail/resource degradation

    Treesearch

    Logan O. Park

    2014-01-01

    Outdoor recreation can cause both positive and negative impacts on associated forest ecosystems. Forest recreation trails localize negative impacts to a controlled spatial extent while providing recreation access beyond developed areas and transportation networks. Current methods for assessing extent and severity of trail and proximal resource degradation require...

  14. RAPID HEALTH-BASED METHOD FOR MEASURING MICROBIAL INDICATORS OF RECREATIONAL WATER QUALITY - 2006 EPA SCIENCE FORUM

    EPA Science Inventory

    Because the current approved cultural methods for monitoring indicator bacteria in recreational water require 24 hours to produce results, the public may be exposed to potentially contaminated water before the water has been identified as hazardous. This project was initiated to...

  15. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection

    EPA Science Inventory

    Methods are needed to improve the timeliness and accuracy of recreational water‐quality assessments. Traditional culture methods require 18–24 h to obtain results and may not reflect current conditions. Predictive models, based on environmental and water quality variables, have been...

  16. Automated detection of insect-damaged sunflower seeds by X-ray imaging

    USDA-ARS?s Scientific Manuscript database

    The development of insect-resistant sunflowers is hindered by the lack of a quick and effective method for scoring samples in terms of insect damage. The current method for scoring insect damage, which involves manual inspection of seeds for holes bored into the shell, is tedious, requiring approxi...

  17. How Much Do Our Students Learn by Attending Lectures?

    ERIC Educational Resources Information Center

    Sistek, Vladimir

    Considerations that affect the type of teaching method employed in undergraduate studies and medical schools are addressed, with attention to the current emphasis on the lecture method and alternative educational experiences that require students to be active, independent learners and problem solvers. Perceived academic priorities and the…

  18. Eddy current correction in volume-localized MR spectroscopy

    NASA Technical Reports Server (NTRS)

    Lin, C.; Wendt, R. E. 3rd; Evans, H. J.; Rowe, R. M.; Hedrick, T. D.; LeBlanc, A. D.

    1994-01-01

    The quality of volume-localized magnetic resonance spectroscopy is affected by eddy currents caused by gradient switching. Eddy currents can be reduced with improved gradient systems; however, it has been suggested that the distortion due to eddy currents can be compensated for during postprocessing with a single-frequency reference signal. The authors propose modifying current techniques for acquiring the single-frequency reference signal by using relaxation weighting to reduce interference from components that cannot be eliminated by digital filtering alone. Additional sequences with T1 or T2 weighting for reference signal acquisition are shown to have the same eddy current characteristics as the original signal without relaxation weighting. The authors also studied a new eddy current correction method that does not require a single-frequency reference signal. This method uses two free induction decays (FIDs) collected from the same volume with two sequences with opposite gradients. Phase errors caused by eddy currents are opposite in these two FIDs and can be canceled completely by combining the FIDs. These methods were tested in a phantom. Eddy current distortions were corrected, allowing quantitative measurement of structures such as the -CH=CH- component, which is otherwise undetectable.
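
    The two-FID cancellation described above can be sketched under the idealized assumption that eddy currents add equal-and-opposite phase errors to the two acquisitions: fid_pos = s·exp(+iφ) and fid_neg = s·exp(−iφ). The ratio of the two FIDs then isolates the phase error regardless of the true signal. The waveforms below are synthetic.

```python
import numpy as np

def combine_fids(fid_pos, fid_neg):
    """Recover phi as half the phase of the ratio (the true signal cancels
    in the ratio), then strip it from one FID. Exact while |2*phi| < pi."""
    phi = 0.5 * np.angle(fid_pos / fid_neg)
    return fid_pos * np.exp(-1j * phi)

t = np.linspace(0.0, 1.0, 256)
s = np.exp(-3.0 * t) * np.exp(2j * np.pi * 5.0 * t)   # synthetic ideal FID
phi = 0.6 * np.sin(8.0 * t)                           # eddy-current phase error
corrected = combine_fids(s * np.exp(1j * phi), s * np.exp(-1j * phi))
```

Real data adds noise and imperfect antisymmetry of the phase errors, so the cancellation is only approximate in practice; the sketch shows the ideal mechanism.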

  19. Passive detection of copy-move forgery in digital images: state-of-the-art.

    PubMed

    Al-Qershi, Osamah M; Khoo, Bee Ee

    2013-09-10

    Currently, digital images and videos have high importance because they have become the main carriers of information. However, the relative ease of tampering with images and videos makes their authenticity untrustful. Digital image forensics addresses the problem of the authentication of images or their origins. One main branch of image forensics is passive image forgery detection. Images could be forged using different techniques, and the most common forgery is the copy-move, in which a region of an image is duplicated and placed elsewhere in the same image. Active techniques, such as watermarking, have been proposed to solve the image authenticity problem, but those techniques have limitations because they require human intervention or specially equipped cameras. To overcome these limitations, several passive authentication methods have been proposed. In contrast to active methods, passive methods do not require any previous information about the image, and they take advantage of specific detectable changes that forgeries can bring into the image. In this paper, we describe the current state-of-the-art of passive copy-move forgery detection methods. The key current issues in developing a robust copy-move forgery detector are then identified, and the trends of tackling those issues are addressed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
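
    The core of block-based copy-move detection is comparing image blocks against each other and flagging matches at different positions. The toy sketch below uses exact byte matching on a synthetic image; practical detectors surveyed in this paper use robust block features (e.g. DCT coefficients or keypoint descriptors) so that matches survive compression, noise, and other post-processing.

```python
import numpy as np

def find_duplicate_blocks(img, bs=4):
    """Naive copy-move check: flag exact matches between fixed-size blocks
    at different positions in a grayscale image."""
    seen, matches = {}, []
    h, w = img.shape
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            key = img[y:y + bs, x:x + bs].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))   # (first seen, duplicate)
            else:
                seen[key] = (y, x)
    return matches

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
img[10:14, 10:14] = img[0:4, 0:4]      # simulate a copy-move forgery
matches = find_duplicate_blocks(img)   # includes the pair ((0, 0), (10, 10))
```

Exact matching fails as soon as the copied region is retouched, which is precisely why the robust passive methods reviewed in this record exist.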

  20. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which lies between methods for the convex feasibility problem and convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm for a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterations (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in the Euclidean space in order to guarantee the strong convergence, although the method is well defined in a Hilbert space.
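
    The interplay the record describes can be sketched on a toy problem: cyclic metric projections handle feasibility, while a subgradient projection onto the objective level set {f ≤ level} steers the iterates toward low objective values. This is a simplified illustration only; the paper's method avoids metric projections, uses quasi-nonexpansive operators, and updates the level adaptively, whereas here the level is held fixed and the sets are halfspaces.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Metric projection onto the halfspace {x : a.x <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def feasibility_with_level(x, halfspaces, f, grad_f, level, iters=50):
    """Each sweep: one subgradient projection onto {f <= level},
    then cyclic projections onto the constraint sets."""
    for _ in range(iters):
        fx = f(x)
        if fx > level:
            g = grad_f(x)
            x = x - (fx - level) * g / (g @ g)
        for a, b in halfspaces:
            x = proj_halfspace(x, a, b)
    return x

# Toy problem: minimise x1 + x2 over {x1 >= 1} ∩ {x2 >= 1}; optimum is (1, 1).
halfspaces = [(np.array([-1.0, 0.0]), -1.0), (np.array([0.0, -1.0]), -1.0)]
x_star = feasibility_with_level(np.array([5.0, 5.0]), halfspaces,
                                f=lambda x: x[0] + x[1],
                                grad_f=lambda x: np.array([1.0, 1.0]),
                                level=2.0)
```

With the level set at the optimal value, the iterates land on the constrained minimizer; the level-control machinery in the paper exists precisely because that value is unknown in advance.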

  1. Towards an Automated Development Methodology for Dependable Systems with Application to Sensor Networks

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  2. Design and biocompatibility of endovascular aneurysm filling devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Jennifer N.; Hwang, Wonjun; Horn, John

    We report that the rupture of an intracranial aneurysm, which can result in severe mental disabilities or death, affects approximately 30,000 people in the United States annually. The traditional surgical method of treating these arterial malformations involves a full craniotomy procedure, wherein a clip is placed around the aneurysm neck. In recent decades, research and device development have focused on new endovascular treatment methods to occlude the aneurysm void space. These methods, some of which are currently in clinical use, utilize metal, polymeric, or hybrid devices delivered via catheter to the aneurysm site. In this review, we present several such devices, including those that have been approved for clinical use, and some that are currently in development. We present several design requirements for a successful aneurysm filling device and discuss the success or failure of current and past technologies. Lastly, we also present novel polymeric based aneurysm filling methods that are currently being tested in animal models that could result in superior healing.

  3. Design and biocompatibility of endovascular aneurysm filling devices

    DOE PAGES

    Rodriguez, Jennifer N.; Hwang, Wonjun; Horn, John; ...

    2014-08-04

    We report that the rupture of an intracranial aneurysm, which can result in severe mental disabilities or death, affects approximately 30,000 people in the United States annually. The traditional surgical method of treating these arterial malformations involves a full craniotomy procedure, wherein a clip is placed around the aneurysm neck. In recent decades, research and device development have focused on new endovascular treatment methods to occlude the aneurysm void space. These methods, some of which are currently in clinical use, utilize metal, polymeric, or hybrid devices delivered via catheter to the aneurysm site. In this review, we present several such devices, including those that have been approved for clinical use, and some that are currently in development. We present several design requirements for a successful aneurysm filling device and discuss the success or failure of current and past technologies. Lastly, we also present novel polymeric based aneurysm filling methods that are currently being tested in animal models that could result in superior healing.

  4. Design and biocompatibility of endovascular aneurysm filling devices

    PubMed Central

    Rodriguez, Jennifer N.; Hwang, Wonjun; Horn, John; Landsman, Todd L.; Boyle, Anthony; Wierzbicki, Mark A.; Hasan, Sayyeda M.; Follmer, Douglas; Bryant, Jesse; Small, Ward; Maitland, Duncan J.

    2014-01-01

    The rupture of an intracranial aneurysm, which can result in severe mental disabilities or death, affects approximately 30,000 people in the United States annually. The traditional surgical method of treating these arterial malformations involves a full craniotomy procedure, wherein a clip is placed around the aneurysm neck. In recent decades, research and device development have focused on new endovascular treatment methods to occlude the aneurysm void space. These methods, some of which are currently in clinical use, utilize metal, polymeric, or hybrid devices delivered via catheter to the aneurysm site. In this review, we present several such devices, including those that have been approved for clinical use, and some that are currently in development. We present several design requirements for a successful aneurysm filling device and discuss the success or failure of current and past technologies. We also present novel polymeric based aneurysm filling methods that are currently being tested in animal models that could result in superior healing. PMID:25044644

  5. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    PubMed

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Quantum rings in magnetic fields and spin current generation.

    PubMed

    Cini, Michele; Bellucci, Stefano

    2014-04-09

    We propose three different mechanisms for pumping spin-polarized currents in a ballistic circuit using a time-dependent magnetic field acting on an asymmetrically connected quantum ring at half filling. The first mechanism works thanks to a rotating magnetic field and produces an alternating current with a partial spin polarization. The second mechanism works by rotating the ring in a constant field; like the former case, it produces an alternating charge current, but the spin current is dc. Neither of these mechanisms requires a spin-orbit interaction to achieve the polarized current, although the rotating ring could be used to measure the spin-orbit interaction in the ring through characteristic oscillations. On the other hand, the last mechanism that we propose depends on the spin-orbit interaction in an essential way, and requires a time-dependent magnetic field in the plane of the ring. This arrangement can be designed to pump a purely spin current. The absence of a charge current is demonstrated analytically. Moreover, a simple formula for the current is derived and compared with the numerical results.

  7. COMPARISON OF THE RECOVERIES OF ESCHERICHIA COLI AND TOTAL COLIFORMS FROM DRINKING WATER BY THE MI AGAR METHOD AND THE U.S. ENVIRONMENTAL PROTECTION AGENCY-APPROVED MEMBRANE FILTER METHOD

    EPA Science Inventory

    Drinking water regulations under the Final Coliform Rule require that total coliform-positive drinking water samples be examined for the presence of Escherichia coli or fecal coliforms. The current U.S. Environmental Protection Agency-approved membrane filter (MF) method for E. c...

  8. Global flowfield about the V-22 Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1995-01-01

    The Chimera overset grid method is reviewed and discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is discussed. A variety of recent applications of the method is presented. Current limitations of the approach are identified.

  9. Method for making high-critical-current-density YBa₂Cu₃O₇ superconducting layers on metallic substrates

    DOEpatents

    Feenstra, Roeland; Christen, David; Paranthaman, Mariappan

    1999-01-01

    A method is disclosed for fabricating YBa₂Cu₃O₇ superconductor layers with the capability of carrying large superconducting currents on a metallic tape (substrate) supplied with a biaxially textured oxide buffer layer. The method represents a simplification of previously established techniques and provides processing requirements compatible with scale-up to long wire (tape) lengths and high processing speeds. This simplification has been realized by employing the BaF₂ method to grow a YBa₂Cu₃O₇ film on a metallic substrate having a biaxially textured oxide buffer layer.

  10. Artificial intelligence in the materials processing laboratory

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Kaukler, William F.

    1990-01-01

    Materials science and engineering provides a vast arena for applications of artificial intelligence. Advanced materials research is an area in which challenging requirements confront the researcher, from the drawing board through production and into service. Advanced techniques result in the development of new materials for specialized applications. Hand-in-hand with these new materials are also requirements for state-of-the-art inspection methods to determine the integrity or fitness for service of structures fabricated from these materials. Two problems of current interest to the Materials Processing Laboratory at UAH are an expert system to assist in eddy current inspection of graphite epoxy components for aerospace and an expert system to assist in the design of superalloys for high temperature applications. Each project requires a different approach to reach the defined goals. Results to date are described for the eddy current analysis, but only the original concepts and approaches considered are given for the expert system to design superalloys.

  11. Nutritional requirements and assessing nutritional status in camelids.

    PubMed

    Van Saun, Robert J

    2009-07-01

    It has been nearly 30 years since the first imported llamas and alpacas have been commercially raised in the United States. Nutritional requirements for these animals have not been well understood and most feeding practices were based on extrapolated and experiential information. Only recently has a National Research Council committee reviewed the available published information relative to nutrient requirements of llamas and alpacas. This article summarizes current nutrient requirement recommendations and provides some practical feeding recommendations and methods to assess nutritional status.

  12. A Survey of Hospice Volunteer Coordinators: Training Methods and Objectives of Current Hospice Volunteer Training Programs.

    PubMed

    Brock, Cara M; Herndon, Christopher M

    2017-06-01

    Currently, more than 5800 hospice organizations operate in the United States [1]. Hospice organizations are required by the Centers for Medicare and Medicaid Services (CMS) to use volunteers for services provided to patients [2]. Although CMS regulates the amount of hours hospice volunteers should provide, there are currently no national requirements for training objectives [3]. The purpose of this study was to gather information from a sample of hospices regarding volunteer coordinator background, current training for volunteers, importance of training objectives, and any comments regarding additional objectives. Representative state hospice organizations were contacted by e-mail requesting their participation and distribution of the survey throughout their member hospices. The survey asked demographic questions, along with ratings of training components based on perceived level of importance and time spent on each objective. A total of 90 surveys were received, and the response rate could not be determined. Results showed the majority of hospices were nonprofit, had fewer than 100 currently trained volunteers, and maintained an average daily patient census of less than 50. Questions regarding training programs indicated that most use live lecture methods of approximately 19 hours or less in duration. Overall, responding hospice organizations agreed that all objectives surveyed were important in training volunteers. The small number of respondents to this survey makes nationwide generalization difficult; however, it is a strong starting point for the development of further surveys on hospice volunteer training and for achieving a standardized set of training objectives and delivery methods.

  13. Finite difference time domain (FDTD) method for modeling the effect of switched gradients on the human body in MRI.

    PubMed

    Zhao, Huawei; Crozier, Stuart; Liu, Feng

    2002-12-01

    Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model. Copyright 2002 Wiley-Liss, Inc.

  14. Post Viking planetary protection requirements study

    NASA Technical Reports Server (NTRS)

    Wolfson, R. P.

    1977-01-01

    Past planetary quarantine requirements were reviewed in the light of present Viking data to determine the steps necessary to prevent contamination of the Martian surface on future missions. The currently used term planetary protection reflects a broader scope of understanding of the problems involved. Various methods of preventing contamination are discussed in relation to proposed projects, specifically the 1984 Rover Mission.

  15. Analysis of Job Announcements and the Required Competencies for Instructional Technology Professionals.

    ERIC Educational Resources Information Center

    Moallem, Mahnaz

    A study was conducted to analyze current job announcements in the field of instructional design and technology and to produce descriptive information that portrays the required skills and areas of knowledge for instructional technology graduates. Content analysis, in its general terms, was used as the research method for this study. One hundred…

  16. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, C.E.; Boothe, R.W.

    1994-02-15

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements. 6 figures.
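
    The torque-dependent modulation-frequency scheme can be sketched as follows (the frequency endpoints and the linear mapping are assumptions for illustration; the patent specifies only that the frequency is higher at lower commanded torque):

```python
def pwm_frequency(torque_cmd, f_low=2000.0, f_high=10000.0):
    """Return a PWM switching frequency (Hz) for a commanded torque
    given as a fraction of rated torque in [0, 1].  Higher frequency
    at low torque, lower at high torque, per the patented scheme; the
    linear ramp and endpoint values are illustrative assumptions."""
    t = min(max(torque_cmd, 0.0), 1.0)   # clamp command to [0, 1]
    return f_high - t * (f_high - f_low)
```

    With these assumed endpoints, a 10% torque command gives 9.2 kHz, while full rated torque gives 2 kHz, trading switching losses against current ripple as load changes.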

  17. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, C.E.; Boothe, R.W.

    1996-01-23

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements. 6 figs.

  18. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, Charles E.; Boothe, Richard W.

    1996-01-01

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements.

  19. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, Charles E.; Boothe, Richard W.

    1994-01-01

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements.

  20. Optimized use of superconducting magnetic energy storage for electromagnetic rail launcher powering

    NASA Astrophysics Data System (ADS)

    Badel, Arnaud; Tixador, Pascal; Arniet, Michel

    2012-01-01

    Electromagnetic rail launchers (EMRLs) require very high currents, from hundreds of kA to several MA. They are usually powered by capacitors. The use of superconducting magnetic energy storage (SMES) in the supply chain of an EMRL is investigated, both as an energy buffer and as a direct powering source. Simulations of direct powering are conducted to quantify the benefits of this method in terms of required primary energy. To further enhance the benefits of SMES powering, a novel integration concept is proposed: the superconducting self-supplied electromagnetic launcher (S3EL). In the S3EL, the SMES is used as a power supply for the EMRL, but its coil also serves as an additional source of magnetic flux density, increasing the thrust (or reducing the required current for a given thrust). Optimization principles for this new concept are presented. Simulations based on the characteristics of an existing launcher demonstrate that the required current could be reduced by a factor of seven. Realizing such devices with HTS cables should be possible in the near future, especially if the S3EL concept is used in combination with the XRAM principle, allowing current multiplication.

  1. Correlates of, and barriers to, Internet use among older adults.

    PubMed

    Chang, Janet; McAllister, Carolyn; McCaslin, Rosemary

    2015-01-01

    Older adults constitute the group with the greatest increase in Internet usage in the past decade; however, usage varies greatly within this population. Services to older adults require a current understanding of Internet-use trends. This study utilized a quantitative survey method to examine correlates of, and barriers to, current Internet use in a demographically diverse county in Southern California. Findings indicate that the presence of a computer at home, a job requiring computer use, age, education, and ethnicity are important factors in predicting Internet use in older adults. Implications for social work practice with older adults are discussed.

  2. Design of diversity and focused combinatorial libraries in drug discovery.

    PubMed

    Young, S Stanley; Ge, Nanxiang

    2004-05-01

    Using well-characterized chemical reactions and readily available monomers, chemists are able to create sets of compounds, termed libraries, which are useful in drug discovery processes. The design of combinatorial chemical libraries can be complex, and much information has recently been published offering suggestions on how the design process can be carried out. This review surveys that literature with the goal of organizing current thinking. At this point, it is clear that benchmarking of currently suggested methods is required, rather than the development of further new methods.

  3. Effect-Based Screening Methods for Water Quality Characterization Will Augment Conventional Analyte-by-Analyte Chemical Methods in Research As Well As Regulatory Monitoring

    EPA Science Inventory

    Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...

  4. Global challenges/chemistry solutions: Promoting personal safety and national security

    USDA-ARS?s Scientific Manuscript database

    Joe Alper: Can you provide a little background about why there is a need for this type of assay? Mark Carter: Ricin is considered a biosecurity threat agent. A more efficient detection method was required. Joe Alper: How are these type of assays done today, or are current methods unsuitable for ...

  5. N400 Response Indexes Word Learning from Linguistic Context in Children

    ERIC Educational Resources Information Center

    Abel, Alyson D.; Schneider, Julie; Maguire, Mandy J

    2018-01-01

    Word learning from linguistic context is essential for vocabulary growth from grade school onward; however, little is known about the mechanisms underlying successful word learning in children. Current methods for studying word learning development require children to identify the meaning of the word after each exposure, a method that interacts…

  6. Possible strategies for EDC testing in the future: exploring roles of pathway-based in silico, in vitro and in vivo methods

    EPA Science Inventory

    Current methods for screening, testing and monitoring endocrine-disrupting chemicals (EDCs) rely relatively substantially upon moderate- to long-term assays that can, in some instances, require significant numbers of animals. Recent developments in the areas of in vitro testing...

  7. Multiscale Reactive Molecular Dynamics

    DTIC Science & Technology

    2012-08-15

    biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system. These inherently multiscale problems require computationally efficient and accurate methods to ... condensed phase systems with computational efficiency orders of magnitude greater than currently possible with ab initio simulation methods, thus

  8. Adjusting slash pine growth and yield for silvicultural treatments

    Treesearch

    Stephen R. Logan; Barry D. Shiver

    2006-01-01

    With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...

  9. 76 FR 36623 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... 1995, Public Law 104-13 (44 U.S.C. 3506(c)(2)(A)). Currently, the IRS is soliciting comments concerning requirements respecting the adoption or change of accounting method; extensions of time to make elections... or Change of Accounting Method; Extensions of Time to Make Elections. OMB Number: 1545-1488...

  10. A calibration-free electrode compensation method

    PubMed Central

    Rossant, Cyrille; Fontaine, Bertrand; Magnusson, Anna K.

    2012-01-01

    In a single-electrode current-clamp recording, the measured potential includes both the response of the membrane and that of the measuring electrode. The electrode response is traditionally removed using bridge balance, where the response of an ideal resistor representing the electrode is subtracted from the measurement. Because the electrode is not an ideal resistor, this procedure produces capacitive transients in response to fast or discontinuous currents. More sophisticated methods exist, but they all require a preliminary calibration phase, to estimate the properties of the electrode. If these properties change after calibration, the measurements are corrupted. We propose a compensation method that does not require preliminary calibration. Measurements are compensated offline by fitting a model of the neuron and electrode to the trace and subtracting the predicted electrode response. The error criterion is designed to avoid the distortion of compensated traces by spikes. The technique allows electrode properties to be tracked over time and can be extended to arbitrary models of electrode and neuron. We demonstrate the method using biophysical models and whole cell recordings in cortical and brain-stem neurons. PMID:22896724
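
    The fit-and-subtract idea can be sketched on synthetic step-current data (a single-compartment membrane plus a resistive-capacitive electrode; all parameter values, the noise level, and the two-exponential step response are assumptions for illustration, not the authors' full model):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic step-current response: membrane (Rm, tau_m) plus
# electrode (Re, tau_e), both first-order.  Values are illustrative.
t = np.linspace(0.0, 0.1, 2000)           # time (s)
I = 0.1e-9                                 # 0.1 nA current step
Rm, tau_m = 100e6, 20e-3                   # membrane: 100 MOhm, 20 ms
Re, tau_e = 50e6, 0.5e-3                   # electrode: 50 MOhm, 0.5 ms

def model(t, Rm, tau_m, Re, tau_e):
    """Step response of the combined neuron + electrode model."""
    return I * (Rm * (1 - np.exp(-t / tau_m))
                + Re * (1 - np.exp(-t / tau_e)))

rng = np.random.default_rng(0)
v_meas = model(t, Rm, tau_m, Re, tau_e) + rng.normal(0, 5e-5, t.size)

# Fit the full model to the measured trace ...
p0 = [80e6, 15e-3, 40e6, 1e-3]             # rough initial guess
popt, _ = curve_fit(model, t, v_meas, p0=p0)
Rm_f, tau_m_f, Re_f, tau_e_f = popt

# ... then subtract the predicted electrode response offline.
v_comp = v_meas - I * Re_f * (1 - np.exp(-t / tau_e_f))
```

    The separation works here because the assumed electrode time constant is much faster than the membrane's; the paper's actual criterion additionally guards against distortion by spikes.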

  11. ISO 15859 Propellant and Fluid Specifications: A Review and Comparison with Military and NASA Specifications

    NASA Technical Reports Server (NTRS)

    Greene, Ben; McClure, Mark B.; Baker, David L.

    2006-01-01

    This work presents an overview of the International Organization for Standardization (ISO) 15859 International Standard for Space Systems Fluid Characteristics, Sampling and Test Methods Parts 1 through 13 issued in June 2004. These standards establish requirements for fluid characteristics, sampling, and test methods for 13 fluids of concern to the propellant community and propellant characterization laboratories: oxygen, hydrogen, nitrogen, helium, nitrogen tetroxide, monomethylhydrazine, hydrazine, kerosene, argon, water, ammonia, carbon dioxide, and breathing air. A comparison of the fluid characteristics, sampling, and test methods required by the ISO standards to the current military and NASA specifications, which are in use at NASA facilities and elsewhere, is presented. Many ISO standards composition limits and other content agree with those found in the applicable parts of NASA SE-S-0073, NASA SSP 30573, military performance standards and details, and Compressed Gas Association (CGA) commodity specifications. The status of a current project managed at NASA Johnson Space Center White Sands Test Facility (WSTF) to rewrite these documents is discussed.

  12. A Conceptual Model of the Information Requirements of Nursing Organizations

    PubMed Central

    Miller, Emmy

    1989-01-01

    Three related issues play a role in the identification of the information requirements of nursing organizations. These issues are the current state of computer systems in health care organizations, the lack of a well-defined data set for nursing, and the absence of models representing data and information relevant to clinical and administrative nursing practice. This paper will examine current methods of data collection, processing, and storage in clinical and administrative nursing practice for the purpose of identifying the information requirements of nursing organizations. To satisfy these information requirements, database technology can be used; however, a model for database design is needed that reflects the conceptual framework of nursing and the professional concerns of nurses. A conceptual model of the types of data necessary to produce the desired information will be presented and the relationships among data will be delineated.

  13. A QR accelerated volume-to-surface boundary condition for finite element solution of eddy current problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D; Fasenfest, B; Rieben, R

    2006-09-08

    We are concerned with the solution of time-dependent electromagnetic eddy current problems using a finite element formulation on three-dimensional unstructured meshes. We allow for multiple conducting regions, and our goal is to develop an efficient computational method that does not require a computational mesh of the air/vacuum regions. This requires a sophisticated global boundary condition specifying the total fields on the conductor boundaries. We propose a Biot-Savart law based volume-to-surface boundary condition to meet this requirement. This Biot-Savart approach is demonstrated to be very accurate. In addition, this approach can be accelerated via a low-rank QR approximation of the discretized Biot-Savart law.
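
    The low-rank acceleration step can be illustrated with a column-pivoted (rank-revealing) QR on a smooth stand-in kernel (the kernel, point sets, and truncation rank are assumptions for illustration; the actual operator is the discretized Biot-Savart law):

```python
import numpy as np
from scipy.linalg import qr

# Dense volume-to-surface coupling matrix stand-in: a smooth kernel
# 1/(1 + |x_i - y_j|) between two well-separated point sets.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 150)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

# Column-pivoted QR, truncated at rank k, approximates A (with its
# columns permuted by piv) using only k columns of Q and rows of R.
Q, R, piv = qr(A, mode="economic", pivoting=True)
k = 10
A_k = Q[:, :k] @ R[:k, :]
err = np.linalg.norm(A_k - A[:, piv]) / np.linalg.norm(A)
```

    Because the kernel is smooth and the point sets are well separated, a small rank k already reproduces the matrix to high accuracy, which is what makes the QR acceleration of such boundary operators pay off.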

  14. Protein and Amino Acid Requirements during Pregnancy.

    PubMed

    Elango, Rajavel; Ball, Ronald O

    2016-07-01

    Protein forms an essential component of a healthy diet in humans to support both growth and maintenance. During pregnancy, an exceptional stage of life defined by rapid growth and development, adequate dietary protein is crucial to ensure a healthy outcome. Protein deposition in maternal and fetal tissues increases throughout pregnancy, with most occurring during the third trimester. Dietary protein intake recommendations are based on factorial estimates because the traditional method of determining protein requirements, nitrogen balance, is invasive and undesirable during pregnancy. The current Estimated Average Requirement and RDA recommendations of 0.88 and 1.1 g · kg(-1) · d(-1), respectively, are for all stages of pregnancy. The single recommendation does not take into account the changing needs during different stages of pregnancy. Recently, with the use of the minimally invasive indicator amino acid oxidation method, we defined the requirements to be, on average, 1.2 and 1.52 g · kg(-1) · d(-1) during early (∼16 wk) and late (∼36 wk) stages of pregnancy, respectively. Although the requirements are substantially higher than current recommendations, our values are ∼14-18% of total energy and fit within the Acceptable Macronutrient Distribution Range. Using swine as an animal model we showed that the requirements for several indispensable amino acids increase dramatically during late gestation compared with early gestation. Additional studies should be conducted during pregnancy to confirm the newly determined protein requirements and to determine the indispensable amino acid requirements during pregnancy in humans. © 2016 American Society for Nutrition.
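
    The practical gap between the recommendations can be seen with a quick calculation (the 70 kg body weight is an assumed example, not a figure from the study):

```python
weight_kg = 70.0                      # assumed example body weight

rda_current = 1.1                     # g/kg/d, current RDA (all stages)
early_preg  = 1.2                     # g/kg/d, ~16 wk (study estimate)
late_preg   = 1.52                    # g/kg/d, ~36 wk (study estimate)

for label, rate in [("current RDA", rda_current),
                    ("early pregnancy", early_preg),
                    ("late pregnancy", late_preg)]:
    print(f"{label}: {weight_kg * rate:.0f} g protein/day")
```

    For a 70 kg woman this works out to 77 g/d under the current RDA versus roughly 84 g/d in early and 106 g/d in late pregnancy, a substantial difference in absolute intake.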

  15. Protein and Amino Acid Requirements during Pregnancy123

    PubMed Central

    Elango, Rajavel; Ball, Ronald O

    2016-01-01

    Protein forms an essential component of a healthy diet in humans to support both growth and maintenance. During pregnancy, an exceptional stage of life defined by rapid growth and development, adequate dietary protein is crucial to ensure a healthy outcome. Protein deposition in maternal and fetal tissues increases throughout pregnancy, with most occurring during the third trimester. Dietary protein intake recommendations are based on factorial estimates because the traditional method of determining protein requirements, nitrogen balance, is invasive and undesirable during pregnancy. The current Estimated Average Requirement and RDA recommendations of 0.88 and 1.1 g · kg−1 · d−1, respectively, are for all stages of pregnancy. The single recommendation does not take into account the changing needs during different stages of pregnancy. Recently, with the use of the minimally invasive indicator amino acid oxidation method, we defined the requirements to be, on average, 1.2 and 1.52 g · kg−1 · d−1 during early (∼16 wk) and late (∼36 wk) stages of pregnancy, respectively. Although the requirements are substantially higher than current recommendations, our values are ∼14–18% of total energy and fit within the Acceptable Macronutrient Distribution Range. Using swine as an animal model we showed that the requirements for several indispensable amino acids increase dramatically during late gestation compared with early gestation. Additional studies should be conducted during pregnancy to confirm the newly determined protein requirements and to determine the indispensable amino acid requirements during pregnancy in humans. PMID:27422521

  16. Estimating Energy Consumption of Mobile Fluid Power in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Lauren; Zigler, Bradley T.

    This report estimates the market size and energy consumption of mobile off-road applications utilizing hydraulic fluid power, and summarizes technology gaps and implementation barriers. Mobile fluid power is the use of hydraulic fluids under pressure to transmit power in mobile equipment applications. The mobile off-road fluid power sector includes various uses of hydraulic fluid power equipment with fundamentally diverse end-use application and operational requirements, such as a skid steer loader, a wheel loader or an agriculture tractor. The agriculture and construction segments dominate the mobile off-road fluid power market in component unit sales volume. An estimated range of energy consumed by the mobile off-road fluid power sector is 0.36 - 1.8 quads per year, which was 1.3 percent - 6.5 percent of the total energy consumed in 2016 by the transportation sector. Opportunities for efficiency improvements within the fluid power system result from needs to level and reduce the peak system load requirements and develop new technologies to reduce fluid power system level losses, both of which may be facilitated by characterizing duty cycles to define standardized performance test methods. There are currently no commonly accepted standardized test methods for evaluating equipment level efficiency over a duty cycle. The off-road transportation sector currently meets criteria emissions requirements, and there are no efficiency regulations requiring original equipment manufacturers (OEM) to invest in new architecture development to improve the fuel economy of mobile off-road fluid power systems. In addition, the end-user efficiency interests are outweighed by low equipment purchase or lease price concerns, required payback periods, and reliability and durability requirements of new architecture. Current economics, low market volumes with high product diversity, and regulation compliance challenge OEM investment in commercialization of new architecture development.

  17. Quantitative imaging for clinical dosimetry

    NASA Astrophysics Data System (ADS)

    Bardiès, Manuel; Flux, Glenn; Lassmann, Michael; Monsieurs, Myriam; Savolainen, Sauli; Strand, Sven-Erik

    2006-12-01

    Patient-specific dosimetry in nuclear medicine is now a legal requirement in many countries throughout the EU for targeted radionuclide therapy (TRT) applications. In order to achieve that goal, an increased level of accuracy in dosimetry procedures is needed. Current research in nuclear medicine dosimetry should not only aim at developing new methods to assess the delivered radiation absorbed dose at the patient level, but also to ensure that the proposed methods can be put into practice in a sufficient number of institutions. A unified dosimetry methodology is required for making clinical outcome comparisons possible.

  18. An analytical approach to customer requirement information processing

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong

    2013-11-01

    'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although two main CRs analysis methods, quality function deployment (QFD) and Kano model, have been applied to many fields by many enterprises in the past several decades, the limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs do not have the resources to implement QFD or Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to run CRs analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI's acquisition, classification and processing is provided.

  19. Validation studies and proficiency testing.

    PubMed

    Ankilam, Elke; Heinze, Petra; Kay, Simon; Van den Eede, Guy; Popping, Bert

    2002-01-01

    Genetically modified organisms (GMOs) entered the European food market in 1996. Current legislation demands the labeling of food products if they contain >1% GMO, as assessed for each ingredient of the product. To create confidence in the testing methods and to complement enforcement requirements, there is an urgent need for internationally validated methods, which could serve as reference methods. To date, several methods have been submitted to validation trials at an international level; approaches now exist that can be used in different circumstances and for different food matrixes. Moreover, the requirement for the formal validation of methods is clearly accepted; several national and international bodies are active in organizing studies. Further validation studies, especially on the quantitative polymerase chain reaction methods, need to be performed to cover the rising demand for new extraction methods and other background matrixes, as well as for novel GMO constructs.

  20. High frequency-heated air turbojet

    NASA Technical Reports Server (NTRS)

    Miron, J. H. D.

    1986-01-01

    A description is given of a method to heat air coming from a turbojet compressor to a temperature necessary to produce required expansion without requiring fuel. This is done by high frequency heating, which heats the walls corresponding to the combustion chamber in existing jets, by mounting high frequency coils in them. The current transformer and high frequency generator to be used are discussed.

  1. 50 CFR 218.24 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... experience collecting behavioral data. (iii) MMOs shall not be placed aboard Navy platforms for every Navy..., Navy R&D, and current science to use for potential modification of mitigation or monitoring methods. (3...

  2. 50 CFR 218.24 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... experience collecting behavioral data. (iii) MMOs shall not be placed aboard Navy platforms for every Navy..., Navy R&D, and current science to use for potential modification of mitigation or monitoring methods. (3...

  3. Review and status of liquid-cooling technology for gas turbines

    NASA Technical Reports Server (NTRS)

    Vanfossen, G. J., Jr.; Stepka, F. S.

    1979-01-01

    A review was conducted of liquid-cooled turbine technology. Selected liquid-cooled systems and methods are presented along with an assessment of the current technology status and requirements. A comprehensive bibliography is presented.

  4. Evaluation of beam wobbling methods for heavy-ion radiotherapy.

    PubMed

    Yonai, Shunsuke; Kanematsu, Nobuyuki; Komori, Masataka; Kanai, Tatsuaki; Takei, Yuka; Takahashi, Osamu; Isobe, Yoshiharu; Tashiro, Mutsumi; Koikegami, Hajime; Tomita, Hideki

    2008-03-01

    The National Institute of Radiological Sciences (NIRS) has extensively studied carbon-ion radiotherapy at the Heavy-Ion Medical Accelerator in Chiba (HIMAC) with positive outcomes, and has established its efficacy. Efforts should therefore be made to make the therapy widely available, for which it is essential to enable direct application of the clinical and technological experience obtained at NIRS. For widespread use, it is very important to reduce cost through facility downsizing, using the minimal acceleration energy needed to deliver HIMAC-equivalent clinical beams. For the beam delivery system, the requirement of miniaturization translates to a reduction in length while maintaining the clinically available field size and penetration range for range-modulated uniform broad beams of regular fields that are either circular or square for simplicity. In this paper, we evaluate various wobbling methods, including our original improvements, for application to compact facilities through experimental and computational studies. The single-ring wobbling method used at HIMAC is the most established, backed by extensive operational experience, but its residual range is a serious problem for a compact facility. On the other hand, uniform wobbling methods such as the spiral and zigzag wobbling methods are effective and suitable for a compact facility. Furthermore, these methods can be applied to treatment with passive range modulation, including respiratory-gated irradiation. In theory, the choice between the spiral and zigzag wobbling methods depends on the shape of the required irradiation field. However, we found that it is better to use the zigzag wobbling method with transformation of the wobbling pattern even when a circular uniform irradiation field is required, because it is difficult to maintain the stability of the wobbler magnet due to the rapid change of the wobbler current in the spiral wobbling method.
    The regulated wobbling method, our own improvement, expands the uniform irradiation field and reduces the power requirement of the wobbler magnets. Our evaluations showed that the regulated zigzag wobbling method is the most suitable method for use in currently designed compact carbon-therapy facilities.

  5. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is widespread practice. It provides surgeons, patients and regulatory authorities with reassurance that the implants used are safe and effective. However, it is not currently clear how implants should be statistically compared with a benchmark, or how many are required, to assess whether an implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Design: simulation study. Setting: a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population, and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess whether a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining whether an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. When benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.
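
    The 1-Kaplan-Meier (net failure) estimate the authors favour can be sketched as follows (the revision times and censoring flags are invented toy data, and this simple version treats tied times sequentially):

```python
import numpy as np

def km_net_failure(times, events):
    """Return 1 minus the Kaplan-Meier survival estimate after the
    last observation.  `events` is 1 for an observed failure
    (revision), 0 for censoring at that follow-up time."""
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    n_at_risk = len(times)
    surv = 1.0
    for t, e in zip(times, events):
        if e:                           # failure: multiply survival factor
            surv *= (n_at_risk - 1) / n_at_risk
        n_at_risk -= 1                  # failure or censoring shrinks risk set
    return 1.0 - surv

# Toy data: 10 implants, 3 revised, 7 censored at various follow-up times.
times  = [1.0, 2.0, 2.5, 3.0, 4.0, 4.5, 5.0, 6.0, 7.0, 8.0]
events = [1,   0,   1,   0,   0,   1,   0,   0,   0,   0  ]
print(km_net_failure(times, events))
```

    On this toy data the net failure is 0.37, higher than the crude 3/10 = 0.30 because censored implants stop contributing to the risk set; this is the sense in which 1-KM estimates net rather than crude failure.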

  6. A parameter-free method to extract the superconductor’s J c(B,θ) field-dependence from in-field current-voltage characteristics of high temperature superconductor tapes

    NASA Astrophysics Data System (ADS)

    Zermeño, Víctor M. R.; Habelok, Krzysztof; Stępień, Mariusz; Grilli, Francesco

    2017-03-01

    The estimation of the critical current (I c) and AC losses of high-temperature superconductor devices through modeling and simulation requires the knowledge of the critical current density (J c) of the superconducting material. This J c is in general not constant and depends both on the magnitude (B loc) and the direction (θ, relative to the tape) of the local magnetic flux density. In principle, J c(B loc,θ) can be obtained from the experimentally measured critical current I c(B a,θ), where B a is the magnitude of the applied magnetic field. However, for applications where the superconducting materials experience a local field that is close to the self-field of an isolated conductor, obtaining J c(B loc,θ) from I c(B a,θ) is not a trivial task. It is necessary to solve an inverse problem to correct for the contribution derived from the self-field. The methods presented in the literature comprise a series of approaches dealing with different degrees of mathematical regularization to fit the parameters of preconceived nonlinear formulas by means of brute force or optimization methods. In this contribution, we present a parameter-free method that provides excellent reproduction of experimental data and requires no human interaction or preconception of the J c dependence with respect to the magnetic field. In particular, it allows going from the experimental data to a ready-to-run J c(B loc,θ) model in a few minutes.

  7. Resonant interaction of the electron beam with a synchronous wave in controlled magnetrons for high-current superconducting accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazakevich, G.; Johnson, R.; Lebedev, V.

    A simplified analytical model of the resonant interaction of the beam of Larmor electrons drifting in the crossed constant fields of a magnetron with a synchronous wave providing a phase grouping of the drifting charge was developed to optimize the parameters of an rf resonant injected signal driving the magnetrons for management of phase and power of rf sources with a rate required for superconducting high-current accelerators. The model, which considers the impact of the rf resonant signal injected into the magnetron on the operation of the injection-locked tube, substantiates the recently developed method of fast power control of magnetrons in the range up to 10 dB at the highest generation efficiency, with low noise, precise stability of the carrier frequency, and the possibility of wideband phase control. Experiments with continuous wave 2.45 GHz, 1 kW microwave oven magnetrons have verified the correspondence of the behavior of these tubes to the analytical model. A proof of the principle of the novel method of power control in magnetrons, based on the developed model, was demonstrated in the experiments. The method is attractive for high-current superconducting rf accelerators. This study also discusses vector methods of power control with the rates required for superconducting accelerators, the impact of the rf resonant signal injected into the magnetron on the rate of phase control of the injection-locked tubes, and a conceptual scheme of the magnetron transmitter with highest efficiency for high-current accelerators.

  8. Resonant interaction of the electron beam with a synchronous wave in controlled magnetrons for high-current superconducting accelerators

    DOE PAGES

    Kazakevich, G.; Johnson, R.; Lebedev, V.; ...

    2018-06-14

    A simplified analytical model was developed of the resonant interaction between a synchronous wave and a beam of Larmor electrons drifting in the crossed constant fields of a magnetron, where the wave provides phase grouping of the drifting charge. The model was used to optimize the parameters of the resonant rf signal injected to drive the magnetrons, enabling management of the phase and power of rf sources at the rates required for superconducting high-current accelerators. The model, which considers the impact of the injected rf resonant signal on the operation of the injection-locked tube, substantiates the recently developed method of fast power control of magnetrons over a range of up to 10 dB at the highest generation efficiency, with low noise, precise carrier-frequency stability, and the possibility of wideband phase control. Experiments with continuous-wave 2.45 GHz, 1 kW microwave oven magnetrons have verified that the behavior of these tubes corresponds to the analytical model, and a proof of principle of the novel power-control method based on the developed model was demonstrated. The method is attractive for high-current superconducting rf accelerators. This study also discusses vector methods of power control at the rates required for superconducting accelerators, the impact of the injected rf resonant signal on the rate of phase control of injection-locked tubes, and a conceptual scheme of a highest-efficiency magnetron transmitter for high-current accelerators.

  9. Electric current locator

    DOEpatents

    King, Paul E [Corvallis, OR; Woodside, Charles Rigel [Corvallis, OR

    2012-02-07

    The disclosure herein provides an apparatus for locating a quantity of current vectors in an electrical device, where each current vector has a known direction and a known magnitude relative to the input current supplied to the device. Mathematical constants used in the Biot-Savart superposition equations are determined for the electrical device, the orientation of the apparatus, and the relative magnitude of the current vector and the input current, and the apparatus uses magnetic field sensors oriented to a sensing plane to provide the current vector location from the solution of the Biot-Savart superposition equations. The required orientations between the apparatus and the electrical device are described, and various methods of determining the mathematical constants are presented.
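
The Biot-Savart idea behind the patent can be sketched in the simplest possible case: locating one long straight wire from two field-magnitude sensors. The patent's device solves richer superposition equations; the geometry and currents below are illustrative assumptions.

```python
import math

# Long straight wire: |B| = mu0 * I / (2 * pi * r). Two sensors on the x-axis
# at x = 0 and x = 1 bracket a wire at unknown x = w carrying unknown current i.
MU0 = 4e-7 * math.pi

def b_field(i_amps, r_m):
    return MU0 * i_amps / (2 * math.pi * r_m)

i_true, w_true = 10.0, 0.3
b1, b2 = b_field(i_true, w_true), b_field(i_true, 1 - w_true)

# b1 / b2 = (1 - w) / w  =>  w = b2 / (b1 + b2); i then follows from a reading.
w = b2 / (b1 + b2)
i = b1 * 2 * math.pi * w / MU0
assert abs(w - w_true) < 1e-12 and abs(i - i_true) < 1e-9
```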

  10. Focused intracochlear electric stimulation with phased array channels.

    PubMed

    van den Honert, Chris; Kelsall, David C

    2007-06-01

    A method is described for producing focused intracochlear electric stimulation using an array of N electrodes. For each electrode site, N weights are computed that define the ratios of positive and negative electrode currents required to produce cancellation of the voltage within scala tympani at all of the N-1 other sites. Multiple sites can be stimulated simultaneously by superposition of their respective current vectors. The method allows N independent stimulus waveforms to be delivered to each of the N electrode sites without spatial overlap. Channel interaction from current spread associated with monopolar stimulation is substantially eliminated. The method operates by inverting the spread functions of individual monopoles as measured with the other electrodes. The method was implemented and validated with data from three human subjects implanted with 22-electrode perimodiolar arrays. Results indicate that (1) focusing is realizable with realistic precision; (2) focusing comes at the cost of increased total stimulation current; (3) uncanceled voltages that arise beyond the ends of the array are weak except when stimulating the two end channels; and (4) close perimodiolar positioning of the electrodes may be important for minimizing stimulation current and sensitivity to measurement errors.
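
The focusing principle can be sketched as a matrix inversion: invert the monopolar spread matrix so each channel's current vector cancels the voltage at the other sites. The exponential spread model below is an assumption standing in for the in-vivo measurements the paper uses.

```python
import numpy as np

# Toy 4-electrode spread matrix S: S[i, j] is the voltage at site i per unit
# current on electrode j (assumed exponential fall-off with distance).
N = 4
sites = np.arange(N)
S = np.exp(-np.abs(sites[:, None] - sites[None, :]).astype(float))

# Column k of W holds the N signed currents producing unit voltage at site k
# and zero voltage at the N - 1 other sites.
W = np.linalg.inv(S)

V = S @ W                         # voltage pattern of each focused channel
assert np.allclose(V, np.eye(N))  # unit at the target, cancelled elsewhere
```

The cost of focusing noted in the results shows up here too: the entries of W are larger in magnitude than a single monopolar current.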

  11. Low-jitter high-power thyristor array pulse driver and generator

    DOEpatents

    Hanks, Roy L.

    2002-01-01

    A method and apparatus for generating low-jitter, high-voltage, high-current pulses for driving low-impedance loads such as detonator fuses uses a MOSFET driver which, when triggered, discharges a pre-charged high-voltage capacitor into the primary of a toroidal current-multiplying transformer with multiple isolated secondary windings. The secondary outputs are suitable for driving an array of thyristors that discharge a pre-charged high-voltage capacitor and thus generate the required high-voltage, high-current pulse.

  12. Characteristics of switching plasma in an inverse-pinch switch

    NASA Technical Reports Server (NTRS)

    Lee, Ja H.; Choi, Sang H.; Venable, Demetrius D.; Han, Kwang S.; Nam, Sang H.

    1993-01-01

    The plasma that switches tens of giga-volt-amperes in an inverse-pinch plasma switch (INPIStron) has been characterized. Through optical and spectroscopic diagnostics of the current-carrying plasma, the current density, the motion of the current paths, and the dominant ionic species have been determined in order to assess their effects on circuit parameters and material erosion. The optimum operating condition of the plasma-puff triggering method required for azimuthally uniform conduction in the INPIStron has also been determined.

  13. Analysis method to determine and characterize the mask mean-to-target and uniformity specification

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Woo; Leunissen, Leonardus H. A.; Van de Kerkhove, Jeroen; Philipsen, Vicky; Jonckheere, Rik; Lee, Suk-Joo; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae

    2006-06-01

    The specification of the mask mean-to-target (MTT) and uniformity is related to quantities such as the mask error enhancement factor, dose sensitivity, and critical dimension (CD) tolerances. The mask MTT shows a trade-off relationship with the uniformity. Simulations of the mask MTT and uniformity (M-U) are performed for LOGIC devices of the 45 and 37 nm nodes according to mask type, illumination condition, and illuminator polarization state. CD tolerances and after-develop-inspection (ADI) target CDs in the simulation are taken from the 2004 ITRS roadmap. The simulation results allow for much smaller tolerances in the uniformity and larger offsets in the MTT than the values given in the ITRS table. Using the parameters in the ITRS table, the mask uniformity contributes nearly 95% of the total CDU budget for the 45 nm node, and is even larger than the CDU specification of the ITRS for the 37 nm node. We also compared the simulation requirements with current mask-making capabilities. The current mask-manufacturing status of the mask uniformity is barely acceptable for the 45 nm node, but requires process improvements towards future nodes. In particular, for the 37 nm node, polarized illumination is necessary to meet the ITRS requirements. The current mask linearity deviates for pitches smaller than 300 nm, which is not acceptable even for the 45 nm node. More effort on the proximity correction method is required to improve the linearity behavior.

  14. Cloud cover archiving on a global scale - A discussion of principles

    NASA Technical Reports Server (NTRS)

    Henderson-Sellers, A.; Hughes, N. A.; Wilson, M.

    1981-01-01

    Monitoring of climatic variability and climate modeling both require a reliable global cloud data set. Examination is made of the temporal and spatial variability of cloudiness in light of recommendations made by GARP in 1975 (and updated by JOC in 1978 and 1980) for cloud data archiving. An examination of the methods of comparing cloud cover frequency curves suggests that the use of the beta distribution not only facilitates objective comparison, but also reduces overall storage requirements. A specific study of the only current global cloud climatology (the U.S. Air Force's 3-dimensional nephanalysis) over the United Kingdom indicates that discussion of methods of validating satellite-based data sets is urgently required.
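
The storage argument for the beta distribution can be sketched: two method-of-moments parameters summarize an entire cloud-cover frequency curve. The data below are synthetic, not from the GARP archive.

```python
import random

# Synthetic cloud-cover fractions; a real archive would hold observed values.
random.seed(0)
cover = [random.betavariate(2.0, 5.0) for _ in range(10_000)]

m = sum(cover) / len(cover)                        # sample mean
v = sum((x - m) ** 2 for x in cover) / len(cover)  # sample variance

# Method-of-moments fit: mean = a/(a+b), var = ab/((a+b)^2 (a+b+1)).
common = m * (1 - m) / v - 1                       # estimates a + b
alpha, beta = m * common, (1 - m) * common

# Two floats now summarize the whole frequency curve, cutting storage and
# making curves from different archives objectively comparable.
assert abs(alpha - 2.0) < 0.3 and abs(beta - 5.0) < 0.8
```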

  15. Load-Dependent Soft-Switching Method of Half-Bridge Current Doubler for High-Voltage Point-of-Load Converter in Data Center Power Supplies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yutian; Yang, Fei; Tolbert, Leon M.

    With the growth of cloud computing and digital information storage, the energy requirements of data centers keep increasing. A high-voltage point-of-load (HV POL) converter with an input-series output-parallel structure is proposed to convert 400 VDC to 1 VDC within a single stage to increase power conversion efficiency. The symmetrically controlled half-bridge current doubler is selected as the converter topology in the HV POL. A load-dependent soft-switching method is proposed with an auxiliary circuit comprising an inductor, a diode, and MOSFETs, so that the hard-switching issue of typical symmetrically controlled half-bridge converters is resolved. The operating principles of the proposed soft-switching half-bridge current doubler are analyzed in detail. The necessity of adjusting the timing with the loading in the proposed method is then analyzed based on losses, and a controller is designed to realize the load-dependent operation. A lossless RCD current-sensing method is used to sense the output inductor current in the load-dependent operation. Finally, experimental efficiency measurements of a hardware prototype show that the proposed method increases the converter's efficiency in both heavy- and light-load conditions.

  16. Load-Dependent Soft-Switching Method of Half-Bridge Current Doubler for High-Voltage Point-of-Load Converter in Data Center Power Supplies

    DOE PAGES

    Cui, Yutian; Yang, Fei; Tolbert, Leon M.; ...

    2016-06-14

    With the growth of cloud computing and digital information storage, the energy requirements of data centers keep increasing. A high-voltage point-of-load (HV POL) converter with an input-series output-parallel structure is proposed to convert 400 VDC to 1 VDC within a single stage to increase power conversion efficiency. The symmetrically controlled half-bridge current doubler is selected as the converter topology in the HV POL. A load-dependent soft-switching method is proposed with an auxiliary circuit comprising an inductor, a diode, and MOSFETs, so that the hard-switching issue of typical symmetrically controlled half-bridge converters is resolved. The operating principles of the proposed soft-switching half-bridge current doubler are analyzed in detail. The necessity of adjusting the timing with the loading in the proposed method is then analyzed based on losses, and a controller is designed to realize the load-dependent operation. A lossless RCD current-sensing method is used to sense the output inductor current in the load-dependent operation. Finally, experimental efficiency measurements of a hardware prototype show that the proposed method increases the converter's efficiency in both heavy- and light-load conditions.

  17. Application of high speed machining technology in aviation

    NASA Astrophysics Data System (ADS)

    Bałon, Paweł; Szostak, Janusz; Kiełbasa, Bartłomiej; Rejman, Edward; Smusz, Robert

    2018-05-01

    Aircraft structures are exposed to many loads during their working lifespan. Every action performed during a flight is composed of a series of air movements which generate various aircraft loads. The most rigorous requirement which modern aircraft structures must fulfill is to maintain high durability and reliability, which imposes many restrictions on the aircraft design process. The most important factor is the structure's overall mass, which has a crucial impact on both utility properties and cost-effectiveness; this makes aircraft one of the most complex products of modern technology. Additionally, there is currently increasing use of high-strength aluminum alloys, which requires the implementation of new manufacturing processes. High Speed Machining (HSM) is currently one of the most important machining technologies used in the aviation industry, especially in the machining of aluminium alloys. The primary difference between HSM and other milling techniques is the ability to select cutting parameters (depth of the cut layer, feed rate, and cutting speed) so as to simultaneously ensure high quality and precision of the machined surface and high machining efficiency, all of which shorten the manufacturing process of integral components. In this paper, the authors explain the implementation of the HSM method for integral aircraft structures, present the airframe manufacturing method, and report the final results. The HSM method is compared to the previous approach, in which all subcomponents were manufactured by bending and forming and then joined by riveting.

  18. Grid related issues for static and dynamic geometry problems using systems of overset structured grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1995-01-01

    Grid related issues of the Chimera overset grid method are discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is considered. Current limitations of the approach are identified.

  19. Simplified methods for evaluating road prism stability

    Treesearch

    William J. Elliot; Mark Ballerini; David Hall

    2003-01-01

    Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...
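
The kind of screening the record describes can be illustrated with the standard infinite-slope factor-of-safety formula, a simpler check than the method-of-slices analysis a program such as XSTABL performs. All parameter values are illustrative.

```python
import math

# Standard infinite-slope factor of safety (not XSTABL's method of slices).
def factor_of_safety(c, phi_deg, gamma, depth, slope_deg, m=0.0, gamma_w=9.81):
    """c: cohesion (kPa); phi: friction angle (deg); gamma: soil unit weight
    (kN/m^3); depth: failure-plane depth (m); m: saturated fraction of depth."""
    beta, phi = math.radians(slope_deg), math.radians(phi_deg)
    shear = gamma * depth * math.sin(beta) * math.cos(beta)
    normal = (gamma - m * gamma_w) * depth * math.cos(beta) ** 2
    return (c + normal * math.tan(phi)) / shear

fs = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, depth=2.0, slope_deg=25.0)
assert fs > 1.0  # nominally stable for this dry, moderate-slope example
```

Sweeping such a formula over combinations of geometry, soil, and groundwater is the spirit of the 3,696-case parametric study, though the paper's fill-slope geometry is more complex.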

  20. A Model for Engaging Students in a Research Experience Involving Variational Techniques, Mathematica, and Descent Methods.

    ERIC Educational Resources Information Center

    Mahavier, W. Ted

    2002-01-01

    Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…

  1. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  2. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND: Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk misalignment of the two data streams. NEW METHOD: A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS: The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISON WITH EXISTING METHODS: Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the camcorder's audio input enables the timing of an event to be detected with greater precision. CONCLUSIONS: While on-line analysis and synchronization using specialized equipment may be ideal in some cases, the method presented here is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. PMID:25256648
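
The alignment step can be sketched as a cross-correlation search for the offset of a shared high-entropy signal. Plain floats stand in for the paper's audio samples; the offset and signal length are illustrative.

```python
import random

# A shared high-entropy signal recorded on two devices, the second starting
# late by an unknown number of samples.
random.seed(2)
signal = [random.random() for _ in range(1000)]
offset_true = 137
recording = [0.0] * offset_true + signal

def best_offset(ref, rec, max_lag=300):
    """Lag maximizing the raw cross-correlation of ref with rec."""
    def score(lag):
        return sum(a * b for a, b in zip(ref, rec[lag:]))
    return max(range(max_lag), key=score)

assert best_offset(signal, recording) == offset_true
```

The randomness is what makes the peak unambiguous: a fixed-rate blink pattern correlates almost equally well at many lags, which is the misalignment risk the abstract mentions.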

  3. Investigation of direct solar-to-microwave energy conversion techniques

    NASA Technical Reports Server (NTRS)

    Chatterton, N. E.; Mookherji, T. K.; Wunsch, P. K.

    1978-01-01

    Alternative methods of producing microwave energy from solar radiation, for the purpose of beaming power to the Earth from space, are identified. Specifically, methods of converting optical radiation into microwave radiation by the most direct means are investigated. Approaches based on demonstrated device functioning and basic phenomenologies are developed. No system concept was developed that is competitive with current baseline concepts. The most direct methods of conversion appear to require an initial step of producing coherent laser radiation; other methods generally require the production of electron streams for use in solid-state or cavity-oscillator systems. Further development is suggested as worthwhile for the proposed devices and for concepts utilizing a free-electron stream as the intra-space-station power transport mechanism.

  4. Evaluation of immunoturbidimetric rheumatoid factor method from Diagam on Abbott c8000 analyzer: comparison with immunonephelemetric method.

    PubMed

    Dupuy, Anne Marie; Hurstel, Rémy; Bargnoux, Anne Sophie; Badiou, Stéphanie; Cristol, Jean Paul

    2014-01-01

    Rheumatoid factor (RF) consists of autoantibodies, and because of its heterogeneity its determination is not easy. Currently, nephelometry and ELISA are considered the reference methods. Owing to consolidation, many laboratories have fully automated turbidimetric instruments, and dedicated nephelometric systems are not always available. In addition, nephelometry, although accurate, is time consuming, expensive, and requires a specific device, resulting in lower efficiency; turbidimetry could be an attractive alternative. The turbidimetric RF test from Diagam meets the requirements of accuracy and precision for optimal clinical use, with an acceptable measuring range, and could be an alternative for the determination of RF without the cost of a dedicated instrument, making consolidation and blood saving possible.

  5. Current status of quantitative rotational spectroscopy for atmospheric research

    NASA Technical Reports Server (NTRS)

    Drouin, Brian J.; Wlodarczak, Georges; Colmont, Jean-Marcel; Rohart, Francois

    2004-01-01

    Remote sensing of rotational transitions in the Earth's atmosphere has become an important method for the retrieval of geophysical temperature, pressure, and chemical composition profiles, and it requires accurate spectral information. This paper highlights the current status of rotational data useful for atmospheric measurements, with a discussion of the types of rotational lineshape measurements that are not generally available in the online repositories.

  6. Properties of a Formal Method to Model Emergence in Swarm-Based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Truszkowski, Walt; Rash, James; Hinchey, Mike

    2004-01-01

    Future space missions will require cooperation between multiple satellites and/or rovers. Developers are proposing intelligent autonomous swarms for these missions, but swarm-based systems are difficult or impossible to test with current techniques. This viewgraph presentation examines the use of formal methods in testing swarm-based systems. The potential usefulness of formal methods in modeling the ANTS asteroid encounter mission is also examined.

  7. Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers

    NASA Astrophysics Data System (ADS)

    Green, David; Berry, Lee; RF-SciDAC Collaboration

    2017-10-01

    The hot nature of fusion plasmas requires a wave-vector-dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method, such that the wave-vector-dependent conductivity fits naturally within the numerical scheme. These methods have seen much success when applied to the well-confined core plasma of tokamaks. However, quantitative prediction for high-power RF antenna designs for fusion applications requires resolving the geometric details of the antenna and other plasma-facing surfaces, for which the Fourier spectral method is ill-suited. An approach to adding kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers was presented in earlier work, where an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
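
The operator-split iteration can be sketched with scalars standing in for the full-wave operators: keep the cold-plasma operator on the left-hand side and move the kinetic correction to the right, re-evaluating it each pass. The values are illustrative; convergence here needs the correction to be small relative to the cold term.

```python
# Solve (C + K) e = b while only ever inverting the cold-plasma part C,
# iterating the kinetic correction K on the right-hand side.
C, K, b = 4.0, 1.0, 2.0
e_exact = b / (C + K)

e = b / C                    # pure cold-plasma solve as the starting point
for _ in range(50):
    e = (b - K * e) / C      # kinetic term re-evaluated on the right-hand side

assert abs(e - e_exact) < 1e-12  # converges because |K / C| < 1 here
```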

  8. Systematic Model-in-the-Loop Test of Embedded Control Systems

    NASA Astrophysics Data System (ADS)

    Krupp, Alexander; Müller, Wolfgang

    Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The purpose of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for formalizing natural-language requirements, there is no standardized and accepted means of formal property definition to serve as a target for verification planning. This article addresses several of these shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.

  9. Rechargeable thin film battery and method for making the same

    DOEpatents

    Goldner, Ronald B.; Liu, Te-Yang; Goldner, Mark A.; Gerouki, Alexandra; Haas, Terry E.

    2006-01-03

    A rechargeable, stackable, thin-film, solid-state lithium electrochemical cell, a thin-film lithium battery, and a method for making the same are disclosed. The cell and battery provide for a variety of configurations and voltage and current capacities. An innovative low-temperature ion-beam-assisted deposition method for fabricating thin-film, solid-state anodes, cathodes, and electrolytes is disclosed, wherein a source of energetic ions and evaporants combine to form thin-film cell components having preferred crystallinity, structure, and orientation. The disclosed batteries are particularly useful as power sources for portable electronic devices and electric vehicle applications where high energy density, high reversible charge capacity, high discharge current, and long battery lifetimes are required.

  10. Standardizing lightweight deflectometer modulus measurements for compaction quality assurance : research summary.

    DOT National Transportation Integrated Search

    2017-09-01

    The mechanistic-empirical pavement design method requires the elastic resilient modulus as the key input for characterization of geomaterials. Current density-based QA procedures do not measure resilient modulus. Additionally, the density-based metho...

  11. Basic principles of coaxial launch technology

    NASA Technical Reports Server (NTRS)

    Kolm, H.; Mongeau, P.

    1984-01-01

    A discrete-coil, mechanically synchronized launcher was built as early as the 1930s. At present, research is almost entirely directed towards railguns. However, although coaxial accelerators are more complex than railguns, they have certain unique advantages, including the absence of a physical-contact requirement with the projectile, the possibility of scale-up to very large projectile sizes, and the availability of up to 100 times more thrust for a given current. The price of these advantages is the need for a drive current in the form of pulses synchronized precisely with the transit of each projectile coil through each drive coil. At high velocities, high voltages are required, and high-voltage switching represents the technology limit on launch velocity. Attention is given to inductance gradients, the double hump, methods of excitation, methods of synchronization, projectile configuration, energy supply, fundamental limits, trends, and research needs.
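
The thrust advantage mentioned in the record follows from the textbook inductance-gradient relation F = ½ I² dL/dx for a coil stage; the numbers below are illustrative, not from the paper.

```python
# Textbook coilgun thrust from the inductance gradient: F = 0.5 * I^2 * dL/dx.
def thrust(i_amps, dl_dx_h_per_m):
    """Force (N) on the projectile coil: current in amperes, dL/dx in H/m."""
    return 0.5 * i_amps ** 2 * dl_dx_h_per_m

# 10 kA through a 1 uH/m inductance gradient yields about 50 N of thrust.
assert abs(thrust(10_000.0, 1e-6) - 50.0) < 1e-9
```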

  12. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  13. The harmonic impact of electric vehicle battery charging

    NASA Astrophysics Data System (ADS)

    Staats, Preston Trent

    The potential widespread introduction of the electric vehicle (EV) presents both opportunities and challenges to the power systems engineers who will be required to supply power to EV batteries. One of the challenges of EV battery charging arises from the potentially high harmonic currents associated with the conversion of ac power system voltages to dc EV battery voltages. Harmonic currents lead to increased losses in distribution circuits and reduced life expectancy of power distribution components such as capacitors and transformers. Harmonic current injections also cause harmonic voltages on power distribution networks; these distorted voltages can affect power system loads, and specific standards regulate acceptable voltage distortion. This dissertation develops and presents the theory required to evaluate the electric vehicle battery charger as a harmonic distorting load and its possible harmonic impact on various aspects of power distribution systems. The work begins by developing a method for evaluating the net harmonic current injection of a large collection of EV battery chargers which accounts for variation in start time and initial battery state-of-charge between individual chargers. Next, this method is analyzed to evaluate the effect of input parameter variation on the net harmonic currents predicted by the model. We then turn to the impact of EV charger harmonic currents on power distribution systems, first on a substation transformer and then on power distribution system harmonic voltages. The method presented accounts for the uncertainty in EV harmonic current injections by modeling the start time and initial battery state-of-charge (SOC) of an individual EV battery charger as random variables; thus, the net harmonic current and distribution system harmonic voltages are formulated in a stochastic framework.
Results indicate that considering variation in start-time and SOC leads to reduced estimates of harmonic current injection when compared to more traditional methods that do not account for variation. Evaluation of power distribution system harmonic voltages suggests that for any power distribution network there is a definite threshold penetration of EVs, below which the total harmonic distortion of voltage exceeds 5% at an insignificant number of buses. Thus, most existing distribution systems will probably be able to accommodate the early introduction of EV battery charging without widespread harmonic voltage problems.
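
The reduction from start-time variation can be sketched with a Monte Carlo sum of randomly phased harmonic phasors. The per-charger magnitude and the uniform-phase mapping are illustrative assumptions, not the dissertation's charger model.

```python
import cmath
import random

# 200 chargers each injecting 1 A of 5th harmonic; a random start time is
# assumed to map to a uniformly random phase of that harmonic.
random.seed(1)
n_ev, i5 = 200, 1.0

def net_harmonic():
    return abs(sum(i5 * cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
                   for _ in range(n_ev)))

trials = [net_harmonic() for _ in range(500)]
mean_net = sum(trials) / len(trials)

# Random phases partially cancel: the expected net magnitude is far below the
# 200 A worst case obtained by summing all magnitudes in phase.
assert mean_net < 0.2 * n_ev * i5
```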

  14. A fast sequence assembly method based on compressed data structures.

    PubMed

    Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu

    2014-01-01

    Assembling a large genome from next-generation sequencing reads requires large computer memory and long execution times. To reduce these requirements, a memory- and time-efficient assembler is presented by applying an FM-index in JR-Assembler, called FMJ-Assembler, where FM stands for the FMR-index derived from the FM-index and BWT, and J for jumping extension. The FMJ-Assembler uses an expanded FM-index and the BWT to compress the read data and save memory, and its jumping-extension method makes it faster in CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that it achieves a better or comparable overall assembly quality while requiring less memory and CPU time. These advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing technology.
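
The compressed structure underlying the FM-index is the Burrows-Wheeler transform (BWT). A minimal, naive sketch of the transform and its inverse (demo-scale; real assemblers build the index with suffix-array methods rather than sorting rotations):

```python
# Naive O(n^2 log n) Burrows-Wheeler transform and its inverse.
def bwt(s: str) -> str:
    s += "$"                                       # unique terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(b: str) -> str:
    table = [""] * len(b)
    for _ in range(len(b)):                        # repeatedly prepend + sort
        table = sorted(b[i] + table[i] for i in range(len(b)))
    return next(r for r in table if r.endswith("$"))[:-1]

read = "GATTACA"
assert inverse_bwt(bwt(read)) == read              # the transform is lossless
```

The BWT groups identical characters into runs, which is what makes the reads compressible while still supporting the backward-search queries an FM-index performs during extension.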

  15. Mechanical strength of laser-welded cobalt-chromium alloy.

    PubMed

    Baba, N; Watanabe, I; Liu, J; Atsuta, M

    2004-05-15

    The purpose of this study was to investigate the effect of laser-welding output energy and welding method on the joint strength of cobalt-chromium (Co-Cr) alloy. Two types of cast Co-Cr plates were prepared, and transverse sections were made at the center of each plate. The cut surfaces were butted against one another, and the joints were welded with a laser-welding machine at several levels of output energy using two methods. The fracture force required to break the specimens was determined by tensile testing. For the 0.5-mm-thick specimens, the force required to break the laser-welded specimens at currents of 270 and 300 A was not statistically different (p > 0.05) from that of the nonwelded control specimens. The force required to break the 1.0-mm specimens double-welded at a current of 270 A was the highest among the 1.0-mm laser-welded specimens. The results suggest that laser welding under appropriate conditions improves the joint strength of cobalt-chromium alloy. Copyright 2004 Wiley Periodicals, Inc.

  16. A feasibility assessment of installation, operation and disposal options for nuclear reactor power system concepts for a NASA growth space station

    NASA Technical Reports Server (NTRS)

    Bloomfield, Harvey S.; Heller, Jack A.

    1987-01-01

    A preliminary feasibility assessment of the integration of reactor power system concepts with a projected growth space station architecture was conducted to address a variety of installation, operational disposition, and safety issues. A previous NASA sponsored study, which showed the advantages of space station - attached concepts, served as the basis for this study. A study methodology was defined and implemented to assess compatible combinations of reactor power installation concepts, disposal destinations, and propulsion methods. Three installation concepts that met a set of integration criteria were characterized from a configuration and operational viewpoint, with end-of-life disposal mass identified. Disposal destinations that met current aerospace nuclear safety criteria were identified and characterized from an operational and energy requirements viewpoint, with delta-V energy requirement as a key parameter. Chemical propulsion methods that met current and near-term application criteria were identified and payload mass and delta-V capabilities were characterized. These capabilities were matched against concept disposal mass and destination delta-V requirements to provide the feasibility of each combination.

  17. A feasibility assessment of nuclear reactor power system concepts for the NASA Growth Space Station

    NASA Technical Reports Server (NTRS)

    Bloomfield, H. S.; Heller, J. A.

    1986-01-01

    A preliminary feasibility assessment of the integration of reactor power system concepts with a projected growth Space Station architecture was conducted to address a variety of installation, operational, disposition, and safety issues. A previous NASA sponsored study, which showed the advantages of Space Station - attached concepts, served as the basis for this study. A study methodology was defined and implemented to assess compatible combinations of reactor power installation concepts, disposal destinations, and propulsion methods. Three installation concepts that met a set of integration criteria were characterized from a configuration and operational viewpoint, with end-of-life disposal mass identified. Disposal destinations that met current aerospace nuclear safety criteria were identified and characterized from an operational and energy requirements viewpoint, with delta-V energy requirement as a key parameter. Chemical propulsion methods that met current and near-term application criteria were identified and payload mass and delta-V capabilities were characterized. These capabilities were matched against concept disposal mass and destination delta-V requirements to establish the feasibility of each combination.

  18. Neurology education: current and emerging concepts in residency and fellowship training.

    PubMed

    Stern, Barney J; Józefowicz, Ralph F; Kissela, Brett; Lewis, Steven L

    2010-05-01

    This article discusses the current and future state of neurology training. A priority is to attract sufficient numbers of qualified candidates for the existing residency programs. A majority of neurology residents elects additional training in a neurologic subspecialty, and programs will have to be accredited accordingly. Attempts are being made to standardize and strengthen the existing general residency and subspecialty programs through cooperative efforts. Ultimately, residency programs must comply with the increasing requirements and try to adapt these requirements to the unique demands and realities of neurology training. An effort is underway to establish consistent competency-testing methods. Copyright 2010 Elsevier Inc. All rights reserved.

  19. Recent research related to prediction of stall/spin characteristics of fighter aircraft

    NASA Technical Reports Server (NTRS)

    Nguyen, L. T.; Anglin, E. L.; Gilbert, W. P.

    1976-01-01

    The NASA Langley Research Center is currently engaged in a stall/spin research program to provide the fundamental information and design guidelines required to predict the stall/spin characteristics of fighter aircraft. The prediction methods under study include theoretical spin prediction techniques and piloted simulation studies. The paper discusses the overall status of theoretical techniques including: (1) input data requirements, (2) math model requirements, and (3) correlation between theoretical and experimental results. The Langley Differential Maneuvering Simulator (DMS) facility has been used to evaluate the spin susceptibility of several current fighters during typical air combat maneuvers and to develop and evaluate the effectiveness of automatic departure/spin prevention concepts. The evaluation procedure is described and some of the more significant results of the studies are presented.

  20. Evaluation of a rapid diagnostic field test kit for identification of Phytophthora ramorum, P. kernoviae and other Phytophthora species at the point of inspection

    Treesearch

    C.R. Lane; E. Hobden; L. Laurenson; V.C. Barton; K.J.D. Hughes; H. Swan; N. Boonham; A.J. Inman

    2008-01-01

    Plant health regulations to prevent the introduction and spread of Phytophthora ramorum and P. kernoviae require rapid, cost effective diagnostic methods for screening large numbers of plant samples at the time of inspection. Current on-site techniques require expensive equipment, considerable expertise and are not suited for plant...

  1. Electrical probe characteristic recovery by measuring only one time-dependent parameter

    NASA Astrophysics Data System (ADS)

    Costin, C.; Popa, G.; Anita, V.

    2016-03-01

    Two straightforward methods for recovering the current-voltage characteristic of an electrical probe are proposed. Both consist of replacing the usual power supply in the probe circuit with a capacitor, which is charged or discharged by the probe current drained from the plasma. The experiment requires the registration of only one time-dependent electrical parameter, either the probe current or the probe voltage. The corresponding time dependence of the other parameter can then be calculated using an integral or a differential relation, and the current-voltage characteristic of the probe can be obtained.
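    The differential route described above can be illustrated numerically: record only the probe voltage V(t) across the capacitor and recover the current as I(t) = -C dV/dt (the sign depends on the circuit convention). A minimal sketch with an assumed capacitance and a synthetic exponential discharge, not data from the paper:

```python
import numpy as np

C = 1e-6                          # capacitor in the probe circuit [F] (assumed)
t = np.linspace(0.0, 5e-3, 1001)  # time samples [s]
V = 10.0 * np.exp(-t / 1e-3)      # the single recorded parameter: V(t) [V]

# Differential relation: current drained from the capacitor [A]
I = -C * np.gradient(V, t)

# The (V[k], I[k]) pairs now trace out the probe's I-V characteristic
print(f"recovered current at V = {V[0]:.1f} V: {I[0]*1e3:.2f} mA")
```

    The integral route works the same way in reverse: record I(t) and accumulate V(t) = V(0) - (1/C) ∫ I dt, e.g. with a cumulative trapezoidal sum.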

  2. Metal- and additive-free photoinduced borylation of haloarenes.

    PubMed

    Mfuh, Adelphe M; Schneider, Brett D; Cruces, Westley; Larionov, Oleg V

    2017-03-01

    Boronic acids and esters have critical roles in the areas of synthetic organic chemistry, molecular sensors, materials science, drug discovery, and catalysis. Many of the current applications of boronic acids and esters require materials with very low levels of transition metal contamination. Most of the current methods for the synthesis of boronic acids, however, require transition metal catalysts and ligands that must be removed via additional purification procedures. This protocol describes a simple, metal- and additive-free method of conversion of haloarenes directly to boronic acids and esters. This photoinduced borylation protocol does not require expensive and toxic metal catalysts or ligands, and it produces innocuous and easy-to-remove by-products. Furthermore, the reaction can be carried out on multigram scales in common-grade solvents without the need for reaction mixtures to be deoxygenated. The setup and purification steps are typically accomplished within 1-3 h. The reactions can be run overnight, and the protocol can be completed within 13-16 h. Two representative procedures that are described in this protocol provide details for preparation of a boronic acid (3-cyanophenylboronic acid) and a boronic ester (1,4-benzenediboronic acid bis(pinacol)ester). We also discuss additional details of the method that will be helpful in the application of the protocol to other haloarene substrates.

  3. Method of Conjugate Radii for Solving Linear and Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.

    1999-01-01

    This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point, an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of the linear system of equations. A quadratic form in N variables requires N projections. That is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients. The current method can be extended to nonlinear systems without modification. For nonlinear equations the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
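    The projection sequence described here, minimizing the squared-residual quadratic form along mutually conjugate directions built by Gram-Schmidt, can be sketched as a generic conjugate-direction solver. This is written to illustrate the idea, not the author's exact algorithm; the 2x2 system is an invented example.

```python
import numpy as np

def conjugate_direction_solve(A, b):
    """Exact solve of A x = b in N steps via H-conjugate directions,
    where H = A^T A is the Hessian of the squared-residual quadratic form
    f(x) = ||A x - b||^2 (whose level sets are the paper's ellipsoids)."""
    H, g = A.T @ A, A.T @ b
    n = len(g)
    dirs = []
    # Gram-Schmidt in the H inner product: make the standard basis H-conjugate
    for k in range(n):
        p = np.eye(n)[k]
        for q in dirs:
            p = p - (q @ H @ p) / (q @ H @ q) * q
        dirs.append(p)
    # One exact projection (line search) per direction: N projections total
    x = np.zeros(n)
    for p in dirs:
        x = x + (p @ (g - H @ x)) / (p @ H @ p) * p
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = conjugate_direction_solve(A, b)   # solves 3x+y=9, x+2y=8
```

    Because the directions are conjugate with respect to H, the N one-dimensional projections land exactly on the common center of the ellipsoid family, which is the solution of the system.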

  4. Medical Data Architecture (MDA) Project Status

    NASA Technical Reports Server (NTRS)

    Krihak, M.; Middour, C.; Gurram, M.; Wolfe, S.; Marker, N.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2018-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) risk to minimize or reduce the risk of adverse health outcomes and decrements in performance due to in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that the current in-flight medical data management includes a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there are a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more medically autonomous crew than the current paradigm. The medical system requirements are being developed in parallel with the exploration mission architecture and vehicle design. ExMC has recognized that in order to make informed decisions about a medical data architecture framework, current methods for medical data management must not only be understood, but an architecture must also be identified that provides the crew with actionable insight to medical conditions. This medical data architecture will provide the necessary functionality to address the challenges of executing a self-contained medical system that approaches crew health care delivery without assistance from ground support. Hence, the products supported by current prototype development will directly inform exploration medical system requirements.

  5. Medical Data Architecture Project Status

    NASA Technical Reports Server (NTRS)

    Krihak, M.; Middour, C.; Gurram, M.; Wolfe, S.; Marker, N.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2018-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) risk to minimize or reduce the risk of adverse health outcomes and decrements in performance due to in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that the current in-flight medical data management includes a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there are a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more medically autonomous crew than the current paradigm. The medical system requirements are being developed in parallel with the exploration mission architecture and vehicle design. ExMC has recognized that in order to make informed decisions about a medical data architecture framework, current methods for medical data management must not only be understood, but an architecture must also be identified that provides the crew with actionable insight to medical conditions. This medical data architecture will provide the necessary functionality to address the challenges of executing a self-contained medical system that approaches crew health care delivery without assistance from ground support. Hence, the products supported by current prototype development will directly inform exploration medical system requirements.

  6. Outstanding Questions In First Amendment Law Related To Food Labeling Disclosure Requirements For Health.

    PubMed

    Pomeranz, Jennifer L

    2015-11-01

    The federal and state governments are increasingly focusing on food labeling as a method to support good health. Many such laws are opposed by the food industry and may be challenged in court, raising the question of what is legally feasible. This article analyzes outstanding questions in First Amendment law related to commercial disclosure requirements and conducts legal analysis and policy evaluation for three current policies. These include the Food and Drug Administration's draft regulation requiring an added sugar disclosure on the Nutrition Facts panel, California's proposed sugar-sweetened beverage safety warning label bill, and Vermont's law requiring labels of genetically engineered food to disclose this information. I recommend several methods for policy makers to enact food labeling laws within First Amendment parameters, including imposing factual commercial disclosure requirements, disclosing the government entity issuing a warning, collecting evidence, and identifying legitimate governmental interests. Project HOPE—The People-to-People Health Foundation, Inc.

  7. Current Applications of Chromatographic Methods in the Study of Human Body Fluids for Diagnosing Disorders.

    PubMed

    Jóźwik, Jagoda; Kałużna-Czaplińska, Joanna

    2016-01-01

    Currently, analysis of various human body fluids is one of the most essential and promising approaches to enable the discovery of biomarkers or pathophysiological mechanisms for disorders and diseases. Analysis of these fluids is challenging due to their complex composition and unique characteristics. Development of new analytical methods in this field has made it possible to analyze body fluids with higher selectivity, sensitivity, and precision. The composition and concentration of analytes in body fluids are most often determined by chromatography-based techniques. There is no doubt that proper use of knowledge that comes from a better understanding of the role of body fluids requires the cooperation of scientists of diverse specializations, including analytical chemists, biologists, and physicians. This article summarizes current knowledge about the application of different chromatographic methods in analyses of a wide range of compounds in human body fluids in order to diagnose certain diseases and disorders.

  8. CAN CONTINGENT VALUATION MEASURE PASSIVE USE VALUES

    EPA Science Inventory

    Contingent valuation (CV) is the only method currently available for practically measuring passive-use values. Because proposed laws may require that environmental regulations pass a benefit-cost test, CV has become central to the policy debate on environmental protection. Crit...

  9. Training requirements for railroad dispatchers : objectives, syllabi and test designs

    DOT National Transportation Integrated Search

    1998-11-01

    This report presents the results of a study to develop railroad dispatcher training objectives, syllabi for three types of training programs and test designs for the three programs. Information about current railroad dispatching methods and training ...

  10. Concrete testing device provides substantial savings : fact sheet.

    DOT National Transportation Integrated Search

    2011-11-01

    Current practices require a permeability test, ASTM C1202: "Standard Test Method for Electrical Indication of Concrete's Ability to resist Chloride Ion Penetration," for structures with potential salt water intrusion. The test is run at 56 days of ag...

  11. LANDSCAPE CHARACTERIZATION AND CHANGE DETECTION METHODS DEVELOPMENT RESEARCH (2005-2007)

    EPA Science Inventory

    The characterization of land-cover (LC) type, extent, and distribution represents an important landscape characterization element required for monitoring ecosystem conditions and for primary data input to biogenic emission and atmospheric deposition models. Current spectral-based ch...

  12. SITE-SPECIFIC DIAGNOSTIC TOOLS

    EPA Science Inventory

    US EPA's Office of Water is proposing Combined Assessment and Listing Methods (CALM) to meet reporting requirements under both Sections 305b and 303d for chemical and nonchemical stressors in the nation's waterbodies. Current Environmental Monitoring and Assessment Prog...

  13. Investigation of high-strength bolt-tightening verification techniques.

    DOT National Transportation Integrated Search

    2016-03-01

    The current means and methods of verifying that high-strength bolts have been properly tightened are very laborious and time : consuming. In some cases, the techniques require special equipment and, in other cases, the verification itself may be some...

  14. Detection of S-Nitrosothiols

    PubMed Central

    Diers, Anne R.; Keszler, Agnes; Hogg, Neil

    2015-01-01

    BACKGROUND S-Nitrosothiols have been recognized as biologically-relevant products of nitric oxide that are involved in many of the diverse activities of this free radical. SCOPE OF REVIEW This review serves to discuss current methods for the detection and analysis of protein S-nitrosothiols. The major methods of S-nitrosothiol detection include chemiluminescence-based methods and switch-based methods, each of which comes in various flavors with advantages and caveats. MAJOR CONCLUSIONS The detection of S-nitrosothiols is challenging and prone to many artifacts. Accurate measurements require an understanding of the underlying chemistry of the methods involved and the use of appropriate controls. GENERAL SIGNIFICANCE Nothing is more important to a field of research than robust methodology that is generally trusted. The field of S-Nitrosation has developed such methods but, as S-nitrosothiols are easy to introduce as artifacts, it is vital that current users learn from the lessons of the past. PMID:23988402

  15. Clinical perspective of cell-free DNA testing for fetal aneuploidies.

    PubMed

    Gratacós, Eduard; Nicolaides, Kypros

    2014-01-01

    Cell-free DNA testing in maternal blood provides the most effective method of screening for trisomy 21, with a reported detection rate of 99% and a false positive rate of less than 0.1%. After many years of research, this method is now commercially available and is carried out in an increasing number of patients, and there is an expanding number of conditions that can be screened for. However, the application of these methods in clinical practice requires a careful analysis. Current first-trimester screening strategies are based on a complex combination of tests, aiming at detecting fetal defects and predicting the risk of main pregnancy complications. It is therefore necessary to define the optimal way of combining cell-free DNA testing with current first-trimester screening methods. In this concise review we describe the basis of cell-free DNA testing and discuss the potential approaches for its implementation in combination with current tests in the first trimester. © 2014 S. Karger AG, Basel.

  16. A Comparative Study on Electronic versus Traditional Data Collection in a Special Education Setting

    ERIC Educational Resources Information Center

    Ruf, Hernan Dennis

    2012-01-01

    The purpose of the current study was to determine the efficiency of an electronic data collection method compared to a traditional paper-based method in the educational field, in terms of the accuracy of data collected and the time required to do it. In addition, data were collected to assess users' preference and system usability. The study…

  17. Tree shelters and other methods for reducing deer damage to hardwood regeneration in the eastern United States

    Treesearch

    Gary W. Miller

    1998-01-01

    This report summarizes the basic silvicultural problems associated with regenerating commercial hardwood (broadleaf) species in the eastern United States and includes a review of current methods used to reduce the impact of deer browsing. The following topics are discussed: 1) the biological requirements and regeneration mechanism associated with several important tree...

  18. Equity Implications of Methods of Funding State Teachers' Retirement Systems. Working Paper in Education Finance No. 30.

    ERIC Educational Resources Information Center

    Wendling, Wayne

    Current methods of funding teachers' retirement systems, which base pensions on final salaries, are inequitable because they are not related to school districts' ability to pay and because they require some teachers to subsidize others. A five-state survey shows it is common for pensions to be funded by school districts and teachers, sometimes…

  19. Report to the Congress on alternative methods for the Strategic Petroleum Reserve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-02-01

    The purpose of this study is to fulfill the requirements of Public Law No. 101-46, approved June 30, 1989. The study describes and evaluates alternative methods for financing the future expansion of the Strategic Petroleum Reserve (SPR), both to the current target level of 750 million barrels and to potential future levels of up to one billion barrels.

  20. BLENDING BIOSOLIDS SAMPLES MAKES A DIFFERENCE IN ORGANISM RECOVERY, PRINTED IN WATER ENVIRONMENT LABORATORY SOLUTIONS, VOL 8, NO. 3, PGS 10-14, PUBLISHED BY WATER ENVIRONMENT FEDERATION, 2001

    EPA Science Inventory

    Current federal regulations (40 CFR 503) require enumeration of fecal coliform or Salmonella prior to land application of Class A biosolids. This regulation specifies use of enumeration methods included in "Standard Methods for the Examination of Water and Wastewater 18th Edi...

  1. Helicopter Fatigue. A Review of Current Requirements and Substantiation Procedures

    DTIC Science & Technology

    1979-01-01

    which the applications differ between contractors based on their individual experience. Load Application: The ideal method of measuring flight loads would... method is different for the parts mainly dimensioned by high cycle fatigue (rotors and gearboxes) and for those subjected to low cycle fatigue (e.g...into damage per hour. 2.3. Calculation of the service life: Two methods are available, both with advantages and drawbacks. They only differ by

  2. Antimicrobial Materials for Advanced Microbial Control in Spacecraft Water Systems

    NASA Technical Reports Server (NTRS)

    Birmele, Michele; Caro, Janicce; Newsham, Gerard; Roberts, Michael; Morford, Megan; Wheeler, Ray

    2012-01-01

    Microbial detection, identification, and control are essential for the maintenance and preservation of spacecraft water systems. Requirements set by NASA put limitations on the energy, mass, materials, noise, cost, and crew time that can be devoted to microbial control. Efforts are being made to attain real-time detection and identification of microbial contamination in microgravity environments. Research for evaluating technologies for capability enhancement on-orbit is currently focused on the use of adenosine triphosphate (ATP) analysis for detection purposes and polymerase chain reaction (PCR) for microbial identification. Additional research is being conducted on how to control for microbial contamination on a continual basis. Existing microbial control methods in spacecraft utilize iodine or ionic silver biocides, physical disinfection, and point-of-use sterilization filters. Although these methods are effective, they require re-dosing due to loss of efficacy, have low human toxicity thresholds, produce poor taste, and consume valuable mass and crew time. Thus, alternative methods for microbial control are needed. This project also explores ultraviolet light-emitting diodes (UV-LEDs), surface passivation methods for maintaining residual biocide levels, and several antimicrobial materials aimed at improving current microbial control techniques, as well as addressing other materials presently under analysis and future directions to be pursued.

  3. An Overview of Computational Aeroacoustic Modeling at NASA Langley

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2001-01-01

    The use of computational techniques in the area of acoustics is known as computational aeroacoustics and has shown great promise in recent years. Although an ultimate goal is to use computational simulations as a virtual wind tunnel, the problem is so complex that blind applications of traditional algorithms are typically unable to produce acceptable results. The phenomena of interest are inherently unsteady and cover a wide range of frequencies and amplitudes. Nonetheless, with appropriate simplifications and special care to resolve specific phenomena, currently available methods can be used to solve important acoustic problems. These simulations can be used to complement experiments, and often give much more detailed information than can be obtained in a wind tunnel. The use of acoustic analogy methods to inexpensively determine far-field acoustics from near-field unsteadiness has greatly reduced the computational requirements. A few examples of current applications of computational aeroacoustics at NASA Langley are given. There remains a large class of problems that require more accurate and efficient methods. Research to develop more advanced methods that are able to handle the geometric complexity of realistic problems using block-structured and unstructured grids is highlighted.

  4. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    USGS Publications Warehouse

    Oberg, K.; ,

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCP) in the field was presented. The advantages and disadvantages of various methods which are used for calibrating ADCP were discussed. The proposed method requires the use of differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed, while collecting simultaneous DGPS and ADCP data.

  5. Formal Methods for Life-Critical Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1993-01-01

    The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.

  6. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years and is widely used for monitoring pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
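    The calibration regimes named in this abstract all follow from a simple one-compartment uptake model, C(t) = C_eq(1 - exp(-kt)): sample early and the response is linear in time (linear uptake), sample long enough and it plateaus (equilibrium extraction). A toy sketch with an invented rate constant and equilibrium level:

```python
import math

C_eq, k = 100.0, 0.05     # equilibrium amount and uptake rate [1/h] (assumed)

def uptake(t):
    """Accumulated analyte after t hours in a one-compartment sampler."""
    return C_eq * (1.0 - math.exp(-k * t))

short = uptake(2.0)       # linear-uptake regime: C(t) is roughly C_eq * k * t
long_ = uptake(200.0)     # equilibrium-extraction regime: C(t) approaches C_eq
print(round(short, 1), round(long_, 1))
```

    Kinetic calibration sits between the two regimes, using the full exponential form (often with a reference compound) rather than either limiting approximation.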

  7. Manufacturing Methods & Technology Project Execution Report. First CY 83.

    DTIC Science & Technology

    1983-11-01

    OCCURRENCE. H 83 5180 MMT FOR METAL DEWAR AND UNBONDED LEADS THE GOLD WIRE BONDED CONNECTIONS ARE MADE BY HAND WHICH IS A TEDIOUS AND EXPENSIVE PROCESS. THE...ATTACHMENTS CURRENT FILAMENT WOUND COMPOSITE ROCKET MOTOR CASES REQUIRE FORGED METAL POLE PIECES, NOZZLE CLOSURE ATTACHMENT RINGS, AND OTHER ATTACHMENT RINGS... ELASTOMER INSULATOR PROCESS LARGE TACTICAL ROCKET MOTOR INSULATORS ARE COSTLY, LACK DESIGN CHANGE FLEXIBILITY AND SUFFER LONG LEAD TIMES. CURRENT

  8. Adapting Western Research Methods to Indigenous Ways of Knowing

    PubMed Central

    Christopher, Suzanne

    2013-01-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897

  9. A Roadmap for Using Agile Development in a Traditional Environment

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara; Starbird, Thomas; Grenander, Sven

    2006-01-01

    One of the newer classes of software engineering techniques is called 'Agile Development'. In Agile Development software engineers take small implementation steps and, in some cases, they program in pairs. In addition, they develop automatic tests prior to implementing their small functional piece. Agile Development focuses on rapid turnaround, incremental planning, customer involvement and continuous integration. Agile Development is not the traditional waterfall method or even a rapid prototyping method (although this methodology is closer to Agile Development). At the Jet Propulsion Laboratory (JPL) a few groups have begun Agile Development software implementations. The difficulty with this approach becomes apparent when Agile Development is used in an organization that has specific criteria and requirements handed down for how software development is to be performed. The work at the JPL is performed for the National Aeronautics and Space Agency (NASA). Both organizations have specific requirements, rules and processes for developing software. This paper will discuss some of the initial uses of the Agile Development methodology, the spread of this method and the current status of the successful incorporation into the current JPL development policies and processes.

  11. Models for forecasting hospital bed requirements in the acute sector.

    PubMed Central

    Farmer, R D; Emami, J

    1990-01-01

    STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series of mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used to evaluate different methods of forecasting future values of mean duration of stay and their subsequent use in forecasting hospital bed requirements. RESULTS--It is suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. The time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
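
The contrast the abstract draws can be sketched numerically. The series below is invented for illustration (not the study's data), and a least-squares AR(1) fit stands in for the full Box-Jenkins identification and estimation procedure:

```python
# Invented mean-duration-of-stay series (days per year, declining trend).
stay = [9.8, 9.5, 9.4, 9.0, 8.9, 8.6, 8.5, 8.2, 8.1, 7.9, 7.8, 7.6]

n = len(stay)
t = list(range(n))

# Simple trend fitting: y_t = a + b*t via closed-form least squares.
tbar = sum(t) / n
ybar = sum(stay) / n
b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, stay)) / \
    sum((ti - tbar) ** 2 for ti in t)
a = ybar - b * tbar
trend_forecast = a + b * n          # extrapolate one step ahead

# AR(1) stand-in for a Box-Jenkins model: y_t = c + phi*y_{t-1},
# fitted by least squares on lagged pairs.
x, y = stay[:-1], stay[1:]
xbar, ybar2 = sum(x) / len(x), sum(y) / len(y)
phi = sum((xi - xbar) * (yi - ybar2) for xi, yi in zip(x, y)) / \
      sum((xi - xbar) ** 2 for xi in x)
c = ybar2 - phi * xbar
ar_forecast = c + phi * stay[-1]    # one-step-ahead forecast from last value

print(round(trend_forecast, 2), round(ar_forecast, 2))
```

The trend model forces the decline to continue at a fixed rate, while the AR(1) forecast responds to the most recent observation; this is the kind of restriction the study criticizes in simple trend fitting.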

  12. A fast method for detecting Cryptosporidium parvum oocysts in real world samples

    NASA Astrophysics Data System (ADS)

    Stewart, Shona; McClelland, Lindy; Maier, John

    2005-04-01

    Contamination of drinking water with pathogenic microorganisms such as Cryptosporidium has become an increasing concern in recent years. Cryptosporidium oocysts are particularly problematic, as infections caused by this organism can be life threatening in immunocompromised patients. Current methods for monitoring and analyzing water are often laborious and require experts to conduct. In addition, many of the techniques require very specific reagents to be employed. These factors add considerable cost and time to the analytical process. Raman spectroscopy provides specific molecular information on samples, and offers advantages of speed, sensitivity and low cost over current methods of water monitoring. Raman spectroscopy is an optical method that has demonstrated the capability to identify and differentiate microorganisms at the species and strain levels. In addition, this technique has exhibited sensitivities down to the single organism detection limit. We have employed Raman spectroscopy and Raman Chemical Imaging, in conjunction with chemometric techniques, to detect small numbers of oocysts in the presence of interferents derived from real-world water samples. Our investigations have also indicated that Raman Chemical Imaging may provide chemical and physiological information about an oocyst sample which complements information provided by the traditional methods. This work provides evidence that Raman imaging is a useful technique for consideration in the water quality industry.

  13. A bridging study for oxytetracycline in the edible fillet of rainbow trout: Analysis by a liquid chromatographic method and the official microbial inhibition assay

    USGS Publications Warehouse

    Stehly, G.R.; Gingerich, W.H.; Kiessling, C.R.; Cutting, J.H.

    1999-01-01

    Oxytetracycline (OTC) is a drug approved by the U.S. Food and Drug Administration (FDA) to control certain diseases in salmonids and catfish. OTC is also a likely control agent for diseases of other fish species and for other diseases of salmonids and catfish not currently on the label. One requirement for FDA to extend and expand the approval of this antibacterial agent to other fish species is residue depletion studies. The current regulatory method for OTC in fish tissue, based on microbial inhibition, lacks sensitivity and specificity. To conduct residue depletion studies for OTC in fish with a liquid chromatographic method, a bridging study was required to determine its relationship with the official microbial inhibition assay. Triplicate samples of rainbow trout fillet tissue fortified with OTC at 0.3, 0.6, 1.2, 2.4, 4.8, and 9.6 ppm and fillet tissue with incurred OTC at approximately 0.75, 1.5, and 3.75 ppm were analyzed by high-performance liquid chromatography (HPLC) and the microbial inhibition assay. The results indicated that the 2 methods are essentially identical in the tested range, with mean coefficients of variation of 1.05% for the HPLC method and 3.94% for the microbial inhibition assay.

  14. Propulsion Trade Studies for Spacecraft Swarm Mission Design

    NASA Technical Reports Server (NTRS)

    Dono, Andres; Plice, Laura; Mueting, Joel; Conn, Tracie; Ho, Michael

    2018-01-01

    Spacecraft swarms constitute a challenge from an orbital mechanics standpoint. Traditional mission design involves the application of methodical processes where predefined maneuvers for an individual spacecraft are planned in advance. This approach does not scale to spacecraft swarms consisting of many satellites orbiting in close proximity; non-deterministic maneuvers cannot be preplanned due to the large number of units and the uncertainties associated with their differential deployment and orbital motion. For autonomous small sat swarms in LEO, we investigate two approaches for controlling the relative motion of a swarm. The first method involves modified miniature phasing maneuvers, where maneuvers are prescribed that cancel the differential delta V of each CubeSat's deployment vector. The second method relies on artificial potential functions (APFs) to contain the spacecraft within a volumetric boundary and avoid collisions. Performance results and required delta V budgets are summarized, indicating that each method has advantages and drawbacks for particular applications. The mini phasing maneuvers are more predictable and sustainable. The APF approach provides a more responsive and distributed performance, but at considerable propellant cost. After considering current state-of-the-art CubeSat propulsion systems, we conclude that the first approach is feasible, but the modified APF method requires too much control authority to be enabled by current propulsion systems.

  15. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient than other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
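
A common way to expose the parallelism discussed here is red-black (odd-even) ordering: points of one color depend only on points of the other color, so each half-sweep can update all of its points concurrently. The sketch below illustrates the idea on a 1D Poisson-like problem; it is a generic illustration, not DelPhi's actual scheme.

```python
n = 32
u = [0.0] * (n + 2)            # unknowns with fixed boundaries u[0] = u[n+1] = 0
f = [1.0] * (n + 2)            # constant source term
h2 = (1.0 / (n + 1)) ** 2      # grid spacing squared

for sweep in range(2000):
    for parity in (1, 0):      # "red" points first, then "black"
        # Every point of one parity depends only on the opposite parity,
        # so this inner loop could run concurrently across CPUs while
        # still using freshly updated neighbor values, Gauss-Seidel style.
        for i in range(1 + parity, n + 1, 2):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])

# Residual of the discrete system -u'' = f at interior points.
res = max(abs((2 * u[i] - u[i - 1] - u[i + 1]) / h2 - f[i])
          for i in range(1, n + 1))
print(res)
```

Red-black ordering trades the strict sequential update order of classical Gauss-Seidel for two independent half-sweeps, which is why it parallelizes while plain lexicographic Gauss-Seidel does not.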

  16. A Theory of Information Quality and a Framework for its Implementation in the Requirements Engineering Process

    NASA Astrophysics Data System (ADS)

    Grenn, Michael W.

    This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q, which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy H_R is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. The information I increases as H_R and uncertainty decrease, and the change in information ΔI needed to reach the desired state of quality is estimated from the perspective of the receiver. H_R may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels. Current requirements trend metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF evaluates the quantity of requirements changes over time, distinguishes between their positive and negative effects by calculating their impact on H_R, Q, and ΔI, and forecasts when the desired state will be reached, enabling more accurate assessment of the status and progress of the requirements engineering effort. Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The increase in I, or decrease in H_R and uncertainty, is proportional to the engineering effort E input into the requirements engineering process. The REF estimates the ΔE needed to transition R from the current state of quality to the desired end state or some other interim state of interest. Simulation results are compared with measured engineering effort data for Department of Defense programs published in the SE literature, and the results suggest the REF is a promising new method for estimating ΔE.
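
The central idea, entropy falling as requirements concentrate in the higher quality levels, can be illustrated with a plain Shannon entropy over the level distribution. This is a toy sketch with invented counts, not the REF's quantum-statistics estimator:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Distribution of 100 requirements across 4 quality levels (low -> high).
early = [40, 30, 20, 10]   # early: spread across low-quality levels
late  = [2, 3, 5, 90]      # late: concentrated near the desired state

H_early, H_late = entropy(early), entropy(late)
info_gain = H_early - H_late   # analogous to the increase in information I
print(round(H_early, 3), round(H_late, 3))
```

As engineering effort moves requirements toward the top level, the distribution sharpens and the entropy drops; the difference plays the role of the information gained.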

  17. Extended Constant Power Speed Range of the Brushless DC Motor Through Dual Mode Inverter Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawler, J.S.

    2000-06-23

    The trapezoidal back electromotive force (emf) brushless direct current (dc) motor (BDCM) with surface-mounted magnets has high-power density and efficiency especially when rare-earth magnet materials are used. Traction applications, such as electric vehicles, could benefit significantly from the use of such motors. Unfortunately, a practical means for driving the motor over a constant power speed ratio (CPSR) of 5:1 or more has not yet been developed. A key feature of these motors is that they have low internal inductance. The phase advance method is effective in controlling the motor power over such a speed range, but the current at high speed may be several times greater than that required at the base speed. The increase in current during high-speed operation is due to the low motor inductance and the action of the bypass diodes of the inverter. The use of such a control would require increased current rating of the inverter semiconductors and additional cooling for the inverter, where the conduction losses increase proportionally with current, and especially for the motor, where the losses increase with the square of the current. The high current problems of phase advance can be mitigated by adding series inductance; however, this reduces power density, requires significant increase in supply voltage, and leaves the CPSR performance of the system highly sensitive to variations in the available voltage. A new inverter topology and control scheme has been developed that can drive low-inductance BDCMs over the CPSR that would be required in electric vehicle applications. This new controller is called the dual-mode inverter control (DMIC). It is shown that the BDCM has an infinite CPSR when it is driven by the DMIC.

  18. Intravenous Solutions for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Miller, Fletcher J.; Niederhaus, Charles; Barlow, Karen; Griffin, DeVon

    2007-01-01

    This paper describes the intravenous (IV) fluids requirements being developed for medical care during NASA s future exploration class missions. Previous research on IV solution generation and mixing in space is summarized. The current exploration baseline mission profiles are introduced, potential medical conditions described and evaluated for fluidic needs, and operational issues assessed. We briefly introduce potential methods for generating IV fluids in microgravity. Conclusions on the recommended fluid volume requirements are presented.

  19. Addressing the Barriers to Agile Development in the Department of Defense: Program Structure, Requirements, and Contracting

    DTIC Science & Technology

    2015-04-30

    approach directly contrast with the traditional DoD acquisition model designed for a single big-bang waterfall approach (Broadus, 2013). Currently...progress, reduce technical and programmatic risk, and respond to feedback and changes more quickly than traditional waterfall methods (Modigliani...requirements, and contracting. The DoD can address these barriers by utilizing a proactively tailored Agile acquisition model , implementing an IT Box

  20. Robot-Assisted Fracture Surgery: Surgical Requirements and System Design.

    PubMed

    Georgilas, Ioannis; Dagnino, Giulio; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2018-03-09

    The design of medical devices is a complex and crucial process to ensure patient safety. It has been shown that improperly designed devices lead to errors and associated accidents and costs. A key element for a successful design is incorporating the views of the primary and secondary stakeholders early in the development process. They provide insights into current practice and point out specific issues with the current processes and equipment in use. This work presents how information from a user-study conducted in the early stages of the RAFS (Robot Assisted Fracture Surgery) project informed the subsequent development and testing of the system. The user needs were captured using qualitative methods and converted to operational, functional, and non-functional requirements based on the methods derived from product design and development. This work presents how the requirements inform a new workflow for intra-articular joint fracture reduction using a robotic system. It is also shown how the various elements of the system are developed to explicitly address one or more of the requirements identified, and how intermediate verification tests are conducted to ensure conformity. Finally, a validation test in the form of a cadaveric trial confirms the ability of the designed system to satisfy the aims set by the original research question and the needs of the users.

  1. Protective immunity of Nile tilapia against Ichthyophthirius

    USDA-ARS?s Scientific Manuscript database

    Tilapia are currently cultured in different types of production systems ranging from pond, tank, cage, flowing water and intensive water reuse culture systems. Intensification of tilapia culture requires methods to prevent and control diseases to minimize the loss. Ichthyophthirius multifiliis (I...

  2. Joint Contracture Orthosis (JCO)

    NASA Technical Reports Server (NTRS)

    Lunsford, Thomas R.; Parsons, Ken; Krouskop, Thomas; McGee, Kevin

    1997-01-01

    The purpose of this project was to develop an advanced orthosis which is effective in reducing upper and lower limb contractures in significantly less time than currently required with conventional methods. The team that developed the JCO consisted of an engineer, orthotist, therapist, and physician.

  3. Stratified Diffractive Optic Approach for Creating High Efficiency Gratings

    NASA Technical Reports Server (NTRS)

    Chambers, Diana M.; Nordin, Gregory P.

    1998-01-01

    Gratings with high efficiency in a single diffracted order can be realized with both volume holographic and diffractive optical elements. However, each method has limitations that restrict the applications in which they can be used. For example, high efficiency volume holographic gratings require an appropriate combination of thickness and permittivity modulation throughout the bulk of the material. Possible combinations of those two characteristics are limited by properties of currently available materials, thus restricting the range of applications for volume holographic gratings. Efficiency of a diffractive optic grating is dependent on its approximation of an ideal analog profile using discrete features. The size of constituent features and, consequently, the number that can be used within a required grating period restricts the applications in which diffractive optic gratings can be used. These limitations imply that there are applications which cannot be addressed by either technology. In this paper we propose to address a number of applications in this category with a new method of creating high efficiency gratings which we call stratified diffractive optic gratings. In this approach diffractive optic techniques are used to create an optical structure that emulates volume grating behavior. To illustrate the stratified diffractive optic grating concept we consider a specific application, a scanner for a space-based coherent wind lidar, with requirements that would be difficult to meet by either volume holographic or diffractive optic methods. The lidar instrument design specifies a transmissive scanner element with the input beam normally incident and the exiting beam deflected at a fixed angle from the optical axis. The element will be rotated about the optical axis to produce a conical scan pattern. The wavelength of the incident beam is 2.06 microns and the required deflection angle is 30 degrees, implying a grating period of approximately 4 microns. 
Creating a high efficiency volume grating with these parameters would require a grating thickness that cannot be attained with current photosensitive materials. For a diffractive optic grating, the number of binary steps necessary to produce high efficiency combined with the grating period requires feature sizes and alignment tolerances that are also unattainable with current techniques. Rotation of the grating and integration into a space-based lidar system impose the additional requirements that it be insensitive to polarization orientation, that its mass be minimized and that it be able to withstand launch and space environments.

  4. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswinds may exist that could be constraints to the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
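
A sketch of the theoretical route: under the bivariate normal assumption, the crosswind at a given runway azimuth is a linear combination of the wind components and hence univariate normal, so the exceedance probability has a closed form that can be scanned over azimuths. All parameter values below are hypothetical, not station wind statistics:

```python
import math

mu_u, mu_v = 2.0, -1.0          # component means (east, north), m/s -- assumed
s_u, s_v, rho = 4.0, 3.0, 0.3   # std devs and correlation -- assumed
limit = 7.7                     # critical crosswind speed, m/s -- assumed

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_exceed(theta):
    """P(|crosswind| > limit) for a runway at azimuth theta (radians).

    Crosswind = u*cos(theta) - v*sin(theta) is normal with the mean and
    variance of that linear combination of the bivariate normal (u, v)."""
    c, s = math.cos(theta), math.sin(theta)
    mu = mu_u * c - mu_v * s
    var = (s_u * c) ** 2 + (s_v * s) ** 2 - 2 * rho * s_u * s_v * c * s
    sd = math.sqrt(var)
    return 1.0 - (norm_cdf((limit - mu) / sd) - norm_cdf((-limit - mu) / sd))

# Scan azimuths 0-179 degrees (a runway serves both directions).
best = min(range(180), key=lambda d: p_exceed(math.radians(d)))
print(best, p_exceed(math.radians(best)))
```

The empirical wind-rose procedure answers the same question by tabulation; the closed form above is what the bivariate normal assumption buys.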

  5. Current advances on polynomial resultant formulations

    NASA Astrophysics Data System (ADS)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought when the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
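
As a concrete instance of a determinantal formulation, the classical Sylvester matrix gives the exact resultant of two univariate polynomials; its determinant vanishes exactly when the polynomials share a root. A minimal sketch with exact rational arithmetic:

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of p (degree m) and q (degree n), coefficients
    listed highest-degree first; the matrix is (m+n) x (m+n)."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    rows = []
    for i in range(n):                       # n shifted copies of p
        rows.append([0] * i + p + [0] * (size - m - 1 - i))
    for i in range(m):                       # m shifted copies of q
        rows.append([0] * i + q + [0] * (size - n - 1 - i))
    return rows

def det(mat):
    """Exact determinant via Fraction-based Gaussian elimination."""
    a = [[Fraction(x) for x in row] for row in mat]
    n, sign, d = len(a), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col]), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign
        d *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return sign * d

p = [1, -3, 2]                        # x^2 - 3x + 2 = (x - 1)(x - 2)
print(det(sylvester(p, [1, -1])))     # shares root x = 1 -> resultant 0
print(det(sylvester(p, [1, -5])))     # no common root -> nonzero resultant
```

For larger systems the Sylvester matrix can carry extraneous factors relative to more compact formulations (Bezout, Dixon), which is the optimality question the paper surveys.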

  6. Requirements for effective use of CFD in aerospace design

    NASA Technical Reports Server (NTRS)

    Raj, Pradeep

    1995-01-01

    This paper presents a perspective on the requirements that Computational Fluid Dynamics (CFD) technology must meet for its effective use in aerospace design. General observations are made on current aerospace design practices, and deficiencies are noted that must be rectified for the U.S. aerospace industry to maintain its leadership position in the global marketplace. In order to rectify deficiencies, industry is transitioning to an integrated product and process development (IPPD) environment and design processes are undergoing radical changes. The role of CFD in producing data that design teams need to support flight vehicle development is briefly discussed. An overview of the current state of the art in CFD is given to provide an assessment of strengths and weaknesses of the variety of methods currently available, or under development, to produce aerodynamic data. Effectiveness requirements are examined from a customer/supplier viewpoint, with the design team as customer and the CFD practitioner as supplier. Partnership between the design team and the CFD team is identified as an essential requirement for effective use of CFD. Rapid turnaround, reliable accuracy, and affordability are offered as three key requirements that the CFD community must address if CFD is to play its rightful role in supporting the IPPD design environment needed to produce high quality yet affordable designs.

  7. Low-cost manufacturing of the point focus concentrating module and its key component, the Fresnel lens. Final subcontract report, 31 January 1991--6 May 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saifee, T.; Konnerth, A. III

    1991-11-01

    Solar Kinetics, Inc. (SKI) has been developing point-focus concentrating PV modules since 1986. SKI is currently in position to manufacture between 200 to 600 kilowatts annually of the current design by a combination of manual and semi-automated methods. This report reviews the current status of module manufacture and specifies the required approach to achieve a high-volume manufacturing capability and low cost. The approach taken will include process development concurrent with module design for automated manufacturing. The current effort reviews the major manufacturing costs and identifies components and processes whose improvements would produce the greatest effect on manufacturability and cost reduction. The Fresnel lens is one such key component. Investigating specific alternative manufacturing methods and sources has substantially reduced the lens costs and has exceeded the DOE cost-reduction goals. 15 refs.

  8. Security Analysis and Improvements to the PsychoPass Method

    PubMed Central

    2013-01-01

    Background In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, requires that successive keys be 1-2 key distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
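
The weakness is easy to see with back-of-envelope entropy arithmetic: if each successive character is restricted to a handful of keyboard neighbors of the previous key, a 24-character password carries far fewer bits than its length suggests. The neighbor count below is an assumed illustrative figure, not the paper's model:

```python
import math

full_alphabet = 94      # printable ASCII characters
neighbors = 8           # assumed choices per step under keyboard adjacency

# Fully random 24-character password.
bits_random_24 = 24 * math.log2(full_alphabet)
# Adjacency-constrained 24-character password: free first key,
# then ~8 choices per subsequent key.
bits_adjacent_24 = math.log2(full_alphabet) + 23 * math.log2(neighbors)
# Fully random 10-character password, for comparison.
bits_random_10 = 10 * math.log2(full_alphabet)

print(round(bits_random_24), round(bits_adjacent_24), round(bits_random_10))
```

Under these assumptions the 24-key adjacency password lands in the same entropy range as a much shorter random one, which is in the spirit of the paper's 10-keys-versus-20-keys comparison.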

  9. Current Drive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faulconer, D.W

    2004-03-15

    Certain devices aimed at magnetic confinement of thermonuclear plasma rely on the steady flow of an electric current in the plasma. In view of the dominant place it occupies in both the world magnetic-confinement fusion effort and the author's own activity, the tokamak toroidal configuration is selected as prototype for discussing the question of how such a current can be maintained. Tokamaks require a stationary toroidal plasma current, this being traditionally provided by a pulsed magnetic induction which drives the plasma ring as the secondary of a transformer. Since this mechanism is essentially transient, and steady-state fusion reactor operation has manifold advantages, significant effort is now devoted to developing alternate steady-state means of generating toroidal current. These methods are classed under the global heading of 'noninductive current drive' or simply 'current drive', generally, though not exclusively, employing the injection of waves and/or toroidally directed particle beams. In what follows we highlight the physical mechanisms underlying surprisingly various approaches to driving current in a tokamak, downplaying a number of practical and technical issues. When a significant data base exists for a given method, its experimental current drive efficiency and future prospects are detailed.

  10. Stratospheric measurement requirements and satellite-borne remote sensing capabilities

    NASA Technical Reports Server (NTRS)

    Carmichael, J. J.; Eldridge, R. G.; Frey, E. J.; Friedman, E. J.; Ghovanlou, A. H.

    1976-01-01

    The capabilities of specific NASA remote sensing systems to provide appropriate measurements of stratospheric parameters for potential user needs were assessed. This was used to evaluate the capabilities of the remote sensing systems to perform global monitoring of the stratosphere. The following conclusions were reached: (1) The performance of current remote stratospheric sensors, in some cases, compares quite well with identified measurement requirements. Their ability to measure other species has not been demonstrated. (2) None of the current, in-situ methods have the capability to satisfy the requirements for global monitoring and the temporal constraints derived from the users needs portion of the study. (3) Existing, non-remote techniques will continue to play an important role in stratospheric investigations for both corroboration of remotely collected data and in the evolutionary development of future remote sensors.

  11. Energy Requirement Assessment and Water Turnover in Japanese College Wrestlers Using the Doubly Labeled Water Method.

    PubMed

    Sagayama, Hiroyuki; Kondo, Emi; Shiose, Keisuke; Yamada, Yosuke; Motonaga, Keiko; Ouchi, Shiori; Kamei, Akiko; Osawa, Takuya; Nakajima, Kohei; Takahashi, Hideyuki; Higaki, Yasuki; Tanaka, Hiroaki

    2017-01-01

    Estimated energy requirements (EERs) are important in sports with body weight classifications to aid in weight management. The basis for establishing EERs varies and includes self-reported energy intake (EI), predicted energy expenditure, and measured daily energy expenditure. Currently, however, no studies have been performed with male wrestlers using the highly accurate and precise doubly labeled water (DLW) method to estimate energy and fluid requirements. The primary aim of this study was to compare total energy expenditure (TEE) and self-reported EI, and the difference between them, in collegiate wrestlers during a normal training period using the DLW method. The secondary aims were to measure the water turnover and the physical activity level (PAL) of the athletes, and to examine the accuracy of two currently used equations to predict EER. Ten healthy males (age 20.4±0.5 y) belonging to the East-Japan college league participated in this study. TEE was measured using the DLW method, and EI was assessed with self-reported dietary records for ~1 wk. There was a significant difference between TEE (17.9±2.5 MJ·d⁻¹ [4,283±590 kcal·d⁻¹]) and self-reported EI (14.4±3.3 MJ·d⁻¹ [3,446±799 kcal·d⁻¹]), a difference of 19%. The water turnover was 4.61±0.73 L·d⁻¹. The measured PAL (2.6±0.3) was higher than two predicted values during the training season, and thus the two EER prediction equations produced underestimated values relative to DLW. We found that previous EERs underestimated requirements in collegiate wrestlers and that those estimates should be revised.

  12. Determination of Mercury in Aqueous and Geologic Materials by Continuous Flow-Cold Vapor-Atomic Fluorescence Spectrometry (CVAFS)

    USGS Publications Warehouse

    Hageman, Philip L.

    2007-01-01

    New methods for the determination of total mercury in geologic materials and dissolved mercury in aqueous samples have been developed that will replace the methods currently (2006) in use. The new methods eliminate the use of sodium dichromate (Na2Cr2O7·2H2O) as an oxidizer and preservative and significantly lower the detection limit for geologic and aqueous samples. The new methods also update instrumentation from the traditional cold vapor-atomic absorption spectrometry to cold vapor-atomic fluorescence spectrometry. At the same time, the new digestion procedures for geologic materials use the same size of test tubes and the same aluminum heating block and hot plate as required by the current methods. Procedures for collecting and processing aqueous samples remain the same as those currently (2006) in use, except that the samples are now preserved with concentrated hydrochloric acid/bromine monochloride instead of sodium dichromate/nitric acid. Both the old and new methods have the same analyst productivity rates. These similarities should permit easy migration to the new methods. Analysis of geologic and aqueous reference standards using the new methods shows that these procedures provide mercury recoveries that are as good as or better than those of the previously used methods.

  13. A novel magnetic resonance imaging segmentation technique for determining diffuse intrinsic pontine glioma tumor volume.

    PubMed

    Singh, Ranjodh; Zhou, Zhiping; Tisnado, Jamie; Haque, Sofia; Peck, Kyung K; Young, Robert J; Tsiouris, Apostolos John; Thakur, Sunitha B; Souweidane, Mark M

    2016-11-01

    OBJECTIVE Accurately determining diffuse intrinsic pontine glioma (DIPG) tumor volume is clinically important. The aims of the current study were to 1) measure DIPG volumes using methods that require different degrees of subjective judgment; and 2) evaluate interobserver agreement of measurements made using these methods. METHODS Eight patients from a Phase I clinical trial testing convection-enhanced delivery (CED) of a therapeutic antibody were included in the study. Pre-CED, post-radiation therapy axial T2-weighted images were analyzed using 2 methods requiring high degrees of subjective judgment (picture archiving and communication system [PACS] polygon and Volume Viewer auto-contour methods) and 1 method requiring a low degree of subjective judgment (k-means clustering segmentation) to determine tumor volumes. Lin's concordance correlation coefficients (CCCs) were calculated to assess interobserver agreement. RESULTS The CCCs of measurements made by 2 observers with the PACS polygon and the Volume Viewer auto-contour methods were 0.9465 (lower 1-sided 95% confidence limit 0.8472) and 0.7514 (lower 1-sided 95% confidence limit 0.3143), respectively. Both were considered poor agreement. The CCC of measurements made using k-means clustering segmentation was 0.9938 (lower 1-sided 95% confidence limit 0.9772), which was considered substantial strength of agreement. CONCLUSIONS The poor interobserver agreement of PACS polygon and Volume Viewer auto-contour methods highlighted the difficulty in consistently measuring DIPG tumor volumes using methods requiring high degrees of subjective judgment. k-means clustering segmentation, which requires a low degree of subjective judgment, showed better interobserver agreement and produced tumor volumes with delineated borders.
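
    As a hedged illustration of the low-subjectivity method named above (a generic intensity-based k-means sketch, not the study's implementation), clustering pixel intensities separates bright T2 tumor-like voxels from darker background, after which volume is just a voxel count times voxel size:

    ```python
    # Minimal 1-D k-means on pixel intensities; all data below are toy values.

    def kmeans_1d(values, k=2, iters=50):
        """Cluster scalar intensities into k groups; return centers and labels."""
        centers = sorted(values)[:: max(1, len(values) // k)][:k]
        labels = [0] * len(values)
        for _ in range(iters):
            # Assignment step: each value goes to the nearest center.
            labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
            # Update step: each center becomes the mean of its cluster.
            new = []
            for c in range(k):
                members = [v for v, l in zip(values, labels) if l == c]
                new.append(sum(members) / len(members) if members else centers[c])
            if new == centers:
                break
            centers = new
        return centers, labels

    # Toy "image": background near intensity 10, tumor-like bright region near 200.
    pixels = [9, 10, 11, 12, 198, 200, 202, 199, 10, 201]
    centers, labels = kmeans_1d(pixels, k=2)
    bright = max(range(2), key=lambda c: centers[c])
    tumor_voxels = labels.count(bright)
    # Volume = tumor_voxels x per-voxel volume (from the image's voxel spacing).
    ```

    Because the cluster boundary is determined by the data rather than by a hand-drawn contour, two observers running the same segmentation get the same volume, which is consistent with the high interobserver agreement reported.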

  14. It's no debate, debates are great.

    PubMed

    Dy-Boarman, Eliza A; Nisly, Sarah A; Costello, Tracy J

    A debate can be a pedagogical method used to instill essential functions in pharmacy students. This non-traditional teaching method may help to further develop a number of skills that are highlighted in the current Accreditation Council for Pharmacy Education Standards 2016 and Center for the Advancement of Pharmacy Education Educational Outcomes 2013. Debates have also been used as an educational tool in other health disciplines. Current pharmacy literature does illustrate the use of debates in various areas within the pharmacy curriculum in both required and elective courses; however, the current body of literature would suggest that debates are an underutilized teaching tool in pharmacy experiential education. With all potential benefits of debates as a teaching tool, pharmacy experiential preceptors should further explore their use in the experiential setting.

  15. Analysis of off-axis solenoid fields using the magnetic scalar potential: An application to a Zeeman-slower for cold atoms

    NASA Astrophysics Data System (ADS)

    Muniz, Sérgio R.; Bagnato, Vanderlei S.; Bhattacharya, M.

    2015-06-01

    In a region free of currents, magnetostatics can be described by the Laplace equation of a scalar magnetic potential, and one can apply the same methods commonly used in electrostatics. Here, we show how to calculate the general vector field inside a real (finite) solenoid, using only the magnitude of the field along the symmetry axis. Our method does not require integration or knowledge of the current distribution and is presented through practical examples, including a nonuniform finite solenoid used to produce cold atomic beams via laser cooling. These examples allow educators to discuss the nontrivial calculation of fields off-axis using concepts familiar to most students, while offering the opportunity to introduce themes of current modern research.
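
    A numerical sketch of the idea (assumed toy profile, not the paper's worked example): in a current-free region, the near-axis field follows from the on-axis magnitude B0(z) alone via the standard expansions Bz(r,z) ≈ B0(z) − (r²/4)B0''(z) and Br(r,z) ≈ −(r/2)B0'(z), which are consequences of div B = 0 and curl B = 0. Here the on-axis profile of a single current loop stands in for a measured solenoid profile:

    ```python
    def b_on_axis(z, a=1.0):
        """On-axis Bz of a unit-current loop of radius a (prefactor mu0*I/2 set to 1)."""
        return a * a / (a * a + z * z) ** 1.5

    def near_axis_field(z, r, h=1e-4):
        """(Br, Bz) at small radius r, using central finite differences of B0(z)."""
        b0 = b_on_axis(z)
        d1 = (b_on_axis(z + h) - b_on_axis(z - h)) / (2 * h)          # B0'(z)
        d2 = (b_on_axis(z + h) - 2 * b0 + b_on_axis(z - h)) / h ** 2  # B0''(z)
        return -0.5 * r * d1, b0 - 0.25 * r * r * d2

    # Off-axis field 0.05 radii from the axis, half a radius from the loop plane:
    br, bz = near_axis_field(z=0.5, r=0.05)
    ```

    Note that only sampled values of the on-axis magnitude are needed, matching the paper's point that neither integration nor the current distribution is required.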

  16. Spectral methods in edge-diffraction theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, J.M.

    Spectral methods for the construction of uniform asymptotic representations of the field diffracted by an aperture in a plane screen are reviewed. These are separated into contrasting approaches, roughly described as physical and geometrical. It is concluded that the geometrical methods provide a direct route to the construction of uniform representations that are formally identical to the equivalent-edge-current concept. Some interpretive and analytical difficulties that complicate the physical methods of obtaining uniform representations are analyzed. Spectral synthesis proceeds directly from the ray geometry and diffraction coefficients, without any intervening current representation, and the representation is uniform at shadow boundaries and caustics of the diffracted field. The physical theory of diffraction postulates currents on the diffracting screen that give rise to the diffracted field. The difficulties encountered in evaluating the current integrals are thoroughly examined, and it is concluded that the additional data provided by the physical theory of diffraction (diffraction coefficients off the Keller diffraction cone) are not actually required for obtaining uniform asymptotics at the leading order. A new diffraction representation is deduced that generalizes to arbitrary plane-convex apertures a formula given by Knott and Senior [Proc. IEEE 62, 1468 (1974)] for circular apertures. 34 refs., 1 fig.

  17. [Effect of non-pharmacological methods for alleviation of pain in newborns].

    PubMed

    Chromá, Jana; Sikorová, Lucie

    2012-01-01

    The aim of this paper is to analyze the currently most used non-pharmacological methods for pain alleviation in newborns to support best evidence-based practice. The sources of the required data for the period 2000-2011 were licensed and freely accessible electronic databases. The evidence found (30 studies) was evaluated according to levels of evidence (Fineout-Overholt, Johnston 2005); studies at evidence levels I, II and III were included. Nutritive sucking is currently considered the most effective method for alleviating pain in newborns. The analysis of studies shows that non-pharmacological methods used to control pain in neonates are much more effective when used in combination with other non-pharmacological methods, such as music therapy, swaddling, facilitated tucking, multiple stimulation, kangaroo care and non-nutritive sucking. Non-pharmacological procedures are effective and provide pain relief especially during procedural interventions such as heel lance and venipuncture for blood sampling.

  18. General field and office procedures for indirect discharge measurements

    USGS Publications Warehouse

    Benson, M.A.; Dalrymple, Tate

    2001-04-01

    The discharge of streams is usually measured by the current-meter method. During flood periods, however, it is frequently impossible or impractical to measure the discharges by this method when they occur. Consequently, many peak discharges must be determined after the passage of the flood by indirect methods, such as slope-area, contracted-opening, flow-over-dam, and flow-through-culvert, rather than by direct current-meter measurement. Indirect methods of determining peak discharge are based on hydraulic equations which relate the discharge to the water-surface profile and the geometry of the channel. A field survey is made after the flood to determine the location and elevation of high-water marks and the characteristics of the channel. Detailed descriptions of the general procedures used in collecting the field data and in computing the discharge are given in this report. Each of the methods requires special procedures described in subsequent chapters.
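
    As an illustrative sketch (not the report's worked procedure), the slope-area method is essentially Manning's equation applied to post-flood survey data: Q = (1/n)·A·R^(2/3)·S^(1/2) in SI units, where A is the flow area, R the hydraulic radius (area over wetted perimeter), S the water-surface slope from high-water marks, and n an assumed roughness coefficient. All numbers below are hypothetical:

    ```python
    def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
        """Discharge (m^3/s) from Manning's equation for one surveyed cross section."""
        hydraulic_radius = area_m2 / wetted_perimeter_m
        return (1.0 / n) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

    # Hypothetical post-flood survey: 120 m^2 section, 55 m wetted perimeter,
    # water-surface slope 0.002 from high-water marks, n = 0.035 (natural channel).
    q = manning_discharge(120.0, 55.0, 0.002, 0.035)  # roughly 260 m^3/s
    ```

    In practice several cross sections are surveyed and the computation accounts for velocity-head and friction-loss terms between them; the single-section form above only shows the governing relationship.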

  19. The role of finite-difference methods in design and analysis for supersonic cruise

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1976-01-01

    Finite-difference methods for analysis of steady, inviscid supersonic flows are described, and their present state of development is assessed with particular attention to their applicability to vehicles designed for efficient cruise flight. Current work is described which will allow greater geometric latitude, improve treatment of embedded shock waves, and relax the requirement that the axial velocity must be supersonic.

  20. Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association

    DTIC Science & Technology

    2011-03-24

    based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of ... up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to ... required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently ...
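
    The fragment above concerns kernelized locality-sensitive hashing (KLSH) for matching image landmarks. As a hedged illustration of the underlying idea (plain random-hyperplane LSH for cosine similarity, not the kernelized variant studied in the thesis), similar descriptors hash to nearly identical bit strings, so association becomes a bucket lookup instead of a linear scan over all stored landmarks:

    ```python
    import random

    def make_hash(dim, n_bits, seed=0):
        """Build an LSH function: one sign bit per random Gaussian hyperplane."""
        rng = random.Random(seed)
        planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
        def h(vec):
            # Each bit records which side of a hyperplane the vector lies on.
            return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                         for plane in planes)
        return h

    h = make_hash(dim=4, n_bits=8)
    a = [1.0, 0.9, 0.1, 0.0]      # a stored landmark descriptor (toy values)
    b = [1.0, 0.88, 0.12, 0.01]   # a new observation of the same landmark
    c = [-1.0, 0.2, -0.9, 0.5]    # an unrelated descriptor
    # Near-identical descriptors agree in far more hash bits than unrelated ones.
    agree_ab = sum(x == y for x, y in zip(h(a), h(b)))
    agree_ac = sum(x == y for x, y in zip(h(a), h(c)))
    ```

    The kernelized variant replaces the explicit dot products with kernel evaluations, allowing the same trick for similarity measures that have no explicit vector representation.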

  1. METHOD OF OBTAINING AN IMPROVED WELD IN INERT ARC WELDING

    DOEpatents

    Correy, T.B.

    1962-12-11

    A method is reported for inert arc welding. An a-c welding current is applied to the workpiece and welding electrode such that the positive portion of each cycle thereof, with the electrode positive, has only sufficient energy to clean the surface of the workpiece and the negative portion of each cycle thereof, with the electrode negative, contains the energy required to weld. (AEC)

  2. Evaluation of methods for determining hardware projected life

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long-life requirements, current research efforts on long-life problems, and technical papers reporting work on life-prediction techniques. The results indicate that there are no accurate quantitative means to predict hardware life at the system level. The effectiveness of test programs and the causes of hardware failures are also considered.

  3. A Simplified 4-Site Economical Intradermal Post-Exposure Rabies Vaccine Regimen: A Randomised Controlled Comparison with Standard Methods

    PubMed Central

    Warrell, Mary J.; Riddell, Anna; Yu, Ly-Mee; Phipps, Judith; Diggle, Linda; Bourhy, Hervé; Deeks, Jonathan J.; Fooks, Anthony R.; Audry, Laurent; Brookes, Sharon M.; Meslin, François-Xavier; Moxon, Richard; Pollard, Andrew J.; Warrell, David A.

    2008-01-01

    Background The need for economical rabies post-exposure prophylaxis (PEP) is increasing in developing countries. Implementation of the two currently approved economical intradermal (ID) vaccine regimens is restricted due to confusion over different vaccines, regimens and dosages, lack of confidence in intradermal technique, and pharmaceutical regulations. We therefore compared a simplified 4-site economical PEP regimen with standard methods. Methods Two hundred and fifty-four volunteers were randomly allocated to a single blind controlled trial. Each received purified vero cell rabies vaccine by one of four PEP regimens: the currently accepted 2-site ID; the 8-site regimen using 0.05 ml per ID site; a new 4-site ID regimen (on day 0, approximately 0.1 ml at 4 ID sites, using the whole 0.5 ml ampoule of vaccine; on day 7, 0.1 ml ID at 2 sites and at one site on days 28 and 90); or the standard 5-dose intramuscular regimen. All ID regimens required the same total amount of vaccine, 60% less than the intramuscular method. Neutralising antibody responses were measured five times over a year in 229 people, for whom complete data were available. Findings All ID regimens showed similar immunogenicity. The intramuscular regimen gave the lowest geometric mean antibody titres. Using the rapid fluorescent focus inhibition test, some sera had unexpectedly high antibody levels that were not attributable to previous vaccination. The results were confirmed using the fluorescent antibody virus neutralisation method. Conclusions This 4-site PEP regimen proved as immunogenic as current regimens, and has the advantages of requiring fewer clinic visits, being more practicable, and having a wider margin of safety, especially in inexperienced hands, than the 2-site regimen. It is more convenient than the 8-site method, and can be used economically with vaccines formulated in 1.0 or 0.5 ml ampoules. 
The 4-site regimen now meets all requirements of immunogenicity for PEP and can be introduced without further studies. Trial Registration Controlled-Trials.com ISRCTN 30087513 PMID:18431444

  4. Leveraging Past and Current Measurements to Probabilistically Nowcast Low Visibility Procedures at an Airport

    NASA Astrophysics Data System (ADS)

    Mayr, G. J.; Kneringer, P.; Dietz, S. J.; Zeileis, A.

    2016-12-01

    Low visibility or a low cloud ceiling reduces the capacity of airports by requiring special low visibility procedures (LVP) for incoming and departing aircraft. Probabilistic forecasts of when such procedures will become necessary help to mitigate delays and economic losses. We compare the performance of probabilistic nowcasts with two statistical methods: ordered logistic regression, and trees and random forests. These models harness historic and current meteorological measurements in the vicinity of the airport and LVP states, and incorporate diurnal and seasonal climatological information via generalized additive models (GAM). The methods are applied at Vienna International Airport (Austria), and their performance is benchmarked against climatology, persistence and human forecasters.
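
    A hedged sketch of the ordered (proportional-odds) logistic model named above: given a linear predictor eta built from current measurements, the cumulative probabilities over ordered LVP states follow P(Y ≤ k) = sigmoid(theta_k − eta). The predictors, coefficients and cutpoints below are invented for illustration, not fitted values from the study:

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def lvp_probabilities(visibility_km, ceiling_ft,
                          cutpoints=(-2.0, 0.0, 2.0),      # theta_k, assumed
                          beta=(-0.8, -0.002)):            # assumed coefficients
        """Class probabilities for 4 ordered LVP states (0 = none ... 3 = severe)."""
        eta = beta[0] * visibility_km + beta[1] * ceiling_ft
        # Cumulative probabilities P(Y <= k); the last category closes at 1.
        cum = [sigmoid(t - eta) for t in cutpoints] + [1.0]
        # Per-class probabilities by differencing the cumulative curve.
        return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, 4)]

    good = lvp_probabilities(visibility_km=10.0, ceiling_ft=5000.0)
    foggy = lvp_probabilities(visibility_km=0.3, ceiling_ft=100.0)
    ```

    With clear conditions nearly all probability mass lands on the "no LVP" state, while low visibility and ceiling shift mass toward the more severe states; the single set of coefficients shared across all cutpoints is what makes the model "proportional odds".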

  5. Review of fire test methods and incident data for portable electric cables in underground coal mines

    NASA Astrophysics Data System (ADS)

    Braun, E.

    1981-06-01

    Electrically powered underground coal mining machinery is connected to a load center or distribution box by electric cables. The connecting cables used on mobile machines are required to meet fire performance requirements defined in the Code of Federal Regulations. This report reviews Mine Safety and Health Administration's (MSHA) current test method and compares it to British practices. Incident data for fires caused by trailing cable failures and splice failures were also reviewed. It was found that the MSHA test method is more severe than the British but that neither evaluated grouped cable fire performance. The incident data indicated that the grouped configuration of cables on a reel accounted for a majority of the fires since 1970.

  6. Methods for quantification of soil-transmitted helminths in environmental media: current techniques and recent advances

    PubMed Central

    Collender, Philip A.; Kirby, Amy E.; Addiss, David G.; Freeman, Matthew C.; Remais, Justin V.

    2015-01-01

    Limiting the environmental transmission of soil-transmitted helminths (STH), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost effective methods to detect and quantify STH in the environment. We review the state of the art of STH quantification in soil, biosolids, water, produce, and vegetation with respect to four major methodological issues: environmental sampling; recovery of STH from environmental matrices; quantification of recovered STH; and viability assessment of STH ova. We conclude that methods for sampling and recovering STH require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. PMID:26440788

  7. DNA methods: critical review of innovative approaches.

    PubMed

    Kok, Esther J; Aarts, Henk J M; Van Hoef, A M Angeline; Kuiper, Harry A

    2002-01-01

    The presence of ingredients derived from genetically modified organisms (GMOs) in food products in the market place is subject to a number of European regulations that stipulate which product consisting of or containing GMO-derived ingredients should be labeled as such. In order to maintain these labeling requirements, a variety of different GMO detection methods have been developed to screen for either the presence of DNA or protein derived from (approved) GM varieties. Recent incidents where unapproved GM varieties entered the European market show that more powerful GMO detection and identification methods will be needed to maintain European labeling requirements in an adequate, efficient, and cost-effective way. This report discusses the current state-of-the-art as well as future developments in GMO detection.

  8. Security analysis and improvements to the PsychoPass method.

    PubMed

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Our objective was to perform a security analysis of the PsychoPass method and to outline its limitations and possible improvements. We used brute-force and dictionary-attack analyses of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
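
    The brute-force argument reduces to keyspace arithmetic. A rough, self-contained sketch (alphabet sizes, lengths and guessing rate are illustrative assumptions, not the paper's exact figures): strength is alphabet_size ** length divided by an assumed guessing rate, and adjacent-key structure shrinks the effective alphabet even when the typed password is long.

    ```python
    def years_to_break(alphabet_size, length, guesses_per_second=1e12):
        """Worst-case exhaustive-search time in years for a uniform keyspace."""
        keyspace = alphabet_size ** length
        seconds = keyspace / guesses_per_second
        return seconds / (3600 * 24 * 365)

    # Adjacent-key structure collapses entropy: modeled here as ~30 effective
    # symbols over an effective length of 10 (assumed numbers).
    weak = years_to_break(30, 10)    # cracked in well under a year
    # SHIFT/ALT-GR combinations enlarge the alphabet (modeled as ~90 symbols).
    strong = years_to_break(90, 12)  # thousands of years at the same rate
    ```

    The point of the comparison is that password strength grows exponentially in both alphabet size and effective length, so widening the symbol set buys far more than adding a few characters from a small set.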

  9. A simple test procedure for evaluating low temperature crack resistance of asphalt concrete.

    DOT National Transportation Integrated Search

    2009-11-01

    The current means of evaluating the low temperature cracking resistance of HMA relies on extensive test : methods that require assumptions about material behaviors and the use of complicated loading equipment. The purpose : of this study was to devel...

  10. CATALYTIC ENZYME-BASED METHODS FOR WATER TREATMENT AND WATER DISTRIBUTION SYSTEM DECONTAMINATION

    EPA Science Inventory

    Current chemistry-based decontaminants for chemical or biological warfare agents and related toxic materials are caustic and have the potential for causing material and environmental damage. In addition, most are bulk liquids that require significant logistics and storage capabil...

  11. DERMAL AND MOUTHING TRANSFERS OF SURFACE RESIDUES MEASURED USING FLUORESCENCE IMAGING

    EPA Science Inventory

    To reduce the uncertainty associated with current estimates of children's exposure to pesticides by dermal contact and non-dietary ingestion, residue transfer data are required. Prior to conducting exhaustive studies, a screening study to develop and test methods for measuring...

  12. Developing cost effective plans for low volume bridges

    DOT National Transportation Integrated Search

    2006-09-01

    There is currently an escalating concern across the state of Kansas with respect to the age : and condition of low volume bridges and methods available to modify or replace them. A : high percentage of low volume bridges in the state of Kansas requir...

  13. Induced Angular Momentum

    ERIC Educational Resources Information Center

    Parker, G. W.

    1978-01-01

    Discusses, classically and quantum mechanically, the angular momentum induced in the bound motion of an electron by an external magnetic field. Calculates the current density and its magnetic moment, and then uses two methods to solve the first-order perturbation theory equation for the required eigenfunction. (Author/GA)
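
    The classical side of the calculation is the standard Langevin diamagnetism result (a textbook sketch, not necessarily the article's exact route): a field B along z induces Larmor precession of the bound electron at angular frequency omega_L = eB/2m, and hence an induced angular momentum and magnetic moment

    ```latex
    \begin{align}
      \langle L_z \rangle &= m\,\omega_L \langle \rho^2 \rangle
                           = \frac{eB}{2}\,\langle \rho^2 \rangle, \\
      \mu_z &= -\frac{e}{2m}\,\langle L_z \rangle
             = -\frac{e^{2}B}{4m}\,\langle \rho^2 \rangle,
    \end{align}
    ```

    where rho^2 = x^2 + y^2 is the mean-square orbit radius transverse to the field; the negative sign expresses the diamagnetic (field-opposing) response that the quantum perturbation treatment reproduces.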

  14. Quantitative analysis of relationships between irradiation parameters and the reproducibility of cyclotron-produced (99m)Tc yields.

    PubMed

    Tanguay, J; Hou, X; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-05-21

    Cyclotron production of (99m)Tc through the (100)Mo(p,2n) (99m)Tc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional (99m)Tc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity (99m)Tc. However, variations in proton beam currents and the thickness and isotopic composition of enriched (100)Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute (99m)Tc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including (100)Mo target thicknesses and proton beam currents, and reproducibility of absolute (99m)Tc yields (defined as the end of bombardment (EOB) (99m)Tc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB (99m)Tc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in (99m)Tc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of (99m)Tc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. 
For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the minimum daily activity requirement. The irradiation parameters that would be required to achieve these service rates are described. We believe the developed formalism will aid in the development of quality-control criteria required to ensure consistent supply of large quantities of high-radionuclidic-purity cyclotron-produced (99m)Tc.
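
    A hedged illustration of the linear-error-propagation idea with a deliberately simplified yield model: if the EOB activity is, to first order, proportional to both the proton current I and the target thickness T, then for independent relative variations the coefficients of variation add in quadrature, CV_A² ≈ CV_I² + CV_T². The Monte Carlo below checks that relation (the 10% and 15% variations are assumed inputs, not the article's parameters):

    ```python
    import math
    import random

    def cv(samples):
        """Coefficient of variation: sample standard deviation over the mean."""
        m = sum(samples) / len(samples)
        var = sum((x - m) ** 2 for x in samples) / (len(samples) - 1)
        return math.sqrt(var) / m

    rng = random.Random(42)
    cv_i, cv_t = 0.10, 0.15  # assumed relative variations in current and thickness
    # Activity ~ I * T with both factors fluctuating independently around 1.
    activities = [rng.gauss(1, cv_i) * rng.gauss(1, cv_t) for _ in range(200_000)]

    predicted = math.sqrt(cv_i ** 2 + cv_t ** 2)  # first-order propagation, ~0.18
    observed = cv(activities)                     # Monte Carlo estimate
    ```

    The article's full treatment couples such variations to the service rate, i.e. the fraction of production runs exceeding the minimum required activity; the sketch only shows how individual parameter variations combine into yield variability.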

  15. Quantitative analysis of relationships between irradiation parameters and the reproducibility of cyclotron-produced 99mTc yields

    NASA Astrophysics Data System (ADS)

    Tanguay, J.; Hou, X.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.

    2015-05-01

    Cyclotron production of 99mTc through the 100Mo(p,2n) 99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional 99mTc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity 99mTc. However, variations in proton beam currents and the thickness and isotopic composition of enriched 100Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute 99mTc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including 100Mo target thicknesses and proton beam currents, and reproducibility of absolute 99mTc yields (defined as the end of bombardment (EOB) 99mTc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB 99mTc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in 99mTc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of 99mTc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. 
For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the minimum daily activity requirement. The irradiation parameters that would be required to achieve these service rates are described. We believe the developed formalism will aid in the development of quality-control criteria required to ensure consistent supply of large quantities of high-radionuclidic-purity cyclotron-produced 99mTc.

  16. Evaluation of a method estimating real-time individual lysine requirements in two lines of growing-finishing pigs.

    PubMed

    Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J

    2015-04-01

    The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and to assess the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of the estimated SID Lys requirements. However, the best gain-to-feed ratio (G:F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of the estimated Lys requirements. Terminal-line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal-line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases.
The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.

  17. Future experimental needs to support applied aerodynamics - A transonic perspective

    NASA Technical Reports Server (NTRS)

    Gloss, Blair B.

    1992-01-01

    Advancements in facilities, test techniques, and instrumentation are needed to provide data required for the development of advanced aircraft and to verify computational methods. An industry survey of major users of wind tunnel facilities at Langley Research Center (LaRC) was recently carried out to determine future facility requirements, test techniques, and instrumentation requirements; results from this survey are reflected in this paper. In addition, areas related to transonic testing at LaRC which are either currently being developed or are recognized as needing improvements are discussed.

  18. Project FIRES. Volume 1: Program Overview and Summary, Phase 1B

    NASA Technical Reports Server (NTRS)

    Abeles, F. J.

    1980-01-01

    Overall performance requirements and evaluation methods for firefighters protective equipment were established and published as the Protective Ensemble Performance Standards (PEPS). Current firefighters protective equipment was tested and evaluated against the PEPS requirements, and the preliminary design of a prototype protective ensemble was performed. In phase 1B, the design of the prototype ensemble was finalized. Prototype ensembles were fabricated and then subjected to a series of qualification tests which were based upon the PEPS requirements. Engineering drawings and purchase specifications were prepared for the new protective ensemble.

  19. A simple transformation independent method for outlier definition.

    PubMed

    Johansen, Martin Berg; Christensen, Peter Astrup

    2018-04-10

    Definition and elimination of outliers is a key element for medical laboratories establishing or verifying reference intervals (RIs), especially as the inclusion of just a few outlying observations may seriously affect the determination of the reference limits. Many methods have been developed for the definition of outliers. Several of these methods assume a normal distribution, and data often require transformation before outlier elimination. We have developed a non-parametric, transformation-independent outlier definition. The new method relies on drawing reproducible histograms, using defined bin sizes above and below the median. The method is compared to the method recommended by CLSI/IFCC, which uses Box-Cox transformation (BCT) and Tukey's fences for outlier definition. The comparison is done on eight simulated distributions and an indirect clinical dataset. The comparison on simulated distributions shows that, without added outliers, the recommended method in general defines fewer outliers. However, when outliers are added on one side, the proposed method often produces better results; with outliers on both sides the methods are equally good. Furthermore, the presence of outliers affects the BCT, and subsequently the limits determined by the currently recommended method; this is especially seen in skewed distributions. The proposed outlier definition reproduced current RI limits on clinical data containing outliers. We find our simple transformation-independent outlier detection method to be as good as or better than the currently recommended methods.
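
    A minimal sketch of the comparison method (Tukey's fences as in the CLSI/IFCC recommendation; the Box-Cox transformation step is omitted here for brevity): values outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR] are flagged as outliers. The data are toy values.

    ```python
    def quartiles(data):
        """First and third quartiles by linear interpolation of the sorted data."""
        s = sorted(data)
        n = len(s)
        def q(p):
            idx = p * (n - 1)
            lo = int(idx)
            frac = idx - lo
            return s[lo] + frac * (s[min(lo + 1, n - 1)] - s[lo])
        return q(0.25), q(0.75)

    def tukey_outliers(data, k=1.5):
        """Values outside the fences Q1 - k*IQR and Q3 + k*IQR."""
        q1, q3 = quartiles(data)
        iqr = q3 - q1
        lo, hi = q1 - k * iqr, q3 + k * iqr
        return [x for x in data if x < lo or x > hi]

    # Toy analyte values with one implausible observation:
    values = [4.1, 4.3, 4.4, 4.6, 4.7, 4.8, 5.0, 5.1, 5.2, 9.9]
    flagged = tukey_outliers(values)  # only 9.9 lies beyond the upper fence
    ```

    For skewed laboratory data the recommendation applies the fences after a Box-Cox transformation, which is exactly the step the paper argues is itself distorted by the outliers it is meant to remove.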

  20. Evaluation of Analysis Techniques for Fluted-Core Sandwich Cylinders

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Schultz, Marc R.

    2012-01-01

    Buckling-critical launch-vehicle structures require structural concepts that have high bending stiffness and low mass. Fluted-core, also known as truss-core, sandwich construction is one such concept. In an effort to identify an analysis method appropriate for the preliminary design of fluted-core cylinders, the current paper presents and compares results from several analysis techniques applied to a specific composite fluted-core test article. The analysis techniques are evaluated in terms of their ease of use and for their appropriateness at certain stages throughout a design analysis cycle (DAC). Current analysis techniques that provide accurate determination of the global buckling load are not readily applicable early in the DAC, such as during preliminary design, because they are too costly to run. An analytical approach that neglects transverse-shear deformation is easily applied during preliminary design, but the lack of transverse-shear deformation results in global buckling load predictions that are significantly higher than those from more detailed analysis methods. The current state of the art is either too complex to be applied for preliminary design, or is incapable of the accuracy required to determine global buckling loads for fluted-core cylinders. Therefore, it is necessary to develop an analytical method for calculating global buckling loads of fluted-core cylinders that includes transverse-shear deformations, and that can be easily incorporated in preliminary design.
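    The effect described above, that neglecting transverse-shear deformation inflates the predicted global buckling load, can be illustrated with the classical Engesser-type shear correction for a pinned column. This is a generic analogy under stated assumptions, not the fluted-core cylinder analysis itself, and the numbers are arbitrary:

    ```python
    import math

    def euler_buckling_load(modulus, inertia, length):
        # Classical Euler load for a pinned-pinned column (no shear deformation).
        return math.pi ** 2 * modulus * inertia / length ** 2

    def shear_corrected_load(p_euler, shear_rigidity):
        # Engesser-type correction: finite transverse-shear rigidity (kGA)
        # always lowers the critical load below the shear-free prediction.
        return p_euler / (1.0 + p_euler / shear_rigidity)

    p_e = euler_buckling_load(70e9, 1e-6, 2.0)  # aluminum-like column, arbitrary section
    p_s = shear_corrected_load(p_e, 1e6)        # hypothetical shear rigidity in N
    ```

    The corrected load is always below the Euler value, which mirrors why the shear-free analytical approach in the abstract overpredicts global buckling for fluted-core cylinders.
    
    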

  1. Towards tailored and targeted adherence assessment to optimise asthma management

    PubMed Central

    van Boven, Job FM; Trappenburg, Jaap CA; van der Molen, Thys; Chavannes, Niels H

    2015-01-01

    In this paper, we aim to emphasise the need for a more comprehensive and tailored approach to manage the broad nature of non-adherence, to personalise current asthma management. Although currently several methods are available to measure the extent of asthma patients’ adherence, the vast majority do not incorporate confirmation of the actual inhalation, dose and inhalation technique. Moreover, most current measures lack detailed information on the individual consequences of non-adherence and on when and how to take action if non-adherence is identified. Notably, one has to realise there are several forms of non-adherence (erratic non-adherence, intelligent non-adherence and unwitting non-adherence), each requiring a different approach. To improve asthma management, more accurate methods are needed that integrate measures of non-adherence, asthma disease control and patient preferences. Integrating information from the latest inhaler devices and patient-reported outcomes using mobile monitoring- and feedback systems (‘mHealth’) is considered a promising strategy, but requires careful implementation. Key issues to be considered before large-scale implementation include patient preferences, large heterogeneity in patient and disease characteristics, economic consequences, and long-term persistence with new digital technologies. PMID:26181850

  2. Plasma characteristics in the discharge region of a 20 A emission current hollow cathode

    NASA Astrophysics Data System (ADS)

    Mingming, SUN; Tianping, ZHANG; Xiaodong, WEN; Weilong, GUO; Jiayao, SONG

    2018-02-01

    Numerical calculation and fluid simulation methods were used to obtain the plasma characteristics in the discharge region of the LIPS-300 ion thruster’s 20 A emission current hollow cathode and to verify the structural design of the emitter. The results of the two methods indicated that the highest plasma density and electron temperature, which improved significantly in the orifice region, were located in the discharge region of the hollow cathode. The magnitude of plasma density was about 1021 m-3 in the emitter and orifice regions, as obtained by numerical calculations, but decreased exponentially in the plume region with the distance from the orifice exit. Meanwhile, compared to the emitter region, the electron temperature and current improved by about 36% in the orifice region. The hollow cathode performance test results were in good agreement with the numerical calculation results, which proved that the structural design of the emitter and the orifice met the requirements of a 20 A emission current. The numerical calculation method can be used to estimate plasma characteristics in the preliminary design stage of hollow cathodes.

  3. The comparative analysis of the current-meter method and the pressure-time method used for discharge measurements in the Kaplan turbine penstocks

    NASA Astrophysics Data System (ADS)

    Adamkowski, A.; Krzemianowski, Z.

    2012-11-01

    The paper presents experiences gathered during many years of utilizing the current-meter and pressure-time methods for flow rate measurements in many hydropower plants. The integration techniques used in both of these methods differ from the recommendations contained in the relevant international standards, mainly the graphical and arithmetical ones. The results of the comparative analysis of both methods, applied at the same time during the hydraulic performance tests of two Kaplan turbines in one of the Polish hydropower plants, are presented in the final part of the paper. In the case of the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown a satisfactory agreement between the results of discharge measurements executed using both considered methods. Maximum differences between the discharge values have not exceeded 1.0 % and the average differences have not been greater than 0.5 %.
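    In its simplest form, the pressure-time (Gibson) method integrates the pressure difference recorded across a penstock section while the flow is cut off: Q = (A / (rho * L)) * the time integral of the pressure difference. A bare numerical sketch follows; the friction and penstock-elasticity terms required by the measurement standards are deliberately omitted, and the pressure trace is synthetic:

    ```python
    def gibson_discharge(dp, dt, area, length, rho=1000.0):
        # Pressure-time (Gibson) method, simplified: the initial discharge
        # follows from integrating the differential pressure Δp(t) recorded
        # between two penstock sections during valve closure.
        integral = 0.0
        for i in range(1, len(dp)):
            integral += 0.5 * (dp[i] + dp[i - 1]) * dt  # trapezoidal rule
        return area / (rho * length) * integral

    # Synthetic trace: a constant 1 kPa differential held for 2 s.
    trace = [1000.0] * 201
    q0 = gibson_discharge(trace, 0.01, area=1.0, length=10.0)
    ```

    Real applications, as in the paper, must also account for leakage flow after closure and the pressure-recovery (friction) term, which is where the integration technique differences discussed above matter.
    
    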

  4. Quantitation of sugar content in pyrolysis liquids after acid hydrolysis using high-performance liquid chromatography without neutralization.

    PubMed

    Johnston, Patrick A; Brown, Robert C

    2014-08-13

    A rapid method for the quantitation of total sugars in pyrolysis liquids using high-performance liquid chromatography (HPLC) was developed. The method avoids the tedious and time-consuming sample preparation required by current analytical methods. It is possible to directly analyze hydrolyzed pyrolysis liquids, bypassing the neutralization step usually required in determination of total sugars. A comparison with traditional methods was used to determine the validity of the results. The calibration curve coefficient of determination on all standard compounds was >0.999 using a refractive index detector. The relative standard deviation for the new method was 1.13%. The spiked sugar recoveries on the pyrolysis liquid samples were between 104 and 105%. The research demonstrates that it is possible to obtain excellent accuracy and efficiency using HPLC to quantitate glucose after acid hydrolysis of polymeric and oligomeric sugars found in fast pyrolysis bio-oils without neutralization.
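    The figures of merit quoted above (calibration coefficient of determination > 0.999, relative standard deviation of 1.13%, spike recoveries of 104-105%) follow from standard formulas. A small sketch with invented numbers, not the paper's data:

    ```python
    import statistics

    def fit_line(x, y):
        # Ordinary least-squares slope, intercept, and coefficient of determination,
        # as used for an external-standard HPLC calibration curve.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return slope, intercept, 1.0 - ss_res / ss_tot

    def rsd_percent(replicates):
        # Relative standard deviation, the precision figure quoted in the abstract.
        return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

    def spike_recovery_percent(measured, unspiked, added):
        # Percentage of a known sugar spike recovered from the hydrolyzed sample.
        return 100.0 * (measured - unspiked) / added
    ```
    
    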

  5. MANPRINT Methods Monograph: Aiding the Development of Manned System Performance Criteria

    DTIC Science & Technology

    1989-06-01

    the need for the new system. It may be necessary to derive these requirements from combat models. By modeling the capabilities of the current force ...FORMAT The O&O Plan describes how a system will be integrated into the force structure, deployed, operated, and supported in peacetime and wartime...for evaluation during OT I. 9. MANPOWER/ FORCE STRUCTURE ASSESSMENT. Estimate manpower requirements per system, using unit, and total Army by

  6. Etching-free patterning method for electrical characterization of atomically thin MoSe2 films grown by chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Utama, M. Iqbal Bakti; Lu, Xin; Zhan, Da; Ha, Son Tung; Yuan, Yanwen; Shen, Zexiang; Xiong, Qihua

    2014-10-01

    Patterning two-dimensional materials into specific spatial arrangements and geometries is essential for both fundamental studies of materials and practical applications in electronics. However, the currently available patterning methods generally require etching steps that rely on complicated and expensive procedures. We report here a facile patterning method for atomically thin MoSe2 films using stripping with an SU-8 negative resist layer exposed to electron beam lithography. Additional steps of chemical and physical etching were not necessary in this SU-8 patterning method. The SU-8 patterning was used to define a ribbon channel from a field effect transistor of MoSe2 film, which was grown by chemical vapor deposition. The narrowing of the conduction channel area with SU-8 patterning was crucial in suppressing the leakage current within the device, thereby allowing a more accurate interpretation of the electrical characterization results from the sample. An electrical transport study, enabled by the SU-8 patterning, showed a variable range hopping behavior at high temperatures.

  7. Management of injuries of the eye and its adnexa.

    PubMed

    Lipke, K J; Gümbel, H O C

    2013-08-01

    The face plays the main role in interpersonal communication and in aesthetic perception. What is more, on account of the complex eyelid anatomy required to ensure the functioning of the eye, the treatment of periocular injuries requires a profound knowledge of anatomy and plastic reconstructive surgery, even if a loss of soft tissue is involved. Many methods for the reconstruction of eyelid defects have been described in the current literature. These methods must be guided by the site and extent of the defect on the one hand and by cosmetic requirements on the other to produce best results in terms of form and function. The treatment of injuries in the area of the eyelid involves some peculiarities that must be considered. The management of large defects in particular requires the cooperation of all head surgery disciplines.

  8. Requirements for radiation emergency urine bioassay techniques for the public and first responders.

    PubMed

    Li, Chunsheng; Vlahovich, Slavica; Dai, Xiongxin; Richardson, Richard B; Daka, Joseph N; Kramer, Gary H

    2010-11-01

    Following a radiation emergency, the affected public and the first responders may need to be quickly assessed for internal contamination by the radionuclides involved. Urine bioassay is one of the most commonly used methods for assessing radionuclide intake and radiation dose. This paper attempts to derive the sensitivity requirements (from inhalation exposure) for the urine bioassay techniques for the top 10 high-risk radionuclides that might be used in a terrorist attack. The requirements are based on a proposed reference dose to adults of 0.1 Sv (CED, committed effective dose). In addition, requirements related to sample turnaround time and field deployability of the assay techniques are also discussed. A review of currently available assay techniques summarized in this paper reveals that method development for ²⁴¹Am, ²²⁶Ra, ²³⁸Pu, and ⁹⁰Sr urine bioassay is needed.
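    The sensitivity derivation sketched above (working back from the 0.1 Sv reference committed effective dose) can be illustrated as follows. The dose coefficient and excretion fraction below are placeholders for illustration only, not ICRP values for any specific radionuclide:

    ```python
    def required_urine_sensitivity(reference_dose_sv, dose_coeff_sv_per_bq,
                                   daily_excretion_fraction):
        # Intake (Bq) that would deliver the reference committed effective dose...
        intake_bq = reference_dose_sv / dose_coeff_sv_per_bq
        # ...and the activity such an intake would place in one day's urine,
        # i.e. the minimum activity the bioassay technique must detect.
        return intake_bq * daily_excretion_fraction

    # Hypothetical radionuclide: 1e-5 Sv/Bq inhalation dose coefficient,
    # 0.1% of the intake excreted in urine on the sampling day.
    limit_bq = required_urine_sensitivity(0.1, 1e-5, 1e-3)
    ```

    Real requirements, as in the paper, depend on the inhaled particle size, the biokinetic model, and the day post-intake on which the sample is collected.
    
    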

  9. Design enhancement tools in MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Wallerstein, D. V.

    1984-01-01

    Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is taken to be MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in user-written code, or as part of the MSC/EOS (Engineering Operating System) currently under development. Since several excellent optimization codes exist and since they require highly specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).
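    Where analytic derivatives like DSA's are unavailable, constraint sensitivities can be approximated by finite differences, the simplest form of the gradient an optimizer consumes. A generic sketch, not MSC/NASTRAN's DSA implementation:

    ```python
    def forward_difference_sensitivity(constraint, x, h=1e-6):
        # Approximate dg/dx_j for each design variable x_j by a forward
        # difference; each perturbation costs one extra constraint evaluation
        # (i.e. one reanalysis), which is why analytic sensitivities matter.
        g0 = constraint(x)
        grads = []
        for j in range(len(x)):
            xp = list(x)
            xp[j] += h
            grads.append((constraint(xp) - g0) / h)
        return grads

    # Toy constraint g(x) = x0^2 + 3*x1, evaluated at (2, 1).
    grad = forward_difference_sensitivity(lambda x: x[0] ** 2 + 3.0 * x[1],
                                          [2.0, 1.0])
    ```
    
    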

  10. Automatic analysis of nuclear-magnetic-resonance-spectroscopy clinical research data

    NASA Astrophysics Data System (ADS)

    Scott, Katherine N.; Wilson, David C.; Bruner, Angela P.; Lyles, Teresa A.; Underhill, Brandon; Geiser, Edward A.; Ballinger, J. Ray; Scott, James D.; Stopka, Christine B.

    1998-03-01

    A major problem of P-31 nuclear magnetic resonance spectroscopy (MRS) in vivo applications is that, when large data sets are acquired, the time invested in data reduction and analysis with currently available technologies may totally overshadow the time required for data acquisition. An example is our MRS monitoring of exercise therapy for patients with peripheral vascular disease. In these studies, spectral acquisition requires 90 minutes per patient study, whereas data analysis and reduction requires 6-8 hours. Our laboratory currently uses the proprietary software SA/GE developed by General Electric. However, other software packages have similar limitations. When data analysis takes this long, the researcher does not have the rapid feedback required to ascertain the quality of the acquired data or the result of the study. This is highly undesirable even in a research environment, but becomes intolerable in the clinical setting. The purpose of this report is to outline progress towards the development of an automated method for eliminating the spectral analysis burden on the researcher working in the clinical setting.

  11. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.

  12. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  13. Chapter A5. Section 6.1.F. Wastewater, Pharmaceutical, and Antibiotic Compounds

    USGS Publications Warehouse

    Lewis, Michael Edward; Zaugg, Steven D.

    2003-01-01

    The USGS differentiates between samples collected for analysis of wastewater compounds and those collected for analysis of pharmaceutical and antibiotic compounds, based on the analytical schedule for the laboratory method. Currently, only the wastewater laboratory method for field-filtered samples (SH1433) is an approved, routine (production) method. (The unfiltered wastewater method LC 8033 also is available but requires a proposal for custom analysis.) At this time, analysis of samples for pharmaceutical and antibiotic compounds is confined to research studies and is available only on a custom basis.

  14. Developing cost effective plans for low volume bridges

    DOT National Transportation Integrated Search

    2006-09-01

    There is currently an escalating concern across the state of Kansas with respect to the age and condition of low volume bridges and methods available to modify or replace them. A high percentage of low volume bridges in the state of Kansas require or...

  15. Disinsection: evolution of the air curtain in the last year

    USDA-ARS?s Scientific Manuscript database

    Certain countries require disinsection of commercial aircraft from overseas flights before passengers and crews disembark. Currently acceptable method: spray aircraft interior with pesticides. One of the problems with this is that passengers and crew are exposed to pesticides. There are pesticide se...

  16. PROPOSED REVISION OF MIL-H-81019, HYDRAULIC FLUID, PETROLEUM BASE, ULTRA-LOW TEMPERATURE,

    DTIC Science & Technology

    An investigation was conducted to revise specification requirements and test methods which would bring the quality of the fluid supplied under MIL-H-81019 in line with that of the fluid currently being supplied under MIL-H-5606B. (Author)

  17. Visual function and fitness to drive.

    PubMed

    Kotecha, Aachal; Spratt, Alexander; Viswanathan, Ananth

    2008-01-01

    Driving is recognized to be a visually intensive task and accordingly there is a legal minimum standard of vision required for all motorists. The purpose of this paper is to review the current United Kingdom (UK) visual requirements for driving and discuss the evidence base behind these legal rules. The role of newer, alternative tests of visual function that may be better indicators of driving safety will also be considered. Finally, the implications of ageing on driving ability are discussed. A search of Medline and PubMed databases was performed using the following keywords: driving, vision, visual function, fitness to drive and ageing. In addition, papers from the Department of Transport website and UK Royal College of Ophthalmologists guidelines were studied. Current UK visual standards for driving are based upon historical concepts, but recent advances in technology have brought about more sophisticated methods for assessing the status of the binocular visual field and examining visual attention. These tests appear to be better predictors of driving performance. Further work is required to establish whether these newer tests should be incorporated in the current UK visual standards when examining an individual's fitness to drive.

  18. Estimating premorbid general cognitive functioning for children and adolescents using the American Wechsler Intelligence Scale for Children-Fourth Edition: demographic and current performance approaches.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Brickell, Tracey A; Saklofske, Donald H

    2007-04-01

    Neuropsychologic evaluation requires current test performance be contrasted against a comparison standard to determine if change has occurred. An estimate of premorbid intelligence quotient (IQ) is often used as a comparison standard. The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is a commonly used intelligence test. However, there is no method to estimate premorbid IQ for the WISC-IV, limiting the test's utility for neuropsychologic assessment. This study develops algorithms to estimate premorbid Full Scale IQ scores. Participants were the American WISC-IV standardization sample (N = 2172). The sample was randomly divided into 2 groups (development and validation). The development group was used to generate 12 algorithms. These algorithms were accurate predictors of WISC-IV Full Scale IQ scores in healthy children and adolescents. These algorithms hold promise as a method to predict premorbid IQ for patients with known or suspected neurologic dysfunction; however, clinical validation is required.

  19. Study on the parameters of the scanning system for the 300 keV electron accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leo, K. W.; Chulan, R. M., E-mail: leo@nm.gov.my; Hashim, S. A.

    2016-01-22

    This paper describes the method used to identify the magnetic coil parameters of the scanning system. This locally designed low-energy electron accelerator, with a present energy of 140 keV, will be upgraded to 300 keV. In this accelerator, a scanning system is required to deflect the energetic electron beam across a titanium foil in the vertical and horizontal directions. The excitation current of the magnetic coil is determined by the energy of the electron beam. Therefore, the magnetic coil parameters must be identified to ensure the matching of the beam energy and the excitation coil current. As a result, the essential parameters of the effective lengths for the X-axis and Y-axis have been found to be 0.1198 m and 0.1134 m, and the required excitation coil currents, which depend on the electron beam energies, have been identified.

  20. Fee-for-service will remain a feature of major payment reforms, requiring more changes in Medicare physician payment.

    PubMed

    Ginsburg, Paul B

    2012-09-01

    Many health policy analysts envision provider payment reforms currently under development as replacements for the traditional fee-for-service payment system. Reforms include per episode bundled payment and elements of capitation, such as global payments or accountable care organizations. But even if these approaches succeed and are widely adopted, the core method of payment to many physicians for the services they provide is likely to remain fee-for-service. It is therefore critical to address the current shortcomings in the Medicare physician fee schedule, because it will affect physician incentives and will continue to play an important role in determining the payment amounts under payment reform. This article reviews how the current payment system developed and is applied, and it highlights areas that require careful review and modification to ensure the success of broader payment reform.

  1. Quantitative Determination of Caffeine in Beverages Using a Combined SPME-GC/MS Method

    NASA Astrophysics Data System (ADS)

    Pawliszyn, Janusz; Yang, Min J.; Orton, Maureen L.

    1997-09-01

    Solid-phase microextraction (SPME) combined with gas chromatography/mass spectrometry (GC/MS) has been applied to the analysis of various caffeinated beverages. Unlike the current methods, this technique is solvent free and requires no pH adjustments. The simplicity of the SPME-GC/MS method lends itself to a good undergraduate laboratory practice. This publication describes the analytical conditions and presents the data for determination of caffeine in coffee, tea, and coke. Quantitation by isotopic dilution is also illustrated.
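    Quantitation by isotopic dilution, mentioned above, reduces to a peak-area ratio against a known spike of the isotopically labeled standard. A minimal single-point sketch, assuming equal instrument response for both isotopologues (the areas and concentrations are invented, and a real method would calibrate the ratio over several points):

    ```python
    def isotope_dilution_conc(area_analyte, area_labeled, labeled_spike_conc):
        # Single-point isotope dilution: the analyte concentration follows from
        # the ratio of native to labeled peak areas, scaled by the spike level.
        return labeled_spike_conc * area_analyte / area_labeled

    # Hypothetical run: native caffeine peak area 50, labeled-standard area 100,
    # labeled standard spiked at 20 ug/mL.
    conc = isotope_dilution_conc(50.0, 100.0, 20.0)
    ```
    
    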

  2. A novel mechanism for electrical currents inducing ventricular fibrillation: The three-fold way to fibrillation.

    PubMed

    Kroll, Mark W; Panescu, Dorin; Hinz, Andrew F; Lakkireddy, Dhanunjaya

    2010-01-01

    It has long been recognized that there are two methods for inducing VF (ventricular fibrillation) with electrical currents. These are: (1) delivering a high-charge shock into the cardiac T-wave, and (2) delivering lower-level currents for 1-5 seconds. Present electrical safety standards are based on this understanding. We present new data showing a third mechanism of inducing VF, which involves the steps of delivering sufficient current to cause high-rate cardiac capture, causing cardiac output collapse, leading to ischemia for a sufficiently long duration, which then lowers the VFT (VF threshold) to the level of the current, finally resulting in VF. This requires about 40% of the normal VF-induction current but requires a duration of minutes instead of seconds for the VF to be induced. Anesthetized and ventilated swine (n=6) had current delivered from a probe tip 10 mm from the epicardium sufficient to cause hypotensive capture but not directly induce VF within 5 s. After a median time of 90 s, VF was induced. This third mechanism of VF induction should be studied further and considered for electrical safety standards, and is relevant to long-duration TASER Electronic Control Device applications.

  3. Use of traditional contraceptive methods in India & its socio-demographic determinants.

    PubMed

    Ram, Faujdar; Shekhar, Chander; Chowdhury, Biswabandita

    2014-11-01

    The high use of traditional contraceptive methods may have health repercussions on both partners; a high failure rate and lack of protection from sexually transmitted diseases are examples of these repercussions. The aim of this study was to understand the level, trends, pattern, volume and socio-demographic determinants of the use of traditional contraceptive methods in the Indian context. Percentages, per cent distributions, cross-tabulations and multinomial logistic regression analyses were carried out. Data from the three rounds of the National Family Health Survey (NFHS) were used, and unit-level data from the District Level Household Survey (2007-2008) were mainly used to carry out the analysis in this paper. Marriage rates for States and Union Territories (UTs) were projected for the period 2001-2011 to estimate the volume of traditional contraceptive users; these rates are required to get the number of eligible couples as of 2011 in the respective State/UT. The latest round of the District Level Household Survey (2007-2008) revealed that 6.7 per cent of currently married women were using traditional contraceptive methods in India. More than half of currently married women (56%) have ever used these methods. In terms of socio-demographic determinants, the odds ratios of using these methods were significantly higher for women aged 35 years and above, rural, Hindu, other than Scheduled Castes/Tribes (SCs/STs), secondary and above educated, non-poor, having two or more living children, and with at least one surviving son, in most of the states as well as at the national level. The northeastern region showed higher odds ratios (5 times) of women using traditional contraceptive methods than the southern region. A large number of currently married women have ever used traditional contraceptive methods in India.
On the basis of the findings from this study, the total number of women who were either using traditional methods or had an unmet need, and who would require modern spacing methods of family planning to achieve their reproductive goals, is around 53 million. Women from a set of specific socio-demographic backgrounds are more likely to use these methods. A regional pattern has also emerged in the use of traditional contraceptive methods in India.
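    The socio-demographic determinants above are reported as odds ratios from multinomial logistic models. The basic 2x2 odds-ratio computation underlying such comparisons looks like this; the counts are hypothetical, not the survey's:

    ```python
    def odds_ratio(cases_exposed, noncases_exposed,
                   cases_unexposed, noncases_unexposed):
        # Ratio of the odds of the outcome (e.g. traditional-method use) in an
        # exposed group (e.g. rural women) to the odds in the reference group.
        return (cases_exposed / noncases_exposed) / (cases_unexposed / noncases_unexposed)

    # Hypothetical counts: 30/70 users vs non-users in one group, 10/90 in the other.
    ratio = odds_ratio(30, 70, 10, 90)
    ```

    A logistic regression produces adjusted versions of the same quantity, controlling for the other covariates simultaneously.
    
    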

  4. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pask, J E; Sukumar, N; Guney, M

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. 
The method developed here is completely general, applicable to any crystal symmetry and to both metals and insulators alike. We have developed and implemented a full self-consistent Kohn-Sham method, including both total energies and forces for molecular dynamics, and developed a full MPI parallel implementation for large-scale calculations. We have applied the method to the gamut of physical systems, from simple insulating systems with light atoms to complex d- and f-electron systems, requiring large numbers of atomic-orbital enrichments. In every case, the new PU FE method attained the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the current state-of-the-art PW method. Finally, our initial MPI implementation has shown excellent parallel scaling of the most time-critical parts of the code up to 1728 processors, with clear indications of what will be required to achieve comparable scaling for the rest. Having shown that the key remaining disadvantage of real-space methods can in fact be overcome, the work has attracted significant attention: with sixteen invited talks, both domestic and international, so far; two papers published and another in preparation; and three new university and/or national laboratory collaborations, securing external funding to pursue a number of related research directions. Having demonstrated the proof of principle, work now centers on the necessary extensions and optimizations required to bring the prototype method and code delivered here to production applications.
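    The "partition-of-unity" property the method's name refers to is that the finite-element basis functions sum to one everywhere, which is what lets known atomic physics be multiplied into the basis without losing polynomial completeness. A 1-D toy check with linear hat functions; this illustrates the property only, not the authors' 3-D Kohn-Sham implementation:

    ```python
    def hat_basis(x, nodes):
        # Piecewise-linear FE "hat" functions on a 1-D mesh. In a PU method,
        # each hat is multiplied by known enrichment functions (here, atomic
        # orbitals); because the hats sum to 1, the enriched basis still
        # reproduces the enrichment exactly.
        vals = []
        for i, xi in enumerate(nodes):
            left = nodes[i - 1] if i > 0 else xi
            right = nodes[i + 1] if i < len(nodes) - 1 else xi
            if left < x <= xi:
                vals.append((x - left) / (xi - left))
            elif xi <= x < right:
                vals.append((right - x) / (right - xi))
            else:
                vals.append(0.0)
        return vals

    vals = hat_basis(0.4, [0.0, 1.0, 2.0, 3.0])
    ```
    
    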

  5. The U.S. Forest Service's analysis of cumulative effects to wildlife: A study of legal standards, current practice, and ongoing challenges on a National Forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Courtney A., E-mail: courtney.schultz@colostate.edu

    Cumulative effects analysis (CEA) allows natural resource managers to understand the status of resources in historical context, learn from past management actions, and adapt future activities accordingly. U.S. federal agencies are required to complete CEA as part of environmental impact assessment under the National Environmental Policy Act (NEPA). Past research on CEA as part of NEPA has identified significant deficiencies in CEA practice, suggested methodologies for handling difficult aspects of CEA, and analyzed the rise in litigation over CEA in U.S. courts. This article provides a review of the literature and legal standards related to CEA as it is done under NEPA and then examines current practice on a U.S. National Forest, utilizing qualitative methods in order to provide a detailed understanding of current approaches to CEA. Research objectives were to understand current practice, investigate ongoing challenges, and identify impediments to improvement. Methods included a systematic review of a set of NEPA documents and semi-structured interviews with practitioners, scientists, and members of the public. Findings indicate that the primary challenges associated with CEA include: issues of both geographic and temporal scale of analysis, confusion over the purpose of the requirement, the lack of monitoring data, and problems coordinating and disseminating data. Improved monitoring strategies and programmatic analyses could support improved CEA practice.

  6. Label-Free Immuno-Sensors for the Fast Detection of Listeria in Food.

    PubMed

    Morlay, Alexandra; Roux, Agnès; Templier, Vincent; Piat, Félix; Roupioz, Yoann

    2017-01-01

    Foodborne diseases are a major concern for both the food industry and health organizations due to the economic costs and potential threats to human lives. For these reasons, specific regulations mandate testing for pathogenic bacteria in food products. Nevertheless, current methods, both reference and alternative, take up to several days and require many handling steps. In order to improve pathogen detection in food, we developed an immuno-sensor, based on Surface Plasmon Resonance imaging (SPRi) and bacterial growth, which allows the detection of a very low number of Listeria monocytogenes in food samples in one day. Adequate sensitivity is achieved by the deposition of several antibodies in a micro-array format allowing real-time detection. This label-free method thus reduces handling and time-to-result compared with current methods.

  7. Aerobic conditioning for team sport athletes.

    PubMed

    Stone, Nicholas M; Kilding, Andrew E

    2009-01-01

    Team sport athletes require a high level of aerobic fitness in order to generate and maintain power output during repeated high-intensity efforts and to recover between them. Research to date suggests that these capacities can be increased by regularly performing aerobic conditioning. Traditional aerobic conditioning, with minimal changes of direction and no skill component, has been demonstrated to effectively increase aerobic function within a 4- to 10-week period in team sport players. More importantly, traditional aerobic conditioning methods have been shown to increase team sport performance substantially. Many team sports require the upkeep of both aerobic fitness and sport-specific skills during a lengthy competitive season; classic team sport training sessions, however, have been shown to evoke only marginal changes in aerobic fitness. In recent years, aerobic conditioning methods have been designed to allow adequate intensities to be achieved to induce improvements in aerobic fitness whilst incorporating movement-specific and skill-specific tasks, e.g. small-sided games and dribbling circuits. Such 'sport-specific' conditioning methods have been demonstrated to promote increases in aerobic fitness, though careful consideration of player skill levels, current fitness, player numbers, field dimensions, game rules and availability of player encouragement is required. Whilst different conditioning methods appear equivalent in their ability to improve fitness, whether sport-specific conditioning is superior to other methods at improving actual game performance statistics requires further research.

  8. Biomedical research applications of electromagnetically separated enriched stable isotopes

    NASA Astrophysics Data System (ADS)

    Lambrecht, R. M.

    The current and projected annual requirements through 1985 for stable isotopes enriched by electromagnetic separation methods were reviewed for applications in various types of biomedical research: (1) medical radioisotope production, labeled compounds, and potential radiopharmaceuticals; (2) nutrition, food science, and pharmacology; (3) metallobiochemistry and environmental toxicology; (4) nuclear magnetic resonance, electron paramagnetic resonance, and Mössbauer spectroscopy in biochemical, biophysical, and biomedical research; and (5) miscellaneous advances in radioactive and nonradioactive tracer technology. Radioisotopes available from commercial sources or routinely used in clinical nuclear medicine were excluded. Current requirements for enriched stable isotopes in biomedical research are not being satisfied. Severe shortages exist for Mg-26, Ca-43, Zn-70, Se-76, Se-77, Se-78, Pd-102, Cd-111, Cd-113, and Os-190. Many interesting and potentially important investigations in biomedical research require small quantities of specific elements at high isotopic enrichments.

  9. A HIGH BANDWIDTH BIPOLAR POWER SUPPLY FOR THE FAST CORRECTORS IN THE APS UPGRADE*

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ju; Sprau, Gary

    The APS Upgrade to a multi-bend achromat (MBA) storage ring requires a fast bipolar power supply for the fast correction magnets. The key performance requirements of the power supply include a small-signal bandwidth of 10 kHz for the output current. This requirement presents a challenge to the design because of the high inductance of the magnet load and a limited input DC voltage. A prototype DC/DC power supply utilizing a MOSFET H-bridge circuit with a 500 kHz PWM has been developed and tested successfully. The prototype achieved a 10-kHz bandwidth with less than 3-dB attenuation for a signal 0.5% of the maximum operating current of 15 amperes. This paper presents the design of the power circuit, the PWM method, the control loop, and the test results.
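    The "10-kHz bandwidth with less than 3-dB attenuation" criterion can be made concrete with a first-order closed-loop model (an illustrative sketch only; the prototype's actual loop dynamics are not given in the record):

```python
import math

# First-order closed-loop model: a -3 dB corner at f_c means the output
# current amplitude at f_c is 1/sqrt(2) of the commanded amplitude.
f_c = 10e3  # small-signal bandwidth target (Hz)

def attenuation_db(f):
    """Attenuation of a first-order response with corner frequency f_c."""
    gain = 1.0 / math.sqrt(1.0 + (f / f_c) ** 2)
    return -20.0 * math.log10(gain)

print(round(attenuation_db(10e3), 2))  # 3.01 dB at the corner frequency
print(round(attenuation_db(1e3), 3))   # 0.043 dB well inside the band
```

    At the corner frequency the attenuation is exactly 10·log10(2) ≈ 3.01 dB, which is why "less than 3 dB at 10 kHz" is the conventional way to state the bandwidth requirement.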

  10. A comparison of three methods to estimate evapotranspiration in two contrasting loblolly pine plantations: age-related changes in water use and drought sensitivity of evapotranspiration components

    Treesearch

    Jean-Christophe Domec; Ge Sun; Asko Noormets; Michael J. Gavazzi; Emrys A. Treasure; Erika Cohen; Jennifer J. Swenson; Steve G. McNulty; John S. King

    2012-01-01

    Increasing variability of rainfall patterns requires detailed understanding of the pathways of water loss from ecosystems to optimize carbon uptake and management choices. In the current study we characterized the usability of three alternative methods of different rigor for quantifying stand-level evapotranspiration (ET), partitioned ET into tree transpiration (T),...

  11. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a fixed pulse-width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper offers a viable, low-cost alternative and the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing the set-up presented in the current paper makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
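    The core alignment idea, recovering the offset between the camcorder's audio track and a known sync pulse by cross-correlation, can be sketched as follows (the synthetic signal and all parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100  # audio sampling rate (Hz), far above any video frame rate

# Hypothetical sync signal: short tone bursts at randomly changing frequencies,
# giving the entropy needed for unambiguous alignment.
tones = rng.uniform(2000, 8000, size=30)
pulse = np.concatenate([np.sin(2 * np.pi * f * np.arange(441) / fs) for f in tones])

# The camcorder's audio track contains the same pulse at an unknown offset.
true_delay = 1234  # samples (~28 ms at 44.1 kHz)
audio = np.zeros(len(pulse) + 3000)
audio[true_delay:true_delay + len(pulse)] += pulse
audio += 0.05 * rng.standard_normal(len(audio))  # recording noise

# Cross-correlation recovers the offset with single-sample precision,
# i.e. well below one video frame (~735 audio samples at 60 fps).
xcorr = np.correlate(audio, pulse, mode="valid")
estimated_delay = int(np.argmax(xcorr))
print(estimated_delay)  # 1234
```

    Because the audio channel is sampled two to three orders of magnitude faster than the video frame rate, the correlation peak localizes the event far more precisely than counting LED-lit frames.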

  12. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
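    For intuition, the linear conditional-quantile model can be illustrated with the standard check-loss estimator on fully observed failure times (a simplified sketch with synthetic data; the paper's M-estimator for current status data, computed via the concave-convex procedure with subsampled confidence intervals, is considerably more involved):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0.0, 1.0, n)
# Synthetic failure times whose conditional median is 1 + 2x by construction.
t = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

def check_loss(beta, tau=0.5):
    """Koenker-Bassett 'check' loss rho_tau summed over the sample."""
    r = t - (beta[0] + beta[1] * x)
    return np.sum(r * (tau - (r < 0)))

fit = minimize(check_loss, x0=[1.0, 1.0], method="Nelder-Mead")
b0, b1 = fit.x  # b0 recovers ~1, b1 recovers ~2
```

    The fitted coefficients are direct effects on the median failure time in the original time scale, which is the interpretability advantage the abstract highlights.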

  13. Safety approaches for high power modular laser operation

    NASA Astrophysics Data System (ADS)

    Handren, R. T.

    1993-03-01

    Approximately 20 years ago, a program was initiated at the Lawrence Livermore National Laboratory (LLNL) to study the feasibility of using lasers to separate isotopes of uranium and other materials. Of particular interest was the development of a uranium enrichment method for the production of commercial nuclear power reactor fuel to replace current, more expensive methods. The Uranium Atomic Vapor Laser Isotope Separation (U-AVLIS) Program progressed to the point where a plant-scale facility to demonstrate commercial feasibility was built and is being tested. The U-AVLIS Program uses copper vapor lasers which pump frequency-selective dye lasers to photoionize uranium vapor produced by an electron beam. The selectively ionized isotopes are electrostatically collected. The copper lasers are arranged in oscillator/amplifier chains. The current configuration consists of 12 chains, each with a nominal output of 800 W, for a system output in excess of 9 kW. The system requirements are for continuous operation (24 h a day, 7 days a week) and high availability. To meet these requirements, the lasers are designed in a modular form allowing for rapid change-out of the lasers requiring maintenance. Since beginning operation in early 1985, the copper lasers have accumulated over 2 million unit hours at a greater than 90% availability. The dye laser system provides approximately 2.5 kW average power in the visible wavelength range. This large-scale laser system has many safety considerations, including high-power laser beams, high voltage, and large quantities (approximately 3000 gal) of ethanol dye solutions. The Laboratory's safety policy requires that safety controls be designed into any process, equipment, or apparatus in the form of engineering controls. Administrative controls further reduce the risk to an acceptable level. Selected examples of engineering and administrative controls currently being used in the U-AVLIS Program are described.

  14. MRI-based methods for quantification of the cerebral metabolic rate of oxygen

    PubMed Central

    Rodgers, Zachary B; Detre, John A

    2016-01-01

    The brain depends almost entirely on oxidative metabolism to meet its significant energy requirements. As such, the cerebral metabolic rate of oxygen (CMRO2) represents a key measure of brain function. Quantification of CMRO2 has helped elucidate brain functional physiology and holds potential as a clinical tool for evaluating neurological disorders including stroke, brain tumors, Alzheimer’s disease, and obstructive sleep apnea. In recent years, a variety of magnetic resonance imaging (MRI)-based CMRO2 quantification methods have emerged. Unlike positron emission tomography – the current “gold standard” for measurement and mapping of CMRO2 – MRI is non-invasive, relatively inexpensive, and ubiquitously available in modern medical centers. All MRI-based CMRO2 methods are based on modeling the effect of paramagnetic deoxyhemoglobin on the magnetic resonance signal. The various methods can be classified in terms of the MRI contrast mechanism used to quantify CMRO2: T2*, T2′, T2, or magnetic susceptibility. This review article provides an overview of MRI-based CMRO2 quantification techniques. After a brief historical discussion motivating the need for improved CMRO2 methodology, current state-of-the-art MRI-based methods are critically appraised in terms of their respective tradeoffs between spatial resolution, temporal resolution, and robustness, all of critical importance given the spatially heterogeneous and temporally dynamic nature of brain energy requirements. PMID:27089912
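    All of the methods surveyed ultimately rest on the Fick principle, CMRO2 = CBF × CaO2 × OEF. With typical literature values (assumed here purely for illustration) this yields a physiologically plausible figure:

```python
# Fick-principle relation underlying CMRO2 quantification:
#   CMRO2 = CBF x CaO2 x OEF
cbf = 50.0    # cerebral blood flow, mL blood per 100 g per min (typical value)
cao2 = 19.0   # arterial O2 content, mL O2 per 100 mL blood (assumed)
oef = 0.40    # oxygen extraction fraction (assumed)

cmro2 = cbf * (cao2 / 100.0) * oef  # mL O2 per 100 g per min
print(round(cmro2, 2))  # 3.8, within the commonly cited healthy range
```

    MRI-based techniques differ chiefly in which of these factors they measure directly (e.g. CBF via arterial spin labeling, OEF via deoxyhemoglobin-sensitive contrasts) and which they model.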

  15. A novel magnetic resonance imaging segmentation technique for determining diffuse intrinsic pontine glioma tumor volume

    PubMed Central

    Singh, Ranjodh; Zhou, Zhiping; Tisnado, Jamie; Haque, Sofia; Peck, Kyung K.; Young, Robert J.; Tsiouris, Apostolos John; Thakur, Sunitha B.; Souweidane, Mark M.

    2017-01-01

    OBJECTIVE Accurately determining diffuse intrinsic pontine glioma (DIPG) tumor volume is clinically important. The aims of the current study were to 1) measure DIPG volumes using methods that require different degrees of subjective judgment; and 2) evaluate interobserver agreement of measurements made using these methods. METHODS Eight patients from a Phase I clinical trial testing convection-enhanced delivery (CED) of a therapeutic antibody were included in the study. Pre-CED, post–radiation therapy axial T2-weighted images were analyzed using 2 methods requiring high degrees of subjective judgment (picture archiving and communication system [PACS] polygon and Volume Viewer auto-contour methods) and 1 method requiring a low degree of subjective judgment (k-means clustering segmentation) to determine tumor volumes. Lin’s concordance correlation coefficients (CCCs) were calculated to assess interobserver agreement. RESULTS The CCCs of measurements made by 2 observers with the PACS polygon and the Volume Viewer auto-contour methods were 0.9465 (lower 1-sided 95% confidence limit 0.8472) and 0.7514 (lower 1-sided 95% confidence limit 0.3143), respectively. Both were considered poor agreement. The CCC of measurements made using k-means clustering segmentation was 0.9938 (lower 1-sided 95% confidence limit 0.9772), which was considered substantial strength of agreement. CONCLUSIONS The poor interobserver agreement of the PACS polygon and Volume Viewer auto-contour methods highlighted the difficulty in consistently measuring DIPG tumor volumes using methods requiring high degrees of subjective judgment. k-means clustering segmentation, which requires a low degree of subjective judgment, showed better interobserver agreement and produced tumor volumes with delineated borders. PMID:27391980
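    The low-subjectivity approach can be illustrated with a minimal intensity-based k-means (k = 2) segmentation on a synthetic slice (a sketch only; the study's actual clustering setup and the voxel dimensions below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic axial slice: a bright "tumor" disc on a darker background.
img = rng.normal(100.0, 10.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
true_mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img[true_mask] = rng.normal(200.0, 10.0, true_mask.sum())

# Minimal 1-D k-means (k = 2) on voxel intensities.
vals = img.ravel()
centers = np.array([vals.min(), vals.max()])
for _ in range(20):
    labels = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([vals[labels == k].mean() for k in (0, 1)])

tumor_mask = (labels == centers.argmax()).reshape(img.shape)
voxel_vol_mm3 = 0.5 * 0.5 * 3.0  # hypothetical in-plane resolution x slice thickness
volume_mm3 = tumor_mask.sum() * voxel_vol_mm3
# tumor_mask.sum() recovers the disc's pixel count with no manual contouring.
```

    Because the cluster assignment depends only on intensities, two observers running the same algorithm obtain the same mask, which is the source of the high interobserver agreement reported above.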

  16. Design of laser diode driver with constant current and temperature control system

    NASA Astrophysics Data System (ADS)

    Wang, Ming-cai; Yang, Kai-yong; Wang, Zhi-guo; Fan, Zhen-fang

    2017-10-01

    A laser diode (LD) driver with a constant-current and temperature control system is designed according to the LD working characteristics. We investigated the protection circuit and the temperature control circuit, which are based on a thermoelectric cooler (TEC) cooling circuit and a PID algorithm. The driver realizes constant-current output and achieves stable temperature control of the LD. A real-time feedback control method was adopted in the temperature control system to keep the LD at its optimal temperature point. The output power variation and output wavelength shift of the LD caused by current and temperature instability were decreased. Furthermore, the driving current and working temperature are adjustable according to specific requirements. Experimental results showed that the developed driver meets the operating requirements of the LD.

  17. Measurement technology of RF interference current in high current system

    NASA Astrophysics Data System (ADS)

    Zhao, Zhihua; Li, Jianxuan; Zhang, Xiangming; Zhang, Lei

    2018-06-01

    The current probe is a detection method commonly used in electromagnetic compatibility (EMC) testing. With the development of power electronics technology, the power level of power conversion devices is constantly increasing, and the power current of the electric energy conversion device in an electromagnetic launch system can reach 10 kA. Current probes conventionally used in EMC detection cannot meet the test requirements of such high-current systems due to magnetic saturation. The conventional high-current sensor is also not suitable for measuring radio-frequency (RF) interference currents in high-current power devices, due to the high noise level at the output of its active amplifier. In this paper, a passive flexible current probe based on a Rogowski coil and a matching resistance is proposed that can withstand high currents and has a low noise level, to solve the problem of measuring interference currents in high-current power converters. Both differential-mode and common-mode current detection can be easily carried out with the proposed probe because of its flexible structure.
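    The principle behind such a probe, an air-cored Rogowski coil that cannot magnetically saturate and whose output is proportional to the derivative of the enclosed current, can be sketched numerically (all component values below are illustrative assumptions):

```python
import numpy as np

fs = 10e6                 # sample rate (Hz)
t = np.arange(0.0, 1e-3, 1.0 / fs)
M = 100e-9                # coil mutual inductance (H), an assumed value

# 150 kHz RF interference current riding on a 10 kA bus current.
i = 10e3 + 5.0 * np.sin(2 * np.pi * 150e3 * t)

# The air-cored coil outputs M * di/dt, so it cannot saturate and the
# huge DC component is rejected outright.
v = M * np.gradient(i, 1.0 / fs)

# Numerically integrating v/M recovers the AC interference current.
i_rec = np.cumsum(v) / (M * fs)
i_ac = i_rec - i_rec.mean()  # remove the small integration offset
# i_ac peaks near the 5 A interference amplitude, with the 10 kA DC removed.
```

    In a practical passive probe the integration is performed by the coil's own inductance against the matching resistance rather than in software, but the saturation immunity and DC rejection shown here are the same.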

  18. A NEW APPROACH TO PIP CROP MONITORING USING REMOTE SENSING

    EPA Science Inventory

    Current plantings of 25+ million acres of transgenic corn in the United States require a new approach to monitor this important crop for the development of pest resistance. Remote sensing by aerial or satellite images may provide a method of identifying transgenic pesticidal cro...

  19. RAPID DETECTION METHOD FOR E.COLI, ENTEROCOCCI AND BACTEROIDES IN RECREATIONAL WATER

    EPA Science Inventory

    Current methodology for determining fecal contamination of drinking water sources and recreational waters relies on the time-consuming process of bacterial multiplication and requires at least 24 hours from the time of sampling to the possible determination that the water is unsafe ...

  20. Food pathogen detection using Ag nanorod-based surface plasmon resonance sensor

    USDA-ARS?s Scientific Manuscript database

    Food safety is a worldwide issue for protecting public health. Many researchers have been working on the development of biosensors for pathogenic bacteria detection. However, current biosensing methods and techniques do not meet the demanding requirements for a biosensor in terms of sensitivity, speci...

  1. Information Measures for Statistical Orbit Determination

    ERIC Educational Resources Information Center

    Mashiku, Alinda K.

    2013-01-01

    The current Space Situational Awareness (SSA) enterprise is faced with the huge task of tracking an increasing number of space objects. The tracking of space objects requires frequent and accurate monitoring for orbit maintenance and collision avoidance using methods for statistical orbit determination. Statistical orbit determination enables us to obtain…

  2. Management Tools for Bus Maintenance: Current Practices and New Methods. Final Report.

    ERIC Educational Resources Information Center

    Foerster, James; And Others

    Management of bus fleet maintenance requires systematic recordkeeping, management reporting, and work scheduling procedures. Tools for controlling and monitoring routine maintenance activities are in common use. These include defect and fluid consumption reports, work order systems, historical maintenance records, and performance and cost…

  3. ASSESSING THE EFFECTS OF DICHLOROACETIC ACID (DCA) USING A MULTI-ENDPOINT MEDAKA ASSAY

    EPA Science Inventory

    In regulating the safety of water, EPA makes decisions on what chemical contaminants to regulate and at what levels. To make these decisions, the EPA needs hazard identification and dose-response information. Current rodent methods for generating required information have limita...

  4. THE NEED FOR SPEED-RAPID METHODOLOGIES TO DETERMINE BATHING BEACH WATER QUALITY

    EPA Science Inventory

    Current methods for determining fecal contamination of recreational waters rely on the culture of bacterial indicators and require at least 24 hours to determine whether the water is unsafe for use. By the time monitoring results are available, exposures have already occurred. N...

  5. Effects of Toluene, Acrolein and Vinyl Chloride on Motor Activity of Drosophila Melanogaster

    EPA Science Inventory

    The data generated by current high-throughput assays for chemical toxicity require information to link effects at molecular targets to adverse outcomes in whole animals. In addition, more efficient methods for testing volatile chemicals are needed. Here we begin to address these ...

  6. Incentive Pay for Remotely Piloted Aircraft Career Fields

    DTIC Science & Technology

    2012-01-01

    [Front-matter extraction fragment.] Summary: ...manning requirement, even with the current incentive pays and reenlistment bonuses. The mathematical foundations, data, and estimation methods for the...

  7. Testing the efficacy of eGFP-transformed Aspergillus flavus as biocontrol strains

    USDA-ARS?s Scientific Manuscript database

    Current biological control methods to prevent pre-harvest aflatoxin contamination of corn, cottonseed, and ground and tree nuts involve field inoculation of non-aflatoxigenic Aspergillus flavus. To date, the efficacy of this approach requires annual reapplication of the biocontrol agent. The reason ...

  8. EVALUTION OF COGNITIVE FUNCTION IN WEANLING RATS: A REVIEW OF METHODS SUITABLE FOR CHEMICAL SCREENING.

    EPA Science Inventory

    The current developmental neurotoxicity (DNT) guidelines for environmental agents require cognitive testing around the age of weaning as well as adulthood. There are challenges associated with testing weanling rodents that are not present with testing older subjects, including r...

  9. 46 CFR 188.35-1 - Standards to be used.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS GENERAL... subchapter an item, or method of construction, or testing is required to meet the standards established by the American Bureau of Shipping, the current standards in effect at the time of construction of the...

  10. Global disaster satellite communications system for disaster assessment and relief coordination

    NASA Technical Reports Server (NTRS)

    Leroy, B. E.

    1979-01-01

    The global communication requirements for disaster assistance are analyzed, and operationally feasible satellite system concepts and the associated system parameters are examined. Some potential problems associated with the current method of providing disaster assistance, and a scenario for disaster assistance relying on satellite communications, are described. Historical statistics are used with the scenario to assess service requirements. Both present and planned commercially available systems are considered. The associated global disaster communication yearly service costs are estimated.

  11. Compensation of Verdet Constant Temperature Dependence by Crystal Core Temperature Measurement

    PubMed Central

    Petricevic, Slobodan J.; Mihailovic, Pedja M.

    2016-01-01

    Compensation of the temperature dependence of the Verdet constant in a polarimetric extrinsic Faraday sensor is of major importance for applying the magneto-optical effect to AC current measurements and magnetic field sensing. This paper presents a method for compensating the temperature effect on the Faraday rotation in a Bi12GeO20 crystal by sensing its optical activity effect on the polarization of a light beam. The method measures the temperature of the same volume of crystal that effects the beam polarization in a magnetic field or current sensing process. This eliminates the effect of temperature difference found in other indirect temperature compensation methods, thus allowing more accurate temperature compensation for the temperature dependence of the Verdet constant. The method does not require additional changes to an existing Δ/Σ configuration and is thus applicable for improving the performance of existing sensing devices. PMID:27706043
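    The compensation logic can be sketched as follows, assuming a linear temperature dependence of the Verdet constant (all coefficients below are illustrative assumptions, not measured values from the paper):

```python
# Linear temperature model of the Verdet constant: V(T) = V0 * (1 + alpha*(T - T0)).
V0 = 0.052        # Verdet constant at T0, rad/(T*m) -- illustrative value
alpha = 1.5e-4    # relative temperature coefficient, 1/degC (assumed)
T0 = 25.0         # reference temperature, degC
L = 0.01          # optical path length in the crystal, m (assumed)
B_per_amp = 2e-4  # magnetic field per ampere of measured current, T/A (assumed)

def faraday_rotation(i_amps, temp_c):
    """Polarization rotation produced by current i_amps at crystal temperature temp_c."""
    return V0 * (1.0 + alpha * (temp_c - T0)) * B_per_amp * i_amps * L

theta = faraday_rotation(100.0, 65.0)  # hot crystal, true current 100 A

# Naive readout assumes the room-temperature Verdet constant and is biased high.
i_naive = theta / (V0 * B_per_amp * L)
# Compensated readout uses the core temperature sensed via the crystal's own
# optical activity (here simply taken as the known 65 degC).
T_meas = 65.0
i_comp = theta / (V0 * (1.0 + alpha * (T_meas - T0)) * B_per_amp * L)
print(round(i_naive, 2), round(i_comp, 2))  # 100.6 100.0
```

    The paper's contribution is that T_meas comes from the optical activity of the same crystal volume that performs the Faraday rotation, so the compensation uses the true core temperature rather than that of a nearby external sensor.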

  12. Sonochemical approaches to enhanced oil recovery.

    PubMed

    Abramov, Vladimir O; Abramova, Anna V; Bayazitov, Vadim M; Altunina, Lyubov K; Gerasin, Artyom S; Pashin, Dmitriy M; Mason, Timothy J

    2015-07-01

    Oil production from wells declines with time, and a well eventually becomes uneconomic unless enhanced oil recovery (EOR) methods are applied. There are a number of methods currently available, and each has specific advantages and disadvantages depending on conditions. Currently there is a large demand for new or improved technologies in this field; the hope is that these might also be applicable to wells which have already been the subject of EOR. The sonochemical method of EOR is one of the most promising methods and is important in that it can also be applied to the treatment of horizontal wells. The present article reports the theoretical background of the developed sonochemical technology for EOR in horizontal wells and describes the requirements for the equipment needed to implement the technology. The results of the first field tests of the technology are reported. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Nurse manager succession planning: synthesis of the evidence.

    PubMed

    Titzer, Jennifer; Phillips, Tracy; Tooley, Stephanie; Hall, Norma; Shirey, Maria

    2013-10-01

    The literature supporting nurse manager succession planning is reviewed and synthesised to discover best practice for identifying and developing future nurse managers. Healthcare succession planning practices are lacking. Nurse managers are historically selected based on clinical skills and lack formal leadership preparation. A systematic literature search appraises and summarises the current literature supporting nurse manager succession planning. Multiple reviewers were used to increase the reliability and validity of article selection and analysis. New nurse managers require months to adapt to their positions. Deliberate nurse manager succession planning should be integrated in the organisation's strategic plan and provide a proactive method for identifying and developing potential leaders. Organisations that identify and develop internal human capital can improve role transition, reduce nurse manager turnover rates and decrease replacement costs. Despite the clear benefits of succession planning, studies show that resource allocation for proactive, deliberate development of current and future nurse leaders is lacking. Additionally, systematic evaluation of succession planning is limited. Deliberate succession planning efforts and appropriate resource allocation require strategic planning and evaluation methods. Detailed evaluation methods demonstrating a positive return on investment utilising a cost-benefit analysis and empirical outcomes are necessary. © 2013 John Wiley & Sons Ltd.

  14. HIV Epidemic Appraisals for Assisting in the Design of Effective Prevention Programmes: Shifting the Paradigm Back to Basics

    PubMed Central

    Mishra, Sharmistha; Sgaier, Sema K.; Thompson, Laura H.; Moses, Stephen; Ramesh, B. M.; Alary, Michel; Wilson, David; Blanchard, James F.

    2012-01-01

    Background To design HIV prevention programmes, it is critical to understand the temporal and geographic aspects of the local epidemic and to address the key behaviours that drive HIV transmission. Two methods have been developed to appraise HIV epidemics and guide prevention strategies. The numerical proxy method classifies epidemics based on current HIV prevalence thresholds. The Modes of Transmission (MOT) model estimates the distribution of incidence over one year among risk-groups. Both methods focus on the current state of an epidemic and provide short-term metrics which may not capture the epidemiologic drivers. Through a detailed analysis of country and sub-national data, we explore the limitations of the two traditional methods and propose an alternative approach. Methods and Findings We compared outputs of the traditional methods in five countries for which results were published, and applied the numeric and MOT model to India and six districts within India. We discovered three limitations of the current methods for epidemic appraisal: (1) their results failed to identify the key behaviours that drive the epidemic; (2) they were difficult to apply to local epidemics with heterogeneity across district-level administrative units; and (3) the MOT model was highly sensitive to input parameters, many of which required extraction from non-regional sources. We developed an alternative decision-tree framework for HIV epidemic appraisals, based on a qualitative understanding of epidemiologic drivers, and demonstrated its applicability in India. The alternative framework offered a logical algorithm to characterize epidemics; it required minimal but key data. Conclusions Traditional appraisals that utilize the distribution of prevalent and incident HIV infections in the short-term could misguide prevention priorities and potentially impede efforts to halt the trajectory of the HIV epidemic. 
An approach that characterizes local transmission dynamics provides a potentially more effective tool with which policy makers can design intervention programmes. PMID:22396756

  15. Lithium Battery Transient Response as a Diagnostic Tool

    NASA Astrophysics Data System (ADS)

    Denisov, E.; Nigmatullin, R.; Evdokimov, Y.; Timergalina, G.

    2018-05-01

    Lithium batteries are currently used as the main energy storage for electronic devices. Progress in the field of portable electronic devices is significantly determined by the improvement of their weight/dimensional characteristics and specific capacity. In addition to the high reliability required of lithium batteries, in some critical applications proper diagnostics are required. Corresponding techniques allow prediction and prevention of operation interruption and avoidance of expensive battery replacement, and also provide additional benefits. Many effective diagnostic methods have been suggested; however, most of them require expensive experimental equipment, as well as interruption or strong perturbation of the operating mode. In the framework of this investigation, a simple diagnostic method based on analysis of transient processes is proposed. The transient response is considered as a reaction to an applied load variation that typically corresponds to normal operating conditions for most real applications. The transient response contains the same information as the impedance characteristic for the system operating in linear mode. Taking into account the large number of publications describing the impedance response associated with diagnostic methods, it can be assumed that the transient response contains a sufficient amount of information for creation of effective diagnostic systems. The proposed experimental installation is based on a controlled load, providing current variation, measuring equipment, and data processing electronics. It is proposed to use the second exponent parameters U2 and β to estimate the state of charge for secondary lithium batteries. The proposed method improves the accuracy and reliability of a set of quantitative parameters associated with electrochemical energy sources.
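    A minimal version of such a fit, extracting second-term parameters U2 and β from a synthetic voltage transient, might look like this (the functional form, time constants, and noise level are all assumptions for illustration, not the paper's model):

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed transient model after a step load change: an asymptote minus a fast
# exponential and a slower stretched-exponential term with amplitude u2 and
# shape parameter beta (the quantities proposed for state-of-charge estimation).
def transient(t, u0, u1, tau1, u2, beta):
    return u0 - u1 * np.exp(-t / tau1) - u2 * np.exp(-(t / 10.0) ** beta)

t = np.linspace(0.01, 60.0, 600)  # seconds after the load step
rng = np.random.default_rng(3)
true_params = (3.7, 0.05, 1.5, 0.12, 0.8)
v = transient(t, *true_params) + 5e-4 * rng.standard_normal(t.size)

popt, _ = curve_fit(transient, t, v, p0=(3.6, 0.1, 1.0, 0.1, 1.0))
u2_hat, beta_hat = popt[3], popt[4]
# u2_hat and beta_hat recover the second-term parameters (0.12 and 0.8 here).
```

    The appeal of the approach is that the load step occurs anyway during normal operation, so the diagnostic requires only voltage/current logging and an off-line fit rather than dedicated impedance-spectroscopy hardware.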

  16. First-principles simulations of heat transport

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    2017-11-01

    Advances in understanding heat transport in solids were recently reported by both experiment and theory. However, an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close-to-equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition, the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.

  17. Light transport and general aviation aircraft icing research requirements

    NASA Technical Reports Server (NTRS)

    Breeze, R. K.; Clark, G. M.

    1981-01-01

    A short term and a long term icing research and technology program plan was drafted for NASA LeRC based on 33 separate research items. The specific items listed resulted from a comprehensive literature search, organized and assisted by a computer management file and an industry/Government agency survey. Assessment of the current facilities and icing technology was accomplished by presenting summaries of ice sensitive components and protection methods; and assessments of penalty evaluation, the experimental data base, ice accretion prediction methods, research facilities, new protection methods, ice protection requirements, and icing instrumentation. The intent of the research plan was to determine what icing research NASA LeRC must do or sponsor to ultimately provide for increased utilization and safety of light transport and general aviation aircraft.

  18. Methods for Quantification of Soil-Transmitted Helminths in Environmental Media: Current Techniques and Recent Advances.

    PubMed

    Collender, Philip A; Kirby, Amy E; Addiss, David G; Freeman, Matthew C; Remais, Justin V

    2015-12-01

    Limiting the environmental transmission of soil-transmitted helminths (STHs), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost-effective methods to detect and quantify STHs in the environment. We review the state-of-the-art of STH quantification in soil, biosolids, water, produce, and vegetation with regard to four major methodological issues: environmental sampling; recovery of STHs from environmental matrices; quantification of recovered STHs; and viability assessment of STH ova. We conclude that methods for sampling and recovering STHs require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Review of Railgun Modeling Techniques: The Computation of Railgun Force and Other Key Factors

    NASA Astrophysics Data System (ADS)

    Eckert, Nathan James

    Currently, railgun force modeling either uses the simple "railgun force equation" or finite element methods. It is proposed here that a middle ground exists that does not require the solution of partial differential equations, is more readily implemented than finite element methods, and is more accurate than the traditional force equation. To develop this method, it is necessary to examine the core railgun factors: power supply mechanisms, the distribution of current in the rails and in the projectile which slides between them (called the armature), the magnetic field created by the current flowing through these rails, the inductance gradient (a key factor in simplifying railgun analysis, referred to as L'), the resultant Lorentz force, and the heating which accompanies this action. Common power supply technologies are investigated, and the shape of their current pulses is modeled. The main causes of current concentration are described, and a rudimentary method for computing current distribution in solid rails and a rectangular armature is shown to have promising accuracy with respect to outside finite element results. The magnetic field is modeled with two methods using the Biot-Savart law, and generally good agreement is obtained with respect to finite element methods (5.8% error on average). To get this agreement, a factor of 2 is added to the original formulation after a consistent offset relative to FEM results was observed. Three inductance gradient calculations are assessed, and though all agree with FEM results, the Kerrisk method and a regression analysis method developed by Murugan et al. (referred to as the LRM here) perform the best. Six railgun force computation methods are investigated, including the traditional railgun force equation, an equation produced by Waindok and Piekielny, and four methods inspired by the work of Xu et al. 
Overall, good agreement between the models and outside data is found, but each model's accuracy varies significantly between comparisons. Lastly, an approximation of the temperature profile in railgun rails originally presented by McCorkle and Bahder is replicated. In total, this work describes railgun technology and moderately complex railgun modeling methods, but is inconclusive about the presence of a middle-ground modeling method.
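    For reference, the "traditional railgun force equation" mentioned above is F = ½ L′ I². The sketch below integrates that force over an idealized half-sine current pulse to get a muzzle velocity; the values of L′, armature mass, and the pulse shape are assumptions for illustration, not figures from the thesis.

```python
import numpy as np

L_PRIME = 0.5e-6   # inductance gradient L' [H/m], a typical order of magnitude
MASS = 0.02        # armature mass [kg] (assumed)

def railgun_force(i):
    """Traditional railgun force equation: F = 0.5 * L' * I**2."""
    return 0.5 * L_PRIME * i**2

# Half-sine current pulse, a common idealization of a capacitor-bank discharge.
T = 2e-3                                   # pulse duration [s]
t = np.linspace(0.0, T, 2000)
i = 0.5e6 * np.sin(np.pi * t / T)          # 0.5 MA peak current

a = railgun_force(i) / MASS                # acceleration history [m/s^2]
# Trapezoidal integration of acceleration -> ideal muzzle velocity,
# ignoring friction, ablation, and back-EMF effects.
v_exit = float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(t)))
```

For this pulse the integral works out to about 3.1 km/s, which shows why the simple equation is attractive as a first estimate even though it ignores current-distribution effects.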

  20. Environmentally safe aviation fuels

    NASA Technical Reports Server (NTRS)

    Liberio, Patricia D.

    1995-01-01

    In response to the Air Force directive to remove Ozone Depleting Chemicals (ODCs) from military specifications and the Defense Logistics Agency's Hazardous Waste Minimization Program, we are faced with how to ensure a quality aviation fuel without using such chemicals. Many of these chemicals are found throughout the fuel and fuel-related military specifications and are part of test methods that help qualify the properties and quality of the fuels before they are procured. Many years ago there was a directive for military specifications to use commercially standard test methods in order to provide standard testing in private industry and government. As a result, the test methods used in military specifications are governed by the American Society for Testing and Materials (ASTM). The Air Force has been very proactive in the removal or replacement of the ODCs and hazardous materials in these test methods. For example, ASTM D3703 (Standard Test Method for Peroxide Number of Aviation Turbine Fuels) requires the use of Freon 113, a known ODC. A new rapid, portable hydroperoxide test for jet fuels similar to ASTM D3703 that does not require the use of ODCs has been developed. This test has proved, in limited testing, to be a viable substitute method for ASTM D3703. The Air Force is currently conducting a round robin to allow the method to be accepted by ASTM and therefore replace the current method. This paper will describe the Air Force's initiatives to remove ODCs and hazardous materials from the fuel and fuel-related military specifications for which the Air Force Wright Laboratory is responsible.

  1. A simplified 4-site economical intradermal post-exposure rabies vaccine regimen: a randomised controlled comparison with standard methods.

    PubMed

    Warrell, Mary J; Riddell, Anna; Yu, Ly-Mee; Phipps, Judith; Diggle, Linda; Bourhy, Hervé; Deeks, Jonathan J; Fooks, Anthony R; Audry, Laurent; Brookes, Sharon M; Meslin, François-Xavier; Moxon, Richard; Pollard, Andrew J; Warrell, David A

    2008-04-23

    The need for economical rabies post-exposure prophylaxis (PEP) is increasing in developing countries. Implementation of the two currently approved economical intradermal (ID) vaccine regimens is restricted due to confusion over different vaccines, regimens and dosages, lack of confidence in intradermal technique, and pharmaceutical regulations. We therefore compared a simplified 4-site economical PEP regimen with standard methods. Two hundred and fifty-four volunteers were randomly allocated to a single blind controlled trial. Each received purified vero cell rabies vaccine by one of four PEP regimens: the currently accepted 2-site ID; the 8-site regimen using 0.05 ml per ID site; a new 4-site ID regimen (on day 0, approximately 0.1 ml at 4 ID sites, using the whole 0.5 ml ampoule of vaccine; on day 7, 0.1 ml ID at 2 sites and at one site on days 28 and 90); or the standard 5-dose intramuscular regimen. All ID regimens required the same total amount of vaccine, 60% less than the intramuscular method. Neutralising antibody responses were measured five times over a year in 229 people, for whom complete data were available. All ID regimens showed similar immunogenicity. The intramuscular regimen gave the lowest geometric mean antibody titres. Using the rapid fluorescent focus inhibition test, some sera had unexpectedly high antibody levels that were not attributable to previous vaccination. The results were confirmed using the fluorescent antibody virus neutralisation method. This 4-site PEP regimen proved as immunogenic as current regimens, and has the advantages of requiring fewer clinic visits, being more practicable, and having a wider margin of safety, especially in inexperienced hands, than the 2-site regimen. It is more convenient than the 8-site method, and can be used economically with vaccines formulated in 1.0 or 0.5 ml ampoules. The 4-site regimen now meets all requirements of immunogenicity for PEP and can be introduced without further studies. 
Controlled-Trials.com ISRCTN 30087513.

  2. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    PubMed Central

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring the seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  3. Genetic algorithm optimization of transcutaneous energy transmission systems for implantable ventricular assist devices.

    PubMed

    Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori

    2013-01-01

    Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VAD), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters leading to sub-optimal solutions. In this paper we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a more optimal design. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared to a traditional manual design method.
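    The genetic-algorithm approach described here can be sketched with a minimal elitist GA. This is an illustration, not the authors' implementation: the four-parameter design vector, bounds, and the stand-in fitness function are invented; in the paper the fitness would score link efficiency and power variability from an inductive-link circuit model.

```python
import random
random.seed(1)

# Stand-in fitness: peak at a known "optimal" parameter vector so the
# sketch is self-checking. A real TETS fitness would evaluate a circuit model.
TARGET = [2.0, 0.5, 10.0, 0.1]
BOUNDS = [(0, 5), (0, 1), (0, 20), (0, 1)]

def fitness(p):
    return -sum((a - b) ** 2 for a, b in zip(p, TARGET))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(p, rate=0.2):
    # Resample each gene within its bounds with probability `rate`.
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(p, BOUNDS)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Evolve: keep the 10 best, refill the population with mutated crossovers.
pop = [random_individual() for _ in range(60)]
for _ in range(150):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(50)]
best = max(pop, key=fitness)
```

The appeal over a manual sweep is that all parameters are searched jointly, so coupled trade-offs (here, between coil geometry terms) are not collapsed away.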

  4. Some advanced parametric methods for assessing waveform distortion in a smart grid with renewable generation

    NASA Astrophysics Data System (ADS)

    Alfieri, Luisa

    2015-12-01

    Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
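    Of the parametric methods compared, Prony analysis fits a sum of damped sinusoids and can be sketched compactly: estimate linear-prediction coefficients by least squares, take the poles as roots of the prediction polynomial, then recover amplitudes. The signal, sampling rate, and model order below are assumptions for illustration, not the paper's test waveforms.

```python
import numpy as np

def prony(x, order):
    """Classical Prony analysis: fit x[n] ~ sum_k A_k * z_k**n.
    Returns the poles z_k and complex amplitudes A_k."""
    N = len(x)
    # 1) Linear-prediction coefficients from least squares:
    #    x[n] = a_0*x[n-1] + ... + a_{p-1}*x[n-p]
    A = np.column_stack([x[order - 1 - k : N - 1 - k] for k in range(order)])
    a = np.linalg.lstsq(A, x[order:], rcond=None)[0]
    # 2) Poles are the roots of z**p - a_0*z**(p-1) - ... - a_{p-1}.
    z = np.roots(np.concatenate(([1.0], -a)))
    # 3) Complex amplitudes from a Vandermonde least-squares fit.
    V = np.vander(z, N, increasing=True).T
    amps = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, amps

fs = 1000.0                       # sampling rate [Hz] (assumed)
n = np.arange(200)
# A decaying 50 Hz component, as might appear on a converter-fed waveform.
x = np.exp(-5.0 * n / fs) * np.cos(2 * np.pi * 50.0 * n / fs)

z, amps = prony(x, order=2)
freq = abs(np.angle(z[0])) * fs / (2 * np.pi)   # recovered frequency [Hz]
damp = -np.log(abs(z[0])) * fs                  # recovered damping [1/s]
```

Unlike the fixed-window IEC/DFT approach, the pole estimates carry frequency and damping directly, which is why Prony-type methods suit time-varying distortion at the cost of sensitivity to model order and noise.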

  5. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. 
We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  6. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  7. Scaling a Human Body Finite Element Model with Radial Basis Function Interpolation

    DTIC Science & Technology

    Human body models are currently used to evaluate the body’s response to a variety of threats to the Soldier. The ability to adjust the size of human...body models is currently limited because of the complex shape changes that are required. Here, a radial basis function interpolation method is used to...morph the shape on an existing finite element mesh. Tools are developed and integrated into the Blender computer graphics software to assist with

  8. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    PubMed

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
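    One of the comparison baselines named above, naïve Bayes, is small enough to sketch from scratch. The toy corpus and tokenization below are invented for illustration; the paper's experiments used real spam datasets.

```python
from collections import Counter
import math

# Toy labeled corpus (hypothetical): 1 = spam, 0 = ham.
train = [("buy cheap pills now", 1), ("cheap pills cheap", 1),
         ("meeting agenda attached", 0), ("project meeting tomorrow", 0)]

def fit(docs):
    """Count word occurrences and document frequencies per class."""
    counts = {0: Counter(), 1: Counter()}
    ndocs = Counter()
    for text, label in docs:
        counts[label].update(text.split())
        ndocs[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, ndocs, vocab

def predict(text, counts, ndocs, vocab):
    """Multinomial naive Bayes with Laplace (add-one) smoothing."""
    best, best_lp = None, -math.inf
    for label in (0, 1):
        lp = math.log(ndocs[label] / sum(ndocs.values()))   # class prior
        total = sum(counts[label].values())
        for w in text.split():
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
label = predict("cheap pills", *model)   # classified as spam (1)
```

The paper's point is precisely that such count-based baselines need far more training data than this toy to reach high accuracy, which is the gap the cognitive-bias models aim to close.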

  9. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
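    A generic way to sanity-check any such sample-size formula is Monte Carlo: simulate data at a candidate n, fit the model, and count rejections. The sketch below does this for simple logistic regression with a Wald test; it is a brute-force check, not the paper's logit-normal method, and the effect size, covariate distribution, and replication count are illustrative assumptions.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson ML fit; returns coefficients and standard errors."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])            # Fisher information
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return beta, se

def power(n, beta0, beta1, reps=300, z=1.96):
    """Monte Carlo power of the Wald test for the slope, with a
    standard-normal covariate."""
    rng = np.random.default_rng(2)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x])
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = (rng.random(n) < p).astype(float)
        b, se = fit_logistic(X, y)
        hits += abs(b[1] / se[1]) > z
    return hits / reps

pw = power(n=200, beta0=0.0, beta1=0.5)   # roughly 0.9 at this effect size
```

Searching over n for the smallest value with `power(n, ...)` above the target (e.g. 0.8) gives a simulation-based sample size to compare against closed-form answers.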

  10. Attitude maneuvers of a solar-powered electric orbital transfer vehicle

    NASA Astrophysics Data System (ADS)

    Jenkin, Alan B.

    1992-08-01

    Attitude maneuver requirements of a solar-powered electric orbital transfer vehicle have been studied in detail. This involved evaluation of the yaw, pitch, and roll profiles and associated angular accelerations needed to simultaneously steer the vehicle thrust vector and maintain the solar array pointed toward the sun. Maintaining the solar array pointed exactly at the sun leads to snap roll maneuvers which have very high (theoretically unbounded) accelerations, thereby imposing large torque requirements. The problem is exacerbated by the large solar arrays which are needed to generate the high levels of power needed by electric propulsion devices. A method of eliminating the snap roll maneuvers is presented. The method involves the determination of relaxed roll profiles which approximate a forced transition between alternate exact roll profiles and incur only small errors in solar array pointing. The method makes it feasible to perform the required maneuvers using currently available attitude control technology such as reaction wheels, hot gas jets, or gimballed main engines.

  11. Psychiatry Residency Education in Canada: Past, Present and Future

    ERIC Educational Resources Information Center

    Saperson, Karen

    2013-01-01

    Objective: This article provides a brief overview of the history of psychiatry residency training in Canada, and outlines the rationale for the current training requirements, changes to the final certification examination, and factors influencing future trends in psychiatry education and training. Method: The author compiled findings and reports on…

  12. Using the Scientific Method to Improve Mentoring

    ERIC Educational Resources Information Center

    McGuire, Saundra Yancy

    2007-01-01

    Many students who enter colleges and universities seem to be focused on memorizing and regurgitating information rather than on developing critical thinking and problem solving skills. Mentoring is crucial to help these students transition from the current approach to one that will be successful in college. Successful mentoring requires a…

  13. RAPIDLY-MEASURED INDICATORS OF RECREATIONAL WATER QUALITY ARE PREDICTIVE OF SWIMMING-ASSOCIATED GASTROINTESTINAL ILLNESS

    EPA Science Inventory

    Fecal indicator bacteria (FIB) are used to monitor recreational water quality worldwide. Current methods of measuring FIB require at least 24-hours for growth of bacterial colonies. We conducted studies at four Great Lake beaches to examine the relationship between novel and fas...

  14. A Study on Improving Information Processing Abilities Based on PBL

    ERIC Educational Resources Information Center

    Kim, Du Gyu; Lee, JaeMu

    2014-01-01

    This study examined an instruction method for the improvement of information processing abilities in elementary school students. Current elementary students are required to develop information processing abilities to create new knowledge for this digital age. There is, however, a shortage of instruction strategies for these information processing…

  15. IMPROVING PARTICULATE MATTER SOURCE APPORTIONMENT FOR HEALTH STUDIES: A TRAINED RECEPTOR MODELING APPROACH WITH SENSITIVITY, UNCERTAINTY AND SPATIAL ANALYSES

    EPA Science Inventory

    An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...

  16. THE USE OF ALTERNATIVE MATERIALS FOR DAILY COVER AT MUNICIPAL SOLID WASTE LANDFILLS. A Project Summary (EPA/600/SR-93/172)

    EPA Science Inventory

    This investigation was conducted to assess the applicability of currently available (ca. 1992) alternative materials for use as daily cover at landfills. Information on characteristics, material and equipment requirements, methods of preparation and application, climatic and ope...

  17. 76 FR 31342 - Agency Information Collection Activities; Proposed Collection; Comment Request; Current Good...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... product requirements are set forth. Section 211.173--Animals used in testing components, in-process... drug is adulterated if the methods used in, or the facilities or controls used for, its manufacture... can be used for evaluating, at least annually, the quality...

  18. Food Buying Guide for Child Nutrition Programs. Revised.

    ERIC Educational Resources Information Center

    Davis, Dorothy W.; And Others

    This guide is based on the latest federal regulations and meal pattern requirements for the National School Lunch and Breakfast Programs. It considers current food production and marketing techniques, packaging methods, grading standards, and changing food habits in the American population. The guide gives average yield information on over 600…

  19. Proficiency and Linguistic Complexity Influence Speech Motor Control and Performance in Spanish Language Learners

    ERIC Educational Resources Information Center

    Nip, Ignatius S. B.; Blumenfeld, Henrike K.

    2015-01-01

    Purpose: Second-language (L2) production requires greater cognitive resources to inhibit the native language and to retrieve less robust lexical representations. The current investigation identifies how proficiency and linguistic complexity, specifically syntactic and lexical factors, influence speech motor control and performance. Method: Speech…

  20. Using Longitudinal Scales Assessment for Instrumental Music Students

    ERIC Educational Resources Information Center

    Simon, Samuel H.

    2014-01-01

    In music education, current assessment trends emphasize student reflection, tracking progress over time, and formative as well as summative measures. This view of assessment requires instrumental music educators to modernize their approaches without interfering with methods that have proven to be successful. To this end, the Longitudinal Scales…

  1. Chemical imaging of secondary cell wall development in cotton fibers using a mid-infrared focal-plane array detector

    USDA-ARS?s Scientific Manuscript database

    Market demands for cotton varieties with improved fiber properties also call for the development of fast, reliable analytical methods for monitoring fiber development and measuring their properties. Currently, cotton breeders rely on instrumentation that can require significant amounts of sample, w...

  2. Rapid pasteurization of shell eggs using RF

    USDA-ARS?s Scientific Manuscript database

    A novel method for rapidly pasteurizing eggs in the shell could enhance the safety of the United States’ food supply. Current federal regulations do not require eggs sold in stores to be pasteurized, yet these eggs are often consumed raw or undercooked and cause untold cases of salmonella illness ea...

  3. RAPID MEASUREMENT OF BACTERIAL FECAL INDICATORS IN SURFACE WATERS BY QUANTITATIVE POLYMERASE CHAIN REACTION (QPCR) ANALYSIS

    EPA Science Inventory

    Current methods for determining fecal contamination of recreational waters rely on the culture of bacterial indicators and require at least 24 hours to determine whether the water is unsafe for use. By the time monitoring results are available, exposures have already occurred. N...

  4. The Effect of Brain Gym® on Academic Engagement for Children with Developmental Disabilities

    ERIC Educational Resources Information Center

    Watson, Andrea; Kelso, Ginger L.

    2014-01-01

    Following recent legislative initiatives in education requiring evidence-based practices, schools have implemented various instructional programs characterized as "evidence-based." However, it is important to question whether these methods are truly effective. One example of a methodology currently promoted and used in schools is an…

  5. Effects of Dynamically Weighting Autonomous Rules in an Unmanned Aircraft System (UAS) Flocking Model

    DTIC Science & Technology

    2014-09-18

    methods of flight plan optimization, and yielded such techniques as: parallel A* (Gudaitis, 1994), Multi-Objective Traveling Salesman algorithms... currently their utilization comes with a price: "Today's unmanned systems require significant human interaction to operate. As

  6. Current Treatment of Lower Gastrointestinal Hemorrhage

    PubMed Central

    Raphaeli, Tal; Menon, Raman

    2012-01-01

    Massive lower gastrointestinal bleeding is a significant and expensive problem that requires methodical evaluation, management, and treatment. After initial resuscitation, care should be taken to localize the site of bleeding. Once localized, lesions can then be treated with endoscopic or angiographic interventions, reserving surgery for ongoing or recurrent bleeding. PMID:24294124

  7. A review of microdialysis coupled to microchip electrophoresis for monitoring biological events

    PubMed Central

    Saylor, Rachel A.; Lunte, Susan M.

    2015-01-01

    Microdialysis is a powerful sampling technique that enables monitoring of dynamic processes in vitro and in vivo. The combination of microdialysis with chromatographic or electrophoretic separations and selective detection methods yields a “separation-based sensor” capable of monitoring multiple analytes in near real time. Analysis of microdialysis samples requires techniques that are fast (<1 min), have low volume requirements (nL–pL), and, ideally, can be employed on-line. Microchip electrophoresis fulfills these requirements and also permits the possibility of integrating sample preparation and manipulation with detection strategies directly on-chip. Microdialysis coupled to microchip electrophoresis has been employed for monitoring biological events in vivo and in vitro. This review discusses technical considerations for coupling microdialysis sampling and microchip electrophoresis, including various interface designs, and current applications in the field. PMID:25637011

  8. The mercedes-benz approach to γ-ray astronomy

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl W.

    1988-02-01

    The sensitivity requirements for ground-based γ-ray astronomy are reviewed in the light of the most reliable estimates of stellar fluxes above 100 GeV. Current data strongly favor the construction of detectors with the lowest energy thresholds. Since improvements in angular resolution are limited by shower fluctuations, better methods of rejecting hadronic showers must be found to reliably observe the known astrophysical sources. Several possible methods for reducing this hadronic background are discussed.

  9. Biomagnetic separation of Salmonella Typhimurium with high affine and specific ligand peptides isolated by phage display technique

    NASA Astrophysics Data System (ADS)

    Steingroewer, Juliane; Bley, Thomas; Bergemann, Christian; Boschke, Elke

    2007-04-01

    Analyses of food-borne pathogens are of great importance in order to minimize the health risk for consumers. Thus, very sensitive and rapid detection methods are required. Current conventional culture techniques are very time consuming. Modern immunoassays and biochemical analysis also require pre-enrichment steps resulting in a turnaround time of at least 24 h. Biomagnetic separation (BMS) is a promising, more rapid method. In this study we describe the isolation of high-affinity, specific peptides from a phage-peptide library, which, combined with BMS, allows the detection of Salmonella spp. with a sensitivity similar to that of immunomagnetic separation using antibodies.

  10. [Success factors in hospital management].

    PubMed

    Heberer, M

    1998-12-01

    The hospital environment of most Western countries is currently undergoing dramatic changes. Competition among hospitals is increasing, and economic issues have become decisive factors for the allocation of medical care. Hospitals therefore require management tools to respond to these changes adequately. The balanced scorecard is a method of enabling development and implementation of a business strategy that equally respects the financial requirements, the needs of the customers, process development, and organizational learning. This method was used to derive generally valid success factors for hospital management based on an analysis of an academic hospital in Switzerland. Strategic management, the focus of medical services, customer orientation, and integration of professional groups across the hospital value chain were identified as success factors for hospital management.

  11. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
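    The grid-convergence bookkeeping behind such refinement studies can be sketched with Richardson extrapolation: given solutions on three systematically refined grids with a constant refinement ratio, the observed order of accuracy follows from the ratio of successive solution differences. A minimal illustration (toy numbers, not from the paper):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three grids
    refined by a constant ratio r (Richardson extrapolation)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Toy solutions from a scheme whose error behaves like 0.5*h**2
# (design order 2), sampled at h = 0.4, 0.2, 0.1:
f1, f2, f3 = (1.0 + 0.5 * h**2 for h in (0.4, 0.2, 0.1))
p = observed_order(f1, f2, f3, r=2.0)
```

    If the observed order p falls well below the scheme's design order, the grids are not yet in the asymptotic range; and, as the paper notes, even attaining design order does not by itself guarantee that the error is acceptably low.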

  12. Basic materials and structures aspects for hypersonic transport vehicles (HTV)

    NASA Astrophysics Data System (ADS)

    Steinheil, E.; Uhse, W.

    A Mach 5 transport design is used to illustrate structural concepts, criteria for materials selection, and key technologies that must be pursued in the areas of computational methods, materials and construction methods. Aside from the primary criteria of low weight, low cost, and acceptable risk, a number of additional requirements must be met, including stiffness and strength, corrosion resistance, durability, and a construction adequate for inspection, maintenance and repair. Current aircraft construction requirements are significantly extended for hypersonic vehicles. Additional consideration is given to long-duration temperature resistance of the airframe structure, the integration of large-volume cryogenic fuel tanks, computational tools, structural design, polymer matrix composites, and advanced manufacturing technologies.

  13. Defining and quantifying the social phenotype in autism.

    PubMed

    Klin, Ami; Jones, Warren; Schultz, Robert; Volkmar, Fred; Cohen, Donald

    2002-06-01

    Genetic and neurofunctional research in autism has highlighted the need for improved characterization of the core social disorder defining the broad spectrum of syndrome manifestations. This article reviews the advantages and limitations of current methods for the refinement and quantification of this highly heterogeneous social phenotype. The study of social visual pursuit by use of eye-tracking technology is offered as a paradigm for novel tools incorporating these requirements and as a research effort that builds on the emerging synergy of different branches of social neuroscience. Advances in the area will require increased consideration of processes underlying experimental results and a closer approximation of experimental methods to the naturalistic demands inherent in real-life social situations.

  14. Quantum sequencing: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    di Ventra, Massimiliano

    Personalized or precision medicine refers to the ability to tailor drugs to the specific genome and transcriptome of each individual. It is, however, not yet feasible due to the high costs and slow speed of present DNA sequencing methods. I will discuss a sequencing protocol that requires the measurement of the distributions of transverse tunneling currents during the translocation of single-stranded DNA through nanochannels. I will show that such a quantum sequencing approach can reach unprecedented speeds, without requiring any chemical preparation, amplification or labeling. I will discuss recent experiments that support these theoretical predictions and the advantages of this approach over other sequencing methods, and stress the challenges that need to be overcome to render it commercially viable.

  15. Neuronal current detection with low-field magnetic resonance: simulations and methods.

    PubMed

    Cassará, Antonino Mario; Maraviglia, Bruno; Hartwig, Stefan; Trahms, Lutz; Burghoff, Martin

    2009-10-01

    The noninvasive detection of neuronal currents in active brain networks [or direct neuronal imaging (DNI)] by means of nuclear magnetic resonance (NMR) remains a scientific challenge. Many attempts using NMR scanners with magnetic fields >1 T (high-field NMR) have been made in past years to detect phase shifts or magnitude changes in the NMR signals. However, the many physiological limitations (e.g., the concurrent BOLD effect and the weakness of the neuronally induced magnetic field) and technical limitations (e.g., the spatial resolution) in observing such weak signals have led to some contradictory results. In contrast, only a few attempts have been made using low-field NMR techniques; this paper therefore reviews two recent developments on this front. The detection schemes discussed in this manuscript, the resonant mechanism (RM) and the DC method, are specific to NMR instrumentation with main fields below the Earth's magnetic field (50 μT), and in some cases even below a few microteslas (ULF-NMR). The experimental validation of both techniques, with their differing sensitivity to the various neuronal activities at specific temporal and spatial resolutions, is still in progress and requires carefully designed magnetic field sensor technology, along with stringent magnetic shielding from ambient magnetic field fluctuations. In this review, we discuss the characteristics and prospects of these two methods for detecting neuronal currents, along with the technical requirements on the instrumentation.

  16. Direct current electrotherapy for internal haemorrhoids: experience in a tertiary health institution

    PubMed Central

    Olatoke, Samuel; Adeoti, Moses; Agodirin, Olayide; Ajape, Abdulwahab; Agbola, John

    2014-01-01

    Introduction Haemorrhoidal disease is one of the most frequently occurring disabling conditions of the anorectum. We re-present the method, advantages and results of using direct current electrotherapy in the treatment of haemorrhoids. Methods Symptomatic grade 1, 2 or 3 internal and mixed haemorrhoids were treated. Exposure and evaluation were performed with an operative proctoscope, which visualized one-eighth of the anal canal at a time. All diseased segments were treated per visit; the indicators of successful treatment were darkening of the treated segment, immediate shrinking of the haemorrhoid, and cessation of the popping sound of gas release at the probe tip. Patients were followed up for two weeks. No bowel preparation, medication, anesthesia or admission was required. Results Four hundred and fifty-six segments were exposed, of which 252 (55.3%) were diseased. Eight patients with grade 2 or 3 disease required two treatment visits. The most common symptom was rectal bleeding (94.7%), followed by prolapsed but manually reduced haemorrhoids (68%). Prolapse of a tuft of haemorrhoidal tissue with spontaneous return was seen in 59.6%, anal pain in 29.8%, and itching in 3.5%. The median number of treated segments per patient was 4. No complications were encountered. All patients treated remained symptom free at a mean follow-up of 16 months. Conclusion Direct current electrotherapy is an effective, painless and safe out-patient treatment for grade 1 to 3 internal and mixed haemorrhoidal disease. PMID:25419283

  17. Standardizing the Delivery of 20 μL of Hapten During Patch Testing.

    PubMed

    Selvick, Annika; Stauss, Kari; Strobush, Katrina; Taylor, Lauren; Picard, Alexandra; Doll, Andrea; Reeder, Margo

    2016-01-01

    The current method for patch test tray assembly requires hand dispensing a small volume of hapten onto chambers. Because of human error, this technique produces inaccurate and inconsistent results. The recommended volume of hapten for patch testing using Finn Chambers is 20 μL. The aims of this study were to create a device that standardizes the delivery of 20 μL and to compare it with the current hand dispensing technique. A device, named the Revolution, was created using the SolidWorks program. Five nurses in our Contact Dermatitis Clinic were asked to load 10 Finn Chambers using the current technique and again using the Revolution. Assembly time, volume of petrolatum, and accuracy of placement were measured. After three trials, the nurses completed a survey on the two methods. The amount of petrolatum dispensed using the current technique ranged from 16 to 85 μL, with an average of 41.39 μL, whereas the Revolution dispensed an average of 19.78 μL. The current hand dispensing technique does not allow for accurate and consistent dispensing of 20 μL for patch testing. In contrast, the Revolution is an accurate and consistent device that can help standardize the patch testing method.

  18. Novel optical strategies for biodetection

    NASA Astrophysics Data System (ADS)

    Sakamuri, Rama M.; Wolfenden, Mark S.; Anderson, Aaron S.; Swanson, Basil I.; Schmidt, Jurgen S.; Mukundan, Harshini

    2013-09-01

    Although bio-detection strategies have significantly evolved in the past decade, they still suffer from many disadvantages. For one, current approaches still require confirmation of pathogen viability by culture, which is the 'gold-standard' method and can take several days to yield results. Second, current methods typically target protein and nucleic acid signatures and cannot be applied to other biochemical categories of biomarkers (e.g., lipidated sugars). Lipidated sugars (e.g., lipopolysaccharide, lipoarabinomannan) are bacterial virulence factors that are significant to pathogenicity. Herein, we present two different optical strategies for biodetection that address these two limitations. First, we have exploited bacterial iron sequestration mechanisms to develop a simple, specific assay for the selective detection of viable bacteria, without the need for culture, and we are currently working on using this technology for the differential detection of two different bacteria via siderophores. Second, we have developed a novel strategy termed 'membrane insertion' for the detection of amphiphilic biomarkers (e.g., lipidated glycans) that cannot be detected by conventional approaches. We have extended this technology to the detection of small-molecule amphiphilic virulence factors, such as phenolic glycolipid-1 from leprosy, which could not be directly detected before. Together, these strategies address two critical limitations in current biodetection approaches. We are currently working on the optimization of these methods and their extension to real-world clinical samples.

  19. Field Balancing of Magnetically Levitated Rotors without Trial Weights

    PubMed Central

    Fang, Jiancheng; Wang, Yingguang; Han, Bangcheng; Zheng, Shiqiang

    2013-01-01

    Unbalance in a magnetically levitated rotor (MLR) can cause undesirable synchronous vibrations and lead to saturation of the magnetic actuator. Dynamic balancing is an important way to solve these problems. However, the traditional balancing methods, which use rotor displacement to estimate a rotor's unbalance and require several trial runs, are neither precise nor efficient. This paper presents a new balancing method for an MLR without trial weights. In this method, the rotor is forced to rotate around its geometric axis, and the coil currents of the magnetic bearing, rather than the rotor displacement, are employed to calculate the correction masses. The method exploits two properties that hold when the MLR's rotation axis coincides with the geometric axis: the unbalanced centrifugal force/torque equals the synchronous magnetic force/torque, and the magnetic force is proportional to the control current. Together, these allow the correction masses to be calculated precisely by measuring the coil current during a single start-up. An unbalance compensation control (UCC) method, using a general band-pass filter (GPF) to make the MLR spin around its geometric axis, is also discussed. Experimental results show that the novel balancing method can remove more than 92.7% of the rotor unbalance, and a balancing accuracy of 0.024 g mm kg−1 is achieved.
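    The proportionality argument above can be sketched numerically. The following toy calculation (hypothetical force-current factor, speed and correction radius, not the paper's data) converts a measured synchronous coil-current amplitude into a correction mass via F = k_i·i and F = m·r·ω²:

```python
import math

def correction_mass(k_i, i_sync, omega, r_corr):
    """Correction mass (kg) from the synchronous coil-current amplitude,
    assuming the magnetic force F = k_i * i balances the unbalance force
    m * r_corr * omega**2 when the rotor spins about its geometric axis."""
    force = k_i * i_sync               # synchronous magnetic force, N
    return force / (r_corr * omega**2)

# Hypothetical numbers: k_i = 50 N/A, 0.2 A synchronous current amplitude,
# 300 Hz rotation, correction radius 40 mm:
omega = 2.0 * math.pi * 300.0
m_corr = correction_mass(50.0, 0.2, omega, 0.04)  # on the order of tens of mg
```

    In practice the phase of the synchronous current is needed as well, to place the correction mass at the right angular position.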

  20. Can currently available non-animal methods detect pre and pro-haptens relevant for skin sensitization?

    PubMed

    Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin

    2016-12-01

    Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the local lymph node assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). The current study reviewed an EURL ECVAM dataset of 127 substances for which information was available from the LLNA and the three non-animal test methods. Twenty-eight of the sensitizers needed to be activated, the majority being pre-haptens; these were correctly identified by one or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis showed that skin metabolism is not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation can be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.

  1. Novel methods to optimize the effects of transcranial direct current stimulation: a systematic review of transcranial direct current stimulation patents.

    PubMed

    Malavera, Alejandra; Vasquez, Alejandra; Fregni, Felipe

    2015-01-01

    Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that has been extensively studied. While there have been initial positive results in some clinical trials, there is still variability in tDCS results. The aim of this article is to review and discuss patents assessing novel methods to optimize the use of tDCS. A systematic review was performed using the Google Patents database with tDCS as the main technique, covering patents with filing dates between 2010 and 2015. Twenty-two patents met our inclusion criteria. These patents attempt to address current tDCS limitations, yet only a few of them have been investigated in clinical trials (i.e., high-definition tDCS), and most have not previously been tested in human trials at all. Further clinical testing is required to assess which patents are most likely to optimize the effects of tDCS. We discuss the potential optimization of tDCS based on these patents and the current experience with standard tDCS.

  2. Robust current control-based generalized predictive control with sliding mode disturbance compensation for PMSM drives.

    PubMed

    Liu, Xudong; Zhang, Chenghui; Li, Ke; Zhang, Qi

    2017-11-01

    This paper addresses the current control of permanent magnet synchronous motors (PMSMs) for electric drives with model uncertainties and disturbances. A generalized predictive current control method combined with sliding mode disturbance compensation is proposed to satisfy the requirements of fast response and strong robustness. First, following generalized predictive control (GPC) theory based on the continuous-time model, a predictive current control method is presented without considering the disturbance, which is convenient to implement in a digital controller. In practice, it is difficult to derive the exact motor model and parameters, so a sliding mode disturbance compensation controller is studied to improve the adaptiveness and robustness of the control system. The designed controller combines the merits of both predictive control and sliding mode control, and its parameters are easy to adjust. Finally, the proposed controller is tested on an interior PMSM in simulation and experiment, and the results indicate that it performs well in both current tracking and disturbance rejection. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
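    The flavor of predictive current control can be illustrated with a deliberately simplified one-step deadbeat form (not the paper's continuous-time GPC or its sliding mode compensator; the motor parameters below are hypothetical):

```python
def predictive_voltage(i_ref, i_meas, R, L, e_back, Ts):
    """One-step deadbeat predictive current control for a single PMSM axis:
    pick the voltage that drives the current onto the reference in one
    sampling period, from the discretized model
        i[k+1] = i[k] + (Ts / L) * (v[k] - R * i[k] - e[k])."""
    return R * i_meas + L * (i_ref - i_meas) / Ts + e_back

# Hypothetical parameters: R = 0.5 ohm, L = 2 mH, back-EMF 10 V, Ts = 100 us
v = predictive_voltage(i_ref=5.0, i_meas=4.0, R=0.5, L=2e-3, e_back=10.0, Ts=1e-4)
# Sanity check: the same model then predicts the current landing on i_ref
i_next = 4.0 + (1e-4 / 2e-3) * (v - 0.5 * 4.0 - 10.0)
```

    This kind of controller is exact only if R, L and the back-EMF are known exactly, which is precisely the model-uncertainty problem the sliding mode compensation in the paper is designed to absorb.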

  3. Nondestructive examination of the Tropical Rainfall Measuring Mission (TRMM) reaction control subsystem (RCS) propellant tanks

    NASA Technical Reports Server (NTRS)

    Free, James M.

    1993-01-01

    This paper assesses the feasibility of using eddy current nondestructive examination to determine flaw sizes in completely assembled hydrazine propellant tanks. The study was performed by the NASA Goddard Space Flight Center for the Tropical Rainfall Measuring Mission (TRMM) project to help determine whether existing propellant tanks could meet the fracture analysis requirements of the current pressure vessel specification, MIL-STD-1522A, and therefore be used on the TRMM spacecraft. After evaluating several nondestructive test methods, eddy current testing was selected as the most promising method for determining flaw sizes on the external and internal surfaces of completely assembled tanks. Tests were conducted to confirm the detection capability of the eddy current NDE, procedures were developed to inspect two candidate tanks, and the test support equipment was designed. The non-spherical tank eddy current NDE test program was terminated when the decision was made to procure new tanks for the TRMM propulsion subsystem. The information on the development phase of this test program is presented in this paper as a reference for future investigations on the subject.

  4. Gridded climate data from 5 GCMs of the Last Glacial Maximum downscaled to 30 arc s for Europe

    NASA Astrophysics Data System (ADS)

    Schmatz, D. R.; Luterbacher, J.; Zimmermann, N. E.; Pearman, P. B.

    2015-06-01

    Studies of the impacts of historical, current and future global change require very high-resolution climate data (≤ 1 km) as a basis for modelled responses, meaning that data from climate models generally require substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, the study of glacial refugia or early Holocene plant and animal migration is incomplete if not impossible. Sea level at the Last Glacial Maximum (LGM) was approximately 125 m lower, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce a novel gridded climate dataset for the LGM that both has very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to areas between the current and LGM coastlines. The absolute interpolation error is less than 1 °C and 0.5 °C for 98.9% and 87.8%, respectively, of all pixels within two arc degrees of the current coastline. We use the change factor method with these newly assembled baseline data to downscale five global circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 "bioclimatic" variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited for analysing refugia and migration during the Holocene warming that followed the LGM.
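    The change factor (delta) method mentioned above can be sketched as follows for temperature: the coarse GCM anomaly (LGM minus present day) is added to the high-resolution present-day baseline. This is a generic illustration with hypothetical values, not the authors' code:

```python
def change_factor_temperature(t_baseline_highres, t_gcm_lgm, t_gcm_present):
    """Delta-change downscaling for temperature: add the coarse GCM anomaly
    (LGM minus present) to the high-resolution present-day baseline.
    The GCM anomaly is assumed to have been interpolated onto the 1-km
    baseline grid beforehand."""
    return t_baseline_highres + (t_gcm_lgm - t_gcm_present)

# Hypothetical values for one grid cell (degrees C):
# 1-km baseline 8.5, GCM present-day 9.0, GCM LGM -1.0
t_lgm_highres = change_factor_temperature(8.5, -1.0, 9.0)
```

    Precipitation is commonly handled with a multiplicative ratio rather than an additive anomaly, to avoid negative values; either way the method hinges on a high-resolution baseline existing, which is exactly what the shelf areas exposed by the 125 m sea-level drop lack.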

  5. Are there unmet needs in contraceptive counselling and choice? Findings of the European TANCO Study.

    PubMed

    Merki-Feld, G S; Caetano, C; Porz, T C; Bitzer, J

    2018-05-22

    Effective use of contraception requires women to make an informed choice about methods that match their individual needs and expectations. The European Thinking About Needs in Contraception (TANCO) study is a quantitative, online survey of healthcare provider and women's views on aspects of counselling around contraception and contraceptive use. Healthcare providers and women attending their practices for contraceptive counselling were invited to complete online questionnaires. The women's survey explored knowledge and use of contraceptive methods, satisfaction with current method, and interest in receiving more information about all methods. Healthcare provider views were gathered in parallel. A total of 676 healthcare providers and 6027 women completed the online surveys in 11 countries. There was a high prevalence of contraceptive use and general satisfaction with current method across the countries. Fifty-five percent of women were using short-acting contraception (SAC) methods; 19% were using a long-acting reversible contraception (LARC) method. Sixty percent of women were interested in receiving more information about all methods; 73% of women said they would consider LARC if they received more comprehensive information. Healthcare providers tend to underestimate women's interest in receiving information on contraception in general and, more specifically, LARC methods. Despite high levels of use and satisfaction with current methods, women were interested in receiving more information about all contraceptive methods. Greater exploration of women's views on their needs and expectations of contraception could lead to increased knowledge, more effective discussions with healthcare providers and the greater likelihood of informed contraceptive choice.

  6. Applying CBR to machine tool product configuration design oriented to customer requirements

    NASA Astrophysics Data System (ADS)

    Wang, Pengjia; Gong, Yadong; Xie, Hualong; Liu, Yongxian; Nee, Andrew Yehching

    2017-01-01

    Product customization is a trend in the current market-oriented manufacturing environment. However, deduction from customer requirements to design results, and the evaluation of design alternatives, still rely heavily on the designer's experience and knowledge. To address the fuzziness and uncertainty of customer requirements in product configuration, an analysis method based on the grey rough model is presented that converts customer requirements into technical characteristics effectively. In addition, an optimization decision model for product planning is established to help enterprises select, under the constraints of cost and time, the key technical characteristics that maximize customer satisfaction. A new case retrieval approach that combines the self-organizing map and the fuzzy similarity priority ratio method is proposed for case-based design: the self-organizing map reduces the retrieval range and increases retrieval efficiency, while the fuzzy similarity priority ratio method evaluates the similarity of cases comprehensively. To ensure that the final case has the best overall performance, an evaluation method based on grey correlation analysis is proposed to select the most suitable of the similar cases. Furthermore, a computer-aided system is developed using the MATLAB GUI to assist product configuration design. An actual example on an ETC series machine tool product shows that the proposed method is effective, rapid and accurate in the process of product configuration. The proposed methodology provides detailed guidance for product configuration design oriented to customer requirements.
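    Grey correlation (grey relational) analysis, used above to rank similar cases, can be sketched as follows: each candidate case's attribute sequence is compared against a reference (ideal) sequence, and the grey relational grade averages the relational coefficients over the attributes. A generic illustration with hypothetical numbers, not the authors' MATLAB implementation:

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence against a reference
    sequence (sequences assumed pre-normalized to comparable scales);
    rho is the conventional distinguishing coefficient."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)] for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    def coeff(d):
        return (d_min + rho * d_max) / (d + rho * d_max)
    return [sum(coeff(d) for d in row) / len(row) for row in deltas]

# Reference = ideal case; two hypothetical candidate cases:
ref = [1.0, 1.0, 1.0]
grades = grey_relational_grades(ref, [[0.9, 1.0, 0.8], [0.5, 0.6, 0.4]])
# The candidate closer to the reference receives the higher grade.
```

    The case with the highest grade would then be retained as the starting point for configuration design.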

  7. Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods

    NASA Astrophysics Data System (ADS)

    Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco

    2015-04-01

    The resistivity method is one of the oldest geophysical exploration methods. It employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize, then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013).
The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface resistivity. The Hessian of the regularization term is used as a preconditioner, which requires an additional PDE solution in each iteration step. As it turns out, the relevant PDEs are naturally formulated in the finite element framework. Using the domain decomposition method provided in Escript, the inversion scheme has been parallelized for distributed-memory computers with multi-core shared-memory nodes. We show numerical examples from simple layered models to complex 3D models and compare with the results from other methods. The inversion scheme is furthermore tested on a field data example to characterise localised freshwater discharge in a coastal environment. References: L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306
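    The optimization loop described above can be caricatured with a toy linear inverse problem: L-BFGS driven by analytic cost gradients, standing in for the adjoint-state gradients of the PDE-constrained problem. This is a generic sketch, not the Escript implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear forward problem d = G @ m_true with Tikhonov regularization,
# minimized with L-BFGS from analytic gradients (a stand-in for the
# adjoint-state gradients of the PDE-constrained resistivity problem).
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))      # toy "forward operator"
m_true = np.arange(1.0, 6.0)      # model to recover
d = G @ m_true                    # noise-free synthetic data
alpha = 1e-6                      # regularization weight

def cost_and_grad(m):
    r = G @ m - d
    cost = 0.5 * r @ r + 0.5 * alpha * m @ m
    grad = G.T @ r + alpha * m    # the adjoint-state machinery supplies this
    return cost, grad             # gradient in the PDE setting

res = minimize(cost_and_grad, np.zeros(5), jac=True, method="L-BFGS-B")
```

    In the field-scale problem each cost/gradient evaluation instead involves one forward and one adjoint PDE solve per source current, which is why an efficient parallel PDE solver dominates the inversion cost.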

  8. Formal methods demonstration project for space applications

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.

    1995-01-01

    The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing Shuttle Change Requests (CRs), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers, and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way, and a small team from NASA Langley, ViGYAN Inc. and Loral is now engaged in the task. Background on the GPS CR is provided and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered for illustration. We walk through a specification sketch for the principal function known as GPS Receiver State processing. Results to date are summarized, and feedback from Loral requirements analysts is highlighted. Preliminary data are shown comparing issues detected by the formal methods team with those detected using existing requirements analysis methods.
We conclude by discussing our plan to complete the remaining activities of this task.

  9. A thioacidolysis method tailored for higher‐throughput quantitative analysis of lignin monomers

    PubMed Central

    Foster, Cliff; Happs, Renee M.; Doeppke, Crissa; Meunier, Kristoffer; Gehan, Jackson; Yue, Fengxia; Lu, Fachuang; Davis, Mark F.

    2016-01-01

    Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β‐O‐4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1–2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. The method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses. PMID:27534715

  10. A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.

    Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.

  11. A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers

    DOE PAGES

    Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.; ...

    2016-09-14

    Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying, have been eliminated to aid in the consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.

  12. Noise detection in heart sound recordings.

    PubMed

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
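    The reference-channel idea described here can be illustrated with a standard adaptive noise canceller. The paper's own algorithm is not given in this abstract, so the sketch below uses a classic least-mean-squares (LMS) filter on synthetic data; the signal shapes, tap count, and step size are all assumptions, not the authors' values.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Subtract the component of `primary` correlated with `reference`
    using a least-mean-squares adaptive filter; the error signal is the
    cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1 : n + 1][::-1]  # newest sample first
        e = primary[n] - w @ x                       # cleaned sample
        w += 2 * mu * e * x                          # LMS weight update
        out[n] = e
    return out

# Synthetic demo: a faint tone (stand-in for heart sounds) buried in
# noise that also reaches a reference channel through a short filter.
rng = np.random.default_rng(0)
t = np.arange(4000) / 1000.0
signal = 0.1 * np.sin(2 * np.pi * 30 * t)
noise = rng.normal(size=t.size)
primary = signal + np.convolve(noise, [0.6, 0.3], mode="same")
cleaned = lms_cancel(primary, noise)
```

    On this synthetic example, once the filter converges the residual tracks the faint signal far more closely than the raw channel does, which is the role the reference channel plays in the acoustic CAD approach.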

  13. The challenges of simulating wake vortex encounters and assessing separation criteria

    NASA Technical Reports Server (NTRS)

    Dunham, R. E.; Stuever, Robert A.; Vicroy, Dan D.

    1993-01-01

    During landings and take-offs, the longitudinal spacing between airplanes is in part determined by the safe separation required to avoid the trailing vortex wake of the preceding aircraft. Safe exploration of the feasibility of reducing longitudinal separation standards will require use of aircraft simulators. This paper discusses the approaches to vortex modeling, methods for modeling the aircraft/vortex interaction, some of the previous attempts of defining vortex hazard criteria, and current understanding of the development of vortex hazard criteria.

  14. Launch team training system

    NASA Technical Reports Server (NTRS)

    Webb, J. T.

    1988-01-01

    A new approach to the training, certification, recertification, and proficiency maintenance of the Shuttle launch team is proposed. Previous training approaches are first reviewed. Short term program goals include expanding current training methods, improving the existing simulation capability, and scheduling training exercises with the same priority as hardware tests. Long-term goals include developing user requirements which would take advantage of state-of-the-art tools and techniques. Training requirements for the different groups of people to be trained are identified, and future goals are outlined.

  15. Note: thermal imaging enhancement algorithm for gas turbine aerothermal characterization.

    PubMed

    Beer, S K; Lawson, S A

    2013-08-01

    An algorithm was developed to convert radiation intensity images acquired using a black and white CCD camera to thermal images without requiring knowledge of incident background radiation. This unique infrared (IR) thermography method was developed to determine aerothermal characteristics of advanced cooling concepts for gas turbine cooling application. Compared to IR imaging systems traditionally used for gas turbine temperature monitoring, the system developed for the current study is relatively inexpensive and does not require calibration with surface mounted thermocouples.

  16. Synthetic Self-Healing Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bello, Mollie

    Given enough time, pressure, temperature fluctuation, and stress, any material will fail. Currently, synthesized materials make up a large part of our everyday lives and are used in a number of important applications such as space travel, underwater devices, precise instrumentation, transportation, and infrastructure. Structural failure of these materials can lead to expensive and dangerous consequences. In an attempt to prolong the life spans of specific materials and reduce the effort put into repairing them, biologically inspired, self-healing systems have been extensively investigated. The current review explores recent advances in three methods of synthetic self-healing: capsule based, vascular, and intrinsic. Ideally, self-healing materials require no human intervention to promote healing, are capable of surviving all the steps of polymer processing, and heal the same location repeatedly. Only the vascular method holds up to all of these idealities.

  17. Unity PF current-source rectifier based on dynamic trilogic PWM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao Wang; Boon-Teck Ooi

    1993-07-01

    One remaining step in perfecting the stand-alone, unity power factor, regulated current-source PWM rectifier is to reduce cost by bringing the 12-valve converter (consisting of three single-phase full bridges that operate with two-level or bilogic PWM) down to the six-valve bridge. However, the six-valve topology requires a three-level or trilogic PWM strategy that can handle feedback signals, a feature that was not available until now. The paper describes a general method of translating three-phase bilogic PWM signals to three-phase trilogic PWM signals. The method of translation retains the characteristics of the bilogic PWM, including the frequency bandwidth. Experiments show that the trilogic PWM signals produced by the method can handle not only stabilizing feedback signals but also signals for active filtering.

  18. Fundamental study on the magnetic field control method using multiple HTS coils for Magnetic Drug Delivery System

    NASA Astrophysics Data System (ADS)

    Hirano, R.; Kim, S. B.; Nakagawa, T.; Tomisaka, Y.; Ueda, H.

    2017-07-01

    The magnetic drug delivery system (MDDS) is a key technology for reducing side effects in medical applications, and magnetic force control is a very important issue in MDDS. In this application, the magnetic field strength and gradient required for MDDS devices are 54 mT and 5.5 T/m, respectively. We propose a new magnetic force control system that consists of multiple racetrack HTS magnets. We can control the magnetic field gradient along the longitudinal direction through the arrangement of the multiple racetrack HTS magnets and the operating current of each magnet. When the racetrack HTS magnets were used, the critical current was reduced by the self-magnetic field, so a shape design of the HTS magnet that reduces the magnetic field at the surface of the HTS tapes was required. Therefore, electromagnetic analysis based on the finite element method (FEM) was carried out to design and optimize the shape of the multiple racetrack HTS magnet. We were able to suppress the reduction of critical current by placing magnetic substance at the upper and lower sides of the HTS magnets. It was confirmed that the obtained maximum values of magnetic field strength and field gradient were 33 mT and 0.18 T/m, respectively.

  19. A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.

    PubMed

    Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin

    2018-07-01

    Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm, and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and 3 slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
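    The minimum-current-norm problem mentioned here (find array-element currents that meet a set of field constraints with the smallest L2 norm) has the analytical solution the abstract alludes to: for an underdetermined linear system it is the pseudoinverse solution. A minimal sketch; the 4×9 sensitivity matrix and target vector below are illustrative placeholders, not the coil's actual field model.

```python
import numpy as np

# Row m of A holds the (assumed) per-unit-current field contribution of
# each of the 9 array elements at constraint point m; b holds the target
# field values that define the slice locations. Both are stand-ins.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 9))
b = np.array([0.0, 1.0, 0.0, -1.0])

# Minimum-L2-norm currents satisfying A @ i = b (A has full row rank):
i_min = A.T @ np.linalg.solve(A @ A.T, b)
```

    This is the same vector `np.linalg.pinv(A) @ b` returns; penalizing power dissipation instead of plain current norm would simply weight the norm by each element's resistance.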

  20. Towards the development of universal, fast and highly accurate docking/scoring methods: a long way to go

    PubMed Central

    Moitessier, N; Englebienne, P; Lee, D; Lawandi, J; Corbeil, C R

    2008-01-01

    Accelerating the drug discovery process requires predictive computational protocols capable of reducing or simplifying the synthetic and/or combinatorial challenge. Docking-based virtual screening methods have been developed and successfully applied to a number of pharmaceutical targets. In this review, we first present the current status of docking and scoring methods, with exhaustive lists of these. We next discuss reported comparative studies, outlining criteria for their interpretation. In the final section, we describe some of the remaining developments that would potentially lead to a universally applicable docking/scoring method. PMID:18037925

  1. A comparison of methods for teaching receptive language to toddlers with autism.

    PubMed

    Vedora, Joseph; Grandelski, Katrina

    2015-01-01

    The use of a simple-conditional discrimination training procedure, in which stimuli are initially taught in isolation with no other comparison stimuli, is common in early intensive behavioral intervention programs. Researchers have suggested that this procedure may encourage the development of faulty stimulus control during training. The current study replicated previous work that compared the simple-conditional and the conditional-only methods to teach receptive labeling of pictures to young children with autism spectrum disorder. Both methods were effective, but the conditional-only method required fewer sessions to mastery. © Society for the Experimental Analysis of Behavior.

  2. Refinery evaluation of optical imaging to locate fugitive emissions.

    PubMed

    Robinson, Donald R; Luke-Boone, Ronke; Aggarwal, Vineet; Harris, Buzz; Anderson, Eric; Ranum, David; Kulp, Thomas J; Armstrong, Karla; Sommers, Ricky; McRae, Thomas G; Ritter, Karin; Siegell, Jeffrey H; Van Pelt, Doug; Smylie, Mike

    2007-07-01

    Fugitive emissions account for approximately 50% of total hydrocarbon emissions from process plants. Federal and state regulations aimed at controlling these emissions require refineries and petrochemical plants in the United States to implement a Leak Detection and Repair (LDAR) program. The current regulatory work practice, U.S. Environmental Protection Agency Method 21, requires designated components to be monitored individually at regular intervals. The annual costs of these LDAR programs in a typical refinery can exceed US$1,000,000. Previous studies have shown that a majority of controllable fugitive emissions come from a very small fraction of components. The Smart LDAR program aims to find cost-effective methods to monitor and reduce emissions from these large leakers. Optical gas imaging has been identified as one such technology that can help achieve this objective. This paper discusses a refinery evaluation of an instrument based on backscatter absorption gas imaging technology. This portable camera allows an operator to scan components more quickly and image gas leaks in real time. During the evaluation, the instrument was able to identify the leaking components that were the source of 97% of the total mass emissions from the leaks detected. More than 27,000 components were monitored, in far less time than it would have taken using Method 21. In addition, the instrument was able to find leaks from components that are not required to be monitored under current LDAR regulations. The technology principles and the parameters that affect instrument performance are also discussed in the paper.

  3. Detection and traceability of genetically modified organisms in the food production chain.

    PubMed

    Miraglia, M; Berdal, K G; Brera, C; Corbisier, P; Holst-Jensen, A; Kok, E J; Marvin, H J P; Schimmel, H; Rentsch, J; van Rie, J P P F; Zagon, J

    2004-07-01

    Both labelling and traceability of genetically modified organisms are current issues that are considered in trade and regulation. Currently, labelling of genetically modified foods containing detectable transgenic material is required by EU legislation. A proposed package of legislation would extend this labelling to foods without any traces of transgenics. These new legislations would also impose labelling and a traceability system based on documentation throughout the food and feed manufacture system. The regulatory issues of risk analysis and labelling are currently harmonised by Codex Alimentarius. The implementation and maintenance of the regulations necessitates sampling protocols and analytical methodologies that allow for accurate determination of the content of genetically modified organisms within a food and feed sample. Current methodologies for the analysis of genetically modified organisms are focused on either one of two targets, the transgenic DNA inserted- or the novel protein(s) expressed- in a genetically modified product. For most DNA-based detection methods, the polymerase chain reaction is employed. Items that need consideration in the use of DNA-based detection methods include the specificity, sensitivity, matrix effects, internal reference DNA, availability of external reference materials, hemizygosity versus homozygosity, extrachromosomal DNA, and international harmonisation. For most protein-based methods, enzyme-linked immunosorbent assays with antibodies binding the novel protein are employed. Consideration should be given to the selection of the antigen bound by the antibody, accuracy, validation, and matrix effects. Currently, validation of detection methods for analysis of genetically modified organisms is taking place. In addition, new methodologies are developed, including the use of microarrays, mass spectrometry, and surface plasmon resonance. 
Challenges for GMO detection include the detection of transgenic material in materials with varying chromosome numbers. The existing and proposed regulatory EU requirements for traceability of genetically modified products fit within a broader tendency towards traceability of foods in general and, commercially, towards products that can be distinguished from each other. Traceability systems document the history of a product and may serve the purpose of both marketing and health protection. In this framework, segregation and identity preservation systems allow for the separation of genetically modified and non-modified products from "farm to fork". Implementation of these systems comes with specific technical requirements for each particular step of the food processing chain. In addition, the feasibility of traceability systems depends on a number of factors, including unique identifiers for each genetically modified product, detection methods, permissible levels of contamination, and financial costs. In conclusion, progress has been achieved in the field of sampling, detection, and traceability of genetically modified products, while some issues remain to be solved. For success, much will depend on the threshold level for adventitious contamination set by legislation. Copyright 2004 Elsevier Ltd.

  4. Twelve tips for getting started using mixed methods in medical education research.

    PubMed

    Lavelle, Ellen; Vuk, Jasna; Barber, Carolyn

    2013-04-01

    Mixed methods research, which is gaining popularity in medical education, provides a new and comprehensive approach for addressing teaching, learning, and evaluation issues in the field. The aim of this article is to provide medical education researchers with 12 tips, based on consideration of current literature in the health professions and in educational research, for conducting and disseminating mixed methods research. Engaging in mixed methods research requires consideration of several major components: the mixed methods paradigm, types of problems, mixed methods designs, collaboration, and developing or extending theory. Mixed methods research is an ideal tool for addressing a full range of problems in medical education, including the development of theory and the improvement of practice.

  5. Detector Powering in the 21st Century: Why stay stuck with the good old 20th Century methods?

    NASA Astrophysics Data System (ADS)

    Dhawan, Satish; Sumner, Richard

    Future collider physics detectors are envisioned with large granularity, but we have a power delivery problem unless we fill a large fraction of the detector volume with copper conductors. LHC detector electronics is powered by transporting direct current over distances of 30 to 150 meters; this is how Thomas Alva Edison powered his light bulb. For example, the CMS ECAL uses 50 kiloamps at 2.5 volts, supplied over a cable set with a transmission efficiency of only 30%. The transmission loss becomes waste heat in the detector that has to be removed. We have been exploring methods to transmit the DC power at higher voltage (low current), reducing to the final low voltage (high current) using DC-DC converters. These converters must operate in high magnetic fields and at high radiation levels, which requires rad-hard components and non-magnetic (air-core) inductors.

  6. Simulative and experimental investigation on stator winding turn and unbalanced supply voltage fault diagnosis in induction motors using Artificial Neural Networks.

    PubMed

    Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri

    2015-11-01

    The three-phase shift between line current and phase voltage of induction motors can be used as an efficient fault indicator to detect and locate inter-turn stator short-circuit (ITSC) faults. However, unbalanced supply voltage is one of the contributing factors that inevitably affect the stator currents and therefore the three-phase shift. Thus, it is necessary to propose a method that is able to identify whether the unbalance of the three currents is caused by an ITSC or a supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN) trained by back propagation, based on monitoring the negative sequence voltage and the three-phase shift. The data required for training and testing the NN are generated using a simulated model of the stator. Experimental results are presented to verify the superior accuracy of the proposed method. Copyright © 2015. Published by Elsevier Ltd.
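    As an illustration of the classifier type named here (a feedforward multilayer perceptron trained by backpropagation), the sketch below trains a tiny network on synthetic two-feature data standing in for the negative-sequence voltage and phase-shift indicators. The architecture, learning rate, and toy data are all assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two indicators; real values would come from the
# simulated stator model described in the paper. Class 1 = ITSC fault,
# class 0 = supply unbalance, drawn from well-separated Gaussian clusters.
n = 400
X = np.vstack([rng.normal([1.0, 1.0], 0.3, size=(n // 2, 2)),
               rng.normal([-1.0, -1.0], 0.3, size=(n // 2, 2))])
y = np.r_[np.ones(n // 2), np.zeros(n // 2)]

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units, trained by plain backpropagation
# on the cross-entropy loss (full-batch gradient descent).
W1, b1 = 0.5 * rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5
for _ in range(500):
    h = sig(X @ W1 + b1)              # hidden activations
    p = sig(h @ W2 + b2).ravel()      # predicted fault probability
    g = ((p - y) / n)[:, None]        # dLoss/dlogit for cross-entropy
    gh = (g @ W2.T) * h * (1 - h)     # backprop through hidden layer
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

p = sig(sig(X @ W1 + b1) @ W2 + b2).ravel()
acc = float(np.mean((p > 0.5) == y))
```

    On this easily separable toy set the network reaches high training accuracy; the point is only the training mechanics, not the diagnostic performance reported in the paper.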

  7. A Bookmarking Service for Organizing and Sharing URLs

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.

    1997-01-01

    Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.

  8. An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls

    NASA Technical Reports Server (NTRS)

    Lantz, Renee; Vykukal, H.; Webbon, Bruce

    1987-01-01

    An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.

  9. A risk-based classification scheme for genetically modified foods. II: Graded testing.

    PubMed

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents a graded approach to the testing of crop-derived genetically modified (GM) foods based on concern levels in a proposed risk-based classification scheme (RBCS) and currently available testing methods. A graded approach offers the potential for more efficient use of testing resources by focusing less on lower concern GM foods, and more on higher concern foods. In this proposed approach to graded testing, products that are classified as Level I would have met baseline testing requirements that are comparable to what is widely applied to premarket assessment of GM foods at present. In most cases, Level I products would require no further testing, or very limited confirmatory analyses. For products classified as Level II or higher, additional testing would be required, depending on the type of the substance, prior dietary history, estimated exposure level, prior knowledge of toxicity of the substance, and the nature of the concern related to unintended changes in the modified food. Level III testing applies only to the assessment of toxic and antinutritional effects from intended changes and is tailored to the nature of the substance in question. Since appropriate test methods are not currently available for all effects of concern, future research to strengthen the testing of GM foods is discussed.

  10. Physics-based coastal current tomographic tracking using a Kalman filter.

    PubMed

    Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen

    2018-05-01

    Ocean acoustic tomography can be used, based on measurements of two-way travel-time differences between nodes deployed on the perimeter of the surveying area, to invert/map the ocean current inside the area. Data at different times can be related using a Kalman filter, and given an ocean circulation model, one can in principle nowcast and even forecast the current distribution given an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimation of the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by standard methods where data at different times are treated independently. Real data are also examined.
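    A minimal sketch of the idea: propagate the current field with assumed linear (AR-style) dynamics, fuse travel-time-difference data with a Kalman filter, and compare against independent snapshot least-squares inversions. The dynamics matrix, path geometry, and noise levels below are invented for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_paths, n_steps = 4, 6, 200
F = 0.95 * np.eye(n_cells)               # stand-in AR weights (state transition)
H = rng.normal(size=(n_paths, n_cells))  # path-averaging geometry (assumed)
Q = 0.01 * np.eye(n_cells)               # process noise covariance
R = 0.1 * np.eye(n_paths)                # travel-time measurement noise

u_true = np.zeros(n_cells)
u_hat, P = np.zeros(n_cells), np.eye(n_cells)  # arbitrary initial condition
err_kf, err_ls = [], []
for _ in range(n_steps):
    u_true = F @ u_true + rng.multivariate_normal(np.zeros(n_cells), Q)
    d = H @ u_true + rng.multivariate_normal(np.zeros(n_paths), R)

    u_hat, P = F @ u_hat, F @ P @ F.T + Q            # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    u_hat = u_hat + K @ (d - H @ u_hat)              # update
    P = (np.eye(n_cells) - K @ H) @ P

    u_ls = np.linalg.lstsq(H, d, rcond=None)[0]      # snapshot inversion
    err_kf.append(np.sum((u_hat - u_true) ** 2))
    err_ls.append(np.sum((u_ls - u_true) ** 2))
```

    On this toy problem the filtered estimate is, on average, closer to the true field than the time-independent inversions, which is the paper's central claim in miniature.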

  11. Improvement of photon correlation spectroscopy method for measuring nanoparticle size by using attenuated total reflectance.

    PubMed

    Krishtop, Victor; Doronin, Ivan; Okishev, Konstantin

    2012-11-05

    Photon correlation spectroscopy is an effective method for measuring nanoparticle sizes and has several advantages over alternative methods. However, it suffers from a disadvantage in that its measurement accuracy is reduced in the presence of convective flows in the fluid containing the nanoparticles. In this paper, we propose a scheme based on attenuated total reflectance in order to reduce the influence of convection currents. The autocorrelation function for the light-scattering intensity was found for this case, and it was shown that this method afforded a significant decrease in the time required to measure the particle sizes and an increase in the measuring accuracy.
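    The size extraction behind photon correlation spectroscopy rests on a two-line calculation: the autocorrelation decay rate gives the diffusion coefficient through Γ = D·q², and the Stokes-Einstein relation converts D to a hydrodynamic radius. A noiseless numerical sketch; the laser wavelength, scattering angle, and particle size are assumed values, not the paper's conditions.

```python
import numpy as np

# Stokes-Einstein sketch: correlogram decay rate Gamma = D * q**2 gives
# the diffusion coefficient D; R = kT / (6*pi*eta*D) gives the radius.
kB, T, eta = 1.380649e-23, 298.15, 8.9e-4        # J/K, K, Pa*s (water, 25 C)
lam, n_med, theta = 633e-9, 1.33, np.pi / 2      # HeNe laser, water, 90 deg
q = 4 * np.pi * n_med / lam * np.sin(theta / 2)  # scattering vector, 1/m

R_true = 50e-9                                   # 50 nm particles (assumed)
D = kB * T / (6 * np.pi * eta * R_true)
gamma = D * q ** 2

# Noiseless synthetic field correlation g1(tau); recover gamma by a
# linear fit to log g1 and invert Stokes-Einstein for the radius.
tau = np.linspace(1e-6, 1e-3, 200)
g1 = np.exp(-gamma * tau)
gamma_fit = -np.polyfit(tau, np.log(g1), 1)[0]
R_est = kB * T / (6 * np.pi * eta * (gamma_fit / q ** 2))
```

    With real, noisy correlograms the decay rate would come from a weighted cumulant fit rather than a plain polyfit, but the size conversion is the same.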

  12. Optimal current waveforms for brushless permanent magnet motors

    NASA Astrophysics Data System (ADS)

    Moehle, Nicholas; Boyd, Stephen

    2015-07-01

    In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which allows the possibility of generating optimal waveforms in real time. This allows us to adapt in real time to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing for quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and constant-power region.
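    The flavor of this optimization can be seen in its simplest special case: one phase, resistive loss only, no voltage/current limits, and no ripple penalty. Minimizing the squared-current loss subject to a prescribed average torque then has a closed-form Lagrange solution with the current proportional to the back-EMF shape. The back-EMF waveform below is an assumed example; this is far simpler than the paper's general convex formulation.

```python
import numpy as np

# tau(theta) = k(theta) * i(theta); require average torque tau_des while
# minimizing resistive loss proportional to sum(i**2). With one linear
# constraint, the Lagrange condition gives i proportional to k.
N = 360
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 0.1 * (np.sin(theta) + 0.15 * np.sin(3 * theta))  # assumed back-EMF shape
tau_des = 1.0                                         # desired average torque

lam = tau_des / np.mean(k ** 2)  # Lagrange multiplier
i_opt = lam * k                  # energy-optimal current waveform
```

    Once voltage/current limits or a torque-ripple term are added, the closed form disappears and the resulting convex program is solved numerically, e.g. by the ADMM scheme the paper describes.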

  13. Development of Ground Coils with Low Eddy Current Loss by Applying the Compression Molding Method after the Coil Winding

    NASA Astrophysics Data System (ADS)

    Suzuki, Masao; Aiba, Masayuki; Takahashi, Noriyuki; Ota, Satoru; Okada, Shigenori

    In a magnetically levitated transportation (MAGLEV) system, a huge number of ground coils will be required because they must be laid along the whole line. Therefore, stable performance and reduced cost are essential requirements for ground coil development. On the other hand, because the magnetic field changes when the superconducting magnet passes by, an eddy current is generated in the conductor of the ground coil and results in energy loss. The loss not only increases the magnetic resistance to the train's motion but also raises the ground coil temperature. Therefore, the reduction of eddy current loss is extremely important. This study examined ground coils in which both the eddy current loss and the temperature increase were small. Furthermore, a quantitative comparison of the eddy current loss of various magnet wire samples was performed by bench test. On the basis of this comparison, a round twisted wire having low eddy current loss was selected as an effective ground coil material. In addition, the ground coils were manufactured on a trial basis. A favorable outlook for improving the size accuracy of the winding coil and the uneven thickness of the molded resin, without reducing the insulation strength between the coil layers, was obtained by applying compression molding after winding.

  14. 20 CFR 216.12 - When current connection is required.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false When current connection is required. 216.12... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.12 When current connection is required. (a) A current connection is required to qualify an individual for the following types of railroad...

  15. 20 CFR 216.12 - When current connection is required.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When current connection is required. 216.12... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.12 When current connection is required. (a) A current connection is required to qualify an individual for the following types of railroad...

  16. Textile Technologies and Tissue Engineering: A Path Towards Organ Weaving

    PubMed Central

    Akbari, Mohsen; Tamayol, Ali; Bagherifard, Sara; Serex, Ludovic; Mostafalu, Pooria; Faramarzi, Negar; Mohammadi, Mohammad Hossein

    2016-01-01

    Textile technologies have recently attracted great attention as potential biofabrication tools for engineering tissue constructs. Using current textile technologies, fibrous structures can be designed and engineered to attain the properties demanded by different tissue engineering applications. Several key parameters, such as the physiochemical characteristics of the fibers, pore size, and mechanical properties of the fabrics, play an important role in the effective use of textile technologies in tissue engineering. This review summarizes current advances in the manufacturing of biofunctional fibers. Different textile methods such as knitting, weaving, and braiding are discussed and their current applications in tissue engineering are highlighted. PMID:26924450

  17. Four-point probe measurements using current probes with voltage feedback to measure electric potentials

    NASA Astrophysics Data System (ADS)

    Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert

    2018-02-01

    We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for \

  18. Requirements and Techniques for Developing and Measuring Simulant Materials

    NASA Technical Reports Server (NTRS)

    Rickman, Doug; Owens, Charles; Howard, Rick

    2006-01-01

    The 1989 workshop report, Workshop on Production and Uses of Simulated Lunar Materials, and the NASA Technical Publication, Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage, identified and reinforced the need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant and provides a clear comparison of that characteristic between the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method to make lunar simulants.
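A Figure of Merit, as described, is an algorithm that scores one simulant characteristic against a reference material and carries its own minimum acceptable performance. A minimal sketch, using a hypothetical overlap-based score and invented particle-size histograms (the actual FoM algorithms are defined in the requirements document, not here):

```python
def figure_of_merit(simulant, reference):
    """Hypothetical FoM: fractional overlap of two normalized histograms
    over the same bins (1.0 = identical, 0.0 = disjoint)."""
    s = [x / sum(simulant) for x in simulant]
    r = [x / sum(reference) for x in reference]
    return sum(min(a, b) for a, b in zip(s, r))

def meets_requirement(simulant, reference, minimum=0.9):
    """A FoM carries its own minimum acceptable performance."""
    return figure_of_merit(simulant, reference) >= minimum

# Invented particle-size histograms (same bin edges for both materials)
ref = [5, 20, 40, 25, 10]
sim = [4, 22, 38, 26, 10]
```

Calling `figure_of_merit(sim, ref)` gives 0.97 for these data, so the simulant would pass a 0.9 threshold for this single characteristic.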

  19. Multiplex cDNA quantification method that facilitates the standardization of gene expression data

    PubMed Central

    Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira

    2011-01-01

    Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides merely the relative amounts of gene expression levels. Therefore, valid comparisons of the microarray data require standardized platforms, internal and/or external controls and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these unfavorable limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (Gene expression profiling by DCN-encoding-based analysis). The method was validated by using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008

  20. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a fully three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation, allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the design optimized with the deterministic method.

  1. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

    Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance low enough to capture both the mean flow and the acoustics.
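The core observation, that low-order schemes accumulate dispersion error as a wave propagates over many wavelengths, can be reproduced on a far simpler model problem than the nonlinear Euler equations: linear advection of a sine wave with central differences of increasing order and RK4 time stepping. This is an illustrative sketch, not the paper's 15th-order algorithm:

```python
import numpy as np

def advect(order, n=32, wavelengths=5, cfl=0.2):
    """Propagate u_t + u_x = 0 (one sine wave, periodic domain) over
    `wavelengths` periods with central differences and classical RK4."""
    x = np.arange(n) / n
    u = np.sin(2 * np.pi * x)
    dx, dt = 1.0 / n, cfl / n
    steps = int(round(wavelengths / dt))
    # antisymmetric central-difference stencils: (offsets, coefficients)
    stencils = {2: ([1], [1 / 2]),
                4: ([1, 2], [2 / 3, -1 / 12]),
                6: ([1, 2, 3], [3 / 4, -3 / 20, 1 / 60])}
    offs, coefs = stencils[order]
    def rhs(u):
        d = np.zeros_like(u)
        for k, c in zip(offs, coefs):
            d += c * (np.roll(u, -k) - np.roll(u, k))
        return -d / dx
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    exact = np.sin(2 * np.pi * (x - steps * dt))
    return float(np.max(np.abs(u - exact)))

err2, err4, err6 = advect(2), advect(4), advect(6)
```

At 32 points per wavelength over 5 wavelengths, the 2nd-order scheme's phase lag produces an error around 0.2, while the 6th-order scheme stays several orders of magnitude lower, mirroring the paper's argument for higher-order accuracy over long propagation distances.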

  2. Birth Control in Clinical Trials: Industry Survey of Current Use Practices, Governance, and Monitoring.

    PubMed

    Stewart, J; Breslin, W J; Beyer, B K; Chadwick, K; De Schaepdrijver, L; Desai, M; Enright, B; Foster, W; Hui, J Y; Moffat, G J; Tornesi, B; Van Malderen, K; Wiesner, L; Chen, C L

    2016-03-01

    The Health and Environmental Sciences Institute (HESI) Developmental and Reproductive Toxicology Technical Committee sponsored a pharmaceutical industry survey on current industry practices for contraception use during clinical trials. The objectives of the survey were to improve our understanding of the current industry practices for contraception requirements in clinical trials, the governance processes set up to promote consistency and/or compliance with contraception requirements, and the effectiveness of current contraception practices in preventing pregnancies during clinical trials. Opportunities for improvements in current practices were also considered. The survey results from 12 pharmaceutical companies identified significant variability among companies with regard to contraception practices and governance during clinical trials. This variability was due primarily to differences in definitions, areas of scientific uncertainty or misunderstanding, and differences in company approaches to enrollment in clinical trials. The survey also revealed that few companies collected data in a manner that would allow a retrospective understanding of the reasons for failure of birth control during clinical trials. In this article, suggestions are made for topics where regulatory guidance or scientific publications could facilitate best practice. These include provisions for a pragmatic definition of women of childbearing potential, guidance on how animal data can influence the requirements for male and female birth control, evidence-based guidance on birth control and pregnancy testing regimes suitable for low- and high-risk situations, plus practical methods to ascertain the risk of drug-drug interactions with hormonal contraceptives.

  3. Development and verification of local/global analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.

    1989-01-01

    Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique to global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as is required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.

  4. Progress in Developing Transfer Functions for Surface Scanning Eddy Current Inspections

    NASA Astrophysics Data System (ADS)

    Shearer, J.; Heebl, J.; Brausch, J.; Lindgren, E.

    2009-03-01

    As US Air Force (USAF) aircraft continue to age, additional inspections are required for structural components. The validation of new inspections typically requires a capability demonstration of the method using representative structure with representative damage. To minimize the time and cost required to prepare such samples, electric discharge machined (EDM) notches are commonly used to represent fatigue cracks in validation studies. However, the sensitivity to damage typically changes as a function of damage type. This requires a mathematical relationship to be developed between the responses from the two different flaw types to enable the use of EDM notched samples to validate new inspections. This paper reviews progress to develop transfer functions for surface scanning eddy current inspections of aluminum and titanium alloys found in structural aircraft components. Multiple samples with well characterized grown fatigue cracks and master gages with EDM notches, both with a range of flaw sizes, were used to collect flaw signals with USAF field inspection equipment. Analysis of this empirical data was used to develop a transfer function between the response from the EDM notches and grown fatigue cracks.
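In the simplest case, a transfer function between notch and crack responses could be a least-squares linear map fitted to paired signals at matched flaw sizes. The amplitudes below are invented, and the real empirical function may well be nonlinear; this sketch only illustrates the idea of mapping one flaw-type response onto the other:

```python
def linear_transfer(xs, ys):
    """Least-squares fit ys ~ a*xs + b, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical paired eddy-current amplitudes (arb. units) at matched flaw sizes
notch = [1.0, 2.0, 3.0, 4.0]
crack = [0.6, 1.1, 1.6, 2.1]
a, b = linear_transfer(notch, crack)
predict = lambda v: a * v + b   # crack-equivalent response for a notch signal
```

With such a map, a detection threshold demonstrated on EDM-notch gages can be translated into the equivalent response expected from a grown fatigue crack.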

  5. Measuring nursing essential contributions to quality patient care outcomes.

    PubMed

    Wolgast, Kelly A; Taylor, Katherine; Garcia, Dawn; Watkins, Miko

    2011-01-01

    Workload Management System for Nursing (WMSN) is a core Army Medical Department business system that has provided near real-time, comprehensive nursing workload and manpower data for decision making at all levels for over 25 years. The Army Manpower Requirements and Documentation Agency populates data from WMSN into the Manpower Staffing Standards System (Inpatient module within Automated Staffing Assessment Model). The current system, Workload Management System for Nursing Internet (WMSNi), is an interim solution that requires additional functionalities for modernization and integration at the enterprise level. The expanding missions and approved requirements for WMSNi support strategic initiatives on the Army Medical Command balanced scorecard and require continued sustainment for multiple personnel and manpower business processes for both inpatient and outpatient nursing care. This system is currently being leveraged by the TRICARE Management Activity as an interim multiservice solution, and is being used at 24 Army medical treatment facilities. The evidence-based information provided to Army decision makers through the methods used in the WMSNi will be essential across the Army Medical Command throughout the system's life cycle.

  6. TargetSpy: a supervised machine learning approach for microRNA target prediction.

    PubMed

    Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij

    2010-05-28

    Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences.In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. 
TargetSpy was trained on mouse and performs well in human and Drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool (http://www.targetspy.org).
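The three-class scheme (no seed, seed, conserved seed) with postfiltering can be sketched as below. The seed definition (perfect Watson-Crick match to miRNA nucleotides 2-7) and the example sites and conservation set are illustrative assumptions, not TargetSpy's actual feature set:

```python
def has_seed_match(mirna, site):
    """Hypothetical seed check: perfect Watson-Crick match of the miRNA
    seed (nucleotides 2-7 from the 5' end) within the candidate site."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:7]
    target = "".join(comp[b] for b in reversed(seed))  # reverse complement
    return target in site

def postfilter(predictions, mirna, conserved_sites, klass):
    """Class I: keep all predictions; class II: require a seed match;
    class III: additionally require the site to be conserved."""
    out = list(predictions)
    if klass >= 2:
        out = [p for p in out if has_seed_match(mirna, p)]
    if klass >= 3:
        out = [p for p in out if p in conserved_sites]
    return out

# let-7a miRNA; the two candidate sites and the conservation set are invented
mirna = "UGAGGUAGUAGGUUGUAUAGUU"
sites = ["GCUACCUCAGG", "GGGAAACCCUU"]
conserved = {"GCUACCUCAGG"}
```

Class I keeps both candidate sites, while classes II and III progressively filter them, matching the paper's description of generating the stricter classes by postfiltering the seed-free predictions.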

  7. TargetSpy: a supervised machine learning approach for microRNA target prediction

    PubMed Central

    2010-01-01

    Background Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. Results We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Conclusion Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. 
TargetSpy was trained on mouse and performs well in human and Drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool (http://www.targetspy.org). PMID:20509939

  8. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work forwards the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. 
Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. This work advances current practice by demonstrating the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
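A simple histogram estimator illustrates the property claimed above: mutual information detects nonlinear dependence (here y = x²) that covariance misses, and needs no derivative information. This is a minimal sketch, not the dissertation's implementation:

```python
import math
import random

def mutual_information(xs, ys, bins=10):
    """Histogram estimate of I(X;Y) in nats:
    sum over occupied cells of p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    def binned(vals):
        lo, hi = min(vals), max(vals)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in vals]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for i, j in zip(bx, by):
        pxy[(i, j)] = pxy.get((i, j), 0.0) + 1.0 / n
        px[i] = px.get(i, 0.0) + 1.0 / n
        py[j] = py.get(j, 0.0) + 1.0 / n
    return sum(p * math.log(p / (px[i] * py[j]))
               for (i, j), p in pxy.items())

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(5000)]
y = [v * v for v in x]                               # nonlinear; covariance ~ 0
z = [random.uniform(-1, 1) for _ in range(5000)]     # independent of x
mi_dependent = mutual_information(x, y)
mi_independent = mutual_information(x, z)
```

The y = x² relationship has near-zero covariance with x, yet its estimated mutual information is far above the small positive bias the estimator reports for truly independent data.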

  9. Does the Presence of Scrapie Affect the Ability of Current Statutory Discriminatory Tests To Detect the Presence of Bovine Spongiform Encephalopathy?

    PubMed Central

    Chaplin, M. J.; Vickery, C. M.; Simon, S.; Davis, L.; Denyer, M.; Lockey, R.; Stack, M. J.; O'Connor, M. J.; Bishop, K.; Gough, K. C.; Maddison, B. C.; Thorne, L.; Spiropoulos, J.

    2015-01-01

    Current European Commission (EC) surveillance regulations require discriminatory testing of all transmissible spongiform encephalopathy (TSE)-positive small ruminant (SR) samples in order to classify them as bovine spongiform encephalopathy (BSE) or non-BSE. This requires a range of tests, including characterization by bioassay in mouse models. Since 2005, naturally occurring BSE has been identified in two goats. It has also been demonstrated that more than one distinct TSE strain can coinfect a single animal in natural field situations. This study assesses the ability of the statutory methods as listed in the regulation to identify BSE in a blinded series of brain samples, in which ovine BSE and distinct isolates of scrapie are mixed at various ratios ranging from 99% to 1%. Additionally, these current statutory tests were compared with a new in vitro discriminatory method, which uses serial protein misfolding cyclic amplification (sPMCA). Western blotting consistently detected 50% BSE within a mixture, but at higher dilutions it had variable success. The enzyme-linked immunosorbent assay (ELISA) method consistently detected BSE only when it was present as 99% of the mixture, with variable success at higher dilutions. Bioassay and sPMCA reported BSE in all samples where it was present, down to 1%. sPMCA also consistently detected the presence of BSE in mixtures at 0.1%. While bioassay is the only validated method that allows comprehensive phenotypic characterization of an unknown TSE isolate, the sPMCA assay appears to offer a fast and cost-effective alternative for the screening of unknown isolates when the purpose of the investigation is solely to determine the presence or absence of BSE. PMID:26041899

  10. Analysis of Mesh Distribution Systems Considering Load Models and Load Growth Impact with Loops on System Performance

    NASA Astrophysics Data System (ADS)

    Kumar Sharma, A.; Murty, V. V. S. N.

    2014-12-01

    The distribution system is the final link between the bulk power system and the consumer end. A distinctive load flow solution method is used for the analysis of radial and weakly meshed networks based on Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL). This method has excellent convergence characteristics for both radial and weakly meshed structures and is based on the bus-injection-to-branch-current and branch-current-to-bus-voltage matrices. The main contributions of the paper are: (i) an analysis of a weakly meshed network considering the number of loops added and the impact on losses, kW and kVAr requirements from the system, and voltage profile, (ii) the impact of different load models, a realistic ZIP load model, and load growth on losses, voltage profile, and kVA and kVAr requirements, (iii) the impact of the addition of loops on losses, voltage profile, and kVA and kVAr requirements from the substation, and (iv) a comparison of system performance with the radial distribution system. Voltage stability is a major concern in the planning and operation of power systems. This paper also identifies the critical bus, i.e., the bus most sensitive to voltage collapse, in radial distribution networks. The node having the minimum value of the voltage stability index is the most sensitive node. Voltage stability index values are computed for the meshed network with a number of loops added in the system. The results have been obtained for the IEEE 33 and 69 bus test systems, and also for the radial distribution system for comparison.
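For a purely radial feeder, the bus-injection-to-branch-current / branch-current-to-bus-voltage formulation is equivalent in effect to the classic backward/forward sweep: KCL accumulates branch currents from the leaves toward the substation, then KVL propagates voltage drops back out. A minimal sketch on an invented 3-bus per-unit feeder (not the IEEE 33/69 bus data):

```python
def load_flow(v0, branches, loads, tol=1e-8, max_iter=100):
    """Backward/forward sweep for a radial feeder.
    branches: (from_bus, to_bus, impedance) ordered root-first, root = bus 0;
    loads: complex power demand per bus (per-unit)."""
    n = len(loads)
    v = [complex(v0)] * n
    for _ in range(max_iter):
        # backward sweep (KCL): load currents, then branch currents leaf-to-root
        i_load = [(loads[b] / v[b]).conjugate() for b in range(n)]
        i_branch = {}
        for f, t, z in reversed(branches):
            i_branch[(f, t)] = i_load[t] + sum(
                cur for (a, _), cur in i_branch.items() if a == t)
        # forward sweep (KVL): voltage drops outward from the substation
        v_new = list(v)
        for f, t, z in branches:
            v_new[t] = v_new[f] - z * i_branch[(f, t)]
        converged = max(abs(a - b) for a, b in zip(v, v_new)) < tol
        v = v_new
        if converged:
            break
    return v

# Hypothetical 3-bus radial feeder, all quantities in per-unit
branches = [(0, 1, 0.01 + 0.02j), (1, 2, 0.01 + 0.02j)]
loads = [0, 0.10 + 0.05j, 0.10 + 0.05j]
v = load_flow(1.0, branches, loads)
```

The voltage magnitude drops monotonically away from the substation, which is the profile the paper examines before and after loops are added.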

  11. Role of In Vitro Release Methods in Liposomal Formulation Development: Challenges and Regulatory Perspective.

    PubMed

    Solomon, Deepak; Gupta, Nilesh; Mulla, Nihal S; Shukla, Snehal; Guerrero, Yadir A; Gupta, Vivek

    2017-11-01

    In the past few years, measurement of drug release from pharmaceutical dosage forms has been a focus of extensive research because the release profile obtained in vitro can give an indication of the drug's performance in vivo. Currently, there are no compendial in vitro release methods designed for liposomes owing to a range of experimental challenges, which has created a major hurdle for both development and regulatory acceptance of liposome-based drug products. In this paper, we review the current techniques that are most often used to assess in vitro drug release from liposomal products; these include the membrane diffusion techniques (dialysis, reverse dialysis, fractional dialysis, and microdialysis), the sample-and-separate approach, the in situ method, the continuous flow, and the modified United States Pharmacopeia methods (USP I and USP IV). We discuss the principles behind each of the methods and the criteria that assist in choosing the most appropriate method for studying drug release from a liposomal formulation. Also, we have included information concerning the current regulatory requirements for liposomal drug products in the United States and in Europe. In light of increasing costs of preclinical and clinical trials, applying a reliable in vitro release method could serve as a proxy to expensive in vivo bioavailability studies. Graphical Abstract Appropriate in-vitro drug release test from liposomal products is important to predict the in-vivo performance.

  12. Meeting the measurement uncertainty and traceability requirements of ISO/IEC standard 17025 in chemical analysis.

    PubMed

    King, B

    2001-11-01

    The new laboratory accreditation standard, ISO/IEC 17025, reflects current thinking on good measurement practice by requiring more explicit and more demanding attention to a number of activities. These include client interactions, method validation, traceability, and measurement uncertainty. Since the publication of the standard in 1999 there has been extensive debate about its interpretation. It is the author's view that if good quality practices are already in place and if the new requirements are introduced in a manner that is fit for purpose, the additional work required to comply with the new requirements can be expected to be modest. The paper argues that the rigour required in addressing the issues should be driven by customer requirements and the factors that need to be considered in this regard are discussed. The issues addressed include the benefits, interim arrangements, specifying the analytical requirement, establishing traceability, evaluating the uncertainty and reporting the information.
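Evaluating measurement uncertainty under ISO/IEC 17025 typically follows the GUM approach: combine the standard uncertainties of uncorrelated inputs in quadrature, weighted by sensitivity coefficients, then apply a coverage factor. The uncertainty budget below is hypothetical:

```python
import math

def combined_standard_uncertainty(components):
    """GUM-style combination for uncorrelated inputs:
    u_c = sqrt(sum((c_i * u_i)^2)) with sensitivity coefficients c_i."""
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# Hypothetical budget for a concentration result (mg/L):
# (sensitivity coefficient, standard uncertainty)
budget = [
    (1.0, 0.02),   # calibration standard
    (1.0, 0.05),   # method repeatability
    (1.0, 0.01),   # volumetric glassware
]
u_c = combined_standard_uncertainty(budget)
U = 2 * u_c   # expanded uncertainty, coverage factor k = 2 (~95 % level)
```

The budget makes visible which component dominates (here the repeatability term), which is exactly the "fit for purpose" rigour the paper argues should be driven by customer requirements.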

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Liping; Zhu, Fulong, E-mail: zhufulong@hust.edu.cn; Duan, Ke

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  14. Ultrasonic power measurement system based on acousto-optic interaction.

    PubMed

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
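The intensity-extraction step can be sketched on a 1-D slice of the diffraction image: binarize against a threshold, segment contiguous bright runs (the diffraction orders), and integrate the intensity of each. The synthetic Gaussian profile below stands in for a real camera image, and the actual software's filtering and contour-extraction stages are omitted:

```python
import numpy as np

def order_intensities(profile, thresh_frac=0.2):
    """Segment a 1-D diffraction profile into contiguous bright runs
    (diffraction orders) and integrate the intensity of each run."""
    t = thresh_frac * profile.max()
    mask = profile > t                 # binarization
    runs, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return [profile[a:b].sum() for a, b in runs]

# Synthetic profile: bright zeroth order flanked by two weaker first orders
x = np.arange(300)
gauss = lambda c, a: a * np.exp(-((x - c) / 5.0) ** 2)
profile = gauss(150, 1.0) + gauss(100, 0.4) + gauss(200, 0.4)
intensities = order_intensities(profile)
```

The per-order intensities are the quantities that carry the ultrasonic power information; relating their ratios to the acoustic power is the physics step the paper's calibration performs.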

  15. Ultrasonic power measurement system based on acousto-optic interaction

    NASA Astrophysics Data System (ADS)

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  16. Diode Laser Measurements of Concentration and Temperature in Microgravity Combustion

    NASA Technical Reports Server (NTRS)

    Silver, Joel A.; Kane, Daniel J.

    1999-01-01

    Diode laser absorption spectroscopy provides a direct method of determining species concentration and local gas temperature in combustion flames. Under microgravity conditions, diode lasers are particularly suitable, given their compact size, low mass, and low power requirements. The development of diode laser-based sensors for gas detection in microgravity is presented, detailing measurements of molecular oxygen. Current progress of this work and future application possibilities for these methods on the International Space Station are discussed.
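Converting a measured absorption into a species concentration rests on the Beer-Lambert law. The 2 % absorption, path length, and effective absorptivity below are invented for illustration, not measured values from the sensor:

```python
import math

def concentration_from_transmission(i_t, i_0, epsilon, path_length):
    """Beer-Lambert: A = -log10(I_t / I_0) = epsilon * l * c, solved for c."""
    absorbance = -math.log10(i_t / i_0)
    return absorbance / (epsilon * path_length)

# Hypothetical O2 measurement: 2 % absorption over a 10 cm path, with an
# assumed effective molar absorptivity of 0.1 L mol^-1 cm^-1
c = concentration_from_transmission(i_t=0.98, i_0=1.00,
                                    epsilon=0.1, path_length=10.0)
```

Temperature is typically inferred separately, from the ratio of absorbances on two transitions with different lower-state energies; only the concentration step is sketched here.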

  17. Quantitative monitoring of lipid accumulation over time in cultured adipocytes as function of culture conditions: toward controlled adipose tissue engineering.

    PubMed

    Or-Tzadikario, Shira; Sopher, Ran; Gefen, Amit

    2010-10-01

    Adipose tissue engineering is investigated for native fat substitutes and wound healing model systems. Research and clinical applications of bioartificial fat require a quantitative and objective method to continuously measure adipogenesis in living cultures as opposed to currently used culture-destructive techniques that stain lipid droplet (LD) accumulation. To allow standardization, automatic quantification of LD size is further needed, but currently LD size is measured mostly manually. We developed an image processing-based method that does not require staining to monitor adipose cell maturation in vitro nondestructively using optical micrographs taken consecutively during culturing. We employed our method to monitor LD accumulation in 3T3-L1 and mesenchymal stem cells over 37 days. For each cell type, percentage of lipid area, number of droplets per cell, and droplet diameter were obtained every 2-3 days. In 3T3-L1 cultures, high insulin concentration (10 μg/mL) yielded a significantly different (p < 0.01) time course of all three outcome measures. In mesenchymal stem cell cultures, high fetal bovine serum concentration (12.5%) produced significantly more lipid area (p < 0.01). Our method was able to successfully characterize time courses and extents of adipogenesis and is useful for a wide range of applications testing the effects of biochemical, mechanical, and thermal stimulations in tissue engineering of bioartificial fat constructs.

  18. Inflight Microbial Monitoring- An Alternative Method to Culture Based Detection Currently Used on the International Space Station

    NASA Technical Reports Server (NTRS)

    Khodadad, Christina L.; Birmele, Michele N.; Roman, Monsi; Hummerick, Mary E.; Smith, David J.; Wheeler, Raymond M.

    2015-01-01

    Previous research has shown that potentially destructive microorganisms and human pathogens have been detected on the International Space Station (ISS). The likelihood of introducing new microorganisms occurs with every exchange of crew or addition of equipment or supplies. Microorganisms introduced to the ISS are readily transferred between crew and subsystems (i.e. ECLSS, environmental control and life support systems). Current microbial characterization methods require enrichment of microorganisms and at least a 48-hour incubation time. This increases the microbial load while detecting only a limited number of the total microorganisms. The culture based method detects approximately 1-10% of the total organisms present and provides no identification. To identify and enumerate ISS microbes requires that samples be returned to Earth for complete analysis. Therefore, a more expedient, low-cost, in-flight method of microbial detection, identification, and enumeration is warranted. The RAZOR EX, a ruggedized, commercial off the shelf, real-time PCR field instrument was tested for its ability to detect microorganisms at low concentrations within one hour. Escherichia coli, Salmonella enterica Typhimurium, and Pseudomonas aeruginosa were detected at low levels using real-time DNA amplification. Total heterotrophic counts could also be detected using a 16S gene marker that can identify up to 98% of all bacteria. To reflect viable cells found in the samples, RNA was also detectable using a modified, single-step reverse transcription reaction.
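
The real-time PCR detection described above amounts to finding the cycle at which amplification fluorescence first crosses a detection threshold (the Ct value). Below is a generic, hedged sketch of that computation; the RAZOR EX's actual call logic is proprietary, and the idealized doubling curve is an assumption:

```python
def cycle_threshold(fluorescence, threshold):
    """Return the fractional PCR cycle at which fluorescence first
    crosses the detection threshold (Ct), or None if it never does.
    Uses linear interpolation between the bracketing cycles."""
    for i in range(1, len(fluorescence)):
        lo, hi = fluorescence[i - 1], fluorescence[i]
        if lo < threshold <= hi:
            return (i - 1) + (threshold - lo) / (hi - lo)
    return None

# Idealized amplification: signal doubles each cycle from a small start
curve = [1e-4 * 2 ** c for c in range(40)]
ct = cycle_threshold(curve, threshold=0.1)
```

A lower starting template concentration crosses the threshold later (a larger Ct), which is what makes threshold-crossing usable for enumeration as well as detection.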

  18. Si/SiGe n-type resonant tunneling diodes fabricated using in situ hydrogen cleaning

    NASA Astrophysics Data System (ADS)

    Suet, Z.; Paul, D. J.; Zhang, J.; Turner, S. G.

    2007-05-01

    In situ hydrogen cleaning to reduce the surface segregation of n-type dopants in SiGe epitaxy has been used to fabricate Si/SiGe resonant tunneling diodes in a joint gas-source chemical vapor deposition and molecular beam epitaxy system. Diodes fabricated without the in situ clean demonstrate linear current-voltage characteristics, while a 15 min hydrogen clean produces negative differential resistance with peak-to-valley current ratios up to 2.2 and peak current densities of 5.0 A/cm² at 30 K. Analysis of the valley current and the band structure of the devices suggests methods for increasing the operating temperature of Si/SiGe resonant tunneling diodes as required for applications.

  1. Helicopter noise prediction - The current status and future direction

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Farassat, F.

    1992-01-01

    The paper takes stock of progress, assesses current prediction capabilities, and forecasts the direction of future helicopter noise prediction research. The acoustic analogy approach, specifically theories based on the Ffowcs Williams-Hawkings equation, is the most widely used for deterministic noise sources. Thickness and loading noise can be routinely predicted given good blade motion and blade loading inputs. Blade-vortex interaction (BVI) noise can also be predicted well with measured input data, but prediction of airloads with the high spatial and temporal resolution required for BVI is still difficult. Current semiempirical broadband noise predictions are useful and reasonably accurate. New prediction methods based on a Kirchhoff formula and on direct computation appear very promising but are currently very demanding computationally.

  2. Computational problems and signal processing in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard

    1991-01-01

    The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10 exp 11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.

  3. Comparative analysis of the current payment system for hospital services in Serbia and projected payments under diagnostic related groups system in urology.

    PubMed

    Babić, Uroš; Soldatović, Ivan; Vuković, Dejana; Milićević, Milena Šantrić; Stjepanović, Mihailo; Kojić, Dejan; Argirović, Aleksandar; Vukotić, Vinka

    2015-03-01

    Global budget per calendar year is a traditional method of funding hospitals in Serbia. Diagnosis related groups (DRG) is a method of hospital payment based on classification of patients into groups with clinically similar problems and similar utilization of hospital resources. The aim of this study was to compare the current method of hospital services payment with the costs projected under the DRG payment method in urology. The data were obtained from the information system used in the Clinical Hospital Center "Dr. Dragiša Mišović"--Dedinje in Belgrade, Serbia; the implemented hospital information system was the main criterion for selection of healthcare institutions. The study included 994 randomly selected patients treated surgically and conservatively in 2012. Average costs under the current payment method were slightly higher than those projected by DRG, but their variability was twice as high (54,111 ± 69,789 vs. 53,434 ± 32,509; p < 0.001). The univariate analysis showed that both the current and the projected DRG payments correlated most strongly with the number of days of hospitalization (ρ = 0.842, p < 0.001, and ρ = 0.637, p < 0.001, respectively). Multivariate regression models confirmed the influence of the number of hospitalization days on costs under the current payment system (β = 0.843, p < 0.001) and under the projected DRG payment system (β = 0.737, p < 0.001); the same predictor was crucial for the difference between the current and the projected DRG payment methods (β = 0.501, p < 0.001). Payment under the DRG system is administratively more complex because it requires detailed and standardized coding of diagnoses and procedures, as well as information on the average consumption of resources (costs) per DRG.
Given that aggregate costs of treatment under the two hospital payment methods compared in the study do not differ significantly, a focus on minor surgeries, under both the current payment method and the introduced DRG system, would be far more cost-effective for a hospital, since large variations in treatment performance (days of hospitalization and complications) and, consequently, invoiced amounts would be reduced.

  4. Human Factors Engineering as a System in the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

    In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering (HFE) processes. That project identified the key phases of the current NASA design lifecycle and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates, and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps).
For example, medical operations scenarios have been generated for lunar habitation in order to identify HSI requirements for the lunar communications architecture. Throughout these ConOps exercises, HFE personnel are testing various tools and methodologies identified in the literature. A key part of the effort is the identification of optimal processes, methods, and tools for these early development-phase activities, such as ConOps, requirements development, and early conceptual design. An overview of the activities completed thus far, as well as the tools and methods investigated, will be presented.

  5. When Less Is More in Cognitive Diagnosis: A Rapid Online Method for Diagnosing Learner Task-Specific Expertise

    ERIC Educational Resources Information Center

    Kalyuga, Slava

    2008-01-01

    Rapid cognitive diagnosis allows measuring current levels of learner domain-specific knowledge in online learning environments. Such measures are required for individualizing instructional support in real time, as students progress through a learning session. This article describes 2 experiments designed to validate a rapid online diagnostic…

  6. Polydiacetylene-Based Liposomes: An "Optical Tongue" for Bacteria Detection and Identification

    ERIC Educational Resources Information Center

    West, Matthew R.; Hanks, Timothy W.; Watson, Rhett T.

    2009-01-01

    Food- and water-borne bacteria are a major health concern worldwide. Current detection methods are time-consuming and require sophisticated equipment that is not always readily available. However, new techniques based on nanotechnology are under development that will result in a new generation of sensors. In this experiment, liposomes are…

  7. A Study on the Training Mode of Electronic Application-Oriented Undergraduate with Industry Needs

    ERIC Educational Resources Information Center

    Wang, Zhonghua; Cheng, Lifang; Wang, Hao

    2017-01-01

    The electronic industry is an economic pillar in China. Due to Moore's Law, the industry requires continuous development and innovation. In order to achieve these goals, the cultivation of electronic application-oriented undergraduates is essential. However, at present, innovative educational concepts and teaching methods are lagging behind so…

  8. Bayesian Statistics in Educational Research: A Look at the Current State of Affairs

    ERIC Educational Resources Information Center

    König, Christoph; van de Schoot, Rens

    2018-01-01

    The ability of a scientific discipline to build cumulative knowledge depends on its predominant method of data analysis. A steady accumulation of knowledge requires approaches which allow researchers to consider results from comparable prior research. Bayesian statistics is especially relevant for establishing a cumulative scientific discipline,…

  9. Electrophoresis for biological production

    NASA Technical Reports Server (NTRS)

    Mccreight, L. R.

    1977-01-01

    Preparative electrophoresis may provide a unique method for meeting ever more stringent purity requirements. Prolonged near-zero gravity in space may permit the operation of preparative electrophoresis equipment with 100 times greater throughput than is currently available. Some experiments with influenza virus antigen, erythropoietin, and antihemophilic factor, along with process and economic projections, are briefly reviewed.

  10. Impacts of U.S. Export Control Policies on Science and Technology Activities and Competitiveness

    DTIC Science & Technology

    2009-02-25

    coffee table. However, under the current export control regime, the stand was considered ‘ITAR hardware’ and we were required to have two security...should survive without an effective method for pruning items from the control lists when they no longer serve a significant definable national

  11. Perceptions about the Construction of Academic and Professional Competencies in Psychologists

    ERIC Educational Resources Information Center

    Arias, Jesus de la Fuente; Justicia, Fernando Justicia; Casanova, Pedro Felix; Trianes, Maria Victoria

    2005-01-01

    Introduction: Evaluating competencies required for professional practice is a matter of particular current interest. Its importance lies in improvements that can be made in both preparatory and ongoing training and development processes. This paper summarizes results obtained from a recent investigation regarding this issue. Method: A total of 76…

  12. Re-Designing University Courses to Support Collaborative Knowledge Creation Practices

    ERIC Educational Resources Information Center

    Lakkala, Minna; Toom, Auli; Ilomäki, Liisa; Muukkonen, Hanni

    2015-01-01

    Higher education institutions should not only aim to educate academic experts who master their own fields, but also give their students generic skills important in the current society. New teaching methods are required to support the development of such skills. The study examined how a group of voluntary university lecturers re-designed their…

  13. Health supply chain management.

    PubMed

    Zimmerman, Rolf; Gallagher, Pat

    2010-01-01

    This chapter gives an educational overview of: * The actual application of supply chain practice and disciplines required for service delivery improvement within the current health environment. * A rationale for the application of Supply Chain Management (SCM) approaches to the Health sector. * The tools and methods available for supply chain analysis and benchmarking. * Key supply chain success factors.

  14. Ethics Education in Australian Preservice Teacher Programs: A Hidden Imperative?

    ERIC Educational Resources Information Center

    Boon, Helen J.; Maxwell, Bruce

    2016-01-01

    This paper provides a snapshot of the current approach to ethics education in accredited Australian pre-service teacher programs. Methods included a manual calendar search of ethics related subjects required in teacher programs using a sample of 24 Australian universities and a survey of 26 university representatives. Findings show a paucity of…

  15. Determining RNA quality for NextGen sequencing: some exceptions to the gold standard rule of 23S to 16S rRNA ratio

    USDA-ARS?s Scientific Manuscript database

    Using next-generation-sequencing technology to assess entire transcriptomes requires high quality starting RNA. Currently, RNA quality is routinely judged using automated microfluidic gel electrophoresis platforms and associated algorithms. Here we report that such automated methods generate false-n...

  16. Techniques employed by the NASA White Sands Test Facility to ensure oxygen system component safety

    NASA Technical Reports Server (NTRS)

    Stradling, J. S.; Pippen, D. L.; Frye, G. W.

    1983-01-01

    Methods of ascertaining the safety and suitability of a variety of oxygen system components are discussed. Additionally, qualification and batch control requirements for soft goods in oxygen systems are presented. Current oxygen system component qualification test activities in progress at White Sands Test Facility are described.

  17. 17 CFR 250.26 - Financial statement and recordkeeping requirements for registered holding companies and...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... to form and content of financial statements; and (2) Shall make and keep current accounts, books and... which it issues to stockholders. Such accounts, books and other records shall be maintained in... subsidiary company thereof shall hereafter follow the equity method of accounting for investments in any...

  18. Developing Systemic Theories Requires Formal Methods

    ERIC Educational Resources Information Center

    Gobet, Fernand

    2012-01-01

    Ziegler and Phillipson (Z&P) advance an interesting and ambitious proposal, whereby current analytical/mechanistic theories of gifted education are replaced by systemic theories. In this commentary, the author focuses on the pros and cons of using systemic theories. He argues that Z&P's proposal both goes too far and not far enough. The future of…

  19. A rapid colorimetric assay for mold spore germination using XTT tetrazolium salt

    Treesearch

    Carol A. Clausen; Vina W. Yang

    2011-01-01

    Current laboratory test methods to measure the efficacy of new mold inhibitors are time-consuming; some require specialized test equipment, and ratings are subjective. Rapid, simple quantitative assays to measure the efficacy of mold inhibitors are needed. A quantitative, colorimetric microassay was developed using XTT tetrazolium salt to metabolically assess mold spore...

  20. Twenty-First Century Literacy: A Matter of Scale from Micro to Mega

    ERIC Educational Resources Information Center

    Brown, Abbie; Slagter van Tryon, Patricia J.

    2010-01-01

    Twenty-first century technologies require educators to look for new ways to teach literacy skills. Current communication methods are combinations of traditional and newer, network-driven forms. This article describes the changes twenty-first century technologies cause in the perception of time, size, distance, audience, and available data, and…

  1. An Economic Analysis of College Scholarship Policy.

    ERIC Educational Resources Information Center

    Owen, John D.

    A national scholarship policy based on a cost-benefit analysis of the social value of education is proposed as one method for improving current patterns of allocating US college scholarships and tuition funds. A central college subsidy agency, operating on a limited budget, would be required to allocate funds according to the maximum overall…

  2. Bed Capacity Planning Using Stochastic Simulation Approach in Cardiac-surgery Department of Teaching Hospitals, Tehran, Iran

    PubMed Central

    TORABIPOUR, Amin; ZERAATI, Hojjat; ARAB, Mohammad; RASHIDIAN, Arash; AKBARI SARI, Ali; SARZAIEM, Mahmuod Reza

    2016-01-01

    Background: To determine the number of hospital beds required, using a stochastic simulation approach, in cardiac surgery departments. Methods: This study was performed from Mar 2011 to Jul 2012 in three phases: first, data collection from 649 patients in the cardiac surgery departments of two large teaching hospitals (in Tehran, Iran); second, statistical analysis and formulation of a multivariate linear regression model to determine factors that affect patients' length of stay; third, development of a stochastic simulation system (from admission to discharge) based on key parameters to estimate required bed capacity. Results: The current cardiac surgery department, with 33 beds, can admit all patients on 90.7% of days (4535 d) and will need more than 33 beds on only 9.3% of days (the efficient cut-off point). According to the simulation method, the studied cardiac surgery department will require 41–52 beds to admit all patients over the next 12 years. Finally, a one-day reduction in length of stay would decrease the number of beds needed by two annually. Conclusion: Variation in length of stay and its affecting factors can affect the number of required beds. Statistical and stochastic simulation models are applicable and useful methods to estimate and manage hospital beds based on key hospital parameters. PMID:27957466
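
The admission-to-discharge stochastic simulation described above can be sketched generically; the Poisson arrival rate, the exponential length-of-stay distribution, and the 90th-percentile capacity rule below are illustrative assumptions, not the study's fitted parameters:

```python
import random

def simulate_occupancy(days, mean_arrivals_per_day, mean_los_days, seed=0):
    """Monte Carlo sketch of daily bed occupancy: Poisson admissions,
    exponential length of stay (rounded to whole days, minimum 1).
    Rates are illustrative, not the study's estimates."""
    rng = random.Random(seed)
    discharges = {}            # day -> number of beds freed that day
    census, out = 0, []
    for day in range(days):
        census -= discharges.pop(day, 0)
        # Poisson arrivals in one day via summed exponential gaps
        arrivals, t = 0, rng.expovariate(mean_arrivals_per_day)
        while t < 1.0:
            arrivals += 1
            t += rng.expovariate(mean_arrivals_per_day)
        for _ in range(arrivals):
            los = max(1, int(rng.expovariate(1.0 / mean_los_days)))
            discharges[day + los] = discharges.get(day + los, 0) + 1
        census += arrivals
        out.append(census)
    return out

census = simulate_occupancy(days=365 * 5, mean_arrivals_per_day=3.0,
                            mean_los_days=9.0)
# Capacity that covers roughly 90% of days (the "efficient cut-off" idea)
beds_90th = sorted(census)[int(0.9 * len(census))]
```

By Little's law the mean census is roughly arrivals/day × mean LOS, and the chosen percentile of the simulated census distribution gives a capacity target analogous to the study's cut-off point.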

  3. Sensor-less pseudo-sinusoidal drive for a permanent-magnet brushless ac motor

    NASA Astrophysics Data System (ADS)

    Liu, Li-Hsiang; Chern, Tzuen-Lih; Pan, Ping-Lung; Huang, Tsung-Mou; Tsay, Der-Min; Kuang, Jao-Hwa

    2012-04-01

    Precise rotor-position information is required for a permanent-magnet brushless ac motor (BLACM) drive. In the conventional sinusoidal drive method, either an encoder or a resolver is usually employed. For position sensor-less vector control schemes, the rotor flux estimate and torque components are obtained by complicated coordinate transformations. These computationally intensive methods are susceptible to current distortions and parameter variations. To reduce this complexity, this work presents a sensor-less pseudo-sinusoidal drive scheme with speed control for a three-phase BLACM. Based on the sinusoidal drive scheme, a floating period of each phase current is inserted for back electromotive force detection. The zero-crossing point is determined directly by the proposed scheme, and the rotor magnetic position and rotor speed can be estimated simultaneously. Several experiments for various active angle periods were undertaken. Furthermore, a current feedback control is included to minimize and compensate for torque fluctuation. The experimental results show that the proposed method has competitive performance compared with conventional drive methods for BLACMs. The proposed scheme is straightforward, bringing the benefits of sensor-less drive while negating the need for coordinate transformations in the operating process.
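
The back-EMF zero-crossing detection at the heart of such schemes can be sketched as a threshold-crossing search over samples taken during the floating period; the synthetic sine waveform below is an assumption for illustration, not measured motor data:

```python
import math

def zero_crossing_index(samples):
    """Find the sample index where the floating phase's back-EMF
    crosses zero (negative-to-positive), with linear interpolation.
    A sketch of the detection step only; the real drive infers rotor
    position and speed from the timing of successive crossings."""
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i]:
            # interpolate the fractional crossing point
            return (i - 1) + samples[i - 1] / (samples[i - 1] - samples[i])
    return None

# Synthetic back-EMF: one electrical cycle sampled at 100 points,
# phase-shifted so the rising zero crossing falls at sample 12.5
emf = [math.sin(2 * math.pi * (k / 100.0) - math.pi / 4) for k in range(100)]
zc = zero_crossing_index(emf)
```

Successive crossing times give the electrical period, hence rotor speed; the crossing phase gives commutation timing without an encoder or resolver.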

  4. Innovations in energy expenditure assessment.

    PubMed

    Achamrah, Najate; Oshima, Taku; Genton, Laurence

    2018-06-15

    Optimal nutritional therapy has been associated with better clinical outcomes and requires providing energy as close as possible to measured energy expenditure. We reviewed current innovations in energy expenditure assessment in humans, focusing on indirect calorimetry and other new alternative methods. Although considered the reference method for measuring energy expenditure, the use of indirect calorimetry is currently limited by the lack of an adequate device. However, recent technical developments may allow a broader use of indirect calorimetry for in-patients and out-patients. An ongoing international academic initiative to develop a new indirect calorimeter aims to provide innovative and affordable technical solutions for many of the current limitations of indirect calorimetry. New alternatives to indirect calorimetry, including CO2 measurements in mechanically ventilated patients, isotopic approaches, and accelerometry-based fitness equipment, show promise but have either been insufficiently studied or are inaccurate compared with indirect calorimetry. Therefore, to date, energy expenditure measured by indirect calorimetry remains the gold standard to guide nutritional therapy. Some new innovative methods show promise for energy expenditure assessment but still need to be validated. There is an ongoing need for an easy-to-use, accurate, and affordable indirect calorimeter for daily use in in-patients and out-patients.

  5. Financial impact of nursing professionals staff required in an Intensive Care Unit 1

    PubMed Central

    de Araújo, Thamiris Ricci; Menegueti, Mayra Gonçalves; Auxiliadora-Martins, Maria; Castilho, Valéria; Chaves, Lucieli Dias Pedreschi; Laus, Ana Maria

    2016-01-01

    ABSTRACT Objective: to calculate the cost of the average nursing care time spent and required by patients in the Intensive Care Unit (ICU) and the financial expense of correctly sizing the nursing staff. Method: a descriptive, quantitative case study conducted with adult ICU patients. We used the Nursing Activities Score workload index; the average care time spent and required and the number of professionals required were calculated using equations, and from these data, together with the professionals' salary composition and contractual monthly hours, we calculated the direct labor cost of nursing. Results: the monthly cost of the average number of available professionals was US$ 35,763.12, corresponding to 29.6 professionals, while the staff required for 24 hours of care is 42.2 professionals, with a monthly cost of US$ 50,995.44. Conclusion: the shortfall of nursing professionals was 30%, and the monthly financial expense of correcting the staffing structure is US$ 15,232.32, which corresponds to an increase of 42.59% over the amounts currently paid by the institution. PMID:27878219
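
The staffing and cost calculation described (NAS workload, contractual hours, salary composition) can be sketched as follows; the NAS values, contract hours, and salary figure are hypothetical placeholders, not the study's data:

```python
def required_nurses(nas_scores, contract_hours_per_month, days=30):
    """A NAS point approximates 1% of one professional's time per 24 h,
    so required daily care hours = sum(NAS)/100 * 24. Dividing the
    monthly care hours by contractual monthly hours gives the staff
    requirement. Inputs below are hypothetical."""
    daily_hours = sum(nas_scores) / 100.0 * 24.0
    monthly_hours = daily_hours * days
    return monthly_hours / contract_hours_per_month

def monthly_labor_cost(n_professionals, salary_per_professional):
    """Direct labor cost: staff count times monthly salary cost."""
    return n_professionals * salary_per_professional

# Hypothetical NAS scores for 8 ICU patients on one day
staff = required_nurses([60, 80, 55, 70, 90, 65, 75, 50],
                        contract_hours_per_month=180)
cost = monthly_labor_cost(staff, salary_per_professional=1208.0)
```

The gap between this required figure and the available staff, multiplied by salary cost, is the kind of adaptation expense the study reports.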

  6. New methods in hydrologic modeling and decision support for culvert flood risk under climate change

    NASA Astrophysics Data System (ADS)

    Rosner, A.; Letcher, B. H.; Vogel, R. M.; Rees, P. S.

    2015-12-01

    Assessing culvert flood vulnerability under climate change poses an unusual combination of challenges. We seek a robust method of planning for an uncertain future, and therefore must consider a wide range of plausible future conditions. Culverts in our case study area, northwestern Massachusetts, USA, are predominantly found in small, ungaged basins. The need to predict flows both at numerous sites and under numerous plausible climate conditions requires a statistical model with low data and computational requirements. We present a statistical streamflow model that is driven by precipitation and temperature, allowing us to predict flows without reliance on reference gages of observed flows. The hydrological analysis is used to determine each culvert's risk of failure under current conditions. We also explore the hydrological response to a range of plausible future climate conditions. These results are used to determine the tolerance of each culvert to future increases in precipitation. In a decision support context, current flood risk as well as tolerance to potential climate changes are used to provide a robust assessment and prioritization for culvert replacements.

  7. A comparison of two neural network schemes for navigation

    NASA Technical Reports Server (NTRS)

    Munro, Paul W.

    1989-01-01

    Neural networks have been applied to tasks in several areas of artificial intelligence, including vision, speech, and language. Relatively little work has been done in the area of problem solving. Two approaches to path-finding are presented, both using neural network techniques and both requiring a training period. Training under the back propagation (BPL) method was accomplished by presenting representations of (current position, goal position) pairs as input and appropriate actions as output. The Hebbian/interactive activation (HIA) method uses the Hebbian rule to associate points that are nearby. A path to a goal is found by activating a representation of the goal in the network and processing until the current position is activated above some threshold level. BPL failed to learn except in a very trivial fashion that is equivalent to table lookup techniques. HIA performed much better and required storage of fewer weights. In drawing a comparison, it is important to note that back propagation techniques depend critically upon the forms of representation used and can be sensitive to simulation parameters; hence the BPL technique may yet yield strong results.
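
The HIA procedure described, Hebbian association of nearby points followed by activation spreading from the goal until the current position is activated, can be sketched on a small grid world; the decay factor, threshold, and grid layout are illustrative assumptions, not the paper's setup:

```python
def hia_path(grid_size, obstacles, start, goal, decay=0.8, threshold=1e-6):
    """Hebbian/interactive-activation pathfinding sketch: adjacent free
    cells get associative links (the 'Hebbian' step), activation is
    injected at the goal and spread until the current position becomes
    active, then the path climbs the activation gradient."""
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)
             if (r, c) not in obstacles]
    neigh = {p: [] for p in cells}
    for (r, c) in cells:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (r + dr, c + dc)
            if q in neigh:
                neigh[(r, c)].append(q)  # associate nearby points
    act = {p: 0.0 for p in cells}
    act[goal] = 1.0
    while act[start] < threshold:        # spread activation from the goal
        act = {p: max([act[p]] + [decay * act[q] for q in neigh[p]])
               for p in cells}
    path, pos = [start], start
    while pos != goal:                   # climb the activation gradient
        pos = max(neigh[pos], key=act.get)
        path.append(pos)
    return path

# A wall in row 1 forces the path around either side
path = hia_path(5, obstacles={(1, 1), (1, 2), (1, 3)},
                start=(0, 2), goal=(4, 2))
```

Because activation decays geometrically with distance from the goal, climbing it step by step recovers a shortest obstacle-avoiding path.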

  8. Vortex Formation in the Wake of Dark Matter Propulsion

    NASA Astrophysics Data System (ADS)

    Robertson, G. A.; Pinheiro, M. J.

    Future spaceflight will require a new theory of propulsion, specifically one that does not require mass ejection. A new theory is proposed that uses the general view that closed currents pervade the entire universe and, in particular, that there is a cosmic mechanism to expel matter to large astronomical distances involving vortex currents, as seen with blazars and black holes. At the terrestrial level, force-producing vortices have been related to the motion of wings (e.g., birds, duck paddles, a fish's tail). In this paper, vortex structures are shown to exist in the streamlines aft of a spaceship moving at high velocity in the vacuum. This is accomplished using the density excitation method per a modified Chameleon Cosmology model. This vortex structure is then shown to have similarities to spacetime models such as warp drives and wormholes, giving rise to a natural extension of Hawking and Unruh radiation, which provides the propulsive method for space travel: virtual electron-positron pairs, absorbed by the gravitational expansion forward of the spaceship, emerge from an annular vortex field aft of the spaceship as real particles, akin to propellant mass ejection in conventional rocket theory.

  9. LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.

    PubMed

    Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin

    2012-01-01

    Current treatments of heart rhythm disorders require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning, but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology model at near real time, especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of the patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using the Lattice-Boltzmann method within the domain defined by the level set. Experiments on synthetic cases, on a data set from CESC'10, and on one patient with myocardium scar showed that LBM-EP provides results comparable to an FEM implementation while being 10-45 times faster. Fast, accurate, scalable, and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.
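    The diffusion half of the scheme — spreading the transmembrane potential over a Cartesian grid with a Lattice-Boltzmann update — can be illustrated with a minimal sketch. This is not the LBM-EP code itself: the D2Q5 lattice, the relaxation time `tau`, and the periodic boundaries are illustrative assumptions, and the real method couples this step to a cardiac cell model evaluated node-wise inside a level-set domain.

    ```python
    import numpy as np

    def lbm_diffuse(phi0, steps=100, tau=1.0):
        """Diffuse a scalar field (e.g. transmembrane potential) with a
        D2Q5 lattice-Boltzmann scheme on a periodic Cartesian grid."""
        # D2Q5 lattice: a rest direction plus the four axis directions.
        c = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
        w = [1 / 3, 1 / 6, 1 / 6, 1 / 6, 1 / 6]
        # Start the distribution functions at equilibrium.
        f = [wi * phi0.copy() for wi in w]
        for _ in range(steps):
            phi = sum(f)  # macroscopic field is the sum over directions
            for i, ((cx, cy), wi) in enumerate(zip(c, w)):
                f[i] += (wi * phi - f[i]) / tau                        # BGK collision
                f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)  # streaming
        return sum(f)
    ```

    Because collision and streaming are purely local, node-wise operations, the scheme maps naturally onto regular grids without meshing, which is the source of the speed advantage the abstract reports.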

  10. A comparison of two neural network schemes for navigation

    NASA Technical Reports Server (NTRS)

    Munro, Paul

    1990-01-01

    Neural networks have been applied to tasks in several areas of artificial intelligence, including vision, speech, and language. Relatively little work has been done in the area of problem solving. Two approaches to path-finding are presented, both using neural network techniques. Both techniques require a training period. Training under the back-propagation learning (BPL) method was accomplished by presenting representations of (current position, goal position) pairs as input and appropriate actions as output. The Hebbian/interactive activation (HIA) method uses the Hebbian rule to associate points that are nearby. A path to a goal is found by activating a representation of the goal in the network and processing until the current position is activated above some threshold level. BPL failed to learn except in a very trivial fashion that is equivalent to table lookup techniques. HIA performed much better and required storage of fewer weights. In drawing a comparison, it is important to note that back-propagation techniques depend critically upon the forms of representation used and can be sensitive to parameters in the simulations; hence the BPL technique may yet yield strong results.
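    The HIA scheme described above — Hebbian association of nearby points, activation spread from the goal, then stepping from the current position toward higher activation — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the grid world, the `decay` factor, and the greedy gradient climb are assumptions.

    ```python
    import numpy as np

    def build_hebbian_weights(n):
        """Hebbian association on an n x n grid: units representing adjacent
        points are co-active during training, so their connection strengthens."""
        N = n * n
        W = np.zeros((N, N))
        for r in range(n):
            for c in range(n):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < n and 0 <= cc < n:
                        W[r * n + c, rr * n + cc] += 1.0  # Hebbian increment
        return W

    def find_path(W, n, start, goal, decay=0.8):
        """Activate the goal, spread activation through the learned weights,
        then step from the current position toward higher activation."""
        a = np.zeros(n * n)
        a[goal] = 1.0
        for _ in range(2 * n):
            # Each unit takes a decayed copy of its strongest active neighbor.
            a = np.maximum(a, decay * (W * a).max(axis=1))
        path, pos = [start], start
        while pos != goal and len(path) < n * n:
            nbrs = np.nonzero(W[pos])[0]
            pos = int(max(nbrs, key=lambda j: a[j]))  # climb the activation gradient
            path.append(pos)
        return path
    ```

    Note the storage advantage the abstract mentions: HIA keeps only one weight per pair of adjacent points, rather than a trained mapping from every (position, goal) pair to an action.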

  11. Time-Shifted Boundary Conditions Used for Navier-Stokes Aeroelastic Solver

    NASA Technical Reports Server (NTRS)

    Srivastava, Rakesh

    1999-01-01

    Under the Advanced Subsonic Technology (AST) Program, an aeroelastic analysis code (TURBO-AE) based on the Navier-Stokes equations is currently under development at NASA Lewis Research Center's Machine Dynamics Branch. For a blade row, aeroelastic instability can occur at any of the possible interblade phase angles (IBPAs). Analyzing small IBPAs is very computationally expensive because a large number of blade passages must be simulated. To reduce the computational cost of these analyses, we used time-shifted, or phase-lagged, boundary conditions in the TURBO-AE code. These conditions can be used to reduce the computational domain to a single blade passage by requiring the boundary conditions across the passage to be lagged depending on the IBPA being analyzed. The time-shifted boundary conditions currently implemented are based on the direct-store method. This method requires large amounts of data to be stored over a period of the oscillation cycle. On CRAY computers this is not a major problem because solid-state devices can be used for fast input and output to read and write the data onto a disk instead of storing it in core memory.
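    A minimal sketch of the direct-store idea: boundary values over one oscillation period are stored, and the value applied on the single-passage boundary is read back with a time shift corresponding to the IBPA. The sinusoidal signal and step-based indexing are illustrative assumptions, not the TURBO-AE implementation.

    ```python
    import numpy as np

    def phase_lagged_value(history, step, steps_per_period, ibpa_deg):
        """Direct-store phase-lagged boundary condition: the value applied on
        one side of the single passage is the stored value from the opposite
        side, shifted in time by the interblade phase angle (IBPA)."""
        lag = int(round(ibpa_deg / 360.0 * steps_per_period))
        return history[(step - lag) % steps_per_period]

    # Toy boundary signal stored over one oscillation period (this stored
    # array is the memory cost the direct-store method incurs).
    steps_per_period = 360
    history = np.sin(2 * np.pi * np.arange(steps_per_period) / steps_per_period)
    ```

    The stored period is what makes the method memory-hungry: the full time history of the boundary state must be retained, which is why the abstract notes the reliance on fast solid-state I/O rather than core memory.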

  12. Fast Entanglement Establishment via Local Dynamics for Quantum Repeater Networks

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    Quantum entanglement is a necessity for future quantum communication networks, quantum internet, and long-distance quantum key distribution. The current approaches of entanglement distribution require high-delay entanglement transmission, entanglement swapping to extend the range of entanglement, high-cost entanglement purification, and long-lived quantum memories. We introduce a fundamental protocol for establishing entanglement in quantum communication networks. The proposed scheme does not require entanglement transmission between the nodes, high-cost entanglement swapping, entanglement purification, or long-lived quantum memories. The protocol reliably establishes a maximally entangled system between the remote nodes via dynamics generated by local Hamiltonians. The method eliminates the main drawbacks of current schemes allowing fast entanglement establishment with a minimized delay. Our solution provides a fundamental method for future long-distance quantum key distribution, quantum repeater networks, quantum internet, and quantum-networking protocols. This work was partially supported by the GOP-1.1.1-11-2012-0092 project sponsored by the EU and European Structural Fund, by the Hungarian Scientific Research Fund - OTKA K-112125, and by the COST Action MP1006.

  13. Epitaxy of Ferroelectric P(VDF-TrFE) Films via Removable PTFE Templates and Its Application in Semiconducting/Ferroelectric Blend Resistive Memory.

    PubMed

    Xia, Wei; Peter, Christian; Weng, Junhui; Zhang, Jian; Kliem, Herbert; Jiang, Yulong; Zhu, Guodong

    2017-04-05

    Ferroelectric polymer based devices exhibit great potential in low-cost and flexible electronics. To meet the requirements of both low-voltage operation and low energy consumption, the thickness of ferroelectric polymer films is usually required to be less than, for example, 100 nm. However, a decrease in film thickness is accompanied by degradation of both crystallinity and ferroelectricity and by an increase in current leakage, which degrades device performance. Here we report an epitaxy method based on removable poly(tetrafluoroethylene) (PTFE) templates for high-quality fabrication of ordered ferroelectric polymer thin films. Experimental results indicate that such epitaxially grown ferroelectric polymer films exhibit markedly improved crystallinity, reduced current leakage, and good resistance to electrical breakdown, implying applications in high-performance, low-voltage ferroelectric devices. On the basis of this removable PTFE template method, we fabricated organic semiconducting/ferroelectric blend resistive films that presented record electrical performance, with an operation voltage as low as 5 V and an ON/OFF ratio up to 10^5.

  14. Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows

    PubMed Central

    Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio

    2012-01-01

    Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896

  15. Metrology for hydrogen energy applications: a project to address normative requirements

    NASA Astrophysics Data System (ADS)

    Haloua, Frédérique; Bacquart, Thomas; Arrhenius, Karine; Delobelle, Benoît; Ent, Hugo

    2018-03-01

    Hydrogen represents a clean and storable energy solution that could meet worldwide energy demands and reduce greenhouse gas emissions. The joint research project (JRP) ‘Metrology for sustainable hydrogen energy applications’ addresses standardisation needs through pre- and co-normative metrology research in the fast-emerging sector of hydrogen fuel that meets the requirements of the European Directive 2014/94/EU by supplementing the revision of two ISO standards that are currently too generic to enable a sustainable implementation of hydrogen. The hydrogen purity dispensed at refueling points should comply with the technical specifications of ISO 14687-2 for fuel cell electric vehicles. The rapid progress of fuel cell technology now requires revising this standard towards less constraining limits for the 13 gaseous impurities. In parallel, optimized validated analytical methods are proposed to reduce the number of analyses. The study also aims to develop and validate traceable methods to accurately assess the hydrogen mass absorbed and stored in metal hydride tanks; this is a research axis for the revision of the ISO 16111 standard to develop this safe storage technique for hydrogen. The probability of hydrogen impurities affecting fuel cells will be assessed, analytical techniques for traceable measurements of hydrogen impurities will be evaluated, and new maximum impurity concentrations based on degradation studies will be proposed. Novel validated methods for measuring the hydrogen mass absorbed in hydride tanks of the AB, AB2 and AB5 types referenced in ISO 16111 will be determined, as the methods currently available do not provide accurate results. The outputs here will have a direct impact on the standardisation work for the ISO 16111 and ISO 14687-2 revisions in the relevant working groups of ISO/TC 197 ‘Hydrogen technologies’.

  16. Multi-criteria clinical decision support: A primer on the use of multiple criteria decision making methods to promote evidence-based, patient-centered healthcare.

    PubMed

    Dolan, James G

    2010-01-01

    Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients and health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine "hard data" with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP).
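    Of the methods listed, direct weighting is the simplest to illustrate: each option's overall score is a weighted sum of its performance on each criterion, with the weights expressing patient preferences. The treatment options, criteria, and numbers below are entirely hypothetical.

    ```python
    def weighted_scores(options, weights):
        """Direct weighting: score each option as the weighted sum of its
        normalized performance (0-1, higher is better) on each criterion."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
        return {name: sum(weights[c] * perf[c] for c in weights)
                for name, perf in options.items()}

    # Hypothetical choice between two management options.
    options = {
        "surgery":    {"efficacy": 0.9, "safety": 0.5, "burden": 0.4},
        "medication": {"efficacy": 0.6, "safety": 0.8, "burden": 0.9},
    }
    # Hypothetical patient preferences elicited as criterion weights.
    weights = {"efficacy": 0.5, "safety": 0.3, "burden": 0.2}
    scores = weighted_scores(options, weights)
    ```

    With these hypothetical inputs, medication (0.72) edges out surgery (0.68); shifting weight toward efficacy would reverse the ranking, which is exactly the preference sensitivity these methods are meant to make explicit.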

  17. Multi-criteria clinical decision support: A primer on the use of multiple criteria decision making methods to promote evidence-based, patient-centered healthcare

    PubMed Central

    Dolan, James G.

    2010-01-01

    Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients and health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine “hard data” with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP). PMID:21394218

  18. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure-sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held), and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high-data-rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods.
As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the optimization method. This paper describes the GPS-based pitot-static calibration method developed for the AirSTAR research test-bed operated as part of the Integrated Resilient Aircraft Controls (IRAC) project in the NASA Aviation Safety Program (AvSP). A description of the method will be provided and results from recent flight tests will be shown to illustrate the performance and advantages of this approach. Discussion of maneuver requirements and data reduction will be included as well as potential applications.
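    The essence of the approach — fitting a pressure-error model to the difference between a GPS-derived reference speed and indicated airspeed, with confidence bounds computed from the residuals — can be sketched as a least-squares fit. This is a simplification of the paper's output-error method: the linear error model, the synthetic data, and the assumption that wind has already been removed (e.g. by averaging reciprocal-heading runs) are all illustrative.

    ```python
    import numpy as np

    def fit_airspeed_error(v_indicated, v_gps, deg=1):
        """Least-squares fit of an airspeed error model
            error(V) = V_ref - V_indicated
        where V_ref is a GPS-derived reference speed (wind assumed removed).
        Returns the polynomial coefficients and a 2-sigma residual bound."""
        err = v_gps - v_indicated
        coeffs = np.polyfit(v_indicated, err, deg)
        resid = err - np.polyval(coeffs, v_indicated)
        return coeffs, 2.0 * resid.std(ddof=deg + 1)

    # Synthetic data: true error of 0.05*V - 1.0 kt plus sensor noise.
    rng = np.random.default_rng(0)
    v_ind = np.linspace(60.0, 120.0, 50)
    v_gps = v_ind + 0.05 * v_ind - 1.0 + rng.normal(0.0, 0.1, 50)
    coeffs, bound = fit_airspeed_error(v_ind, v_gps)
    ```

    Because the fit is a closed-form least-squares solve over the whole airspeed sweep, the error model and its bounds can be updated continuously during the maneuver, which is what makes near-real-time calibration feasible in limited test time.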

  19. A common-path optical coherence tomography based electrode for structural imaging of nerves and recording of action potentials

    NASA Astrophysics Data System (ADS)

    Islam, M. Shahidul; Haque, Md. Rezuanul; Oh, Christian M.; Wang, Yan; Park, B. Hyle

    2013-03-01

    Current technologies for monitoring neural activity either use a variety of electrodes (electrical recording) or require contrast agents introduced exogenously or through genetic modification (optical imaging). Here we demonstrate an optical method for non-contact, contrast-agent-free detection of nerve activity using phase-resolved optical coherence tomography (pr-OCT). A common-path variation of pr-OCT was recently implemented, and the developed system demonstrated the capability to detect the rapid transient structural changes that accompany neural spike propagation. No averaging over multiple trials was required, indicating the capability of single-shot detection of individual impulses from a functionally stimulated Limulus optic nerve. The strength of this OCT-based optical electrode is that it is a contactless method and does not require any exogenous contrast agent. With further improvements in accuracy and sensitivity, this optical electrode will play a complementary role to existing recording technologies in the future.
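    The phase-resolved measurement underlying this optical electrode rests on a standard relation: an axial displacement Δd produces a phase shift Δφ between successive complex A-scans, with Δd = λ·Δφ/(4πn). A minimal sketch (the wavelength and refractive index below are illustrative assumptions, not the paper's values):

    ```python
    import numpy as np

    def displacement_nm(a_prev, a_curr, wavelength_nm=1310.0, n_medium=1.38):
        """Estimate axial displacement from the phase difference between two
        complex OCT A-scans: delta_d = lambda * delta_phi / (4 * pi * n)."""
        dphi = np.angle(a_curr * np.conj(a_prev))  # phase difference in (-pi, pi]
        return wavelength_nm * dphi / (4.0 * np.pi * n_medium)
    ```

    Because the phase is read directly from the complex interferometric signal, sub-wavelength tissue motion is detectable without any contrast agent, which is what enables the single-shot detection of individual impulses described above.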

  20. A Novel Design of an Automatic Lighting Control System for a Wireless Sensor Network with Increased Sensor Lifetime and Reduced Sensor Numbers

    PubMed Central

    Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo

    2011-01-01

    Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system called a Lighting Automatic Control System (LACS). The LACS system contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities, and performs adjustments based on external lighting effects in external-sensor and external-sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally, we suggest methods for improving the uniformity of illuminance distribution on a workplane’s surface, which improves user satisfaction. Finally, simulation results are presented to verify the effectiveness of our design. PMID:22164114
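    The daylight-compensation step at the heart of such a system can be sketched simply: the luminaires supply only the shortfall between the illuminance a user's activity requires and the measured external contribution. The linear dimming model and the numbers below are illustrative assumptions, not the LACS algorithm itself.

    ```python
    def artificial_lux_needed(target_lux, external_lux, max_lux):
        """Supply only the shortfall between the illuminance required by the
        user's activity and the measured external (daylight) contribution."""
        return min(max(0.0, target_lux - external_lux), max_lux)

    def dimming_level(target_lux, external_lux, max_lux):
        """Dimming command in [0, 1], assuming luminaire output is linear
        in the dimming level (an illustrative simplification)."""
        if max_lux <= 0.0:
            return 0.0
        return artificial_lux_needed(target_lux, external_lux, max_lux) / max_lux
    ```

    For example, a 500 lx office target with 200 lx of measured daylight needs only half the output of a 600 lx luminaire; when daylight alone exceeds the target, the lights switch off entirely, which is where the energy savings come from.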
