40 CFR 141.110 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false General requirements. 141.110 Section 141.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Treatment Techniques § 141.110 General requirements...
40 CFR 141.110 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false General requirements. 141.110 Section 141.110 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Treatment Techniques § 141.110 General requirements...
40 CFR 141.130 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Disinfectant Residuals, Disinfection Byproducts, and Disinfection Byproduct Precursors § 141.130 General requirements. (a) The requirements of this subpart L... treatment technique requirements for disinfection byproduct precursors in § 141.135. (2) The regulations in...
40 CFR 141.130 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Disinfectant Residuals, Disinfection Byproducts, and Disinfection Byproduct Precursors § 141.130 General requirements. (a) The requirements of this subpart L... treatment technique requirements for disinfection byproduct precursors in § 141.135. (2) The regulations in...
40 CFR 141.700 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium General... drinking water regulations. The regulations in this subpart establish or extend treatment technique... requirements of this subpart for filtered systems apply to systems required by National Primary Drinking Water...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
NASA Technical Reports Server (NTRS)
Miller, R. D.; Rogers, J. T.
1975-01-01
General requirements for dynamic loads analyses are described. The indicial lift-growth-function representation of unsteady subsonic aerodynamics is reviewed, and the FLEXSTAB CPS is evaluated with respect to these general requirements. The effects of residual flexibility techniques on dynamic loads analyses are also evaluated using a simple dynamic model.
40 CFR 141.853 - General monitoring requirements for all public water systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Coliform Rule § 141.853 General monitoring requirements for all public water systems. (a) Sample siting... 31, 2016. These plans are subject to State review and revision. Systems must collect total coliform... MCL violation or has exceeded the coliform treatment technique triggers in § 141.859(a). (4) A system...
40 CFR 141.700 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 141.700 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium General... drinking water regulations. The regulations in this subpart establish or extend treatment technique...
40 CFR 141.700 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 141.700 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium General... drinking water regulations. The regulations in this subpart establish or extend treatment technique...
40 CFR 141.700 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 141.700 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium General... drinking water regulations. The regulations in this subpart establish or extend treatment technique...
40 CFR 141.700 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 141.700 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium General... drinking water regulations. The regulations in this subpart establish or extend treatment technique...
43 CFR 3420.1-4 - General requirements for land use planning.
Code of Federal Regulations, 2011 CFR
2011-10-01
... mining by other than underground mining techniques. (ii) For the purposes of this paragraph, any surface... techniques shall be deemed to have expressed a preference in favor of mining. Where a significant number of... underground mining techniques, that area shall be considered acceptable for further consideration only for...
43 CFR 3420.1-4 - General requirements for land use planning.
Code of Federal Regulations, 2013 CFR
2013-10-01
... mining by other than underground mining techniques. (ii) For the purposes of this paragraph, any surface... techniques shall be deemed to have expressed a preference in favor of mining. Where a significant number of... underground mining techniques, that area shall be considered acceptable for further consideration only for...
43 CFR 3420.1-4 - General requirements for land use planning.
Code of Federal Regulations, 2014 CFR
2014-10-01
... mining by other than underground mining techniques. (ii) For the purposes of this paragraph, any surface... techniques shall be deemed to have expressed a preference in favor of mining. Where a significant number of... underground mining techniques, that area shall be considered acceptable for further consideration only for...
43 CFR 3420.1-4 - General requirements for land use planning.
Code of Federal Regulations, 2012 CFR
2012-10-01
... mining by other than underground mining techniques. (ii) For the purposes of this paragraph, any surface... techniques shall be deemed to have expressed a preference in favor of mining. Where a significant number of... underground mining techniques, that area shall be considered acceptable for further consideration only for...
Photogrammetric techniques for aerospace applications
NASA Astrophysics Data System (ADS)
Liu, Tianshu; Burner, Alpheus W.; Jones, Thomas W.; Barrows, Danny A.
2012-10-01
Photogrammetric techniques have been used for measuring important physical quantities in both ground and flight testing, including aeroelastic deformation, attitude, position, shape, and dynamics of objects such as wind tunnel models, flight vehicles, rotating blades, and large space structures. The distinct advantage of photogrammetric measurement is that it is a non-contact, global measurement technique. Although the general principles of photogrammetry are well known, particularly in topographic and aerial surveying, photogrammetric techniques require special adaptation for aerospace applications. This review provides a comprehensive and systematic summary of photogrammetric techniques for aerospace applications based on diverse sources. It is useful mainly for aerospace engineers who want to use photogrammetric techniques, but it also gives a general introduction for photogrammetrists and computer vision scientists to new applications.
Defense Information Systems Program Automated CORDIVEM Design Requirements,
1984-02-28
for the Soviet military organization and equipment. Dr. John Spagnuolo incorporated artificial intelligence techniques in the discussion of functional... Artificial Intelligence... Types of A.I.... General Planning Requirements... described later. Further, some subprocesses may need one of the various techniques associated with the broad field of Artificial Intelligence (A.I.) in
LSI/VLSI design for testability analysis and general approach
NASA Technical Reports Server (NTRS)
Lam, A. Y.
1982-01-01
The incorporation of testability characteristics into large-scale digital design is necessary not only for effective device testing but also for enhancement of device reliability. There are at least three major DFT techniques, namely self-checking, LSSD, and partitioning, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. Detailed analyses of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc., of each of these techniques are also presented.
NASA Technical Reports Server (NTRS)
1972-01-01
Guidelines are presented for incorporation of the onboard checkout and monitoring function (OCMF) into the designs of the space shuttle propulsion systems. The guidelines consist of and identify supporting documentation; requirements for formulation, implementation, and integration of OCMF; associated compliance verification techniques and requirements; and OCMF terminology and nomenclature. The guidelines are directly applicable to the incorporation of OCMF into the design of space shuttle propulsion systems and the equipment with which the propulsion systems interface. The techniques and general approach, however, are also generally applicable to OCMF incorporation into the design of other space shuttle systems.
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to greater removal of image redundancy.
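The predictive-coding step this abstract mentions can be illustrated with a minimal Python sketch. This is a hypothetical previous-pixel predictor, not any specific coder from the survey; losslessness follows because the decoder inverts the predictor exactly:

```python
# Minimal sketch of the predictive step at the core of many lossless
# image coders: encode each pixel as its difference from the previous
# pixel (a simple 1-D predictor), then reconstruct exactly. The residuals
# are what a later entropy coder (e.g. arithmetic coding) would compress.

def predictive_encode(pixels):
    """Return prediction residuals; the first value is kept verbatim."""
    residuals = [pixels[0]]
    for i in range(1, len(pixels)):
        residuals.append(pixels[i] - pixels[i - 1])  # previous-pixel predictor
    return residuals

def predictive_decode(residuals):
    """Invert the predictor; reconstruction is numerically exact."""
    pixels = [residuals[0]]
    for r in residuals[1:]:
        pixels.append(pixels[-1] + r)
    return pixels

row = [100, 101, 103, 103, 98, 97, 120, 121]  # one scanline of 8-bit values
res = predictive_encode(row)
assert predictive_decode(res) == row           # lossless round trip
```

The residuals cluster near zero for correlated pixels, which is why higher pel correlation leads to better compression once an entropy coder is applied.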
Kutkut, Ahmad; Abu-Hammad, Osama; Frazer, Robert
2016-01-01
Impression techniques for implant restorations can be taken at the implant level or the abutment level, using open-tray or closed-tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations. PMID:29563457
A Generalized Technique in Numerical Integration
NASA Astrophysics Data System (ADS)
Safouhi, Hassan
2018-02-01
Integration by parts is one of the most popular techniques in the analysis of integrals and is one of the simplest methods to generate asymptotic expansions of integral representations. The product of the technique is usually a divergent series formed from evaluating boundary terms; however, sometimes the remaining integral is also evaluated. Due to the successive differentiation and anti-differentiation required to form the series or the remaining integral, the technique is difficult to apply to problems more complicated than the simplest. In this contribution, we explore a generalized and formalized integration by parts to create equivalent representations to some challenging integrals. As a demonstrative archetype, we examine Bessel integrals, Fresnel integrals and Airy functions.
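The mechanism the abstract describes, repeated integration by parts generating a divergent asymptotic series from boundary terms, can be sketched for the exponential integral E1 (a standard textbook example, not necessarily one treated in the paper):

```python
# Sketch: repeated integration by parts on E1(x) = integral_x^inf e^{-t}/t dt
# yields the (divergent) asymptotic series
#   E1(x) ~ (e^{-x}/x) * sum_{n>=0} (-1)^n n! / x^n.
# A truncated series is compared against brute-force quadrature.

import math

def e1_asymptotic(x, n_terms):
    """Truncated asymptotic series for E1(x) from integration by parts."""
    s, term = 0.0, 1.0
    for n in range(n_terms):
        s += term
        term *= -(n + 1) / x        # builds (-1)^n n! / x^n incrementally
    return math.exp(-x) / x * s

def e1_quadrature(x, upper=60.0, steps=60000):
    """Reference value by the trapezoid rule on t in [x, x + upper]."""
    h = upper / steps
    total = 0.5 * (math.exp(-x) / x + math.exp(-(x + upper)) / (x + upper))
    for i in range(1, steps):
        t = x + i * h
        total += math.exp(-t) / t
    return total * h

x = 10.0
approx, ref = e1_asymptotic(x, 6), e1_quadrature(x)
assert abs(approx - ref) / ref < 1e-3  # agrees until the series turns divergent
```

Because the factorials eventually outgrow the powers of x, adding more terms past the optimal truncation point makes the approximation worse, which is the characteristic behaviour of such boundary-term series.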
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
Farzandipour, Mehrdad; Meidani, Zahra; Riazi, Hossein; Sadeqi Jabali, Monireh
2016-12-01
Considering the integral role of understanding users' requirements in information system success, this research aimed to determine the functional requirements of nursing information systems through a national survey. The Delphi technique was applied to conduct this study through three phases: the focus group method, a modified Delphi technique, and the classic Delphi technique. A cross-sectional study was conducted to evaluate the proposed requirements within 15 general hospitals in Iran. Forty-three of 76 approved requirements were clinical, and 33 were administrative. Nurses' mean agreement for clinical requirements was higher than that for administrative requirements; the minimum and maximum means of clinical requirements were 3.3 and 3.88, respectively, while those of administrative requirements were 3.1 and 3.47. Research findings indicated that information system requirements that support nurses in tasks including direct care, medicine prescription, patient treatment management, and patient safety have been the target of special attention. As nurses' requirements deal directly with patient outcomes and patient safety, nursing information system requirements should address not only automation but also nurses' tasks and work processes based on work analysis.
40 CFR 63.1437 - Additional requirements for performance testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... gas being combusted, using the techniques specified in § 63.11(b)(6) of the General Provisions; and (3... National Emission Standards for Hazardous Air Pollutant Emissions for Polyether Polyols Production § 63..., using the techniques specified in paragraphs (c)(1) through (3) of this section, that compliance...
A strategy for selecting data mining techniques in metabolomics.
Banimustafa, Ahmed Hmaidan; Hardy, Nigel W
2012-01-01
There is a general agreement that the development of metabolomics depends not only on advances in chemical analysis techniques but also on advances in computing and data analysis methods. Metabolomics data usually requires intensive pre-processing, analysis, and mining procedures. Selecting and applying such procedures requires attention to issues including justification, traceability, and reproducibility. We describe a strategy for selecting data mining techniques which takes into consideration the goals of data mining techniques on the one hand, and the goals of metabolomics investigations and the nature of the data on the other. The strategy aims to ensure the validity and soundness of results and promote the achievement of the investigation goals.
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed, and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-19
... public comments, access the index listing of the contents of the docket, and to access those documents in..., mechanical, or other technological collection techniques or other forms of information technology, e.g..., and Indian tribal governments. Title: General Administrative Requirements for Assistance Programs. ICR...
Aircraft Inspection for the General Aviation Aircraft Owner.
ERIC Educational Resources Information Center
Federal Aviation Administration (DOT), Washington, DC. Flight Standards Service.
Presented is useful information for owners, pilots, student mechanics, and others with aviation interests. Part I of this booklet outlines aircraft inspection requirements, owner responsibilities, inspection time intervals, and sources of basic information. Part II is concerned with the general techniques used to inspect an aircraft. (Author/JN)
Development and Evaluation of a Casualty Evacuation Model for a European Conflict.
1985-12-01
EVAC, the computer code which implements our technique, has been used to solve a series of test problems in less time and requiring less memory than... the order of 1/K the amount of main memory for a K-commodity problem, so it can solve significantly larger problems than MCNF. ...technique may require only half the memory of the general L.P. package [6]. These advances are due to the efficient data structures which have been
NASA Technical Reports Server (NTRS)
Burk, S. M., Jr.; Wilson, C. F., Jr.
1975-01-01
A relatively inexpensive radio-controlled model stall/spin test technique was developed. Operational experiences using the technique are presented. A discussion of model construction techniques, spin-recovery parachute system, data recording system, and movie camera tracking system is included. Also discussed are a method of measuring moments of inertia, scaling of engine thrust, cost and time required to conduct a program, and examples of the results obtained from the flight tests.
DOT National Transportation Integrated Search
1976-09-30
Models and techniques for determining passenger-car requirements in railroad service were developed and applied by a research project of which this is the final report. The report is published in two volumes. This volume considers a general problem o...
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
STRATEGIES FOR QUANTIFYING PET IMAGING DATA FROM TRACER STUDIES OF BRAIN RECEPTORS AND ENZYMES.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logan, J.
2001-04-02
A description of some of the methods used in neuroreceptor imaging to distinguish changes in receptor availability has been presented in this chapter. It is necessary to look beyond regional uptake of the tracer, since uptake generally is affected by factors other than the number of receptors for which the tracer has affinity. An exception is the infusion method producing an equilibrium state. The techniques vary in complexity, some requiring arterial blood measurements of unmetabolized tracer and multiple time uptake data. Others require only a few plasma and uptake measurements, and those based on a reference region require no plasma measurements. We have outlined some of the limitations of the different methods. Laruelle (1999) has pointed out that test/retest studies, to which various methods can be applied, are crucial in determining the optimal method for a particular study. The choice of method will also depend upon the application. In a clinical setting, methods not involving arterial blood sampling are generally preferred. In the future, techniques for externally measuring arterial plasma radioactivity with only a few blood samples for metabolite correction will extend the modeling options of clinical PET. Also, since parametric images can provide information beyond that of ROI analysis, improved techniques for generating such images will be important, particularly for ligands requiring more than a one-compartment model. Techniques such as the wavelet transform proposed by Turkheimer et al. (2000) may prove to be important in reducing noise and improving quantitation.
Munro, Peter R.T.; Ignatyev, Konstantin; Speller, Robert D.; Olivo, Alessandro
2013-01-01
X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation. PMID:20389424
Munro, Peter R T; Ignatyev, Konstantin; Speller, Robert D; Olivo, Alessandro
2010-03-01
X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation.
Capillary electrophoresis in two-dimensional separation systems: Techniques and applications.
Kohl, Felix J; Sánchez-Hernández, Laura; Neusüß, Christian
2015-01-01
The analysis of complex samples requires powerful separation techniques. Here, 2D chromatographic separation techniques (e.g. LC-LC, GC-GC) are increasingly applied in many fields. Electrophoretic separation techniques show a different selectivity in comparison to LC and GC and very high separation efficiency. Thus, 2D separation systems containing at least one CE-based separation technique are an interesting alternative featuring potentially a high degree of orthogonality. However, the generally small volumes and strong electrical fields in CE require special coupling techniques. These technical developments are reviewed in this work, discussing benefits and drawbacks of offline and online systems. Emphasis is placed on the design of the systems, their coupling, and the detector used. Moreover, the employment of strategies to improve peak capacity, resolution, or sensitivity is highlighted. Various applications of 2D separations with CE are summarized. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy
2000-01-01
Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.
NASA Technical Reports Server (NTRS)
Parks, D. M.
1974-01-01
A finite element technique for determination of elastic crack tip stress intensity factors is presented. The method, based on the energy release rate, requires no special crack tip elements. Further, the solution for only a single crack length is required, and the crack is 'advanced' by moving nodal points rather than by removing nodal tractions at the crack tip and performing a second analysis. The promising straightforward extension of the method to general three-dimensional crack configurations is presented and contrasted with the practical impossibility of conventional energy methods.
The Challenge of Winter Backpacking.
ERIC Educational Resources Information Center
Cavanaugh, Michael; Mapes, Alan
1981-01-01
Tips and techniques for safe and enjoyable winter backpacking are offered. Topics covered include cross-country skis, snowshoes, clothing, footwear, shelter, sleeping bags, food, and hypothermia prevention, as well as general rules and requirements. (CO)
76 FR 72983 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-28
... byproduct material in certain devices. General licensees are required to keep testing records and submit... use of automated collection techniques or other forms of information technology? The public may...
Theoretical and software considerations for nonlinear dynamic analysis
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1983-01-01
In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
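The basic fitting problem this abstract addresses can be sketched in Python. This is a plain maximum likelihood fit of a cumulative-Gaussian psychometric function via grid search, not the paper's bias-reduced estimator, and the simulated observer (mu = 0, sigma = 1) and stimulus levels are invented for illustration:

```python
# Sketch: ML fit of a cumulative-Gaussian psychometric function to
# method-of-constant-stimuli data, via grid search over location (mu)
# and spread (sigma). All numbers are illustrative.

import math, random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(mu, sigma, data):
    """Bernoulli log likelihood of (stimulus, response) pairs."""
    ll = 0.0
    for x, resp in data:
        p = min(max(phi((x - mu) / sigma), 1e-9), 1.0 - 1e-9)
        ll += math.log(p) if resp else math.log(1.0 - p)
    return ll

def fit_psychometric(data):
    """Grid-search ML estimate of (mu, sigma)."""
    best = (None, None, -float("inf"))
    for mu in [m / 20.0 for m in range(-20, 21)]:        # mu in [-1, 1]
        for sigma in [s / 20.0 for s in range(10, 41)]:  # sigma in [0.5, 2]
            ll = log_likelihood(mu, sigma, data)
            if ll > best[2]:
                best = (mu, sigma, ll)
    return best[:2]

# Simulate an observer with true mu = 0, sigma = 1.
random.seed(1)
levels = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
data = [(x, random.random() < phi(x)) for x in levels for _ in range(100)]
mu_hat, sigma_hat = fit_psychometric(data)
```

With fixed (non-adaptive) stimulus placement as here, the spread estimate is approximately unbiased; the bias the abstract discusses arises when the stimulus levels themselves depend on earlier responses, as in staircase procedures.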
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
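The permutation-test setting the abstract accelerates can be shown in a minimal form. This sketch is the plain resampling baseline, not the importance-sampling speedup, and the spike counts are invented:

```python
# Sketch: permutation test for a difference in mean firing rate across
# two conditions, by repeatedly shuffling the condition labels.

import random

def permutation_p_value(a, b, n_perm=2000, seed=0):
    """Two-sided p-value for a difference in means under label exchange."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction

cond_a = [8, 10, 9, 11, 10, 9, 12, 10, 9, 11]      # spike counts, condition A
cond_b = [19, 21, 20, 18, 22, 20, 19, 21, 20, 18]  # condition B
assert permutation_p_value(cond_a, cond_b) < 0.01
```

Running such a test for every neuron or neuron pair, with enough permutations to resolve the small p-values multiple-testing corrections demand, is exactly the computational burden that motivates the importance-sampling acceleration.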
Experimental Techniques for Thermodynamic Measurements of Ceramics
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra
1999-01-01
Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed: gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.
Analysis of entry accelerometer data: A case study of Mars Pathfinder
NASA Astrophysics Data System (ADS)
Withers, Paul; Towner, M. C.; Hathi, B.; Zarnecki, J. C.
2003-08-01
Accelerometers are regularly flown on atmosphere-entering spacecraft. Using their measurements, the spacecraft trajectory and the vertical structure of density, pressure, and temperature in the atmosphere through which it descends can be calculated. We review the general procedures for trajectory and atmospheric structure reconstruction and outline them here in detail. We discuss which physical properties are important in atmospheric entry, instead of working exclusively with the dimensionless numbers of fluid dynamics. Integration of the equations of motion governing the spacecraft trajectory is carried out in a novel and general formulation. This does not require an axisymmetric gravitational field or many of the other assumptions that are present in the literature. We discuss four techniques - head-on, drag-only, acceleration ratios, and gyroscopes - for constraining spacecraft attitude, which is the critical issue in the trajectory reconstruction. The head-on technique uses an approximate magnitude and direction for the aerodynamic acceleration, whereas the drag-only technique uses the correct magnitude and an approximate direction. The acceleration ratios technique uses the correct magnitude and an indirect way of finding the correct direction and the gyroscopes technique uses the correct magnitude and a direct way of finding the correct direction. The head-on and drag-only techniques are easy to implement and require little additional information. The acceleration ratios technique requires extensive and expensive aerodynamic modelling. The gyroscopes technique requires additional onboard instrumentation. The effects of errors are briefly addressed. Our implementations of these trajectory reconstruction procedures have been verified on the Mars Pathfinder dataset. We find inconsistencies within the published work of the Pathfinder science team, and in the PDS archive itself, relating to the entry state of the spacecraft. 
Our atmospheric structure reconstruction, which uses only a simple aerodynamic database, is consistent with the PDS archive to about 4%. Surprisingly accurate profiles of atmospheric temperatures can be derived with no information about the spacecraft aerodynamics. Using no aerodynamic information whatsoever about Pathfinder, our profile of atmospheric temperature is still consistent with the PDS archive to about 8%. As a service to the community, we have placed simplified versions of our trajectory and atmospheric structure computer programmes online for public use.
Tools reference manual for a Requirements Specification Language (RSL), version 2.0
NASA Technical Reports Server (NTRS)
Fisher, Gene L.; Cohen, Gerald C.
1993-01-01
This report describes a general-purpose Requirements Specification Language, RSL. The purpose of RSL is to specify precisely the external structure of a mechanized system and to define requirements that the system must meet. A system can be comprised of a mixture of hardware, software, and human processing elements. RSL is a hybrid of features found in several popular requirements specification languages, such as SADT (Structured Analysis and Design Technique), PSL (Problem Statement Language), and RMF (Requirements Modeling Framework). While languages such as these have useful features for structuring a specification, they generally lack formality. To overcome the deficiencies of informal requirements languages, RSL has constructs for formal mathematical specification. These constructs are similar to those found in formal specification languages such as EHDM (Enhanced Hierarchical Development Methodology), Larch, and OBJ3.
Health requirements for advanced coal extraction systems
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.
1980-01-01
Health requirements were developed as long-range goals for future advanced coal extraction systems to be introduced into the market in the year 2000. The goal of the requirements is that underground coal miners work in an environment as close as possible to the working conditions of the general population, that they not exceed mortality and morbidity rates from lung diseases comparable to those of the general population, and that their working conditions conform as closely as possible to those of other industries as specified by OSHA regulations. A brief technique for evaluating whether proposed advanced systems meet these health requirements is presented, along with a discussion of the costs of respiratory disability compensation.
Graefe, F.; Marschke, J.; Dimpfl, T.; Tunn, R.
2012-01-01
Vaginal vault suspension during hysterectomy for prolapse is both a therapy for apical insufficiency and a means of preventing recurrence. Numerous techniques exist, with different anatomical results and differing complications. The description of the different approaches, together with a description of the vaginal vault suspension technique used at the Department for Urogynaecology at St. Hedwig Hospital, could serve as a basis for reassessment and for recommendations by scientific associations regarding general standards. PMID:25278621
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
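The preconditioned conjugate gradient iteration at the core of such a solver can be sketched generically. This is a standard Jacobi-preconditioned CG in dense form, not the paper's three-step iteration-on-data reordering; names are illustrative.

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A with Jacobi preconditioning."""
    minv = 1.0 / np.diag(A)           # diagonal (Jacobi) preconditioner M^-1
    x = np.zeros(len(b), dtype=float)
    r = b - A @ x                     # residual
    z = minv * r                      # preconditioned residual
    p = z.copy()                      # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x
```

The matrix-vector product `A @ p` is the step the paper reorganizes so it can be formed by iterating over the data records rather than over a stored coefficient matrix.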
NASA Technical Reports Server (NTRS)
Hayden, W. L.; Robinson, L. H.
1972-01-01
Spectral analysis of angle-modulated communication systems is studied by: (1) performing a literature survey of candidate power spectrum computational techniques, determining the computational requirements, and formulating a mathematical model satisfying these requirements; (2) implementing the model on a UNIVAC 1230 digital computer as the Spectral Analysis Program (SAP); and (3) developing the hardware specifications for a data acquisition system which will acquire an input modulating signal for SAP. The SAP computational technique uses an extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals.
NASA Technical Reports Server (NTRS)
Bowhill, S. A. (Editor); Edwards, B. (Editor)
1984-01-01
Various topics related to middle atmosphere research were discussed. Meteorological and aeronomical requirements for mesosphere-stratosphere-troposphere (MST) radar networks, the general circulation of the middle atmosphere, the interpretation of radar returns from clear air, spaced-antenna and Doppler techniques for velocity measurement, and techniques for the study of gravity waves and turbulence are among the topics discussed.
Improved Concrete Cutting and Excavation Capabilities for Crater Repair Phase 2
2015-05-01
production rate and ease of execution. The current ADR techniques, tactics, and procedures (TTPs) indicate cutting of pavement around a small crater...demonstrations and evaluations were used to create the techniques, tactics, and procedures (TTPs) manual describing the processes and requirements of...was more difficult when dowels were present. In general, the OUA demonstration validated that the new materials, equipment, and procedures were
Application of artificial intelligence to impulsive orbital transfers
NASA Technical Reports Server (NTRS)
Burns, Rowland E.
1987-01-01
A generalized technique for the numerical solution of any given class of problems is presented. The technique requires the analytic (or numerical) solution of every applicable equation for all variables that appear in the problem. Conditional blocks are employed to rapidly expand the set of known variables from a minimum of input. The method is illustrated via the use of the Hohmann transfer problem from orbital mechanics.
Donato, David I.
2013-01-01
A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p2+16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
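The single-pass accumulation of X'X and X'y described above can be sketched as follows. This is a generic illustration of the idea (the design matrix X is never stored; the normal equations are filled out in one pass), not the NDMMF reference implementation; the names are hypothetical.

```python
import numpy as np

def weighted_ols_streaming(rows, p):
    """One pass over the data: accumulate X'X and X'y, then solve the normal equations.

    `rows` yields (x_row, y, w) tuples; the full design matrix X is never stored.
    """
    xtx = np.zeros((p, p))
    xty = np.zeros(p)
    for x_row, y, w in rows:
        x = np.asarray(x_row, dtype=float)
        xtx += w * np.outer(x, x)      # fill out X'X incrementally
        xty += w * y * x               # fill out X'y incrementally
    return np.linalg.solve(xtx, xty)   # best-fit parameters in the weighted OLS sense
```

Memory use is O(p^2) for `xtx` regardless of the number of observations N, consistent with the O(8p^2+16p)-byte figure quoted in the abstract for 8-byte doubles.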
The use of anaesthetic agents to provide anxiolysis and sedation in dentistry and oral surgery.
O'Halloran, Michael
2013-12-31
Throughout the world there is considerable variation in the techniques used to manage anxious dental patients requiring treatment. Traditionally, anxious or phobic dental patients may have been sent for general anaesthesia to allow dental treatment to be undertaken. While this is still the case for the more invasive oral surgical procedures, such as wisdom teeth extraction, sedation in general dentistry is becoming more popular. Various sedation techniques using many different anaesthetic agents have gained considerable popularity over the past 30 years. While the practice of sedating patients for dental procedures is invaluable in the management of suitably assessed patients, patient safety must always be the primary concern. Medical, dental and psychosocial considerations must be taken into account when evaluating patient need and patient suitability for sedation or general anaesthesia. The regulations that govern the practice of dental sedation vary throughout the world, in particular regarding the techniques used and the training necessary for dental practitioners to sedate patients. It is necessary for medical and dental practitioners to be up to date on current practice to ensure standards of practice, competence and safety throughout the profession. This article, the first in a two-part series, will provide information to practitioners on the practice of sedation in dentistry, the circumstances where it may be appropriate instead of general anaesthesia, and the risks involved with sedation. It will also discuss the specific training and qualifications required for dental practitioners to provide sedation. The second article in this series will outline the different techniques used to administer inhalation, oral and intravenous sedation in dentistry and will focus on specific methods that are practised.
NASA Technical Reports Server (NTRS)
Wolfe, M. G.
1978-01-01
Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.
ERIC Educational Resources Information Center
Tsai, Chih-Fong; Tsai, Ching-Tzu; Hung, Chia-Sheng; Hwang, Po-Sen
2011-01-01
Enabling undergraduate students to develop basic computing skills is an important issue in higher education. As a result, some universities have developed computer proficiency tests, which aim to assess students' computer literacy. Generally, students are required to pass such tests in order to prove that they have a certain level of computer…
Requirements Modeling with the Aspect-oriented User Requirements Notation (AoURN): A Case Study
NASA Astrophysics Data System (ADS)
Mussbacher, Gunter; Amyot, Daniel; Araújo, João; Moreira, Ana
The User Requirements Notation (URN) is a recent ITU-T standard that supports requirements engineering activities. The Aspect-oriented URN (AoURN) adds aspect-oriented concepts to URN, creating a unified framework that allows for scenario-based, goal-oriented, and aspect-oriented modeling. AoURN is applied to the car crash crisis management system (CCCMS), modeling its functional and non-functional requirements (NFRs). AoURN generally models all use cases, NFRs, and stakeholders as individual concerns and provides general guidelines for concern identification. AoURN handles interactions between concerns, capturing their dependencies and conflicts as well as their resolutions. We present a qualitative comparison of aspect-oriented techniques for scenario-based and goal-oriented requirements engineering. An evaluation based on metrics adapted from the literature and a task-based evaluation suggest that AoURN models are more scalable than URN models and exhibit better modularity, reusability, and maintainability.
Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption
Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-01-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
Internal and external 2-d boundary layer flows
NASA Technical Reports Server (NTRS)
Crawford, M. E.; Kays, W. M.
1978-01-01
Computer program computes general two dimensional turbulent boundary-layer flow using finite-difference techniques. Structure allows for user modification to accommodate unique problems. Program should prove useful in many applications where accurate boundary-layer flow calculations are required.
76 FR 24026 - Federal Acquisition Regulation; Information Collection; Trade Agreements Certificate
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
...; Information Collection; Trade Agreements Certificate AGENCY: Department of Defense (DOD), General Services... approved information collection requirement concerning trade agreements certificate. Public comments are... of appropriate technological collection techniques or other forms of information technology. DATES...
40 CFR 49.154 - Permit application requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., production rates and operating schedules. (vii) Identification and description of any existing air pollution... pollution prevention techniques, air pollution control devices, design standards, equipment standards, work... ASSISTANCE INDIAN COUNTRY: AIR QUALITY PLANNING AND MANAGEMENT General Federal Implementation Plan Provisions...
40 CFR 49.154 - Permit application requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., production rates and operating schedules. (vii) Identification and description of any existing air pollution... pollution prevention techniques, air pollution control devices, design standards, equipment standards, work... ASSISTANCE INDIAN COUNTRY: AIR QUALITY PLANNING AND MANAGEMENT General Federal Implementation Plan Provisions...
40 CFR 49.154 - Permit application requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., production rates and operating schedules. (vii) Identification and description of any existing air pollution... pollution prevention techniques, air pollution control devices, design standards, equipment standards, work... ASSISTANCE INDIAN COUNTRY: AIR QUALITY PLANNING AND MANAGEMENT General Federal Implementation Plan Provisions...
40 CFR 49.154 - Permit application requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., production rates and operating schedules. (vii) Identification and description of any existing air pollution... pollution prevention techniques, air pollution control devices, design standards, equipment standards, work... ASSISTANCE INDIAN COUNTRY: AIR QUALITY PLANNING AND MANAGEMENT General Federal Implementation Plan Provisions...
Software for universal noiseless coding
NASA Technical Reports Server (NTRS)
Rice, R. F.; Schlutsmeyer, A. P.
1981-01-01
An overview is provided of the universal noiseless coding algorithms and their relationship to the now-available FORTRAN implementations. Readers considering these algorithms for actual applications should consult both NASA's Computer Software Management and Information Center (COSMIC) and the descriptions of coding techniques provided by Rice (1979). Examples of applying these techniques have also been given by Rice (1975, 1979, 1980). Attention is given to reversible preprocessing, general implementation instructions, naming conventions, and calling arguments. The considered algorithms apply broadly to practical problems because most real data sources can be transformed into the required form by simple preprocessing.
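At the core of Rice's coding approach is a simple split of each nonnegative integer into a unary-coded quotient and a k-bit remainder. The minimal Rice-style encoder/decoder pair below illustrates that idea only; it is a sketch, not the FORTRAN implementation the report describes.

```python
def rice_encode(values, k):
    # Rice code: quotient v >> k in unary (terminated by 0), remainder in k bits
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                        # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                               # read unary quotient
            q, i = q + 1, i + 1
        i += 1                                            # skip the terminating 0
        r = 0
        for _ in range(k):                                # read k remainder bits
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out
```

The reversible preprocessing mentioned in the abstract is what maps raw source samples into small nonnegative integers so that short codewords dominate.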
Generalized algebraic scene-based nonuniformity correction algorithm.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2005-02-01
A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
Efficient computational nonlinear dynamic analysis using modal modification response technique
NASA Astrophysics Data System (ADS)
Marinone, Timothy; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2012-08-01
Structural systems generally contain nonlinear characteristics. These nonlinear systems require significant computational resources for solution of the equations of motion. Much of the model, however, is linear; the nonlinearity arises from discrete local elements connecting different components together. Using a component mode synthesis approach, a nonlinear model can be developed by interconnecting these linear components with highly nonlinear connection elements. The approach presented in this paper, the Modal Modification Response Technique (MMRT), is a very efficient technique created to address this specific class of nonlinear problem. By utilizing a Structural Dynamics Modification (SDM) approach in conjunction with mode superposition, a significantly smaller set of matrices is required for the direct integration of the equations of motion. The approach is compared to traditional analytical approaches to demonstrate the usefulness of the technique for a variety of test cases.
Magnetic Imaging: a New Tool for UK National Nuclear Security
NASA Astrophysics Data System (ADS)
Darrer, Brendan J.; Watson, Joe C.; Bartlett, Paul; Renzoni, Ferruccio
2015-01-01
Combating illicit trafficking of Special Nuclear Material may require the ability to image through electromagnetic shields. This is the case when the trafficking involves cargo containers. Thus, suitable detection techniques are required to penetrate a ferromagnetic enclosure. The present study considers techniques that employ an electromagnetic based principle of detection. It is generally assumed that a ferromagnetic metallic enclosure will effectively act as a Faraday cage to electromagnetic radiation and therefore screen any form of interrogating electromagnetic radiation from penetrating, thus denying the detection of any eventual hidden material. In contrast, we demonstrate that it is actually possible to capture magnetic images of a conductive object through a set of metallic ferromagnetic enclosures. This validates electromagnetic interrogation techniques as a potential detection tool for National Nuclear Security applications.
Magnetic Imaging: a New Tool for UK National Nuclear Security
Darrer, Brendan J.; Watson, Joe C.; Bartlett, Paul; Renzoni, Ferruccio
2015-01-01
Combating illicit trafficking of Special Nuclear Material may require the ability to image through electromagnetic shields. This is the case when the trafficking involves cargo containers. Thus, suitable detection techniques are required to penetrate a ferromagnetic enclosure. The present study considers techniques that employ an electromagnetic based principle of detection. It is generally assumed that a ferromagnetic metallic enclosure will effectively act as a Faraday cage to electromagnetic radiation and therefore screen any form of interrogating electromagnetic radiation from penetrating, thus denying the detection of any eventual hidden material. In contrast, we demonstrate that it is actually possible to capture magnetic images of a conductive object through a set of metallic ferromagnetic enclosures. This validates electromagnetic interrogation techniques as a potential detection tool for National Nuclear Security applications. PMID:25608957
TopMaker: A Technique for Automatic Multi-Block Topology Generation Using the Medial Axis
NASA Technical Reports Server (NTRS)
Heidmann, James D. (Technical Monitor); Rigby, David L.
2004-01-01
A two-dimensional multi-block topology generation technique has been developed. Very general configurations are addressable by the technique. A configuration is defined by a collection of non-intersecting closed curves, which will be referred to as loops. More than a single loop implies that holes exist in the domain, which poses no problem. This technique requires only the medial vertices and the touch points that define each vertex. From the information about the medial vertices, the connectivity between medial vertices is generated. The physical shape of the medial edge is not required. By applying a few simple rules to each medial edge, the multiblock topology is generated with no user intervention required. The resulting topologies contain only the level of complexity dictated by the configurations. Grid lines remain attached to the boundary except at sharp concave turns where a change in index family is introduced as would be desired. Keeping grid lines attached to the boundary is especially important in the area of computational fluid dynamics where highly clustered grids are used near no-slip boundaries. This technique is simple and robust and can easily be incorporated into the overall grid generation process.
Fault detection techniques for complex cable shield topologies
NASA Astrophysics Data System (ADS)
Coonrod, Kurt H.; Davis, Stuart L.; McLemore, Donald P.
1994-09-01
This document presents the results of a basic principles study which investigated technical approaches for developing fault detection techniques for use on cables with complex shielding topologies. The study was limited to those approaches which could realistically be implemented on a fielded cable, i.e., approaches which would require partial disassembly of a cable were not pursued. The general approach used was to start with present transfer impedance measurement techniques and modify their use to achieve the best possible measurement range. An alternative test approach, similar to a sniffer type test, was also investigated.
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
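In modern practice the proper orthogonal decomposition is typically computed from a snapshot matrix via the SVD; the sketch below shows that generic route, not the shot-noise expansion or the phase-determination techniques described above. Names and data are illustrative.

```python
import numpy as np

def pod_modes(snapshots):
    """Snapshot POD: columns of `snapshots` are flattened velocity fields,
    one column per time instant. Returns spatial modes and their energy fractions."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)   # fraction of total energy captured by each mode
    return u, energy
```

The leading columns of `u` play the role of the characteristic eddies: the most energetic coherent structures in the decomposed velocity field.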
Laparoscopic Surgery Using Spinal Anesthesia
Gurwara, A. K.; Gupta, S. C.
2008-01-01
Background: Laparoscopic abdominal surgery is conventionally done under general anesthesia. Spinal anesthesia is usually preferred in patients where general anesthesia is contraindicated. We present our experience using spinal anesthesia as the first choice for laparoscopic surgery for over 11 years, with the contention that it is a good alternative to general anesthesia. Methods: Spinal anesthesia was used in 4645 patients over the last 11 years. Laparoscopic cholecystectomy was performed in 2992, and the remaining patients underwent other laparoscopic surgeries. There was no modification in the technique, and the intraabdominal pressure was kept at 8mm Hg to 10mm Hg. Sedation was given if required, and conversion to general anesthesia was done in patients not responding to sedation or with failure of spinal anesthesia. Results were compared with those of 421 patients undergoing laparoscopic surgery under general anesthesia. Results: Twenty-four (0.01%) patients required conversion to general anesthesia. Hypotension requiring support was recorded in 846 (18.21%) patients, and 571 (12.29%) experienced neck or shoulder pain, or both. Postoperatively, 2.09% (97) of patients had vomiting, compared with 29.22% (123) of patients administered general anesthesia. Injectable diclofenac was required by 35.59% (1672) of patients for abdominal pain within 2 hours postoperatively, and oral analgesia was required by 2936 (63.21%) patients within the first 24 hours. However, 90.02% of patients operated on under general anesthesia required injectable analgesics in the immediate postoperative period. Postural headache persisting for an average of 2.6 days was seen in 255 (5.4%) patients postoperatively. Average time to discharge was 2.3 days. The Karnofsky Performance Status Scale showed a 98.6% satisfaction level in patients.
Conclusions: Laparoscopic surgery done with the patient under spinal anesthesia has several advantages over laparoscopic surgery done with the patient under general anesthesia. PMID:18435884
Bayesian Techniques for Plasma Theory to Bridge the Gap Between Space and Lab Plasmas
NASA Astrophysics Data System (ADS)
Crabtree, Chris; Ganguli, Gurudas; Tejero, Erik
2017-10-01
We will show how Bayesian techniques provide a general data analysis methodology that is better suited to investigate phenomena that require a nonlinear theory for an explanation. We will provide short examples of how Bayesian techniques have been successfully used in the radiation belts to provide precise nonlinear spectral estimates of whistler mode chorus and how these techniques have been verified in laboratory plasmas. We will demonstrate how Bayesian techniques allow for the direct competition of different physical theories with data acting as the necessary arbitrator. This work is supported by the Naval Research Laboratory base program and by the National Aeronautics and Space Administration under Grant No. NNH15AZ90I.
A linear shift-invariant image preprocessing technique for multispectral scanner systems
NASA Technical Reports Server (NTRS)
Mcgillem, C. D.; Riemer, T. E.
1973-01-01
A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.
Optimal systems of geoscience surveying: A preliminary discussion
NASA Astrophysics Data System (ADS)
Shoji, Tetsuya
2006-10-01
In any geoscience survey, each survey technique must be applied effectively, and many techniques are often combined optimally. An important task is to obtain the necessary and sufficient information to meet the requirements of the survey. A prize-penalty function quantifies the effectiveness of the survey, and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy provides a way to evaluate geoinformation. A simple model suggests that low-resolvability techniques are generally applied at the early stages of a survey, and that higher-resolvability techniques should alternate with lower-resolvability ones as the survey progresses.
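As a minimal sketch of the entropy idea (the outcome probabilities below are invented for illustration, not values from the study), the geoinformation gained by one survey stage can be scored as the drop in Shannon entropy from prior to posterior:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior = [0.25, 0.25, 0.25, 0.25]   # four equally likely subsurface models
posterior = [0.7, 0.1, 0.1, 0.1]   # beliefs after a coarse, low-cost survey

# Information gained = entropy removed by the survey stage.
gain = shannon_entropy(prior) - shannon_entropy(posterior)
```

Dividing such gains by the cost of each technique gives the information-cost comparison the abstract describes.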
76 FR 24027 - Federal Acquisition Regulation; Information Collection; Buy American Act Certificate
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
...; Information Collection; Buy American Act Certificate AGENCY: Department of Defense (DOD), General Services... approved information collection requirement concerning the Buy American Act certificate. Public comments... the use of appropriate technological collection techniques or other forms of information technology...
Safe Use of Pesticides, Guidelines. Occupational Safety and Health Series No. 38.
ERIC Educational Resources Information Center
International Labour Office, Geneva (Switzerland).
This document provides guidance on the safe use of pesticides in agricultural work. General principles are given and followed by more detailed safety requirements for the various pesticide application techniques. Finally, the medical aspects of pesticides are considered. (BB)
Constrained optimization of image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1973-01-01
A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.
Fantini, Sergio; Sassaroli, Angelo; Tgavalekos, Kristen T.; Kornbluth, Joshua
2016-01-01
Abstract. Cerebral blood flow (CBF) and cerebral autoregulation (CA) are critically important to maintain proper brain perfusion and supply the brain with the necessary oxygen and energy substrates. Adequate brain perfusion is required to support normal brain function, to achieve successful aging, and to navigate acute and chronic medical conditions. We review the general principles of CBF measurements and the current techniques to measure CBF based on direct intravascular measurements, nuclear medicine, X-ray imaging, magnetic resonance imaging, ultrasound techniques, thermal diffusion, and optical methods. We also review techniques for arterial blood pressure measurements as well as theoretical and experimental methods for the assessment of CA, including recent approaches based on optical techniques. The assessment of cerebral perfusion in the clinical practice is also presented. The comprehensive description of principles, methods, and clinical requirements of CBF and CA measurements highlights the potentially important role that noninvasive optical methods can play in the assessment of neurovascular health. In fact, optical techniques have the ability to provide a noninvasive, quantitative, and continuous monitor of CBF and autoregulation. PMID:27403447
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn; Xiao, Tiqiao, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn
Purpose: Propagation-based phase-contrast CT (PPCT) utilizes highly sensitive phase-contrast technology applied to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT by using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors’ experimental results demonstrated the validity of the PPCT technique based on the generalized PAD-based method. In addition, the results show that the authors’ technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.
TopMaker: Technique Developed for Automatic Multiblock Topology Generation Using the Medial Axis
NASA Technical Reports Server (NTRS)
Rigby, David L.
2004-01-01
The TopMaker technique was developed in an effort to reduce the time required for grid generation in complex numerical studies. Topology generation accounts for much of the man-hours required for structured multiblock grids. With regard to structured multiblock grids, topology refers to how the blocks are arranged and connected. A two-dimensional multiblock topology generation technique has been developed at the NASA Glenn Research Center. Very general configurations can be addressed by the technique. A configuration is defined by a collection of non-intersecting closed curves, which will be referred to as loops. More than a single loop implies that holes exist in the domain, which poses no problem. This technique requires only the medial vertices and the touch points that define each vertex. From the information about the medial vertices, the connectivity between medial vertices is generated. The physical shape of the medial edge is not required. By applying a few simple rules to each medial edge, a multiblock topology can be generated without user intervention. The resulting topologies contain only the level of complexity dictated by the configurations. Grid lines remain attached to the boundary except at sharp concave turns, where a change in index family is introduced as would be desired. Keeping grid lines attached to the boundary is especially important in computational fluid dynamics, where highly clustered grids are used near no-slip boundaries. This technique is simple and robust and can easily be incorporated into the overall grid-generation process.
Balancing generality and specificity in component-based reuse
NASA Technical Reports Server (NTRS)
Eichmann, David A.; Beck, Jon
1992-01-01
For a component industry to be successful, we must move beyond the current techniques of black box reuse and genericity to a more flexible framework supporting customization of components as well as instantiation and composition of components. Customization of components strikes a balance between creating dozens of variations of a base component and requiring the overhead of unnecessary features of an 'everything but the kitchen sink' component. We argue that design and instantiation of reusable components have competing criteria - design-for-use strives for generality, design-with-reuse strives for specificity - and that providing mechanisms for each can be complementary rather than antagonistic. In particular, we demonstrate how program slicing techniques can be applied to customization of reusable components.
High-sensitivity determination of Zn(II) and Cu(II) in vitro by fluorescence polarization
NASA Astrophysics Data System (ADS)
Thompson, Richard B.; Maliwal, Badri P.; Feliccia, Vincent; Fierke, Carol A.
1998-04-01
Recent work has suggested that free Cu(II) may play a role in syndromes such as Crohn's and Wilson's diseases, as well as being a pollutant toxic at low levels to shellfish and sheep. Similarly, Zn(II) has been implicated in some neural damage in the brain resulting from epilepsy and ischemia. Several high sensitivity methods exist for determining these ions in solution, including GFAAS, ICP-MS, ICP-ES, and electrochemical techniques. However, these techniques are generally slow and costly, require pretreatment of the sample, require complex instruments and skilled personnel, and are incapable of imaging at the cellular and subcellular level. To address these shortcomings, we developed fluorescence polarization (anisotropy) biosensing methods for these ions which are very sensitive, highly selective, require simple instrumentation and little pretreatment, and are inexpensive. Thus free Cu(II) or Zn(II) can be determined at picomolar levels by changes in fluorescence polarization, lifetime, or wavelength ratio using these methods; these techniques may be adapted to microscopy.
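The polarization readout behind such biosensing can be sketched with the standard steady-state anisotropy formula (the free/bound limiting values and intensities below are hypothetical): metal binding changes the labeled protein's rotational mobility, shifting the measured anisotropy between two limiting values.

```python
def anisotropy(i_par, i_perp, g=1.0):
    """Steady-state fluorescence anisotropy r = (I_par - G*I_perp) / (I_par + 2*G*I_perp)."""
    return (i_par - g * i_perp) / (i_par + 2.0 * g * i_perp)

# Hypothetical limiting anisotropies for the sensor with no metal (free)
# and with saturating metal (bound); a reading in between gives the bound
# fraction (ignoring intensity weighting, for simplicity).
r_free, r_bound = 0.05, 0.25
r_obs = anisotropy(2.0, 1.2)                       # measured parallel/perpendicular intensities
fraction_bound = (r_obs - r_free) / (r_bound - r_free)
```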
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
...; Submission for OMB Review; Buy American Act Certificate AGENCIES: Department of Defense (DOD), General... previously approved information collection requirement concerning the Buy American Act certificate. This... the use of appropriate technological collection techniques or other forms of information technology...
Centrifugal pumps for rocket engines
NASA Technical Reports Server (NTRS)
Campbell, W. E.; Farquhar, J.
1974-01-01
The use of centrifugal pumps for rocket engines is described in terms of general requirements of operational and planned systems. Hydrodynamic and mechanical design considerations and techniques and test procedures are summarized. Some of the pump development experiences, in terms of both problems and solutions, are highlighted.
Phase Retrieval System for Assessing Diamond Turning and Optical Surface Defects
NASA Technical Reports Server (NTRS)
Dean, Bruce; Maldonado, Alex; Bolcar, Matthew
2011-01-01
An optical design is presented for a measurement system used to assess the impact of surface errors originating from diamond turning artifacts. Diamond turning artifacts are common by-products of optical surface shaping using the diamond turning process (a diamond-tipped cutting tool used in a lathe configuration). Assessing and evaluating the errors imparted by diamond turning (including other surface errors attributed to optical manufacturing techniques) can be problematic and generally requires the use of an optical interferometer. Commercial interferometers can be expensive when compared to the simple optical setup developed here, which is used in combination with an image-based sensing technique (phase retrieval). Phase retrieval is a general term used in optics to describe the estimation of optical imperfections or aberrations. This turnkey system uses only image-based data and has minimal hardware requirements. The system is straightforward to set up, easy to align, and can provide nanometer accuracy on the measurement of optical surface defects.
NASA Astrophysics Data System (ADS)
Hamm, L. L.; Vanbrunt, V.
1982-08-01
The numerical solution of the ordinary differential equation which describes the high-pressure vapor-liquid equilibria of a binary system where one of the components is supercritical and exists as a noncondensable gas in the pure state is considered, with emphasis on the implicit Runge-Kutta and orthogonal collocation methods. Some preliminary results indicate that the implicit Runge-Kutta method is superior. Due to the extreme nonlinearity of thermodynamic properties in the region near the critical locus, an extended cubic spline fitting technique is devised for correlating the P-x data. The least-squares criterion is employed in smoothing the experimental data. The technique could easily be applied to any thermodynamic data by changing the endpoint requirements. The volumetric behavior of the systems must be given or predicted in order to perform thermodynamic consistency tests. A general procedure is developed for predicting the required volumetric behavior, and some indication is given of the expected limit of accuracy.
The feasibility of laparoscopic extraperitoneal hernia repair under local anesthesia.
Ferzli, G; Sayad, P; Vasisht, B
1999-06-01
Laparoscopic preperitoneal herniorrhaphy has the advantage of being a minimally invasive procedure with a recurrence rate comparable to open preperitoneal repair. However, surgeons have been reluctant to adopt this procedure because it requires general anesthesia. In this report, we describe the technique used in the laparoscopic repair of inguinal hernias under local anesthesia using the preperitoneal approach. We also report our results with 10 inguinal hernias repaired using the same technique. Ten patients underwent their primary inguinal hernia repairs under local anesthesia. None were converted to general anesthesia. Four patients received a small amount of intravenous sedation. Three patients had bilateral hernias. There were five direct and eight indirect hernias. The average operative time was 47 min. The average lidocaine usage was 28 cc. All patients were discharged within a few hours of the surgery. There were no complications. Follow-up has ranged from 1 to 6 months. There have been no recurrences to date. The extraperitoneal laparoscopic repair of inguinal hernia is feasible under local anesthesia. This technique adds a new treatment option in the management of bilateral inguinal hernias, particularly in the population where general anesthesia is contraindicated or even for patients who are reluctant to receive general or epidural anesthesia.
Generalized query-based active learning to identify differentially methylated regions in DNA.
Haque, Md Muksitul; Holder, Lawrence B; Skinner, Michael K; Cook, Diane J
2013-01-01
Active learning is a supervised learning technique that reduces the number of examples required for building a successful classifier, because it can choose the data it learns from. This technique holds promise for many biological domains in which classified examples are expensive and time-consuming to obtain. Most traditional active learning methods ask very specific queries to the Oracle (e.g., a human expert) to label an unlabeled example. The example may consist of numerous features, many of which are irrelevant. Removing such features will create a shorter query with only relevant features, and it will be easier for the Oracle to answer. We propose a generalized query-based active learning (GQAL) approach that constructs generalized queries based on multiple instances. By constructing appropriately generalized queries, we can achieve higher accuracy compared to traditional active learning methods. We apply our active learning method to find differentially methylated DNA regions (DMRs). DMRs are DNA locations in the genome that are known to be involved in tissue differentiation, epigenetic regulation, and disease. We also apply our method to 13 other data sets and show that our method is better than another popular active learning technique.
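A minimal sketch of the plain uncertainty-sampling baseline that such methods build on (the two-cluster synthetic data stand in, hypothetically, for methylated vs. unmethylated feature vectors; the paper's GQAL approach additionally generalizes the queries themselves):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two Gaussian clusters of 2-D feature vectors.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fit_logistic(X, y, lr=0.1, steps=200):
    """Logistic regression by plain gradient descent; returns weights (bias included)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Start with two labeled examples per class, then repeatedly query the
# "Oracle" (here simply the known labels y) for the most uncertain
# example: the one whose predicted probability is closest to 0.5.
labeled = [0, 1, 50, 51]
for _ in range(10):
    w = fit_logistic(X[labeled], y[labeled])
    p = predict_proba(w, X)
    p[labeled] = 0.0                         # treat labeled points as maximally certain
    query = int(np.argmin(np.abs(p - 0.5)))
    labeled.append(query)

w = fit_logistic(X[labeled], y[labeled])
accuracy = np.mean((predict_proba(w, X) > 0.5) == y)
```

Each round retrains on the labeled pool and asks the Oracle about only the single most ambiguous example, which is why far fewer labels are needed than with random sampling.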
Tumescent and syringe liposculpture: a logical partnership.
Hunstad, J P
1995-01-01
Liposuction has been traditionally performed under general anesthesia. Standard instrumentation for the procedure has included blunt-tipped suction cannulae connected to an electric vacuum pump by noncollapsible tubing. A subcutaneous injection of lidocaine with epinephrine is routinely employed to minimize blood loss during the procedure. This infiltration has been described as the "wet technique," but it is not a method to supplant general anesthesia. The tumescent technique, a method of infusing very large volumes of dilute lidocaine with epinephrine solutions, has been advocated as a satisfactory means for providing conscious anesthesia for liposuction procedures, avoiding the need for general anesthesia. The syringe technique employs blunt-tipped suction cannulae connected to a syringe. Drawing back the syringe plunger generates the negative pressures needed to remove fat during liposuction and replaces the electric vacuum pump and connecting tubing traditionally used for this procedure. This study evaluates the combined tumescent and syringe techniques for liposuction. One hundred consecutive patients were treated with the tumescent technique as the sole means of anesthesia and the syringe technique as the sole means of performing liposuction. A modified tumescent formula is presented. Liposuction aspirates obtained using this modified tumescent technique are compared and contrasted with those obtained using the "dry technique" and the "wet technique." A historical review of the syringe technique and its perceived attributes is also presented. Technical descriptions of the tumescent infusion method, tumescent fluid formulation, and suggested patient sedation and monitoring are presented. Photographic documentation of patients who underwent the combined tumescent and syringe liposculpture treating various body areas is shown.
A critical analysis of the limitations of this combined technique is also presented, noting added time requirements, difficulties with under-correction of deformities and the resulting need for reoperation, methods for determining the "end-point" of the procedure, and problems with large-volume liposuction. The conclusion reached by this study is that combining the tumescent technique and the syringe technique is a logical partnership. Each method complements the other, allowing liposuction to be performed with considerable advantage over traditional methods. These advantages include eliminating the need for general anesthesia, lessening blood loss and postoperative bruising, greater accuracy and precision, and overall high patient satisfaction.
NACA Conference on Aerodynamic Problems of Transonic Airplane Design
NASA Technical Reports Server (NTRS)
1949-01-01
During the past several years it has been necessary for aeronautical research workers to exert a good portion of their effort in developing the means for conducting research in the high-speed range. The transonic range particularly has presented a very acute problem because of the choking phenomena in wind tunnels at speeds close to the speed of sound. At the same time, the multiplicity of design problems for aircraft introduced by the peculiar flow problems of the transonic speed range has given rise to an enormous demand for detail design data. Substantial progress has been made, however, in developing the required research techniques and in supplying the demand for aerodynamic data required for design purposes. In meeting this demand, it has been necessary to resort to new techniques possessing such novel features that the results obtained have had to be viewed with caution. Furthermore, the kinds of measurements possible with these various techniques are so varied that the correlation of results obtained by different techniques generally becomes an indirect process that can only be accomplished in conjunction with the application of estimates of the extent to which the results of measurements by any given technique are modified by differences that are inherent in the techniques. Thus, in the establishment of the validity and applicability of data obtained by any given technique, direct comparisons between data from different sources are a supplement to but not a substitute for the detailed knowledge required of the characteristics of each technique and fundamental aerodynamic flow phenomena.
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (this constitutes the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
Influence of cross section variations on the structural behaviour of composite rotor blades
NASA Astrophysics Data System (ADS)
Rapp, Helmut; Woerndle, Rudolf
1991-09-01
A highly sophisticated structural analysis is required for helicopter rotor blades with nonhomogeneous cross sections made from nonisotropic material. Combinations of suitable analytical techniques with FEM-based techniques permit a cost effective and sufficiently accurate analysis of these complicated structures. It is determined that in general the 1D engineering theory of bending combined with 2D theories for determining the cross section properties is sufficient to describe the structural blade behavior.
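The combined 1D/2D approach can be sketched with modulus-weighted section properties (the strip stiffnesses and geometry below are invented for illustration, and each strip's own bending inertia is neglected): the 2D step reduces a nonhomogeneous cross section to effective stiffnesses that the 1D engineering theory of bending then uses.

```python
# Idealized cross-section strips: (E [Pa], area [m^2], centroid height z [m]).
sections = [
    (140e9, 2.0e-4, 0.010),   # upper spar cap (carbon fiber)
    (140e9, 2.0e-4, -0.010),  # lower spar cap (carbon fiber)
    (4e9,   6.0e-4, 0.000),   # web/core (glass/foam)
]

EA = sum(E * A for E, A, _ in sections)                 # effective axial stiffness
z_bar = sum(E * A * z for E, A, z in sections) / EA     # modulus-weighted centroid
EI = sum(E * A * (z - z_bar) ** 2 for E, A, z in sections)  # effective bending stiffness

# EA and EI are exactly the cross-section properties that enter the
# 1D beam bending equations for the blade.
```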
Electrophoretic separator for purifying biologicals
NASA Technical Reports Server (NTRS)
1976-01-01
This technique separates a single narrow zone of sample mixture in an electrolyte medium into many zones containing a single component of the mixture and electrolyte between them. Since the densities of the separated zones generally differ from that of the intervening medium, such systems are gravitationally unstable and stabilization is required. The various techniques for stabilization include using the capillary space provided by thin films, the interstices of solid material such as filter paper and a variety of gel-forming substances.
Olvingson, C; Hallberg, N; Timpka, T; Lindqvist, K
2002-01-01
To evaluate Use Case Maps (UCMs) as a technique for Requirements Engineering (RE) in the development of information systems with functions for spatial analyses in inter-organizational public health settings. In this study, Participatory Action Research (PAR) is used to explore the UCM notation for requirements elicitation and to gather the opinions of the users. The Delphi technique is used to reach consensus in the construction of UCMs. The results show that UCMs can provide a visualization of the system's functionality and, in combination with PAR, provide a sound basis for gathering requirements in inter-organizational settings. UCMs were found to represent a suitable level for describing the organization and the dynamic flux of information, including spatial resolution, to all stakeholders. Moreover, by using PAR, the voices of the users and their tacit knowledge are captured. Further, UCMs are found useful in generating intuitive requirements by the creation of use cases. With UCMs and PAR it is possible to study the effects of design changes in the general information display and the spatial resolution in the same context. Both requirements on the information system in general and the functions for spatial analyses can be elicited by identifying the different responsibilities and the demands on spatial resolution associated with the actions of each administrative unit. However, the development process of UCM is not well documented and needs further investigation and formulation of guidelines.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
The net upward longwave surface radiation is exceedingly difficult to measure from space. A hybrid method using General Circulation Model (GCM) simulations and satellite data from the Earth Radiation Budget Experiment (ERBE) and the International Satellite Cloud Climatology Project (ISCCP) was used to produce global maps of this quantity over oceanic areas. An advantage of this technique is that no independent knowledge or assumptions regarding cloud cover for a particular month are required. The only information required is a relationship between the cloud radiation forcing (CRF) at the top of the atmosphere and that at the surface, which is obtained from the GCM simulation. A flow diagram of the technique and results are given.
Salloum, Mariah L; Eberlin, Kyle R; Sethna, Navil; Hamdan, Usama S
2009-11-01
Perioperative analgesia in patients undergoing cleft lip and palate repair is complicated by the risk of postoperative airway obstruction. We describe a technique of combined infraorbital and external nasal nerve blocks to reduce the need for opioid analgesia. Using this technique, we have successfully performed cleft lip repair under local anesthesia alone, without general anesthesia or intravenous sedation, in adolescents and adults. In children, this technique can reduce the need for postoperative opioids. We describe this novel analgesic approach to decrease opioid requirements and minimize perioperative risk.
Surgical treatment of axillary hyperhidrosis by suction-curettage of sweat glands*
de Rezende, Rebeca Maffra; Luz, Flávio Barbosa
2014-01-01
Suction curettage is a dermatologic surgery technique for the treatment of axillary hyperhidrosis, which is becoming more popular. Objective: The purpose of this study is to describe the current technique of removal of axillary sweat glands, and evaluate its efficacy and safety. Conclusion: Suction-curettage of sweat glands is a minimally invasive surgical technique that is easy to perform, safe, has high rates of success and relatively few side-effects. It is generally well tolerated by patients and requires shorter time away from daily activities, when compared with other surgical modalities. PMID:25387499
Ecological Effects of Weather Modification: A Problem Analysis.
ERIC Educational Resources Information Center
Cooper, Charles F.; Jolly, William C.
This publication reviews the potential hazards to the environment of weather modification techniques as they eventually become capable of producing large scale weather pattern modifications. Such weather modifications could result in ecological changes which would generally require several years to be fully evident, including the alteration of…
General Nature of Multicollinearity in Multiple Regression Analysis.
ERIC Educational Resources Information Center
Liu, Richard
1981-01-01
Discusses multiple regression, a very popular statistical technique in the field of education. One of the basic assumptions in regression analysis requires that independent variables in the equation should not be highly correlated. The problem of multicollinearity and some of the solutions to it are discussed. (Author)
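The collinearity check discussed here is commonly quantified with the variance inflation factor, VIF_j = 1/(1 - R_j²), where R_j² comes from regressing predictor j on the remaining predictors; a minimal sketch on synthetic data (invented for illustration):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X."""
    n, k = X.shape
    out = []
    for j in range(k):
        yj = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.hstack([others, np.ones((n, 1))])        # add intercept column
        coef, *_ = np.linalg.lstsq(A, yj, rcond=None)
        resid = yj - A @ coef
        r2 = 1.0 - resid.var() / yj.var()               # R^2 of predictor j on the rest
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)                # independent of x1: VIF near 1
x3 = x1 + 0.05 * rng.normal(size=200)    # nearly collinear with x1: large VIF
X = np.column_stack([x1, x2, x3])
v = vif(X)
```

A common rule of thumb treats VIF above about 10 as problematic; here x1 and x3 are nearly collinear and flag accordingly, while x2 stays near 1.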
33 CFR 105.205 - Facility Security Officer (FSO).
Code of Federal Regulations, 2011 CFR
2011-07-01
... maintain a TWIC. (b) Qualifications. (1) The FSO must have general knowledge, through training or... of conducting audits, inspections, control, and monitoring techniques. (2) In addition to knowledge and training required in paragraph (b)(1) of this section, the FSO must have knowledge of and receive...
Phase-Enhanced 3D Snapshot ISAR Imaging and Interferometric SAR
2009-12-28
generalized technique requires the precession angle θp be relatively small [see Eq. (28)]. However, the noncoherent snapshot image equations remain...valid beyond this precession limit, and the unique sampling grid developed is still very useful for 3D imaging of the noncoherent snapshot equation
Vocal Qualities in Music Theater Voice: Perceptions of Expert Pedagogues.
Bourne, Tracy; Kenny, Dianna
2016-01-01
To gather qualitative descriptions of music theater vocal qualities including belt, legit, and mix from expert pedagogues to better define this voice type. This is a prospective, semistructured interview. Twelve expert teachers from United States, United Kingdom, Asia, and Australia were interviewed by Skype and asked to identify characteristics of music theater vocal qualities including vocal production, physiology, esthetics, pitch range, and pedagogical techniques. Responses were compared with published studies on music theater voice. Belt and legit were generally described as distinct sounds with differing physiological and technical requirements. Teachers were concerned that belt should be taught "safely" to minimize vocal health risks. There was consensus between teachers and published research on the physiology of the glottis and vocal tract; however, teachers were not in agreement about breathing techniques. Neither were teachers in agreement about the meaning of "mix." Most participants described belt as heavily weighted, thick folds, thyroarytenoid-dominant, or chest register; however, there was no consensus on an appropriate term. Belt substyles were named and generally categorized by weightedness or tone color. Descriptions of male belt were less clear than for female belt. This survey provides an overview of expert pedagogical perspectives on the characteristics of belt, legit, and mix qualities in the music theater voice. Although teacher responses are generally in agreement with published research, there are still many controversial issues and gaps in knowledge and understanding of this vocal technique. Breathing techniques, vocal range, mix, male belt, and vocal registers require continuing investigation so that we can learn more about efficient and healthy vocal function in music theater singing. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Bashandy, Ghada Mohammad Nabih; Elkholy, Abeer Hassan Hamed
2014-08-01
Many multimodal analgesia techniques have been tried to provide adequate analgesia for midline incisions extending above and below the umbilicus aiming at limiting the perioperative use of morphine thus limiting side effects. Ultrasound (US) guidance made the anesthesiologist reconsider old techniques for wider clinical use. The rectus sheath block (RSB) is a useful technique under-utilized in the adult population. Our study examined the efficacy of a preemptive single-injection rectus sheath block in providing better early postoperative pain scores compared to general anesthesia alone. Sixty patients were recruited in this randomized controlled trial. These patients were divided into two groups: RSB group had an RSB after induction of anesthesia and before surgical incision, and GA (general anesthesia) group had general anesthesia alone. Both groups were compared for verbal analogue scale (VAS) score, opioid consumption and hemodynamic variables in the post-anesthesia care unit (PACU). Analgesic requirements in surgical wards were recorded in postoperative days (POD) 0, 1 and 2. The median VAS score was significantly lower in RSB group compared with GA group in all 5 time points in the PACU (P < 0.05). Also PACU morphine consumption was lower in RSB group than GA group patients (95% confidence interval [CI] of the difference in means between groups, -4.59 to -2.23 mg). Morphine consumption was also less in the first 2 postoperative days (POD0 and POD1). Ultrasound-guided rectus sheath block is an easy technique to learn. This technique, when it is used with general anesthesia, will be more effective in reducing pain scores and opioid consumption compared with general anesthesia alone.
Bashandy, Ghada Mohammad Nabih; Elkholy, Abeer Hassan Hamed
2014-01-01
Background: Many multimodal analgesia techniques have been tried to provide adequate analgesia for midline incisions extending above and below the umbilicus, aiming at limiting the perioperative use of morphine and thus limiting side effects. Ultrasound (US) guidance has made the anesthesiologist reconsider old techniques for wider clinical use. The rectus sheath block (RSB) is a useful technique under-utilized in the adult population. Objectives: Our study examined the efficacy of a preemptive single-injection rectus sheath block in providing better early postoperative pain scores compared to general anesthesia alone. Patients and Methods: Sixty patients were recruited in this randomized controlled trial. These patients were divided into two groups: the RSB group had an RSB after induction of anesthesia and before surgical incision, and the GA (general anesthesia) group had general anesthesia alone. Both groups were compared for verbal analogue scale (VAS) score, opioid consumption and hemodynamic variables in the post-anesthesia care unit (PACU). Analgesic requirements in surgical wards were recorded on postoperative days (POD) 0, 1 and 2. Results: The median VAS score was significantly lower in the RSB group compared with the GA group at all 5 time points in the PACU (P < 0.05). PACU morphine consumption was also lower in the RSB group than in the GA group (95% confidence interval [CI] of the difference in means between groups, −4.59 to −2.23 mg). Morphine consumption was also less in the first 2 postoperative days (POD0 and POD1). Conclusions: Ultrasound-guided rectus sheath block is an easy technique to learn. When used with general anesthesia, this technique is more effective in reducing pain scores and opioid consumption than general anesthesia alone. PMID:25289373
Network representations of angular regions for electromagnetic scattering
2017-01-01
Network modeling in electromagnetics is an effective technique for treating scattering problems involving canonical and complex structures. Geometries constituted of angular regions (wedges) together with planar layers can now be approached with the Generalized Wiener-Hopf Technique supported by network representation in the spectral domain. Although the network representations in spectral planes are of great importance in themselves, the aim of this paper is to present a theoretical basis and a general procedure for the formulation of complex scattering problems using network representation for the Generalized Wiener-Hopf Technique, starting basically from the wave equation. In particular, while the spectral network representations are relatively well known for planar layers, the network modeling of an angular region requires a new theory that is developed in this paper. With this theory we complete the formulation of a network methodology whose effectiveness is demonstrated by application to a complex scattering problem, with practical solutions given in terms of GTD/UTD diffraction coefficients and total far fields for engineering applications. The methodology can be applied to other fields of physics. PMID:28817573
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently, attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
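The time-stepping elimination described above can be illustrated with a minimal sketch (not the paper's transfinite element algorithm): for a linear, semi-discretized heat equation du/dt = Au, the response at any selected time follows directly from the matrix exponential, the time-domain counterpart of a Laplace-transform solution. The rod geometry, diffusivity, and node count below are hypothetical.

```python
# Sketch: for the linear, semi-discretized heat equation du/dt = A u with
# u(0) = u0, the response at any chosen time t is u(t) = expm(A t) @ u0,
# evaluated directly at the times of interest -- no time stepping.
import numpy as np
from scipy.linalg import expm

n = 20                      # interior nodes of a unit-length 1D rod (hypothetical)
h = 1.0 / (n + 1)           # grid spacing
alpha = 1.0                 # thermal diffusivity (hypothetical)

# Standard 1D finite-difference Laplacian with fixed (zero) end temperatures.
A = (alpha / h**2) * (np.diag(-2.0 * np.ones(n))
                      + np.diag(np.ones(n - 1), 1)
                      + np.diag(np.ones(n - 1), -1))

x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(np.pi * x)      # initial temperature: the slowest decay mode

# Evaluate only at the selected times of interest.
for t in (0.05, 0.5):
    u = expm(A * t) @ u0
    # Exact continuous solution for this initial condition.
    exact = np.exp(-np.pi**2 * alpha * t) * u0
    print(t, np.max(np.abs(u - exact)))
```

Because u0 is a decay mode of the discrete operator, the error against the closed-form solution reflects only spatial discretization, not any time-marching error.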
Perioperative Acupuncture and Related Techniques
Chernyak, Grigory V.; Sessler, Daniel I.
2005-01-01
Acupuncture and related techniques are increasingly practiced in conventional medical settings, and the number of patients willing to use these techniques is increasing. Despite more than 30 years of research, the exact mechanism of action and efficacy of acupuncture have not been established. Furthermore, most aspects of acupuncture have yet to be adequately tested. There thus remains considerable controversy about the role of acupuncture in clinical medicine. Acupuncture apparently does not reduce volatile anesthetic requirement by a clinically important amount. However, preoperative sedation seems to be a promising application of acupuncture in perioperative settings. Acupuncture may be effective for postoperative pain relief but requires a high level of expertise by the acupuncture practitioner. Acupuncture and related techniques can be used for treatment and prophylaxis of postoperative nausea and vomiting in routine clinical practice in combination with, or as an alternative to, conventional antiemetics when administered before induction of general anesthesia. Summary Statement: The use of acupuncture for perioperative analgesia, nausea and vomiting, sedation, anesthesia, and complications is reviewed. PMID:15851892
Robb, N
2014-03-01
The basic techniques of conscious sedation have been found to be safe and effective for the management of anxiety in adult dental patients requiring sedation to allow them to undergo dental treatment. There remains great debate within the profession as to the role of the so-called advanced sedation techniques. This paper presents a series of nine patients who were managed with advanced sedation techniques where the basic techniques were either inappropriate or had previously failed to provide adequate relief of anxiety. In these cases, had advanced sedation techniques not been available, the most likely recourse would have been general anaesthesia, a treatment modality that current guidance indicates should not be used where there is an appropriate alternative. The sedation techniques used provided that appropriate alternative management strategy.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
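As a point of comparison, here is a minimal sketch of the standard nodal admittance formulation that the generalized approach extends: assemble the admittance matrix from a branch list, take node 0 as the common voltage reference, and solve for the remaining node voltages. The branch values are hypothetical, and this is not the paper's generalized component model.

```python
# Standard nodal admittance (Y-bus) sketch for a small resistive network.
import numpy as np

branches = [          # (from_node, to_node, admittance in siemens) -- hypothetical
    (0, 1, 10.0),
    (1, 2, 5.0),
    (0, 2, 2.0),
]
n = 3
Y = np.zeros((n, n))
for i, j, y in branches:
    Y[i, i] += y      # each branch adds to both diagonal entries
    Y[j, j] += y
    Y[i, j] -= y      # and subtracts from the off-diagonal pair
    Y[j, i] -= y

I = np.array([0.0, 0.0, 1.0])   # 1 A injected at node 2
# Node 0 is the voltage reference: delete its row and column before solving.
V = np.linalg.solve(Y[1:, 1:], I[1:])
print(V)              # voltages at nodes 1 and 2 relative to node 0
```

Deleting the reference node's row and column is exactly the "common node voltage reference" assumption that the generalized admittance matrix described above avoids.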
Protecting Against Faults in JPL Spacecraft
NASA Technical Reports Server (NTRS)
Morgan, Paula
2007-01-01
A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
NASA Technical Reports Server (NTRS)
James, T. A.; Hall, B. C.; Newbold, P. M.
1972-01-01
A comparative evaluation was made of eight higher order languages of general interest in the aerospace field: PL/I; HAL; JOVIAL/J3; SPL/J6; CLASP; ALGOL 60; FORTRAN IV; and MAC360. A summary of the functional requirements for a language for general use in manned aerospace applications is presented. The evaluation supplies background material to be used in assessing the worth of each language for some particular application.
Hybrid Grid Techniques for Propulsion Applications
NASA Technical Reports Server (NTRS)
Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.
1996-01-01
During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block and unstructured procedures. Structured grids tend to be computationally efficient, and have the high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient, and it is difficult to resolve viscous layers. Recently, hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.
Kirkbride, K Paul; Tridico, Silvana R
2010-02-25
An initial investigation of the application of laser scanning confocal microscopy to the examination of hairs and fibers has been conducted. This technique allows the production of virtual transverse and longitudinal cross-sectional images of a wide range of hairs and fibers. Special mounting techniques are not required; specimens that have been mounted for conventional microscopy require no further treatment. Unlike physical cross-sectioning, in which it is difficult to produce multiple cross-sections from a single hair or fiber and the process is destructive, confocal microscopy allows the examiner to image the cross-section at any point in the field of view along the hair or fiber, and it is non-destructive. Confocal microscopy is a fluorescence-based technique. The images described in this article were collected using only the autofluorescence exhibited by the specimen (i.e. fluorescence staining was not necessary). Colorless fibers and hairs generally required excitation at 405 nm in order to stimulate useful autofluorescence; longer wavelength excitation was suitable for dyed fibers. Although confocal microscopy was found to be generally applicable to the generation of virtual transverse cross-sections from a wide range of hairs and fibers, on some occasions the autofluorescence signal was attenuated by heavy pigmentation or the presence of an opaque medulla in hairs, and by heavy delustering or the presence of air-filled voids in the case of fibers. In these situations only partial cross-sections were obtained. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Development of Low Cost Satellite Communications System for Helicopters and General Aviation
NASA Technical Reports Server (NTRS)
Farazian, K.; Abbe, B.; Divsalar, D.; Raphaeli, D.; Tulintseff, A.; Wu, T.; Hinedi, S.
1994-01-01
In this paper, the development of a low-cost satellite communications (SATCOM) system for helicopters and General Aviation (GA) aircraft is described. System design and standards analysis have been conducted to meet the low-cost, light-weight, small-size, and low-power system requirements for helicopter and GA aircraft environments. Other specific issues investigated include coding schemes, spatial diversity, and antenna arraying techniques. Coding schemes employing Channel State Information (CSI) and interleaving have been studied in order to mitigate severe banking angle fading and the periodic RF signal blockage due to the helicopter rotor blades. In addition, spatial diversity and antenna arraying techniques have been investigated to further reduce the fading effects and increase the link margin.
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.
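The least-squares fitting step described above can be sketched as follows, with invented data standing in for the temperature and stage observations (this is not the study's actual model, variables, or data): a quadratic in accumulated daily mean temperature is fitted to phenological stage by ordinary least squares.

```python
# Hypothetical sketch of least-squares parameter estimation for a
# polynomial phenology predictor. All values are invented for illustration.
import numpy as np

gdd   = np.array([50, 120, 200, 310, 420, 560, 700.0])       # growing degree days
stage = np.array([1.00, 1.57, 2.24, 3.18, 4.14, 5.39, 6.69])  # observed stage

coefs = np.polyfit(gdd, stage, deg=2)       # least-squares quadratic fit
pred  = np.polyval(coefs, gdd)
rmse  = np.sqrt(np.mean((pred - stage) ** 2))
print(coefs, rmse)
```

An exponential form could be fitted to the same data with nonlinear least squares (e.g. `scipy.optimize.curve_fit`); comparing the two residuals is the kind of check behind the abstract's conclusion that exponentials offer little advantage over polynomials.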
Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Lapene, A.; Pauget, L.
2012-12-01
During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible. 
Karimi-Fard et al. [2] have developed an upscaling technique based on the DFM representation. The original version of this technique was developed to construct a dual-porosity model from a discrete fracture description. This technique has been extended and generalized so that it can be applied to a wide range of problems, from reservoirs with few or no fractures to highly fractured reservoirs. In this work, we present the application of these techniques to two three-dimensional fractured reservoirs constructed using real data. The first model contains more than 600 medium and large scale fractures. The fractures are not always connected, which requires a general modeling technique. The reservoir has 50 wells (injectors and producers), and water flooding simulations are performed. The second test case is a larger reservoir with sparsely distributed faults. Single-phase simulations are performed with 5 producing wells. [1] Karimi-Fard M., Durlofsky L.J., and Aziz K. 2004. An efficient discrete-fracture model applicable for general-purpose reservoir simulators. SPE Journal, 9(2): 227-236. [2] Karimi-Fard M., Gong B., and Durlofsky L.J. 2006. Generation of coarse-scale continuum flow models from detailed fracture characterizations. Water Resources Research, 42(10): W10423.
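A hedged sketch of the two-point flux idea behind such a connection list: each pair of adjacent control volumes is assigned a transmissibility formed as the harmonic combination of two half transmissibilities of the general form alpha = k A / d used in finite-volume discretizations (the exact geometric factors in the cited papers differ); the geometry and permeabilities below are hypothetical.

```python
# Two-point flux sketch: connection transmissibility between two
# control volumes sharing an interface of area A.

def half_transmissibility(perm, area, dist):
    """alpha = k * A / d for one side of an interface (dist = centroid-to-face)."""
    return perm * area / dist

def connection_transmissibility(a1, a2):
    """Harmonic combination of the two half transmissibilities."""
    return a1 * a2 / (a1 + a2)

# Matrix cell (low permeability) connected to a fracture cell (high
# permeability, very small centroid-to-face distance). Values hypothetical.
a_matrix   = half_transmissibility(perm=1e-15, area=4.0, dist=0.5)
a_fracture = half_transmissibility(perm=1e-12, area=4.0, dist=5e-4)
T = connection_transmissibility(a_matrix, a_fracture)
print(T)   # dominated by the smaller (matrix-side) half transmissibility
```

The harmonic form makes the matrix-fracture flux controlled by the low-permeability side, which is why the resulting connection list can feed any connectivity-based simulator directly.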
5 CFR 1320.5 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... collection of information unless, in advance of the adoption or revision of the collection of information— (1... use of automated, electronic, mechanical, or other technological collection techniques or other forms... Regulations (see § 1320.3(f)(3)). (D) In other cases, and where OMB determines in advance in writing that...
DOT National Transportation Integrated Search
1971-06-01
A study was conducted in which performance on a non-verbal problem- solving task was correlated with the Otis Quick Scoring Mental Ability Test and the Raven Progressive Matrices Test. The problem-solving task, called 'code- lock' required the subjec...
Collected Notes on the Workshop for Pattern Discovery in Large Databases
NASA Technical Reports Server (NTRS)
Buntine, Wray (Editor); Delalto, Martha (Editor)
1991-01-01
These collected notes are a record of material presented at the Workshop. Core data analysis tasks that have traditionally required statistical or pattern recognition techniques are addressed. These include classification, discrimination, clustering, supervised and unsupervised learning, and discovery and diagnosis, i.e., general pattern discovery.
The New Southern FIA Data Compilation System
V. Clark Baldwin; Larry Royer
2001-01-01
In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on data-base design and not on data processing and calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...
Smartphone Magnification Attachment: Microscope or Magnifying Glass
ERIC Educational Resources Information Center
Hergemöller, Timo; Laumann, Daniel
2017-01-01
Today smartphones and tablets do not merely pervade our daily life, but also play a major role in STEM education in general, and in experimental investigations in particular. Enabling teachers and students to make use of these new techniques in physics lessons requires supplying capable and affordable applications. Our article presents the…
Model-Driven Design: Systematically Building Integrated Blended Learning Experiences
ERIC Educational Resources Information Center
Laster, Stephen
2010-01-01
Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…
Is What Is Good for General Motors Good for Architecture?
ERIC Educational Resources Information Center
Myrick, Richard; And Others
1966-01-01
Problems of behavioral evaluation and determination of initial building stimuli are discussed in terms of architectural analysis. Application of management research techniques requires problem and goal definition. Analysis of both lower and higher order needs is contingent upon these definitions. Lower order needs relate to more abstract…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-31
...; Submission for OMB Review; Payment by Electronic Fund Transfer AGENCY: Department of Defense (DOD), General... collection requirement concerning payment by electronic fund transfer. A notice was published in the Federal... technological collection techniques or other forms of information technology. DATES: Submit comments on or...
A general method for decomposing the causes of socioeconomic inequality in health.
Heckley, Gawain; Gerdtham, Ulf-G; Kjellsson, Gustav
2016-07-01
We introduce a general decomposition method applicable to all forms of bivariate rank dependent indices of socioeconomic inequality in health, including the concentration index. The technique is based on recentered influence function regression and requires only the application of OLS to a transformed variable with similar interpretation. Our method requires few identifying assumptions to yield valid estimates in most common empirical applications, unlike current methods favoured in the literature. Using the Swedish Twin Registry and a within twin pair fixed effects identification strategy, our new method finds no evidence of a causal effect of education on income-related health inequality. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
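For context, the index being decomposed can be computed with the standard "convenient covariance" formula, C = 2 cov(h, R) / mean(h), where R is the fractional socioeconomic rank. The sketch below uses invented data and shows only the index itself, not the authors' RIF-based decomposition.

```python
# Concentration index via the convenient covariance formula.
import numpy as np

income = np.array([10, 20, 30, 40, 50.0])     # hypothetical ranking variable
health = np.array([5, 6, 6, 8, 9.0])          # health rising with income

order = np.argsort(income)
n = len(income)
rank = np.empty(n)
rank[order] = (np.arange(1, n + 1) - 0.5) / n  # fractional rank in (0, 1)

# bias=True gives the population covariance, matching the formula exactly.
C = 2.0 * np.cov(health, rank, bias=True)[0, 1] / health.mean()
print(C)   # positive: health is concentrated among the richer
```

In an RIF regression the index is replaced by its recentered influence function, observation by observation, so that OLS on the transformed variable recovers marginal effects on C; that transformation is where the authors' method does its work.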
Air-to-air radar flight testing
NASA Astrophysics Data System (ADS)
Scott, Randall E.
1988-06-01
This volume in the AGARD Flight Test Techniques Series describes flight test techniques, flight test instrumentation, ground simulation, data reduction and analysis methods used to determine the performance characteristics of a modern air-to-air (a/a) radar system. Following a general coverage of specification requirements, test plans, support requirements, development and operational testing, and management information systems, the report goes into more detailed flight test techniques covering a/a radar capabilities of: detection, manual acquisition, automatic acquisition, tracking a single target, and detection and tracking of multiple targets. There follows a section on additional flight test considerations such as electromagnetic compatibility, electronic countermeasures, displays and controls, degraded and backup modes, radome effects, environmental considerations, and use of testbeds. Other sections cover ground simulation, flight test instrumentation, and data reduction and analysis. The final sections deal with reporting and a discussion of considerations for the future and how they may affect radar flight testing.
Ultrashort pulse energy distribution for propulsion in space
NASA Astrophysics Data System (ADS)
Bergstue, Grant Jared
This thesis effort focuses on the development of a novel, space-based ultrashort pulse transmission system for spacecraft. The goals of this research include: (1) ultrashort pulse transmission strategies for maximizing safety and efficiency; (2) optical transmission system requirements; (3) general system requirements including control techniques for stabilization; (4) optical system requirements for achieving effective ablative propulsion at the receiving spacecraft; and (5) ultrashort pulse transmission capabilities required for future missions in space. A key element of the research is the multiplexing device required for aligning the ultrashort pulses from multiple laser sources along a common optical axis for transmission. This strategy enables access to the higher average and peak powers required for useful missions in space.
Joint Manipulation: Toward a General Theory of High-Velocity, Low-Amplitude Thrust Techniques.
Harwich, Andrew S
2017-12-01
The objective of this study was to describe the initial stage of a generalized theory of high-velocity, low-amplitude thrust (HVLAT) techniques for joint manipulation. This study examined the movements described by authors from the fields of osteopathy, chiropractic, and physical therapy to produce joint cavitation in both the metacarpophalangeal (MCP) joint and the cervical spine apophysial joint. This study qualitatively compared the kinetics, the similarities, and the differences between MCP cavitation and cervical facet joint cavitation. A qualitative vector analysis of forces and movements was undertaken by constructing computer-generated, simplified graphical models of the MCP joint and a typical cervical apophysial joint and imposing the motions dictated by the clinical technique. Comparing the path to cavitation of 2 modes of HVLAT for the MCP joint, namely, distraction and hyperflexion, it was found that the hyperflexion method requires an axis of rotation, the hinge axis, which is also required for cervical HVLAT. These results show that there is an analogue of cervical HVLAT in one of the MCP joint HVLATs. The study demonstrated that in a theoretical model, the path to joint cavitation is the same for asymmetric separation of the joint surfaces in the cervical spine and the MCP joints.
Rastogi, Amit; Gyanesh, Prakhar; Nisha, Surbhi; Agarwal, Appurva; Mishra, Priya; Tiwari, Akhilesh Kumar
2014-04-01
The airway is the foremost challenge in maxillofacial surgery. The major concerns are difficulty in managing the patient's airway and sharing it between the anaesthetist and surgeons. General anaesthesia, with endotracheal intubation, is the commonly used technique for maxillofacial procedures. We assessed the efficacy and safety of a regional block with sedation technique in certain maxillofacial operations, specifically temporomandibular joint (TMJ) ankylosis and mandibular fracture cases, and compared it with conventional general anaesthesia. We compared the time to discharge from the post anaesthesia care unit (PACU) and the occurrence of side effects, as well as surgeon and patient satisfaction with the anaesthetic technique, between the two groups. We enrolled 50 patients of ASA grade 1 or 2, aged 15-50 years, scheduled for maxillofacial surgery (mandibular fracture or TMJ ankylosis). The patients were divided into two groups of 25 each, to receive sedation with a regional block with the use of a peripheral nerve stimulator in group I and general anaesthesia in group II. We observed haemodynamic parameters, intraoperative and postoperative complications and the amount of surgical bleeding in the two groups. Total anaesthesia time, patient and surgeon satisfaction, time to rescue analgesia, the number of rescue doses required, and the time to discharge from the PACU were compared. The groups were comparable with respect to demographic profile, intraoperative haemodynamic parameters, surgical time, and amount of blood loss. Postoperative pain was assessed using the visual analogue score (VAS). Patients in group I had lower VAS scores after surgery and remained pain-free for longer than those in group II. The mean pain-free interval in group I was 159.12 ± 43.95 min and in group II was 60.36 ± 19.77 min (p < 0.005). Patients in group I required lower doses of rescue analgesia than those undergoing the surgery under general anaesthesia (p < 0.005). 
Patients receiving regional blocks also had fewer episodes of postoperative nausea and vomiting (p = 0.005). These results led to earlier discharge of patients in group I from the PACU. Regional block with sedation is a safe alternative technique for patients undergoing surgery for mandible fracture or TMJ ankylosis, with clear advantages over general anaesthesia. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Britt, H; Miller, G C; Steven, I D; Howarth, G C; Nicholson, P A; Bhasale, A L; Norton, K J
1997-04-01
The prediction and subsequent prevention of errors, which are an integral element of human behaviour, require an understanding of their cause. The incident monitoring technique was developed in the study of aviation errors in the Second World War and has been applied more recently in the field of anaesthetics. This pilot study represents one of the first attempts to apply the incident monitoring technique in the general practice environment. A total of 297 GPs across Australia anonymously reported details of unintended events which harmed or could have harmed the patient. Reports were contemporaneously recorded on prepared forms which allowed a free text description of the incident, and structured responses for contributing and mitigating factors, immediate and long-term outcomes, additional costs, etc. The first 500 reports were analysed using both qualitative and quantitative methods, and a brief overview of results is presented. The methodological issues arising in the application of this technique to such a large, widely spread profession, in which episodes of care are not necessarily confined to a single consultation, are discussed. This study demonstrated that the incident monitoring technique can be successfully applied in general practice and that the resulting information can facilitate the identification of common factors contributing to such events and allow the development of preventive interventions.
Apollo experience report: Electrical wiring subsystem
NASA Technical Reports Server (NTRS)
White, L. D.
1975-01-01
The general requirements of the electrical wiring subsystems and the problem areas and solutions that occurred during the major part of the Apollo Program are detailed in this report. The concepts and definitions of specific requirements for electrical wiring; wire-connecting devices; and wire-harness fabrication, checkout, and installation techniques are discussed. The design and development of electrical wiring and wire-connecting devices are described. Mission performance is discussed, and conclusions and recommendations for future programs are presented.
Complex surgery for locally advanced bone and soft tissue sarcomas of the shoulder girdle.
Lesenský, Jan; Mavrogenis, Andreas F; Igoumenou, Vasilios G; Matejovsky, Zdenek; Nemec, Karel; Papagelopoulos, Panayiotis J; Fabbri, Nicola
2017-08-01
Surgical management of primary musculoskeletal tumors of the shoulder girdle is cognitively and technically demanding. Over the last decades, advances in medical treatments, imaging, and surgical techniques have fostered limb salvage surgery and reduced the need for amputation. Despite well-accepted general principles, an individualized approach is often necessary to accommodate tumor extension, anatomical challenges, and patient characteristics. A combination of techniques is often required to achieve an optimal oncologic and durable functional outcome. The goal of this article is to review the approach to and management of patients with locally advanced sarcomas of the shoulder girdle requiring major tumor surgery, to illustrate principles of surgical strategy, outcome, and complications, and to provide useful guidelines for the treating physicians.
NASA Technical Reports Server (NTRS)
Sellers, William L., III; Dwoyer, Douglas L.
1992-01-01
The design of a hypersonic aircraft poses unique challenges to the engineering community. Problems with duplicating flight conditions in ground-based facilities have made performance predictions risky. Computational fluid dynamics (CFD) has been proposed as an additional means of providing design data. At the present time, CFD codes are being validated based on sparse experimental data and then used to predict performance at flight conditions with generally unknown levels of uncertainty. This paper discusses the facility and measurement techniques that are required to support CFD development for the design of hypersonic aircraft. Illustrations are given of recent success in combining experimental and direct numerical simulation in CFD model development and validation for hypersonic perfect gas flows.
Congenital penile curvature: update and management.
Makovey, Iryna; Higuchi, Ty T; Montague, Drogo K; Angermeier, Kenneth W; Wood, Hadley M
2012-08-01
Congenital penile curvature results from disproportionate development of the tunica albuginea of the corporal bodies and is not associated with urethral malformation. Patients usually present after reaching puberty as the curvature becomes more apparent with erections, and severe curvature can make intercourse difficult or impossible, at which point surgical repair is recommended. Excellent outcomes can be expected with surgical intervention. The three most commonly used repair techniques are the original Nesbit procedure, the modified Nesbit procedure, and plication. Nesbit and modified Nesbit techniques require that an incision be made in the tunica albuginea, while plication techniques use plicating sutures without an incision. Although the Nesbit and modified Nesbit techniques are more complex operations, they generally result in fewer recurrences and more satisfactory outcomes than the quicker and simpler plication technique.
Jensen, Anne M; Ramasamy, Adaikalavan; Hall, Michael W
2012-08-01
General flexibility is a key component of health, well-being, and general physical conditioning. Reduced flexibility has both physical and mental/emotional etiologies and can lead to musculoskeletal injuries and athletic underperformance. Few studies have tested the effectiveness of a mind-body therapy on general flexibility. The aim of this study was to investigate whether Neuro Emotional Technique® (NET), a mind-body technique shown to be effective in reducing stress, can also improve general flexibility. The sit-and-reach test (SR) score was used as a measure of general flexibility. Forty-five healthy participants were recruited from the general population and assessed for their initial SR score before being randomly allocated to receive (a) two 20-minute sessions of NET (experimental group); (b) two 20-minute sessions of stretching instruction (active control group); or (c) no intervention or instruction (passive control group). After intervention, the participants were reassessed in a similar manner by the same blind assessor. The participants also answered questions about demographics, usual water and caffeine consumption, and activity level, and they completed an anxiety/mood psychometric instrument before and after the intervention. The mean (SD) change in the SR score was +3.1 cm (2.5) in the NET group, +1.2 cm (2.3) in the active control group, and +1.0 cm (2.6) in the passive control group. Although all three groups showed some improvement, the improvement in the NET group was statistically significant when compared with that of either the passive controls (p = 0.015) or the active controls (p = 0.021). This study suggests that NET could provide an effective treatment for improving general flexibility. A larger study is required to confirm these findings and also to assess the longer-term effectiveness of this therapy on general flexibility.
NASA Technical Reports Server (NTRS)
Melton, John E.
1994-01-01
EGADS is a comprehensive preliminary design tool for estimating the performance of light, single-engine general aviation aircraft. The software runs on the Apple Macintosh series of personal computers and assists amateur designers and aeronautical engineering students in performing the many repetitive calculations required in the aircraft design process. The program makes full use of the mouse and standard Macintosh interface techniques to simplify the input of various design parameters. Extensive graphics, plotting, and text output capabilities are also included.
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post-launch electrical performance of the panel are examined. A general discussion of error analysis is given, and examples of uncertainty calculations are included. A general method of calculating the effect of various degrading environments on the panel is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
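To first order, the kind of error analysis and panel sizing the report describes amounts to combining independent uncertainty sources in quadrature and carrying an n-sigma margin against the degraded output. A minimal sketch of that arithmetic follows; the function names and the margin policy are illustrative assumptions, not taken from the report:

```python
import math

def combined_uncertainty(fractional_sigmas):
    """Root-sum-square combination of independent fractional (1-sigma)
    uncertainties: the standard first-order error-propagation rule."""
    return math.sqrt(sum(s ** 2 for s in fractional_sigmas))

def required_panel_power(load_w, degradation_factor, n_sigma, sigma_frac):
    """Size beginning-of-life panel power so the degraded, n-sigma-low
    output still meets the required load (all inputs are fractions
    except load_w, in watts)."""
    return load_w / (degradation_factor * (1.0 - n_sigma * sigma_frac))

# Example: 3% calibration and 4% radiation-model uncertainty combine
# to 5%; a 100 W load with 20% environmental degradation and a
# 2-sigma margin then requires about 139 W at beginning of life.
sigma = combined_uncertainty([0.03, 0.04])
bol_power = required_panel_power(100.0, 0.8, 2.0, sigma)
```

The quadrature rule assumes the sources are independent; correlated degradation mechanisms would need the full covariance treatment the report's error-analysis discussion points to.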
Martin, Alia; Santos, Laurie R
2014-04-01
Cook et al. propose that mirror neurons emerge developmentally through a domain-general associative mechanism. We argue that experience-sensitivity does not rule out an adaptive or genetic argument for mirror neuron function, and that current evidence suggests that mirror neurons are more specialized than the authors' account would predict. We propose that future work integrate behavioral and neurophysiological techniques used with primates to examine the proposed functions of mirror neurons in action understanding.
Rotative balance of the I.M.F. Lille and associated experimental techniques
NASA Technical Reports Server (NTRS)
Verbrugge, R.
1981-01-01
The study of aerodynamic effects at high incidence associated with motions of wide amplitude incorporating continuous rotations requires the consideration of coupled effects, which are generally nonlinear, in a formulation of equations of motion. A rotative balance designed to simulate such maneuvers in a windtunnel was created to form a test medium for analytical studies. A general description of the assembly is provided by considering two main ranges of application. The capacities and performance of the assembly are discussed.
Terrain modeling for microwave landing system
NASA Technical Reports Server (NTRS)
Poulose, M. M.
1991-01-01
A powerful analytical approach for evaluating terrain effects on a microwave landing system (MLS) is presented. The approach combines a multiplate model with an exhaustive ray-tracing technique and an accurate formulation for estimating the electromagnetic fields due to the antenna array in the presence of terrain. Both uniform theory of diffraction (UTD) and impedance UTD techniques have been employed to evaluate these fields. Innovative techniques are introduced at each stage to make the model versatile enough to handle the most general terrain contours and to keep the computational requirements to a minimum. The model is applied to several terrain geometries, and the results are discussed.
NASA Technical Reports Server (NTRS)
Margaria, Tiziana (Inventor); Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Steffen, Bernard (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which, in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements-based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation via automata learning. This may support the systematic completion of the requirements, which are partial by nature and focus on the most prominent scenarios. It may generalize requirement skeletons by extrapolation and may indicate, by way of automatically generated traces, where the requirement specification is too loose and additional information is required.
Microwave, Millimeter, Submillimeter, and Far Infrared Spectral Databases
NASA Technical Reports Server (NTRS)
Pearson, J. C.; Pickett, H. M.; Drouin, B. J.; Chen, P.; Cohen, E. A.
2002-01-01
The spectrum of most known astrophysical molecules is derived from transitions between a few hundred to a few hundred thousand energy levels populated at room temperature. In the microwave and millimeter wave regions, spectroscopy is almost always performed with traditional microwave techniques. In the submillimeter and far infrared, the microwave technique becomes progressively more technologically challenging, and infrared techniques become more widely employed as the wavelength gets shorter. Infrared techniques are typically one to two orders of magnitude less precise, but they do capture all the strong features in the spectrum. With the microwave technique, it is generally impossible and rarely necessary to measure every single transition of a molecular species, so careful fitting of quantum mechanical Hamiltonians to the measured transitions is required to produce the complete spectral picture of the molecule required by astronomers. The fitting process produces the most precise data possible and is required to interpret heterodyne observations. The drawback of the traditional microwave technique is that precise knowledge of the band origins of low-lying excited states is rarely gained. The fitting of data interpolates well for the range of quantum numbers where there is laboratory data, but extrapolation is almost never precise. The majority of high resolution spectroscopic data is millimeter or longer in wavelength, and a very limited number of molecules have ever been studied with microwave techniques at wavelengths shorter than 0.3 millimeters. The situation with the infrared technique is similarly dire in the submillimeter and far infrared because the blackbody sources used are competing with a very significant thermal background, making the signal-to-noise poor. Regardless of the technique used, the data must be archived in a way useful for the interpretation of observations.
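The Hamiltonian-fitting step described above can be illustrated with the simplest possible case: a linear rigid rotor with centrifugal distortion, whose transition frequencies obey nu(J -> J+1) = 2B(J+1) - 4D(J+1)^3. A least-squares fit to a handful of measured lines recovers B and D, which then predict unmeasured transitions. This is a deliberately simplified sketch under that toy model, not the actual catalog fitting software:

```python
import numpy as np

def fit_rotational_constants(j_lower, freqs_mhz):
    """Least-squares fit of the rotational constant B and centrifugal
    distortion constant D (both in MHz) from measured line frequencies
    nu(J -> J+1) = 2B(J+1) - 4D(J+1)**3."""
    jp1 = np.asarray(j_lower, dtype=float) + 1.0
    design = np.column_stack([2.0 * jp1, -4.0 * jp1 ** 3])
    (b, d), *_ = np.linalg.lstsq(design, np.asarray(freqs_mhz), rcond=None)
    return b, d

def predict_lines(b, d, j_lower):
    """Predict transition frequencies from the fitted constants."""
    jp1 = np.asarray(j_lower, dtype=float) + 1.0
    return 2.0 * b * jp1 - 4.0 * d * jp1 ** 3

# Synthetic lines generated with CO-like constants (B ~ 57636 MHz).
b_true, d_true = 57635.968, 0.18358
j = np.arange(8)
measured = 2.0 * b_true * (j + 1) - 4.0 * d_true * (j + 1) ** 3
b_fit, d_fit = fit_rotational_constants(j, measured)
```

As the abstract warns, such fits interpolate well inside the measured J range but extrapolate poorly: the D term grows as (J+1)^3, so small errors in D dominate at high J.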
Superdense teleportation using hyperentangled photons
Graham, Trent M.; Bernstein, Herbert J.; Wei, Tzu-Chieh; Junge, Marius; Kwiat, Paul G
2015-01-01
Transmitting quantum information between two remote parties is a requirement for many quantum applications; however, direct transmission of states is often impossible because of noise and loss in the communication channel. Entanglement-enhanced state communication can be used to avoid this issue, but current techniques require extensive experimental resources to transmit large quantum states deterministically. To reduce these resource requirements, we use photon pairs hyperentangled in polarization and orbital angular momentum to implement superdense teleportation, which can communicate a specific class of single-photon ququarts. We achieve an average fidelity of 87.0(1)%, almost twice the classical limit of 44% with reduced experimental resources than traditional techniques. We conclude by discussing the information content of this constrained set of states and demonstrate that this set has an exponentially larger state space volume than the lower-dimensional general states with the same number of state parameters. PMID:26018201
Kinematics and constraints associated with swashplate blade pitch control
NASA Technical Reports Server (NTRS)
Leyland, Jane A.
1993-01-01
An important class of techniques to reduce helicopter vibration is based on using a Higher Harmonic controller to optimally define the Higher Harmonic blade pitch. These techniques typically require the solution of a general optimization problem: determining a control vector that minimizes a performance index, where functions of the control vector are subject to inequality constraints. Six possible constraint functions associated with swashplate blade pitch control were identified and defined. These functions constrain: (1) blade pitch Fourier Coefficients expressed in the Rotating System, (2) blade pitch Fourier Coefficients expressed in the Nonrotating System, (3) stroke of the individual actuators expressed in the Nonrotating System, (4) blade pitch expressed as a function of blade azimuth and actuator stroke, (5) time rate-of-change of the aforementioned parameters, and (6) required actuator power. The aforementioned constraints and the associated kinematics of swashplate blade pitch control by means of the strokes of the individual actuators are documented.
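The optimization problem described — minimizing a performance index over a control vector subject to inequality constraints such as an actuator stroke limit — can be sketched with an off-the-shelf constrained solver. The transfer matrix, baseline vibration vector, and stroke limit below are illustrative placeholders, not values from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy quadratic vibration index: residual vibration z = T u - z0,
# where u is the higher-harmonic control vector.  T and z0 are
# hypothetical; a real controller identifies them from flight data.
T = np.array([[1.0, 0.4],
              [0.3, 1.2]])
z0 = np.array([2.0, -1.0])
stroke_limit = 1.5  # illustrative actuator stroke bound on ||u||

def vibration_index(u):
    """Performance index: squared norm of residual vibration."""
    r = T @ u - z0
    return r @ r

# Inequality constraint fun(u) >= 0 keeps the control vector inside
# the stroke limit; SciPy selects SLSQP for constrained problems.
res = minimize(
    vibration_index,
    x0=np.zeros(2),
    constraints=[{"type": "ineq",
                  "fun": lambda u: stroke_limit - np.linalg.norm(u)}],
)
```

Because the unconstrained minimizer of this toy index lies outside the stroke bound, the constraint is active at the solution, which is exactly the situation the paper's six constraint functions are meant to handle.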
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
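The simulation approach such a study relies on can be sketched as follows: generate data from a known factor model at sample size N, refit with EFA, and score how well the loadings are recovered (here with Tucker's congruence coefficient). The one-factor model, loading values, and trial count are illustrative assumptions, not the study's actual design:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def loading_recovery(true_loadings, n, n_trials=10, seed=0):
    """Average Tucker congruence between true one-factor loadings and
    EFA estimates across simulated samples of size n."""
    rng = np.random.default_rng(seed)
    true_loadings = np.asarray(true_loadings, dtype=float)
    p = len(true_loadings)
    unique_sd = np.sqrt(1.0 - true_loadings ** 2)  # unit-variance items
    phis = []
    for _ in range(n_trials):
        f = rng.normal(size=(n, 1))                    # factor scores
        e = rng.normal(size=(n, p)) * unique_sd        # unique parts
        x = f @ true_loadings[None, :] + e             # observed data
        est = FactorAnalysis(n_components=1).fit(x).components_.ravel()
        # abs() because the sign of an extracted factor is arbitrary.
        phi = abs(est @ true_loadings) / (
            np.linalg.norm(est) * np.linalg.norm(true_loadings))
        phis.append(phi)
    return float(np.mean(phis))
```

Sweeping n downward and watching where the mean congruence drops below a quality threshold gives a crude estimate of the minimum required N for a given loading pattern, which is the kind of question the simulations address.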
47 CFR 15.611 - General technical requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... spectrum by licensed services. These techniques may include adaptive or “notch” filtering, or complete... frequencies below 30 MHz, when a notch filter is used to avoid interference to a specific frequency band, the... below the applicable part 15 limits. (ii) For frequencies above 30 MHz, when a notch filter is used to...
Infection control practices for dental radiography.
Palenik, Charles John
2004-06-01
Infection control for dental radiography employs the same materials, processes, and techniques used in the operatory, yet unless proper procedures are established and followed, there is a definite potential for cross-contamination to clinical area surfaces and DHCP. In general, the aseptic practices used are relatively simple and inexpensive, yet they require complete application in every situation.
Risk in fire management decisionmaking: techniques and criteria
Gail Blatternberger; William F. Hyde; Thomas J. Mills
1984-01-01
In the past, decisionmaking in wildland fire management generally has not included a full consideration of the risk and uncertainty that is inherent in evaluating alternatives. Fire management policies in some Federal land management agencies now require risk evaluation. The model for estimating the economic efficiency of fire program alternatives is the minimization...
Validation of helicopter noise prediction techniques
NASA Technical Reports Server (NTRS)
Succi, G. P.
1981-01-01
The current techniques of helicopter rotor noise prediction attempt to describe the details of the noise field precisely and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The purpose of this paper is to review those techniques in general and the Farassat/Nystrom analysis in particular. The predictions of the Farassat/Nystrom noise computer program, using both measured and calculated blade surface pressure data, are compared to measured noise level data. This study is based on a contract from NASA to Bolt Beranek and Newman Inc. with measured data from the AH-1G Helicopter Operational Loads Survey flight test program supplied by Bell Helicopter Textron.
NASA Technical Reports Server (NTRS)
Dabney, James B.; Arthur, James Douglas
2017-01-01
Agile methods have gained wide acceptance over the past several years, to the point that they are now a standard management and execution approach for small-scale software development projects. While conventional Agile methods are not generally applicable to large multi-year and mission-critical systems, Agile hybrids are now being developed (such as SAFe) to exploit the productivity improvements of Agile while retaining the necessary process rigor and coordination needs of these projects. From the perspective of Independent Verification and Validation (IVV), however, the adoption of these hybrid Agile frameworks is becoming somewhat problematic. Hence, we find it prudent to question the compatibility of conventional IVV techniques with (hybrid) Agile practices. This paper documents our investigation of (a) relevant literature, (b) the modification and adoption of Agile frameworks to accommodate the development of large-scale, mission-critical systems, and (c) the compatibility of standard IVV techniques within hybrid Agile development frameworks. Specific to the latter, we found that the IVV methods employed within a hybrid Agile process can be divided into three groups: (1) early-lifecycle IVV techniques that are fully compatible with the hybrid lifecycles; (2) IVV techniques that focus on tracing requirements, test objectives, etc., which are somewhat incompatible but can be tailored with modest effort; and (3) IVV techniques involving an assessment requiring artifact completeness, which are simply not compatible with hybrid Agile processes, e.g., those that assume complete requirement specification early in the development lifecycle.
Doyle, W. Harry
1981-01-01
A requirement of Public Law 95-87, the Surface Mining Control and Reclamation Act of 1977, is the understanding of the hydrology in actual and proposed surface-mined areas. Surface-water data for small, specific sites and for larger areas, such as adjacent and general areas, are also needed to satisfy the hydrologic requirements of the Act. The Act specifies that surface-water modeling techniques may be used to generate the data and information. The purpose of this report is to describe how this can be achieved for smaller watersheds. This report also characterizes 12 'state-of-the-art' strip-mining assessment models that are to be tested with data from two data-intensive studies involving small watersheds in Tennessee and Indiana. Watershed models are best applied to small watersheds with specific-site data. Extending the use of modeling techniques to larger watersheds remains relatively untested, and to date the upper limits for application have not been established. The U.S. Geological Survey is currently collecting regional hydrologic data in the major coal provinces of the United States, and these data will be used to help satisfy the 'general-area' data requirements of the Act. This program is reviewed and described in this report. (USGS)
Determination of the spin and recovery characteristics of a typical low-wing general aviation design
NASA Technical Reports Server (NTRS)
Tischler, M. B.; Barlow, J. B.
1980-01-01
The equilibrium spin technique implemented in a graphical form for obtaining spin and recovery characteristics from rotary balance data is outlined. Results of its application to recent rotary balance tests of the NASA Low-Wing General Aviation Aircraft are discussed. The present results, which are an extension of previously published findings, indicate the ability of the equilibrium method to accurately evaluate spin modes and recovery control effectiveness. A comparison of the calculated results with available spin tunnel and full scale findings is presented. The technique is suitable for preliminary design applications as determined from the available results and data base requirements. A full discussion of implementation considerations and a summary of the results obtained from this method to date are presented.
On Two-Scale Modelling of Heat and Mass Transfer
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.
2008-09-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
Design and fabrication of conventional and unconventional superconductors
NASA Technical Reports Server (NTRS)
Collings, E. W.
1983-01-01
The design and fabrication of conventional and unconventionally processed composite superconductors, Ti-Nb base and A15-compound base respectively, are discussed in a nine-section review. The first two sections introduce the general properties of alloy and compound superconductors and the design and processing requirements for the production of long lengths of stable, low-loss conductor. All aspects of flux-jump stability and the general requirements of cryogenic stabilization are addressed. Conductor design is then considered from an a.c.-loss standpoint: some basic formulae describing hysteretic and eddy-current losses, and the influences on a.c. loss of filament diameter, strand (conductor) diameter, twist pitch, and matrix resistivity, are discussed. The basic techniques used in the fabrication of conventional multifilamentary conductors are described.
Multiclass Bayes error estimation by a feature space sampling technique
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.
1979-01-01
A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
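The quantity being computed — the minimum (Bayes) probability of error for an M-class Gaussian problem from the class statistics alone — can be illustrated with a plain Monte Carlo estimate in place of the paper's combined analytical and numerical integration over transformed feature space. This is a sketch of the target quantity, not the authors' algorithm:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bayes_error_mc(means, covs, priors, n_samples=100_000, seed=None):
    """Monte Carlo estimate of the Bayes (minimum) probability of error
    for an M-class Gaussian problem: E_x[1 - max_k P(k | x)]."""
    rng = np.random.default_rng(seed)
    M = len(means)
    # Draw samples from the class mixture according to the priors.
    counts = rng.multinomial(n_samples, priors)
    xs = np.vstack([
        rng.multivariate_normal(means[k], covs[k], size=counts[k])
        for k in range(M)
    ])
    # Prior-weighted class-conditional densities at each sample.
    joint = np.column_stack([
        priors[k] * multivariate_normal(means[k], covs[k]).pdf(xs)
        for k in range(M)
    ])
    posteriors = joint / joint.sum(axis=1, keepdims=True)
    # The Bayes classifier errs with probability 1 - max posterior.
    return float(np.mean(1.0 - posteriors.max(axis=1)))
```

Two sanity checks follow directly from the definition: identical classes give an error of exactly 0.5 (the posterior is always 1/2), and well-separated classes give an error near zero.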
Spinal anesthesia in infants: recent developments.
Tirmizi, Henna
2015-06-01
Spinal anesthesia has long been described as a well-tolerated and effective means of providing anesthesia for infants undergoing lower abdominal surgery. Now, spinal anesthetics are being used for an increasing variety of surgeries previously believed to require a general anesthetic. This, along with increasing concerns over the neurocognitive effects of general anesthetics on developing brains, suggests that further exploration into this technique and its effects is essential. Exposure to spinal anesthesia in infancy has not shown the same suggestions of neurocognitive detriment as those resulting from general anesthesia. Ultrasound guidance has enhanced spinal technique by providing real-time guidance into the intrathecal space and confirming medication administration location, as well as helping avoid adverse outcomes by identifying aberrant anatomy. Spinal anesthesia provides benefits over general anesthesia, including cardiorespiratory stability, shorter postoperative recovery, and faster return of gastrointestinal function. Early findings of spinal anesthesia exposure in infancy have shown it to have no independent effect on neurocognitive delay as well as to provide sound cardiorespiratory stability. With safer means of administering a spinal anesthetic, such as with ultrasound guidance, it is a readily available and desirable tool for those providing anesthesia to infants.
Colombo, Fulvio; Casarico, Antonio
2008-11-01
As male genital corrective surgery is becoming increasingly requested by patients, the need to reach a general consensus on indications and techniques is now imperative. This review of published data provides an overview concerning patient selection modalities, benefits/risks and expected outcomes of surgery. Finally, the article focuses on ethical issues caused by the growing aesthetic nature of this surgery. Interest has been sparked by animal studies, the description of innovative techniques for lengthening and girth enhancement techniques, reconstructive phalloplasty and penile implant surgery. Data suggest that better objective surgical outcomes are possible, though in many cases long-term data and patient-rated satisfaction details are lacking. Most importantly, studies show the importance of having a multidisciplinary team in charge of patient selection. Although more long-term data are required before a general consensus can be reached, recent findings point to the absolute need for a thorough psychological assessment of men requesting penile enhancement surgery. Urologists should work in very close collaboration with psychologists or psychosexologists both during the preoperative phase (to verify eligibility for surgery) and afterwards (to provide counselling).
Magnetic particle testing of turbine blades mounted on the turbine rotor shaft
NASA Astrophysics Data System (ADS)
Imbert, Clement; Rampersad, Krishna
1992-07-01
An outline is presented of the general technique of magnetic particle inspection (MPI) of turbine blades mounted on the turbine rotor shaft, with specific reference to the placement of the magnetizing coils. In particular, this study reports on the use of MPI in the examination of martensitic stainless steel turbine blades in power plants in Trinidad and Tobago in order to establish procedures for the detection of discontinuities. The techniques described are applicable to ferromagnetic turbine blades in general. The two practical techniques mentioned are the method of placing a preformed coil over a number of blades in one row and the method of wrapping the coil around the rotor shaft across an entire row of blades. Of the two, the former is preferred because the preformed coil induces a more uniform magnetic flux and requires a lower current to achieve adequate flux density. However, both methods provide satisfactory magnetic flux, and either can be used.
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires that the physical finite element system matrices be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear when compared to the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine-tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Achieving reutilization of scheduling software through abstraction and generalization
NASA Technical Reports Server (NTRS)
Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael
1995-01-01
Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.
The Fluorescent-Oil Film Method and Other Techniques for Boundary-Layer Flow Visualization
NASA Technical Reports Server (NTRS)
Loving, Donald L.; Katzoff, S.
1959-01-01
A flow-visualization technique, known as the fluorescent-oil film method, has been developed which appears to be generally simpler and to require less experience and development of technique than previously published methods. The method is especially adapted to use in the large high-powered wind tunnels which require considerable time to reach the desired test conditions. The method consists of smearing a film of fluorescent oil over a surface and observing where the thickness is affected by the shearing action of the boundary layer. These films are detected and identified, and their relative thicknesses are determined by use of ultraviolet light. Examples are given of the use of this technique. Other methods that show promise in the study of boundary-layer conditions are described. These methods include the use of a temperature-sensitive fluorescent paint and the use of a radiometer that is sensitive to the heat radiation from a surface. Some attention is also given to methods that can be used with a spray apparatus in front of the test model.
NASA Astrophysics Data System (ADS)
Kalashnikov, N. P.; Muravyev-Smirnov, S. S.; Samarchenko, D. A.; Tyulyusov, A. N.
2017-01-01
We discuss the remote training technique in general physics for foreign students. The examination for student certification was chosen in quiz form for all parts of the general physics course. This article describes the basic principles of the creation and placement of a structured question bank for the distance learning system. The possibility of creating an adaptive test system on the basis of the minimal state education requirements is described. The examination results are analyzed, and test validity is assessed by comparing the exam results with student certification during the semester.
Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?
Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie
2012-01-01
A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regard to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
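A minimal example of the kind of check the participants rarely performed: a rule-of-thumb screen for the equal-variance assumption of the pooled t-test (the common 4:1 variance-ratio heuristic). The data values are invented for illustration.

```python
from statistics import variance

def check_equal_variance(x, y, max_ratio=4.0):
    """Rule-of-thumb screen for the pooled t-test's equal-variance assumption:
    flag the data if the larger sample variance exceeds the smaller
    by more than max_ratio."""
    vx, vy = variance(x), variance(y)       # sample variances
    ratio = max(vx, vy) / min(vx, vy)
    return ratio <= max_ratio, ratio

# Two invented samples: similar means, very different spread.
ok, ratio = check_equal_variance([5.1, 4.8, 5.3, 5.0, 4.9],
                                 [5.0, 6.2, 3.9, 5.5, 4.4])
```

When the check fails, a Welch-type procedure or a non-parametric alternative is the usual fallback; the point of the paper is that even a check this cheap is seldom run.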
Wood, Michael
2010-01-01
'A Conscious Decision' was published in 2000 by the Department of Health, effectively ending the provision of dental general anaesthesia (DGA) outside the hospital environment. Other aspects of dental anxiety and behavioural management and sedation techniques were encouraged before the decision to refer for a DGA was reached. Although some anxious children may be managed with relative analgesia (RA), some may require different sedation techniques for dentists to accomplish dental treatment. Little evidence has been published in the UK to support the use of alternative sedation techniques in children. This paper presents another option using an alternative conscious sedation technique. The aim was to determine whether a combination of intranasal (IN) midazolam and inhalation sedation with nitrous oxide and oxygen is a safe and practical alternative to DGA. A prospective clinical audit of 100 cases was carried out on children referred to a centre for DGA. The 100 children, between 3 and 13 years of age, were treated using this technique. Sedation was performed by intranasal midazolam followed by titrating a mixture of nitrous oxide and oxygen. A range of dental procedures was carried out while the children were sedated. Parents were present during the dental treatment. Data related to the patient, dentistry and treatment as well as sedation variables were collected at the treatment visit, and a telephonic post-operative assessment from the parents was completed a week later. It was found that 96% of the required dental treatment was completed successfully using this technique, with parents finding this technique acceptable in 93% of cases. 50% of children found the intranasal administration of the midazolam acceptable. There was no clinically relevant oxygen desaturation during the procedure. Patients were haemodynamically stable and verbal contact was maintained throughout the procedure. 
In selected cases this technique provides a safe and effective alternative to DGA and could reduce the number of patients referred to hospitals for DGA. It is recommended that this technique should only be used by dentists skilled in sedation with the appropriate staff and equipment at their disposal.
Community Schools in Ohio: Implementation Issues and Impact on Ohio's Education System. Volume I.
ERIC Educational Resources Information Center
Ohio State Legislative Office of Education Oversight, Columbus.
Community schools were created in Ohio to provide additional educational options for children in low-performing schools and to develop innovative teaching and management techniques that may be transferable to traditional public schools. In 1997 the Ohio General Assembly required the Legislative Office of Education Oversight (LOEO) to evaluate the…
ERIC Educational Resources Information Center
Justin, J. Karl
Variables and parameters affecting architectural planning and audiovisual systems selection for lecture halls and other learning spaces are surveyed. Interrelationships of factors are discussed, including--(1) design requirements for modern educational techniques as differentiated from cinema, theater or auditorium design, (2) general hall…
ERIC Educational Resources Information Center
Schulz, Russel E.; And Others
The report, the first of two documents examining the relationship among job requirements, training, and manpower considerations for Army aviation maintenance personnel, discusses the development of task data gathering techniques and procedures for incorporating this data into training programs for the UH-1 helicopter mechanic specialty (MOS…
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
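The core of such conceptual-design sizing can be illustrated with a toy fixed-point iteration: gross weight must cover the payload plus structure and fuel, which are themselves assumed proportional to gross weight. The fractions and payload below are invented, not ACSYNT values.

```python
def size_takeoff_weight(w_payload, struct_frac=0.55, fuel_frac=0.20, tol=1e-6):
    """Fixed-point sizing sketch: gross weight = payload + structure + fuel,
    with structure and fuel modeled as fixed fractions of gross weight."""
    w = w_payload                      # initial guess: payload only
    for _ in range(200):
        w_new = w_payload + (struct_frac + fuel_frac) * w
        if abs(w_new - w) < tol:       # converged to the consistent gross weight
            return w_new
        w = w_new
    return w

gross = size_takeoff_weight(2500.0)    # hypothetical payload weight, lb
```

With combined fractions of 0.75 the iteration converges to payload / (1 − 0.75), i.e. a gross weight four times the payload; an optimizer like ACSYNT wraps this kind of sizing loop inside a search over vehicle and mission parameters.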
Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1995-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e., the aspect-ratio AR = delta y/delta x is much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Coarsening Strategies for Unstructured Multigrid Techniques with Application to Anisotropic Problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1996-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e. the aspect-ratio AR = (delta)y/(delta)x much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
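The mesh-dependence problem above can be made concrete: standard multigrid coarsens every direction by two, but on a stretched mesh it helps to coarsen only along the strongly coupled (small-spacing) direction. A schematic semi-coarsening rule, with invented aspect-ratio thresholds:

```python
def coarsen(nx, ny, hx, hy):
    """Semi-coarsening sketch for a structured analogue: coarsen only the
    direction of strong coupling (smaller spacing) when the mesh is
    stretched; coarsen both directions when it is roughly isotropic."""
    ar = hy / hx
    if ar < 0.5:                # cells thin in y: strong coupling in y
        return nx, ny // 2
    if ar > 2.0:                # cells thin in x: strong coupling in x
        return nx // 2, ny
    return nx // 2, ny // 2     # isotropic: full coarsening
```

The paper's setting is unstructured triangulations, where the same idea is realized through graph-based coarsening rather than index halving; this sketch only shows why the coarsening choice must depend on the stretching.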
NASA Astrophysics Data System (ADS)
Clark, T. L.; McCollum, M. B.; Trout, D. H.; Javor, K.
1995-06-01
The purpose of the MEDIC Handbook is to provide practical and helpful information in the design of electrical equipment for electromagnetic compatibility (EMC). Included is the definition of electromagnetic interference (EMI) terms and units as well as an explanation of the basic EMI interactions. An overview of typical NASA EMI test requirements and associated test setups is given. General design techniques to minimize the risk of EMI and EMI suppression techniques at the board and equipment interface levels are presented. The Handbook contains specific EMI test compliance design techniques and retrofit fixes for noncompliant equipment. Also presented are special tests that are useful in the design process or in instances of specification noncompliance.
NASA Technical Reports Server (NTRS)
Clark, T. L.; McCollum, M. B.; Trout, D. H.; Javor, K.
1995-01-01
The purpose of the MEDIC Handbook is to provide practical and helpful information in the design of electrical equipment for electromagnetic compatibility (EMC). Included is the definition of electromagnetic interference (EMI) terms and units as well as an explanation of the basic EMI interactions. An overview of typical NASA EMI test requirements and associated test setups is given. General design techniques to minimize the risk of EMI and EMI suppression techniques at the board and equipment interface levels are presented. The Handbook contains specific EMI test compliance design techniques and retrofit fixes for noncompliant equipment. Also presented are special tests that are useful in the design process or in instances of specification noncompliance.
Retrieval techniques: LVLH and inertially stabilized payloads
NASA Technical Reports Server (NTRS)
Yglesias, J. A.
1980-01-01
Procedures and techniques are discussed for retrieving payloads that are inertially or local vertical/local horizontal (LVLH) stabilized. Selection of the retrieval profile to be used depends on several factors: (1) control authority of the payload, (2) payload sensitivity to primary reaction control system (PRCS) plumes, (3) whether the payload is inertially or LVLH stabilized, (4) location of the grapple fixture, and (5) orbiter propellant consumption. The general retrieval profiles recommended are a V-bar approach for payloads that are LVLH or gravity-gradient stabilized, and the V-bar approach with one or two phase flyaround for inertially stabilized payloads. Once the general type of profile has been selected, the detailed retrieval profile and timeline should consider the various guidelines, groundrules, and constraints associated with a particular payload or flight. Reaction control system (RCS) propellant requirements for the recommended profiles range from 200 to 1500 pounds, depending on such factors as braking techniques, flyaround maneuvers (if necessary), and stationkeeping operations. The time required to perform a retrieval (starting from 1000 feet) varies from 20 to 130 minutes, depending on the complexity of the profile. The goals of this project are to develop a profile which ensures mission success; to make the retrieval profiles simple; and to keep the pilot workload to a minimum by making use of the automatic features of the orbiter flight software whenever possible.
GOES-R SUVI EUV Flatfields Generated Using Boustrophedon Scans
NASA Astrophysics Data System (ADS)
Shing, L.; Edwards, C.; Mathur, D.; Vasudevan, G.; Shaw, M.; Nwachuku, C.
2017-12-01
The Solar Ultraviolet Imager (SUVI) is mounted on the Solar Pointing Platform (SPP) of the Geostationary Operational Environmental Satellite, GOES-R. SUVI is a generalized Cassegrain telescope with a large field of view that employs multilayer coatings optimized to operate in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. The SUVI CCD flatfield response was determined using two different techniques: the Kuhn-Lin-Lorentz (KLL) raster and a new technique called Dynamic Boustrophedon Scans. The new technique requires less time to collect the data and is also less sensitive to solar features compared with the KLL method. This paper presents the SUVI flatfield results obtained with this technique during Post Launch Testing (PLT).
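Whatever scan pattern produces it, a flatfield is applied the same way: divide the raw frame by the normalized pixel-to-pixel response, so that a uniform source maps back to a uniform image. A minimal sketch with an invented 2x2 frame:

```python
def apply_flatfield(raw, flat):
    """Divide each pixel by the flatfield normalized to its mean, removing
    pixel-to-pixel response variation while preserving overall signal level."""
    m = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[raw[i][j] / (flat[i][j] / m) for j in range(len(raw[0]))]
            for i in range(len(raw))]

corrected = apply_flatfield([[10.0, 20.0], [10.0, 20.0]],   # raw frame
                            [[1.0, 2.0], [1.0, 2.0]])       # measured response
```

The raw frame here is exactly a uniform scene modulated by the response, so the correction recovers a flat image; the techniques in the paper differ only in how the response map itself is measured on-orbit.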
NASA Technical Reports Server (NTRS)
Sundstrom, J. L.
1980-01-01
The techniques required to produce and validate six detailed task timeline scenarios for crew workload studies are described. Specific emphasis is given to: general aviation single pilot instrument flight rules operations in a high density traffic area; fixed path metering and spacing operations; and comparative workload operation between the forward and aft-flight decks of the NASA terminal control vehicle. The validation efforts also provide a cursory examination of the resultant demand workload based on the operating procedures depicted in the detailed task scenarios.
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.
1985-01-01
A component mode synthesis method for damped structures was developed and modal test methods were explored which could be employed to determine the relevant parameters required by the component mode synthesis method. Research was conducted on the following topics: (1) Development of a generalized time-domain component mode synthesis technique for damped systems; (2) Development of a frequency-domain component mode synthesis method for damped systems; and (3) Development of a system identification algorithm applicable to general damped systems. Abstracts are presented of the major publications which have been previously issued on these topics.
NASA Technical Reports Server (NTRS)
Scardino, Frank L.
1992-01-01
In the design of textile composites, the selection of materials and constructional techniques must be matched with product performance, productivity, and cost requirements. Constructional techniques vary. A classification of various textile composite systems is given. In general, the chopped fiber system is not suitable for structural composite applications because of fiber discontinuity, uncontrolled fiber orientation and a lack of fiber integration or entanglement. Linear filament yarn systems are acceptable for structural components which are exposed to simple tension in their applications. To qualify for more general use as structural components, filament yarn systems must be multi-directionally positioned. With the most sophisticated filament winding and laying techniques, however, the Type 2 systems have limited potential for general load-bearing applications because of a lack of filament integration or entanglement, which means vulnerability to splitting and delamination among filament layers. The laminar systems (Type 3), represented by a variety of simple fabrics (woven, knitted, braided and nonwoven), are especially suitable for load-bearing panels in flat form and for beams in a rolled-up or wound form. The totally integrated, advanced fabric systems (Type 4) are thought to be the most reliable for general load-bearing applications because of fiber continuity and because of controlled multiaxial fiber orientation and entanglement. Consequently, the risk of splitting and delamination is minimized and practically eliminated. Type 4 systems can be woven, knitted, braided or stitched through with very special equipment. Multiaxial fabric technologies are discussed.
Smart Grid Privacy through Distributed Trust
NASA Astrophysics Data System (ADS)
Lipton, Benjamin
Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
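The distributed-trust idea can be shown with the simplest additive secret-sharing scheme: each meter splits its reading into random shares, one per aggregation server, and only the recombined partial sums reveal the total. This is an illustrative scheme, not the exact protocols evaluated in the paper; the readings and modulus are invented.

```python
import random

P = 2**61 - 1   # prime modulus for the share arithmetic

def share(value, n):
    """Split one meter reading into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)    # shares sum to value mod P
    return shares

def aggregate(all_shares):
    """Each of the n servers sums the column of shares it received; the
    partial sums recombine to the total without any server seeing a reading."""
    partials = [sum(column) % P for column in zip(*all_shares)]
    return sum(partials) % P

readings = [12, 30, 7]                              # kWh, three households
total = aggregate([share(r, 3) for r in readings])
```

Any single server sees only uniformly random values; a reading leaks only if all servers collude, which is exactly the trust distribution the paper argues for.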
Huynh, Hai; Elkouri, Stephane; Beaudoin, Nathalie; Bruneau, Luc; Guimond, Cathie; Daniel, Véronique; Blair, Jean-François
2007-01-01
This study evaluated the learning curve for a second-year general surgery resident and compared 2 totally laparoscopic aortic surgery techniques in 10 pigs: the transretroperitoneal apron approach and the transperitoneal retrocolic approach. Five end points were compared: success rate, percentage of conversion, time required, laparoscopic anastomosis quality, and learning curve. The first 3 interventions required an open conversion. The last 7 were done without complications. Mean dissection time was significantly higher with the apron approach compared with the retrocolic approach. The total times for operation, clamping, and arteriotomy time were similar. All laparoscopic anastomoses were patent and without stenosis. The initial learning curve for laparoscopic anastomosis was relatively short for a second-year surgery resident. Both techniques resulted in satisfactory exposure of the aorta and similar mean operative and clamping time. Training on an ex vivo laparoscopic box trainer and on an animal model seems to be complementary to decrease laparoscopic anastomosis completion time.
Virus purification by CsCl density gradient using general centrifugation.
Nasukawa, Tadahiro; Uchiyama, Jumpei; Taharaguchi, Satoshi; Ota, Sumire; Ujihara, Takako; Matsuzaki, Shigenobu; Murakami, Hironobu; Mizukami, Keijirou; Sakaguchi, Masahiro
2017-11-01
Virus purification by cesium chloride (CsCl) density gradient, which generally requires an expensive ultracentrifuge, is an essential technique in virology. Here, we optimized virus purification by CsCl density gradient using general centrifugation (40,000 × g, 2 h, 4 °C), which showed almost the same purification ability as conventional CsCl density gradient ultracentrifugation (100,000 × g, 1 h, 4 °C) using phages S13' and φEF24C. Moreover, adenovirus strain JM1/1 was also successfully purified by this method. We suggest that general centrifugation can become a less costly alternative to ultracentrifugation for virus purification by CsCl density gradient and will thus encourage research in virology.
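The speeds quoted are relative centrifugal forces, which translate to rotor speed through the standard conversion RCF(g) = 1.118 × 10⁻⁵ · r(cm) · rpm². The 10 cm radius below is an assumed rotor geometry, used only to show the arithmetic:

```python
import math

def rpm_for_rcf(rcf_g, radius_cm):
    """Rotor speed (rpm) needed to reach a target relative centrifugal force,
    using the standard conversion RCF(g) = 1.118e-5 * r_cm * rpm**2."""
    return math.sqrt(rcf_g / (1.118e-5 * radius_cm))

rpm = rpm_for_rcf(40000, 10)   # the paper's general-centrifugation RCF
```

At an assumed 10 cm radius, 40,000 × g corresponds to roughly 19,000 rpm, within reach of high-speed bench centrifuges, which is what makes the protocol a low-cost alternative.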
CIRCAL-2 - General-purpose on-line circuit design.
NASA Technical Reports Server (NTRS)
Dertouzos, M. L.; Jessel, G. P.; Stinger, J. R.
1972-01-01
CIRCAL-2 is a second-generation general-purpose on-line circuit-design program with the following main features: (1) multiple-analysis capability; (2) uniform and general data structures for handling text editing, network representations, and output results, regardless of analysis; (3) special techniques and structures for minimizing and controlling user-program interaction; (4) use of functionals for the description of hysteresis and heat effects; and (5) ability to define optimization procedures that 'replace' the user. The paper discusses the organization of CIRCAL-2, the aforementioned main features, and their consequences, such as a set of network elements and models general enough for most analyses and a set of functions tailored to circuit-design requirements. The presentation is descriptive, concentrating on conceptual rather than on program implementation details.
Range image segmentation using Zernike moment-based generalized edge detector
NASA Technical Reports Server (NTRS)
Ghosal, S.; Mehrotra, R.
1992-01-01
The authors proposed a novel Zernike moment-based generalized step edge detection method which can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.
NASA Technical Reports Server (NTRS)
Lee, J.
1994-01-01
A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds number kappa-epsilon models for analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds number kappa-epsilon models are easier to implement than the models with wall-functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow. Therefore, the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model using a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.
Are the expected benefits of requirements reuse hampered by distance? An experiment.
Carrillo de Gea, Juan M; Nicolás, Joaquín; Fernández-Alemán, José L; Toval, Ambrosio; Idri, Ali
2016-01-01
Software development processes are often performed by distributed teams which may be separated by great distances. Global software development (GSD) has undergone a significant growth in recent years. The challenges concerning GSD are especially relevant to requirements engineering (RE). Stakeholders need to share a common ground, but there are many difficulties as regards the potentially variable interpretation of the requirements in different contexts. We posit that the application of requirements reuse techniques could alleviate this problem through the diminution of the number of requirements open to misinterpretation. This paper presents a reuse-based approach with which to address RE in GSD, with special emphasis on specification techniques, namely parameterised requirements and traceability relationships. An experiment was carried out with the participation of 29 university students enrolled on a Computer Science and Engineering course. Two main scenarios that represented co-localisation and distribution in software development were portrayed by participants from Spain and Morocco. The global teams achieved a slightly better performance than the co-located teams as regards effectiveness, which could be a result of the worse productivity of the global teams in comparison to the co-located teams. Subjective perceptions were generally more positive in the case of the distributed teams (difficulty, speed and understanding), with the exception of quality. A theoretical model has been proposed as an evaluation framework with which to analyse, from the point of view of the factor of distance, the effect of requirements specification techniques on a set of performance and perception-based variables. The experiment utilised a new internationalisation requirements catalogue. None of the differences found between co-located and distributed teams were significant according to the outcome of our statistical tests. 
The well-known benefits of requirements reuse in traditional co-located projects could, therefore, also be expected in GSD projects.
A high temperature testing system for ceramic composites
NASA Technical Reports Server (NTRS)
Hemann, John
1994-01-01
Ceramic composites are presently being developed for high temperature use in heat engine and space power system applications. The operating temperature range is expected to be 1090 to 1650 C (2000 F to 3000 F). Very little material data is available at these temperatures and, therefore, it is desirable to thoroughly characterize the basic unidirectional fiber reinforced ceramic composite. This includes testing mainly for mechanical material properties at high temperatures. The proper conduct of such characterization tests requires the development of a tensile testing system that includes unique gripping, heating, and strain measuring devices which require special considerations. The system also requires an optimized specimen shape. The purpose of this paper is to review various techniques for measuring displacements or strains, preferably at elevated temperatures. Due to current equipment limitations it is assumed that the specimen is to be tested at a temperature of 1430 C (2600 F) in an oxidizing atmosphere. For the most part, previous high temperature material characterization tests, such as flexure and tensile tests, have been performed in inert atmospheres. Due to the harsh environment in which the ceramic specimen is to be tested, many conventional strain measuring techniques cannot be applied. Initially a brief description of the more commonly used mechanical strain measuring techniques is given. Major advantages and disadvantages with their application to high temperature tensile testing of ceramic composites are discussed. Next, a general overview is given for various optical techniques. Advantages and disadvantages which are common to these techniques are noted. The optical methods for measuring strain or displacement are categorized into two sections. These include real-time techniques. Finally, an optical technique which offers optimum performance with the high temperature tensile testing of ceramic composites is recommended.
A rational framework for production decision making in blood establishments.
Ramoa, Augusto; Maia, Salomé; Lourenço, Anália
2012-07-24
SAD_BaSe is a blood bank data analysis software, created to assist in the management of blood donations and the blood production chain in blood establishments. In particular, the system keeps track of several collection and production indicators, enables the definition of collection and production strategies, and the measurement of quality indicators required by the Quality Management System regulating the general operation of blood establishments. This paper describes the general scenario of blood establishments and its main requirements in terms of data management and analysis. It presents the architecture of SAD_BaSe and identifies its main contributions. Specifically, it brings forward the generation of customized reports driven by decision making needs and the use of data mining techniques in the analysis of donor suspensions and donation discards.
A Rational Framework for Production Decision Making in Blood Establishments.
Ramoa, Augusto; Maia, Salomé; Lourenço, Anália
2012-12-01
SAD_BaSe is a blood bank data analysis software, created to assist in the management of blood donations and the blood production chain in blood establishments. In particular, the system keeps track of several collection and production indicators, enables the definition of collection and production strategies, and the measurement of quality indicators required by the Quality Management System regulating the general operation of blood establishments. This paper describes the general scenario of blood establishments and its main requirements in terms of data management and analysis. It presents the architecture of SAD_BaSe and identifies its main contributions. Specifically, it brings forward the generation of customized reports driven by decision making needs and the use of data mining techniques in the analysis of donor suspensions and donation discards.
Cost considerations in using simulations for medical training.
Fletcher, J D; Wind, Alexander P
2013-10-01
This article reviews simulation used for medical training, techniques for assessing simulation-based training, and cost analyses that can be included in such assessments. Simulation in medical training appears to take four general forms: human actors who are taught to simulate illnesses and ailments in standardized ways; virtual patients who are generally presented via computer-controlled, multimedia displays; full-body manikins that simulate patients using electronic sensors, responders, and controls; and part-task anatomical simulations of various body parts and systems. Techniques for assessing costs include benefit-cost analysis, return on investment, and cost-effectiveness analysis. Techniques for assessing the effectiveness of simulation-based medical training include the use of transfer effectiveness ratios and incremental transfer effectiveness ratios to measure transfer of knowledge and skill provided by simulation to the performance of medical procedures. Assessment of costs and simulation effectiveness can be combined with measures of transfer using techniques such as isoperformance analysis to identify ways of minimizing costs without reducing performance effectiveness or maximizing performance without increasing costs. In sum, economic analysis must be considered in training assessments if training budgets are to compete successfully with other requirements for funding. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
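The transfer effectiveness ratio mentioned above has a simple form: real-task training time saved per unit of simulator time. The hour values below are invented to show the arithmetic:

```python
def transfer_effectiveness_ratio(control_time, experimental_time, simulator_time):
    """TER: hours of real-task training saved per hour spent in the simulator.
    control_time: time to criterion without simulator practice;
    experimental_time: time to criterion after simulator practice."""
    return (control_time - experimental_time) / simulator_time

# Hypothetical study: controls need 10 h on the real task; the simulator
# group needs only 6 h after 8 h of simulator practice.
ter = transfer_effectiveness_ratio(10.0, 6.0, 8.0)
```

A TER of 0.5 means each simulator hour replaces half an hour of real-task training; combined with per-hour costs, this is what lets benefit-cost or isoperformance analyses trade simulator time against live training.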
A Survey Of Techniques for Managing and Leveraging Caches in GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh
2014-09-01
Initially introduced as special-purpose accelerators for graphics applications, graphics processing units (GPUs) have now emerged as general purpose computing platforms for a wide range of applications. To address the requirements of these applications, modern GPUs include sizable hardware-managed caches. However, several factors, such as the unique architecture of GPUs, the rise of CPU–GPU heterogeneous computing, etc., demand effective management of caches to achieve high performance and energy efficiency. Recently, several techniques have been proposed for this purpose. In this paper, we survey several architectural and system-level techniques proposed for managing and leveraging GPU caches. We also discuss the importance and challenges of cache management in GPUs. The aim of this paper is to provide the readers insights into cache management techniques for GPUs and motivate them to propose even better techniques for leveraging the full potential of caches in the GPUs of tomorrow.
Imaging in the newborn: infant immobilizer obviates the need for anesthesia.
Golan, Agneta; Marco, Rina; Raz, Hagit; Shany, Eilon
2011-11-01
Neonatal cerebral imaging is a sensitive technique for evaluating brain injury in the neonatal period. When performing computed tomography or magnetic resonance imaging, sedation is needed to prevent motion artifacts. However, general anesthesia in neonates carries significant risks and requires a complex logistic approach that often limits the use of these modalities. The development of infant immobilizers now enables imaging without general anesthesia and significantly increases clinical and research investigational opportunities. To assess the efficacy of the infant immobilizer instead of general anesthesia for infants undergoing imaging. The study group comprised all infants born over a 1 year period at Soroka University Medical Center who required imaging such as MRI, CT or bone scans. A MedVac Vacuum Splint infant immobilizer was used in all infants to prevent motion during imaging. The success rate of a single scan and the need for general anesthesia were assessed. Forty infants were examined during 1 year. The studies included 15 CT scans, 25 MRIs and 1 bone scan. The infants' gestational age at birth was 27-40 weeks and the examinations were performed at ages ranging from delivery to 6 months old. All imaging was successful and none of the infants required general anesthesia. An infant immobilizer should be used for imaging of newborns. Since this method carries a low risk and has a high success rate, general anesthesia in newborns is justified only when this non-invasive procedure fails.
Speil, Sidney
1974-01-01
The problems of quantitating chrysotile in water by fiber-count techniques are reviewed briefly, and the use of mass quantitation is suggested as a preferable measure. Chrysotile fiber has been found in almost every sample of natural water examined, but transmission electron microscopy (TEM) is generally required because of the small fiber diameters involved. The extreme extrapolation required in mathematically converting a few fibers or fiber fragments under the TEM to the fiber content of a liquid sample casts considerable doubt on the validity of numbers used to compare the chrysotile contents of different liquids. PMID:4470930
A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.
Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit
2017-07-01
Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
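As a loose illustration of penalizing object- and pixel-level segmentation errors in a unified manner, the toy metric below aggregates intersection-over-union across matched nuclei while counting missed and spurious nuclei in the denominator. This is a hedged sketch of the general idea only, not the metric proposed by the authors.

```python
# Toy unified segmentation score (illustrative, not the authors' metric):
# pixel errors shrink the aggregated intersection, while missed and
# false-positive nuclei inflate the aggregated union.

def aggregated_iou(gt_masks, pred_masks):
    """gt_masks, pred_masks: lists of sets of pixel coordinates."""
    inter_total, union_total = 0, 0
    used = set()
    for gt in gt_masks:
        # match each ground-truth nucleus to its best-overlapping prediction
        best_j, best_inter = None, 0
        for j, pred in enumerate(pred_masks):
            inter = len(gt & pred)
            if inter > best_inter:
                best_j, best_inter = j, inter
        if best_j is None:
            union_total += len(gt)          # missed nucleus: pure penalty
        else:
            used.add(best_j)
            inter_total += best_inter
            union_total += len(gt | pred_masks[best_j])
    for j, pred in enumerate(pred_masks):
        if j not in used:
            union_total += len(pred)        # false-positive nucleus: pure penalty
    return inter_total / union_total

# One nucleus segmented perfectly, one missed, one spurious prediction:
gt = [{(0, 0), (0, 1)}, {(5, 5)}]
pred = [{(0, 0), (0, 1)}, {(9, 9)}]
print(aggregated_iou(gt, pred))  # 0.5
```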
ERIC Educational Resources Information Center
Charlton, Amanda K.; Sevcik, Richard S.; Tucker, Dorie A.; Schultz, Linda D.
2007-01-01
A general science experiment for high school chemistry students might serve as an excellent review of the concepts of solution preparation, solubility, pH, and qualitative and quantitative analysis of a common food product. The students could learn to use safe laboratory techniques, collect and analyze data using proper scientific methodology and…
Paths of Movement for Selected Body Segments During Typical Pilot Tasks
1976-03-01
(The scanned report's table of contents and body text are interleaved in extraction; recoverable fragments follow.) Section headings include Scope, Past Human Motion Investigations, and Experimental Techniques. "…a large body of literature has been generated during the past few decades in the field of human-motion recording and analysis. However, in most of these studies body…"; "…to meet the COMBIMAN model requirements…"; "The 15th century artist-scientist, Leonardo da Vinci, is generally credited…"
Satellites for Distress Alerting and Locating
1976-10-01
(Garbled extraction from the scanned report; recoverable fragments follow.) "…principally from the techniques and frequencies internationally adopted for public correspondence (e.g., business and personal communications)…"; "…Unicom stations (30 percent of uncontrolled landing fields have licensed Unicom stations). Such monitoring could be performed with inexpensive…"; "Military aircraft (note: most carry ELTs). All other general aviation aircraft, whether for business, pleasure, or charter, are required…"
Hybrid state vector methods for structural dynamic and aeroelastic boundary value problems
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1982-01-01
A computational technique is developed that is suitable for performing preliminary design aeroelastic and structural dynamic analyses of large aspect ratio lifting surfaces. The method proves to be quite general and can be adapted to solving various two point boundary value problems. The solution method, which is applicable to both fixed and rotating wing configurations, is based upon a formulation of the structural equilibrium equations in terms of a hybrid state vector containing generalized force and displacement variables. A mixed variational formulation is presented that conveniently yields a useful form for these state vector differential equations. Solutions to these equations are obtained by employing an integrating matrix method. The application of an integrating matrix provides a discretization of the differential equations that only requires solutions of standard linear matrix systems. It is demonstrated that matrix partitioning can be used to reduce the order of the required solutions. Results are presented for several example problems in structural dynamics and aeroelasticity to verify the technique and to demonstrate its use. These problems examine various types of loading and boundary conditions and include aeroelastic analyses of lifting surfaces constructed from anisotropic composite materials.
History, ethics, advantages and limitations of experimental models for hepatic ablation.
Ong, Seok Ling; Gravante, Gianpiero; Metcalfe, Matthew S; Dennison, Ashley R
2013-01-14
Numerous techniques developed in medicine require careful evaluation to determine their indications, limitations, and potential side effects prior to clinical use. At present this generally involves the use of animal models, which is undesirable from an ethical standpoint, requires complex and time-consuming authorization, and is very expensive. This process is exemplified in the development of hepatic ablation techniques, which starts with experiments on explanted livers and progresses to safety and efficacy studies in living animals prior to clinical studies. The two main approaches used are ex vivo isolated non-perfused liver models and in vivo animal models. Ex vivo non-perfused models are less expensive and easier to obtain, but are not suitable for studying the heat-sink effect or for experiments requiring several hours. In vivo animal models closely resemble clinical subjects but are often expensive and have small sample sizes owing to ethical guidelines. Isolated perfused ex vivo liver models have been used to study drug toxicity, liver failure, organ transplantation, and hepatic ablation, and combine the advantages of both previous models.
Biofabrication: an overview of the approaches used for printing of living cells.
Ferris, Cameron J; Gilmore, Kerry G; Wallace, Gordon G; In het Panhuis, Marc
2013-05-01
The development of cell printing is vital for establishing biofabrication approaches as clinically relevant tools. Achieving this requires bio-inks which must not only be easily printable, but also allow controllable and reproducible printing of cells. This review outlines the general principles and current progress and compares the advantages and challenges for the most widely used biofabrication techniques for printing cells: extrusion, laser, microvalve, inkjet and tissue fragment printing. It is expected that significant advances in cell printing will result from synergistic combinations of these techniques and lead to optimised resolution, throughput and the overall complexity of printed constructs.
Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane
NASA Technical Reports Server (NTRS)
Gera, Joseph; Bosworth, John T.
1987-01-01
Novel flight test and analysis techniques in the flight dynamics and handling qualities area are described. These techniques were utilized at NASA Ames-Dryden during the initial flight envelope clearance of the X-29A aircraft. It is shown that the open-loop frequency response of an aircraft with highly relaxed static stability can be successfully computed on the ground from telemetry data. Postflight closed-loop frequency response data were obtained from pilot-generated frequency sweeps and it is found that the current handling quality requirements for high-maneuverability aircraft are generally applicable to the X-29A.
Doi-Peliti path integral methods for stochastic systems with partial exclusion
NASA Astrophysics Data System (ADS)
Greenman, Chris D.
2018-09-01
Doi-Peliti methods are developed for stochastic models with finite maximum occupation numbers per site. We provide a generalized framework for the different Fock spaces reported in the literature. Paragrassmannian techniques are then utilized to construct path integral formulations of factorial moments. We show that for many models of interest, a Magnus expansion is required to construct a suitable action, meaning actions containing a finite number of terms are not always feasible. However, for such systems, perturbative techniques are still viable, and for some examples, including carrying capacity population dynamics, and diffusion with partial exclusion, the expansions are exactly summable.
Non-Zero Net Force and Constant Velocity: A Study in Mazur's Peer Instruction
NASA Astrophysics Data System (ADS)
Newburgh, Ronald
2009-10-01
A problem addressed infrequently in beginning physics courses is that of a moving body with changing mass. Elementary texts often have footnotes referring to jet planes and rockets but rarely do they go further. This omission is understandable because calculations with variable mass generally require the tools of calculus. This paper presents a changing mass problem that can be treated on an elementary level, thereby leading to an understanding of the role of changing mass on Newton's second law. It also illustrates Mazur's technique of Peer Instruction, a technique that demands active student participation.
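The role of changing mass in Newton's second law that the abstract describes follows from differentiating momentum; a standard statement (not reproduced from the paper) is:

```latex
F_{\text{ext}} = \frac{d}{dt}(mv) = m\frac{dv}{dt} + v\frac{dm}{dt}
```

so a body moving at constant velocity ($dv/dt = 0$) while gaining or losing mass still requires a non-zero net force $F_{\text{ext}} = v\,dm/dt$, which is the apparent paradox the title refers to.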
Radar studies of arctic ice and development of a real-time Arctic ice type identification system
NASA Technical Reports Server (NTRS)
Rouse, J. W., Jr.; Schell, J. A.; Permenter, J. A.
1973-01-01
Studies were conducted to develop a real-time Arctic ice type identification system. Data obtained by NASA Mission 126, conducted at Pt. Barrow, Alaska (Site 93) in April 1970, were analyzed in detail to more clearly define the major mechanisms affecting the radar energy illuminating a terrain cell of sea ice. General techniques for reducing the scatterometer data to a form suitable for applying ice type decision criteria were investigated, and the electronic circuit requirements for implementing these techniques were determined. Consideration of circuit requirements is also extended to include the electronics necessary for analog programming of ice type decision algorithms. After completion of the basic circuit designs, a laboratory model was constructed and a preliminary evaluation performed. Several system modifications for improved performance are suggested. (Modified author abstract)
Five secrets to leveraging maximum buying power with your media project.
Hirsch, Lonnie
2010-11-01
Planning and executing a successful media campaign or project requires knowledge and expert execution of specific techniques and skills, including understanding of the requirements for proper media research and competitive intelligence, effective planning of media schedules, negotiation of best rates with media companies, monitoring the campaign, accurately tracking and evaluating results, and making smart adjustments based on tracking data to maximize the profitability and success of the enterprise. Some of the most important knowledge and techniques are not generally known by most advertisers, particularly small businesses like health care practices. This article reveals these tips that are the most effective and includes information on the use of experts and other professional resources that help increase the likelihood of a successful outcome for a well-planned and executed media campaign. Copyright © 2010 Elsevier Inc. All rights reserved.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
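The Newton-Raphson approach to maximum-likelihood estimation described in the abstract above can be sketched on a simple one-parameter example. The exponential-distribution likelihood below is illustrative only; the actual NDMMF likelihood and its system of simultaneous equations are not reproduced here.

```python
# Minimal sketch of maximum-likelihood estimation by Newton-Raphson
# iteration. Example model: Exponential(rate=theta), where
# logL = n*log(theta) - theta*sum(x), score = n/theta - sum(x),
# and the second derivative is -n/theta**2.

def newton_raphson_mle(data, theta0=1.0, tol=1e-10, max_iter=50):
    """Solve the score equation dlogL/dtheta = 0 by Newton-Raphson."""
    n, s = len(data), sum(data)
    theta = theta0
    for _ in range(max_iter):
        score = n / theta - s
        hessian = -n / theta ** 2
        step = score / hessian      # Newton step: score / second derivative
        theta -= step
        if abs(step) < tol:
            break
    return theta

# For exponential data the MLE has the closed form n / sum(x), so the
# iteration can be checked against it.
data = [0.5, 1.2, 0.8, 2.0, 1.5]
print(newton_raphson_mle(data))  # converges to 5 / 6.0 ≈ 0.8333
```

The NDMMF case differs in scale, not kind: the same iteration is applied to a vector of parameters, with the score and Hessian replaced by their multivariate counterparts.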
Lytle, J; Thomas, N F
1992-07-01
Local anaesthesia is frequently used in combination with light general anaesthesia to reduce the reflex responses to surgical stimulation. This combination has not previously been evaluated for intra-ocular surgery. During cataract extraction under general anaesthesia, the effect of topical anaesthesia with oxybuprocaine 0.4% on the pressor response was compared with normal saline in a control group. The simple technique of instilling local anaesthetic drops into the conjunctival sac blocked the pain pathway sufficiently to prevent the pressor response to surgical stimulation (p < 0.001). Higher inspired concentrations of enflurane were required in the control group to achieve and maintain haemodynamic stability (p < 0.001).
It's time to reinvent the general aviation airplane
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1988-01-01
Current designs for general aviation airplanes have become obsolete, and avenues for major redesign must be considered. New designs should incorporate recent advances in electronics, aerodynamics, structures, materials, and propulsion. Future airplanes should be optimized to operate satisfactorily in a positive air traffic control environment, to afford safety and comfort for point-to-point transportation, and to take advantage of automated manufacturing techniques and high production rates. These requirements have broad implications for airplane design and flying qualities, leading to a concept for the Modern Equipment General Aviation (MEGA) airplane. Synergistic improvements in design, production, and operation can provide a much needed fresh start for the general aviation industry and the traveling public. In this investigation a small four place airplane is taken as the reference, although the proposed philosophy applies across the entire spectrum of general aviation.
Beaufrère, Hugues; Pariaut, Romain; Rodriguez, Daniel; Nevarez, Javier G; Tully, Thomas N
2012-10-01
To assess the agreement and reliability of cardiac measurements obtained with 3 echocardiographic techniques in anesthetized red-tailed hawks (Buteo jamaicensis). 10 red-tailed hawks. Transcoelomic, contrast transcoelomic, and transesophageal echocardiographic evaluations of the hawks were performed, and cineloops of imaging planes were recorded. Three observers performed echocardiographic measurements of cardiac variables 3 times on 3 days. The order in which hawks were assessed and echocardiographic techniques were used was randomized. Results were analyzed with linear mixed modeling, agreement was assessed with intraclass correlation coefficients, and variation was estimated with coefficients of variation. Significant differences were evident among the 3 echocardiographic methods for most measurements, and the agreement among findings was generally low. Interobserver agreement was generally low to medium. Intraobserver agreement was generally medium to high. Overall, better agreement was achieved for the left ventricular measurements and for the transesophageal approach than for other measurements and techniques. Echocardiographic measurements in hawks were not reliable, except when the left ventricle was measured by the same observer. Furthermore, cardiac morphometric measurements may not be clinically important. When measurements are required, one needs to consider that follow-up measurements should be performed by the same echocardiographer and should show at least a 20% difference from initial measurements to be confident that any difference is genuine.
Sun-Earth L1 Region Halo-to-Halo Orbit and Halo-to-Lissajous Orbit Transfers
NASA Technical Reports Server (NTRS)
Roberts, Craig E.; DeFazio, Robert
2004-01-01
Practical techniques for designing transfer trajectories between Libration Point Orbits (LPOs) are presented. Motivation for development of these techniques was provided by a hardware contingency experienced by the Solar Heliospheric Observatory (SOHO), a joint mission of the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) orbiting the L1 point of the Sun-Earth system. A potential solution to the problem involved a transfer from SOHO's periodic halo orbit to a new LPO of substantially different dimensions. Assuming the SOHO halo orbit as the departure orbit, several practical LPO transfer techniques were developed to obtain new Lissajous or periodic halo orbits that satisfy mission requirements and constraints. While not implemented for the SOHO mission, practical LPO transfer techniques were devised that are generally applicable to current and future LPO missions.
Using elements of hypnosis prior to or during pediatric dental treatment.
Peretz, Benjamin; Bercovich, Roly; Blumer, Sigalit
2013-01-01
Most dental practitioners are familiar with pediatric patients expressing dental fear or anxiety. Occasionally, the dentist may encounter a situation where all behavioral techniques fail, while, for some reason, premedication or general anesthesia are contraindicated or rejected by the patient or his/her parents and a different approach is required. Hypnosis may solve the problem in some cases. The purpose of this study was to review the literature about techniques that use elements of hypnosis and hypnotic techniques prior to or during pediatric dental treatment. There is a limited amount of literature regarding the use of hypnosis and hypnotic elements in pediatric dentistry. Induction techniques, reframing, distraction, imagery suggestions, and hypnosis are identified, although mostly anecdotally, while there are very few structured controlled studies. Nevertheless, the advantages of using hypnotic elements and hypnosis in pediatric dentistry are evident.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
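The model-fitting the standard describes can be sketched for the linear case; quadratic and exponential trends would be fit analogously (the latter by regressing log(y) on t). The closed-form least-squares fit below and the example data are illustrative, not taken from the NASA standard.

```python
# Minimal sketch of fitting a linear trend to time-series data by
# ordinary least squares (closed form, no external libraries).

def linear_trend(t, y):
    """Return (intercept, slope) of the least-squares line y = a + b*t."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
        sum((ti - tbar) ** 2 for ti in t)
    a = ybar - b * tbar
    return a, b

# Hypothetical anomaly counts over six reporting periods: a clearly
# positive slope flags an adverse trend worth investigating.
t = [1, 2, 3, 4, 5, 6]
y = [2.0, 2.9, 4.1, 5.0, 6.1, 6.9]
a, b = linear_trend(t, y)
print(round(b, 3))  # slope ≈ 1.0 per period
```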
Liberale, Gabriel
2017-04-01
Major progress has been made in breast cancer reconstruction surgery. The standard technique for totally implanted vascular access device (TIVAD) implantation generally requires an incision for port insertion on the anterior part of the thorax that leaves a scar in the middle of the neckline in patients who have undergone mastectomy with complex breast reconstruction. The aim of this technical note is to report our revised surgical technique for TIVAD placement. In patients with breast cancer, we take a lateralized approach, performing an oblique incision on the lowest part of the deltopectoral groove. This allows us to introduce the port and to place it on the anterolateral part of the thorax, thus avoiding an unaesthetic scar on the anterior part of the thorax. Our modified technique for TIVAD implantation is described.
Guerra Valero, Yarmarly C; Wallis, Steven C; Lipman, Jeffrey; Stove, Christophe; Roberts, Jason A; Parker, Suzanne L
2018-03-01
Conventional sampling techniques for clinical pharmacokinetic studies often require the removal of large blood volumes from patients. This can result in a physiological or emotional burden, particularly for neonates or pediatric patients. Antibiotic pharmacokinetic studies are typically performed on healthy adults or general ward patients. These may not account for alterations to a patient's pathophysiology and can lead to suboptimal treatment. Microsampling offers an important opportunity for clinical pharmacokinetic studies in vulnerable patient populations, where smaller sample volumes can be collected. This systematic review provides a description of currently available microsampling techniques and an overview of studies reporting the quantitation and validation of antibiotics using microsampling. A comparison of microsampling to conventional sampling in clinical studies is included.
Zacharoulis, Dimitris; Fafoulakis, Frank; Baloyiannis, Ioannis; Sioka, Eleni; Georgopoulou, Stavroula; Pratsas, Costas; Hantzi, Eleni; Tzovaras, George
2009-09-01
The laparoscopic transabdominal preperitoneal (TAPP) inguinal hernia repair is an evolving technique associated with the well-known advantages of a minimally invasive approach. However, general anesthesia is routinely required for the procedure. Based on our previous experience in regional anesthesia for laparoscopic procedures, we designed a pilot study to assess the feasibility and safety of performing laparoscopic TAPP repair under spinal anesthesia. Forty-five American Society of Anesthesiologists I or II patients with a total of 50 inguinal hernias underwent TAPP repair under spinal anesthesia, using a low-pressure CO(2) pneumoperitoneum. Five patients had bilateral hernias, and 4 patients had recurrent hernias. Thirty hernias were indirect and the remaining direct. Intraoperative incidents, postoperative pain complications, and recovery in general as well as patient satisfaction at the follow-up examination were prospectively recorded. There was 1 conversion from spinal to general anesthesia and 2 conversions from laparoscopic to the open procedure at a median operative time of 50 minutes (range 30-130). Ten patients complained of shoulder pain during the procedure, and 6 patients suffered hypotension intraoperatively. The median pain score (visual analog scale) was 1 (0-5) at 4 hours after the completion of the procedure, 1.5 (0-6) at 8 hours, and 1.5 (0-5) at 24 hours, and the median hospital stay was 1 day (range 1-2). Sixteen patients had urinary retention requiring instant catheterization. At a median follow-up of 20 months (range 10 months-28 months), no recurrence was detected. TAPP repair is feasible and safe under spinal anesthesia. However, it seems to be associated with a high incidence of urinary retention. Further studies are required to validate this technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout (NMO) analysis, seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but require simplifying assumptions about the geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but computationally expensive. In some cases, there is the opportunity to estimate vertical velocities from well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model that minimizes the depth differences between seismic images and formation depths at the well by using a least-squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
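The core idea of matching seismic image depths to formation depths at a well by least squares can be sketched with a deliberately simplified one-parameter model: a single velocity scale factor applied to the whole model. The single-factor formulation and the numbers below are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch: choose a velocity scale factor s that minimizes the sum of
# squared differences between scaled seismic image depths and formation
# depths observed at a well. Minimizing sum (s*d_image - d_well)^2 over s
# gives the closed form s = sum(d_image * d_well) / sum(d_image^2).

def best_velocity_scale(image_depths, well_depths):
    num = sum(di * dw for di, dw in zip(image_depths, well_depths))
    den = sum(di * di for di in image_depths)
    return num / den

# The migrated image places three formation markers too shallow; well logs
# give the true depths (values are hypothetical).
image = [950.0, 1900.0, 2850.0]
well = [1000.0, 2000.0, 3000.0]
s = best_velocity_scale(image, well)
print(round(s, 4))  # ≈ 1.0526: model velocities should be raised ~5%
```

A realistic implementation would solve the analogous least-squares problem for a layered or gridded velocity model, but the misfit being minimized is the same.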
NASA Technical Reports Server (NTRS)
1975-01-01
A soils map for land evaluation in Potter County (eastern South Dakota) was developed to demonstrate the use of remote sensing technology in an area of diverse parent materials and topography. General land use and soils maps have also been developed for land planning. LANDSAT and RB-57 imagery and USGS photographs are being evaluated for making soils and land use maps. LANDSAT fulfilled the requirements for the general land use and general soils maps. RB-57 imagery supplemented by large-scale black-and-white stereo coverage was required to provide the detail needed for the final soils map for land evaluation. Color infrared prints excelled black-and-white coverage for this soil mapping effort. An identification and classification key for wetland types in the Lake Dakota Plain was developed for June 1975 using color infrared imagery. Wetland types in the region are now being mapped via remote sensing techniques to provide a current inventory for the development of mitigation measures.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1991-01-01
Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.
A new technique for calculations of binary stellar evolution, with application to magnetic braking
NASA Technical Reports Server (NTRS)
Rappaport, S.; Joss, P. C.; Verbunt, F.
1983-01-01
The development of appropriate computer programs has made it possible to conduct studies of stellar evolution which are more detailed and accurate than the investigations previously feasible. However, the use of such programs can also entail some serious drawbacks which are related to the time and expense required for the work. One approach for overcoming these drawbacks involves the employment of simplified stellar evolution codes which incorporate the essential physics of the problem of interest without attempting either great generality or maximal accuracy. Rappaport et al. (1982) have developed a simplified code to study the evolution of close binary stellar systems composed of a collapsed object and a low-mass secondary. The present investigation is concerned with a more general, but still simplified, technique for calculating the evolution of close binary systems with collapsed binaries and mass-losing secondaries.
Laparoscopic cholecystectomy under epidural anesthesia: a clinical feasibility study.
Lee, Ji Hyun; Huh, Jin; Kim, Duk Kyung; Gil, Jea Ryoung; Min, Sung Won; Han, Sun Sook
2010-12-01
Laparoscopic cholecystectomy (LC) has traditionally been performed under general anesthesia, however, owing in part to the advancement of surgical and anesthetic techniques, many laparoscopic cholecystectomies have been successfully performed under the spinal anesthetic technique. We hoped to determine the feasibility of segmental epidural anesthesia for LC. Twelve American Society of Anesthesiologists class I or II patients received an epidural block for LC. The level of epidural block and the satisfaction score of patients and the surgeon were checked to evaluate the efficacy of epidural block for LC. LC was performed successfully under epidural block, with the exception of 1 patient who required a conversion to general anesthesia owing to severe referred pain. There were no special postoperative complications, with the exception of one case of urinary retention. Epidural anesthesia might be applicable for LC. However, the incidence of intraoperative referred shoulder pain is high, and so careful patient recruitment and management of shoulder pain should be considered.
Fundamentals of bipolar high-frequency surgery.
Reidenbach, H D
1993-04-01
In endoscopic surgery a very precise surgical dissection technique and an efficient hemostasis are of decisive importance. The bipolar technique may be regarded as a method which satisfies both requirements, especially regarding a high safety standard in application. In this context the biophysical and technical fundamentals of this method, which have been known in principle for a long time, are described with regard to the special demands of a newly developed field of modern surgery. After classification of this method into a general and a quasi-bipolar mode, various technological solutions of specific bipolar probes, in a strict and in a generalized sense, are characterized in terms of indication. Experimental results obtained with different bipolar instruments and probes are given. The application of modern microprocessor-controlled high-frequency surgery equipment and, wherever necessary, the integration of additional ancillary technology into the specialized bipolar instruments may result in most useful and efficient tools of a key technology in endoscopic surgery.
Lidar Measurements for Desert Dust Characterization: An Overview
NASA Technical Reports Server (NTRS)
Mona, L.; Liu, Z.; Mueller, D.; Omar, A.; Papayannis, A.; Pappalardo, G.; Sugimoto, N.; Vaughan, M.
2012-01-01
We provide an overview of light detection and ranging (lidar) capability for describing and characterizing desert dust. This paper summarizes lidar techniques, observations, and findings from desert dust lidar measurements. The main objective is to provide the scientific community, including non-practitioners of lidar observations, with a reference paper on dust lidar measurements. In particular, it fills the current gap in communication between the research-oriented lidar community and potential desert dust data users, such as air quality monitoring agencies and aviation advisory centers. The current capability of the different lidar techniques for the characterization of aerosol in general, and desert dust in particular, is presented. Technical aspects and required assumptions of these techniques are discussed, providing readers with the pros and cons of each technique. Information about desert dust collected to date using lidar techniques is reviewed. Lidar techniques for aerosol characterization have a maturity level appropriate for addressing air quality and transportation issues, as demonstrated by the first results reported in this paper.
Orange Peel Excision of Gland: A Novel Surgical Technique for Treatment of Gynecomastia.
S S, Shirol
2016-12-01
Gynecomastia is a common aesthetic problem in men, with a reported incidence as high as 65% and serious psychosocial impact. Although various techniques of liposculpture combined with glandular excision are the standard of treatment, many glandular excision techniques have inherent limitations and complications, such as a long scar, long operative time, contour abnormalities, and an increased risk of hematoma. Here, we describe an innovative "orange peel excision of gland" (OPEG) technique that overcomes these limitations with excellent cosmetic results. A total of 38 breasts were operated on in 20 patients (18 bilateral and 2 unilateral). All patients underwent suction-assisted liposuction and glandular excision under general anesthesia by our OPEG technique. The average operative time per breast was 60 minutes. One patient had a small hematoma that did not require evacuation. The patient satisfaction rate was 95%. The technique reduces operative time and avoids residual gland and hematoma, with excellent aesthetic outcomes.
Holographic optical security systems
NASA Astrophysics Data System (ADS)
Fagan, William F.
1990-06-01
One of the most successful applications of holography in recent years has been its use as an optical security technique. Indeed, the general public's awareness of holograms has been greatly enhanced by the incorporation of holographic elements into the VISA and MASTERCHARGE credit cards. Optical techniques related to holography are also being used to protect the currencies of several countries against the counterfeiter. The mass production of high quality holographic images is by no means a trivial task, as a considerable degree of expertise is required together with an optical laboratory and embossing machinery. This paper will present an overview of the principal holographic and related optical techniques used for security purposes. Worldwide, over thirty companies are involved in the production of security elements utilising holographic and related optical technologies. Counterfeiting of many products is a major criminal activity with severe consequences not only for the manufacturer but for the public in general, as defective automobile parts, aircraft components, and pharmaceutical products, to cite only a few of the more prominent examples, have at one time or another been illegally copied.
NASA Technical Reports Server (NTRS)
1976-01-01
The items identified as required to support the AMPS mission and requiring SR and T support and further work are: (1) a general purpose Experiment Pointing Mount; (2) a technique for measuring the attitude of the pallet-mounted or deployed experiments; (3) the development of a common optics cryogenically cooled interferometer spectrometer; (4) the development of a differential absorption lidar system for the measurement of ozone densities in the earth's atmosphere; (5) the development of dc to dc power processors which are capable of converting energy stored in a capacitor system at 500 V to energy supplied to equipment operating at 40 kV and at 20 kW (eventually up to 100 kW); and (6) the development of a magnetic or possibly electrostatic deflection system capable of bending the beam of an electron accelerator. A data sheet is included for each item, briefly describing the background and need for each item and the general objectives of the required development, and identifying the schedule requirements in support of the AMPS program.
Nonlinear Unsteady Aerodynamic Modeling Using Wind Tunnel and Computational Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.
2016-01-01
Extensions to conventional aircraft aerodynamic models are required to adequately predict responses when nonlinear unsteady flight regimes are encountered, especially at high incidence angles and under maneuvering conditions. For a number of reasons, such as loss of control, both military and civilian aircraft may extend beyond normal and benign aerodynamic flight conditions. In addition, military applications may require controlled flight beyond the normal envelope, and civilian flight may require adequate recovery or prevention methods from these adverse conditions. These requirements have led to the development of more general aerodynamic modeling methods and provided impetus for researchers to improve both techniques and the degree of collaboration between analytical and experimental research efforts. In addition to more general mathematical model structures, dynamic test methods have been designed to provide sufficient information to allow model identification. This paper summarizes research to develop a modeling methodology appropriate for modeling aircraft aerodynamics that include nonlinear unsteady behaviors using both experimental and computational test methods. This work was done at Langley Research Center, primarily under the NASA Aviation Safety Program, to address aircraft loss of control, prevention, and recovery aerodynamics.
Earth resources mission performance studies. Volume 2: Simulation results
NASA Technical Reports Server (NTRS)
1974-01-01
Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.
Development of a winter wheat adjustable crop calendar model
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when used with daily environmental values as independent variables.
Requirements for color technology
NASA Astrophysics Data System (ADS)
Campbell, Ronald B., Jr.
1993-06-01
The requirements for color technology in the general office are reviewed. The two most salient factors driving the requirements for color are the information explosion and the virtually negligible growth in white collar productivity in the recent past. Accordingly, the business requirement upon color technology is that it be utilized in an effective and efficient manner to increase office productivity. Recent research on productivity and growth has moved beyond the classical two factor productivity model of labor and capital to explicitly include knowledge as a third and vital factor. Documents are agents of knowledge in the general office. Documents articulate, express, disseminate, and communicate knowledge. The central question addressed here is how can color, in conjunction with other techniques such as graphics and document design, improve the growth of knowledge? The central thesis is that the effective use of color to convert information into knowledge is one of the most powerful ways to increase office productivity. Material on the value of color is reviewed. This material is related to the role of documents. Document services are the way in which users access and utilize color technology. The requirements for color technology are then defined against the services taxonomy.
Downsizing Antenna Technologies for Mobile and Satellite Communications
NASA Technical Reports Server (NTRS)
Huang, J.; Densmore, A.; Tulintseff, A.; Jamnejad, V.
1993-01-01
Due to the increasing and stringent functional requirements (larger capacity, longer distances, etc.) of modern day communication systems, higher antenna gains are generally needed. This higher gain implies larger antenna size and mass which are undesirable to many systems. Consequently, downsizing antenna technology becomes one of the most critical areas for research and development efforts. Techniques to reduce antenna size can be categorized and are briefly discussed.
Torrie, Arissa M; Kesler, William W; Elkin, Joshua; Gallo, Robert A
2015-12-01
Over the past decade, osteochondral allograft transplantation has soared in popularity. Advances in storage techniques have demonstrated improved chondrocyte viability at longer intervals and allowed for potential of increased graft availability. Recent studies have stratified outcomes according to location and etiology of the chondral or osteochondral defect. Unipolar lesions generally have favorable outcomes with promising 10-year survival rates. Though those undergoing osteochondral allograft transplantation often require reoperation, patient satisfaction remains high.
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
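IESIP's own exploratory search is not reproduced here, but the class of problem it solves, a small pure integer linear program, can be sketched with a brute-force enumeration (the function name and sample problem are illustrative, not taken from IESIP, and the sketch is in Python rather than the program's TURBO Pascal):

```python
from itertools import product

def solve_ilp(c, constraints, bounds):
    """Exhaustive search for a small pure integer linear program:
    maximize c.x subject to a.x <= b for each (a, b) in constraints,
    with x_i an integer in 0..bounds[i]."""
    best_x, best_val = None, float("-inf")
    for x in product(*(range(ub + 1) for ub in bounds)):
        # keep only points satisfying every linear constraint
        if all(sum(ai * xi for ai, xi in zip(a, x)) <= b for a, b in constraints):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# maximize 3x + 2y subject to x + y <= 4 and x <= 3
x, v = solve_ilp([3, 2], [([1, 1], 4), ([1, 0], 3)], [4, 4])  # -> (3, 1), 11
```

For the problem sizes the program targets (up to 10 variables), plain enumeration grows exponentially, which is why directed exploratory searches such as IESIP's are used instead.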
Hector G. Adegbidi; Nicholas B. Comerford; Hua Li; Eric J. Jokela; Nairam F. Barros
2002-01-01
Nutrient management represents a central component of intensive silvicultural systems that are designed to increase forest productivity in southern pine stands. Forest soils throughout the South are generally infertile, and fertilizers may be applied one or more times over the course of a rotation. Diagnostic techniques, such as foliar analysis and soil testing are...
Numerical Solution for Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Warsi, Z. U. A.; Weed, R. A.; Thompson, J. F.
1982-01-01
Carefully selected blend of computational techniques solves complete set of equations for viscous, unsteady, hypersonic flow in general curvilinear coordinates. New algorithm tested on computation of axially directed flow about blunt body having shape similar to that of such practical bodies as wide-body aircraft or artillery shells. Method offers significant computational advantages because of conservation-law form of equations and because it reduces amount of metric data required.
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
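The Haar basis underlying these hierarchical methods can be illustrated with a minimal 1-D decomposition into pairwise averages and differences (a sketch only; the dissertation applies the basis within a 2-D Galerkin discretization of the transport operator):

```python
def haar_transform(signal):
    """Full 1-D Haar decomposition of a length-2^k signal: repeatedly
    split into pairwise averages and differences (detail coefficients);
    returns [overall average, coarse details, ..., finest details]."""
    coeffs = []
    s = list(signal)
    while len(s) > 1:
        avgs = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
        diffs = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
        coeffs = diffs + coeffs  # finest details end up last
        s = avgs
    return s + coeffs

print(haar_transform([9, 7, 3, 5]))  # [6.0, 2.0, 1.0, -1.0]
```

On smooth inputs most detail coefficients are near zero and can be dropped at finite precision, which is exactly the sparsity that hierarchical radiosity and radiance algorithms exploit.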
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Some practical universal noiseless coding techniques, part 2
NASA Technical Reports Server (NTRS)
Rice, R. F.; Lee, J. J.
1983-01-01
This report is an extension of earlier work (Part 1) which provided practical adaptive techniques for the efficient noiseless coding of a broad class of data sources characterized by only partially known and varying statistics (JPL Publication 79-22). The results here, while still claiming such general applicability, focus primarily on the noiseless coding of image data. A fairly complete and self-contained treatment is provided. Particular emphasis is given to the requirements of the forthcoming Voyager II encounters of Uranus and Neptune. Performance evaluations are supported both graphically and pictorially. Expanded definitions of the algorithms in Part 1 yield a computationally improved set of options for applications requiring efficient performance at entropies above 4 bits/sample. These expanded definitions include as an important subset, a somewhat less efficient but extremely simple "FAST' compressor which will be used at the Voyager Uranus encounter. Additionally, options are provided which enhance performance when atypical data spikes may be present.
Anchor Node Localization for Wireless Sensor Networks Using Video and Compass Information Fusion
Pescaru, Dan; Curiac, Daniel-Ioan
2014-01-01
Distributed sensing, computing and communication capabilities of wireless sensor networks require, in most situations, an efficient node localization procedure. In the case of random deployments in harsh or hostile environments, a general localization process within global coordinates is based on a set of anchor nodes able to determine their own position using GPS receivers. In this paper we propose another anchor node localization technique that can be used when GPS devices cannot accomplish their mission or are considered to be too expensive. This novel technique is based on the fusion of video and compass data acquired by the anchor nodes and is especially suitable for video- or multimedia-based wireless sensor networks. For these types of wireless networks the presence of video cameras is intrinsic, while the presence of digital compasses is also required for identifying the cameras' orientations. PMID:24594614
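As a rough illustration of how directional (compass) information constrains position, the following sketch triangulates a node from bearings to two landmarks with known coordinates. This is a hypothetical simplification: the paper's actual method fuses video and compass data and does not assume this landmark setup.

```python
import math

def localize_from_bearings(l1, b1, l2, b2):
    """Locate a node from compass bearings (radians, clockwise from north)
    measured toward two landmarks at known (east, north) positions l1, l2.
    The node lies on a line through each landmark; we intersect the lines."""
    # line of sight for bearing b points along (sin b, cos b)
    d1 = (math.sin(b1), math.cos(b1))
    d2 = (math.sin(b2), math.cos(b2))
    # p + t1*d1 = l1 and p + t2*d2 = l2  =>  t1*d1 - t2*d2 = l1 - l2
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    det = -d1[0] * d2[1] + d1[1] * d2[0]  # zero if bearings are parallel
    t1 = (-rx * d2[1] + ry * d2[0]) / det
    return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])

# node at the origin: one landmark due north, one due east
p = localize_from_bearings((0.0, 10.0), 0.0, (10.0, 0.0), math.pi / 2)
```

When the two bearings are (nearly) parallel the determinant vanishes and no unique fix exists, one reason real deployments fuse additional sensing modalities.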
Photography and imagery: a clarification of terms
Robinove, Charles J.
1963-01-01
The increased use of pictorial displays of data in the fields of photogrammetry and photo interpretation has led to some confusion of terms, not so much by photogrammetrists as by users and interpreters of pictorial data. The terms "remote sensing" and "remote sensing of environment" are being used as general terms to describe "the measurement of some property of an object without having the measuring device physically in contact with the object" (Parker, 1962). Measurements of size and shape by photogrammetric and optical means are common examples of remote sensing and therefore require no elaboration. Other techniques of remote sensing of electromagnetic radiation in and beyond the limits of the visible spectrum require some explanation and differentiation from the techniques used in the visible spectrum. The following definitions of "photography" and "imagery" are proposed to clarify these two terms, in the hope that this will lead to more precise understanding and explanation of the processes.
Relative Attitude Determination of Earth Orbiting Formations Using GPS Receivers
NASA Technical Reports Server (NTRS)
Lightsey, E. Glenn
2004-01-01
Satellite formation missions require the precise determination of both the position and attitude of multiple vehicles to achieve the desired objectives. In order to support the mission requirements for these applications, it is necessary to develop techniques for representing and controlling the attitude of formations of vehicles. A generalized method for representing the attitude of a formation of vehicles has been developed. The representation may be applied to both absolute and relative formation attitude control problems. The technique is able to accommodate formations with an arbitrarily large number of vehicles. To demonstrate the formation attitude problem, the method is applied to the attitude determination of a simple leader-follower along-track orbit formation. A multiplicative extended Kalman filter is employed to estimate vehicle attitude. In a simulation study using GPS receivers as the attitude sensors, the relative attitude between vehicles in the formation is determined 3 times more accurately than the absolute attitude.
Dental treatment in patients with severe gag reflex using propofol-remifentanil intravenous sedation
Shin, Sooil
2017-01-01
Patients with severe gag reflex (SGR) have difficulty getting the treatment they require in local clinics, and many tend to postpone the start of their treatment. To address this problem, dentists have used behavioral techniques and/or pharmacological techniques for treatment. Among the pharmacological methods available, propofol IV sedation is preferred over general anesthesia because it is a simpler procedure. Propofol in combination with remifentanil is characterized by stable sedative effects and quick recovery, leading to a deep sedation. Remifentanil acts to reduce the pain caused by lipid-soluble propofol on injection. The synergistic effects of propofol-remifentanil include reduction in the total amount of drug required to achieve a desired sedation level and anti-emetic effects. In this case report, we outline how the use of propofol-remifentanil IV sedation enabled us to successfully complete a wide range of dental treatments in a patient with SGR. PMID:28879331
The ethical use of paradoxical interventions in psychotherapy.
Foreman, D M
1990-01-01
The purpose of this paper is to establish ethical guidelines for the use of paradoxical interventions in psychotherapy. These are defined as interventions which are counterintuitive, coercive, and which require non-observance by the client. Arguments are developed to show that such interventions are associated with a psychology that understands individuals solely in terms of their relationships: a 'strong interactionist' position. Ethical principles consistent with such a position are considered, and from these it is derived that: paradox is an ethical technique with resistive patients; it requires consent; its content should be consistent with general ethical principles, especially those of beneficence and non-maleficence; non-paradoxical techniques should be preferred when possible; and it should not be used as an assessment procedure. It is concluded that research is needed to explore the effect of such ethical guidelines on effectiveness, though preliminary impressions are encouraging. PMID:2287016
NASA Astrophysics Data System (ADS)
Stankunas, Gediminas; Batistoni, Paola; Sjöstrand, Henrik; Conroy, Sean; JET Contributors
2015-07-01
The neutron activation technique is routinely used in fusion experiments to measure neutron yields. This paper investigates the uncertainty in these measurements due to the uncertainties in the dosimetry and activation reactions. For this purpose, activation cross-sections were taken from the International Reactor Dosimetry and Fusion File (IRDFF-v1.05) in 640-group ENDF-6 format for several reactions of interest for both 2.5 and 14 MeV neutrons. Activation coefficients (reaction rates) were calculated using the neutron flux spectra at the JET vacuum vessel, for both DD and DT plasmas, calculated by MCNP in the required 640-energy-group format. The related uncertainties for the JET neutron spectra are evaluated as well, using the covariance data available in the library. These uncertainties are in general small, but not negligible when high accuracy is required in the determination of the fusion neutron yields.
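The multigroup folding described above reduces to a weighted sum over energy groups; a minimal sketch, assuming uncorrelated group uncertainties (the actual evaluation uses the full IRDFF covariance data, and the names here are illustrative):

```python
import math

def reaction_rate(flux, xs, xs_unc=None):
    """Reaction rate R = sum_g phi_g * sigma_g, with flux and activation
    cross-section given on the same multigroup grid. If per-group
    cross-section uncertainties are supplied, propagate them to first
    order assuming the group errors are uncorrelated."""
    rate = sum(phi * sigma for phi, sigma in zip(flux, xs))
    if xs_unc is None:
        return rate
    unc = math.sqrt(sum((phi * u) ** 2 for phi, u in zip(flux, xs_unc)))
    return rate, unc
```

In practice the 640-group spectra and cross-sections would be read from the MCNP output and the IRDFF library, and the covariance matrix would replace the diagonal assumption made here.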
Shin, Sooil; Kim, Seungoh
2017-03-01
[Surgical renal biopsies: technique, effectiveness and complications].
Pinsach Elías, L; Blasco Casares, F J; Ibarz Servió, L; Valero Milián, J; Areal Calama, J; Bucar Terrades, S; Saladié Roig, J M
1991-01-01
Retrospective study of 140 renal surgical biopsies (RSB) performed over the past 4 years in our Unit. The technique's effectiveness and morbidity are emphasized, and the surgical technique and type of anaesthesia are described. The sample obtained was sufficient to permit analysis in 100% of cases, and a diagnosis was reached in 98.5%. Thirty-nine patients (27.8%) presented complications, 13 (9.2%) of which were directly related to the surgical technique. No case required blood transfusion and no deaths were reported. The type of anaesthesia used was: local plus sedation in 104 (74.2%) cases, rachianaesthesia in 10 (7.1%), and general in 26 (18.5%). The same approach was used in all patients: minimal subcostal lumbotomy, using Wilde's forceps to obtain the samples. RSB is believed to be a highly effective, low-mortality procedure, easy and quick to perform, and suitable for selected patients.
Structural Damage Detection Using Virtual Passive Controllers
NASA Technical Reports Server (NTRS)
Lew, Jiann-Shiun; Juang, Jer-Nan
2001-01-01
This paper presents novel approaches to structural damage detection that use virtual passive controllers attached to structures, where the passive controllers are energy-dissipative devices and thus guarantee closed-loop stability. Using the identified parameters of various closed-loop systems addresses the problem that reliably identified parameters of the open-loop system alone, such as its natural frequencies, may not provide enough information for damage detection. Only a small number of sensors are required for the proposed approaches. The identified natural frequencies, which are generally much less sensitive to noise and more reliable than other identified modal parameters, are used for damage detection. Two damage detection techniques are presented. One technique is based on structures with direct output feedback controllers, while the other uses second-order dynamic feedback controllers. A least-squares technique, based on the sensitivity of the natural frequencies to the damage variables, is used to accurately identify the damage variables.
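The least-squares step based on frequency sensitivities can be sketched for the two-variable case: if measured frequency shifts relate linearly to damage variables through a sensitivity matrix, the damage estimate solves the normal equations. The matrix and values below are illustrative; in the paper the sensitivities come from the identified closed-loop systems.

```python
def estimate_damage(S, df):
    """Least-squares estimate of two damage variables d from measured
    natural-frequency shifts df, given the sensitivity matrix S with
    df ~= S.d: solves the 2x2 normal equations (S^T S) d = S^T df."""
    a = sum(r[0] * r[0] for r in S)
    b = sum(r[0] * r[1] for r in S)
    c = sum(r[1] * r[1] for r in S)
    y0 = sum(r[0] * f for r, f in zip(S, df))
    y1 = sum(r[1] * f for r, f in zip(S, df))
    det = a * c - b * b
    return ((c * y0 - b * y1) / det, (a * y1 - b * y0) / det)

# three measured frequency shifts, two damage variables
d = estimate_damage([[1, 0], [0, 1], [1, 1]], [0.2, 0.1, 0.3])
```

With more frequencies than damage variables the system is overdetermined, which is what makes the estimate robust to measurement noise.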
Fusing modeling techniques to support domain analysis for reuse opportunities identification
NASA Technical Reports Server (NTRS)
Hall, Susan Main; Mcguire, Eileen
1993-01-01
Functional modeling techniques or object-oriented graphical representations: which are more useful to someone trying to understand the general design or high-level requirements of a system? For a recent domain analysis effort, the answer was a fusion of popular modeling techniques of both types. By using both functional and object-oriented techniques, the analysts involved were able to lean on their experience in function-oriented software development while taking advantage of the descriptive power of object-oriented models. In addition, a base of familiar modeling methods permitted the group of mostly new domain analysts to learn the details of the domain analysis process while producing a quality product. This paper describes the background of this project and then provides a high-level definition of domain analysis. The majority of the paper focuses on the modeling method developed and utilized during this analysis effort.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
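The model fitting the Standard calls for is ordinary least squares on time-indexed data; a minimal sketch for the linear case (quadratic and exponential fits follow the same pattern with transformed regressors):

```python
def linear_trend(y):
    """Ordinary least-squares fit of y_t = a + b*t for an evenly sampled
    series y at t = 0..n-1; returns (a, b), where b is the trend slope."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    # slope = covariance(t, y) / variance(t)
    num = sum((t - t_mean) * (yt - y_mean) for t, yt in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    return y_mean - b * t_mean, b

a, b = linear_trend([1, 3, 5, 7])  # a = 1.0, b = 2.0
```

A positive or negative slope quantifies the trend direction; residuals about the fitted line indicate how well the linear model describes the series.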
Improved techniques for thermomechanical testing in support of deformation modeling
NASA Technical Reports Server (NTRS)
Castelli, Michael G.; Ellis, John R.
1992-01-01
The feasibility of generating precise thermomechanical deformation data to support constitutive model development was investigated. Here, the requirement is for experimental data that is free from anomalies caused by less than ideal equipment and procedures. A series of exploratory tests conducted on Hastelloy X showed that generally accepted techniques for strain controlled tests were lacking in at least three areas. Specifically, problems were encountered with specimen stability, thermal strain compensation, and temperature/mechanical strain phasing. The source of these difficulties was identified and improved thermomechanical testing techniques to correct them were developed. These goals were achieved by developing improved procedures for measuring and controlling thermal gradients and by designing a specimen specifically for thermomechanical testing. In addition, innovative control strategies were developed to correctly proportion and phase the thermal and mechanical components of strain. Subsequently, the improved techniques were used to generate deformation data for Hastelloy X over the temperature range, 200 to 1000 C.
Microencapsulation techniques to develop formulations of insulin for oral delivery: a review.
Cárdenas-Bailón, Fernando; Osorio-Revilla, Guillermo; Gallardo-Velázquez, Tzayhrí
2013-01-01
Oral insulin delivery represents one of the most challenging goals for the pharmaceutical industry. In general, it is accepted that oral administration of insulin would be better accepted by patients and would deliver insulin in a more physiological way than the parenteral route. Of all the strategies to deliver insulin orally, microencapsulation and nanoencapsulation are the most promising approaches, because these techniques protect insulin from enzymatic degradation in the stomach, show a good release profile at intestinal pH values, maintain biological activity during formulation, and enhance intestinal permeation to a certain extent. Among the different microencapsulation techniques, complex coacervation, multiple emulsion, and internal gelation appear to be the most appropriate for encapsulating insulin, owing to their relative ease of preparation. In addition, they do not require organic solvents and can be scaled up at low cost; however, relative oral bioavailability still needs to be improved.
Jameson, K; Averley, P A; Shackley, P; Steele, J
2007-09-22
To compare the cost-effectiveness of dental sedation techniques used in the treatment of children, focusing on hospital-based dental general anaesthetic (DGA) and advanced conscious sedation in a controlled primary care environment. Data on fees, costs and treatment pathways were obtained from a primary care clinic specialising in advanced sedation techniques. For the hospital-based DGA cohort, data were gathered from hospital trusts in the same area. Comparison was via an average cost per child treated and subsequent sensitivity analysis. Analysing records spanning one year, the average cost per child treated via advanced conscious sedation was £245.47. As some treatments fail (3.5% of cases attempted), and the technique is not deemed suitable for all patients (4-5%), DGA is still required and has been factored into this cost. DGA has an average cost per case treated of £359.91, 46.6% more expensive than advanced conscious sedation. These cost savings were robust to plausible variation in all parameters. The costs of advanced conscious sedation techniques, applied in a controlled primary care environment, are substantially lower than the equivalent costs of hospital-based DGA, informing the debate about the optimum way of managing this patient group.
Summary report of the Lightning and Static Electricity Committee
NASA Technical Reports Server (NTRS)
Plumer, J. A.
1979-01-01
Lightning protection technology as applied to aviation is presented, and the associated technology needs are identified. The areas of technical need include: (1) in-flight data on lightning electrical parameters; (2) a technology base and guidelines for protection of advanced systems and structures; (3) improved laboratory test techniques; (4) analysis techniques for predicting induced effects; (5) lightning strike incident data from general aviation; (6) lightning detection systems; (7) pilot reports of lightning strikes; and (8) better training in lightning awareness. The nature of each problem, timeliness, impact of solutions, degree of effort required, and the roles of government and industry in achieving solutions are discussed.
SAR calibration technology review
NASA Technical Reports Server (NTRS)
Walker, J. L.; Larson, R. W.
1981-01-01
Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.
Inexpensive but accurate driving circuits for quartz crystal microbalances
NASA Astrophysics Data System (ADS)
Bruschi, L.; Delfitto, G.; Mistura, G.
1999-01-01
The quartz crystal microbalance (QCM) is a common technique with a wide variety of applications in areas such as adsorption, catalysis, analytical chemistry and biochemistry, and more generally as a sensor in the investigation of viscoelastic films. In this article we describe some driving circuits for the quartz which we have built and tested in our laboratory. These can be assembled from standard, easily obtained components. Their performance, in some cases, is as good as that of the much more expensive frequency-modulation technique employed in very precise QCM measurements, which requires high-quality commercial radiofrequency generators and amplifiers.
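As background for the QCM's use as a mass sensor, the standard Sauerbrey relation (a textbook result, not derived in the article itself) links the resonant frequency shift to an added areal mass. A minimal sketch using commonly quoted quartz constants:

```python
import math

# Sauerbrey relation for a quartz crystal microbalance: frequency shift
# caused by a thin rigid film. This is the standard textbook relation,
# not a result taken from the article above.
RHO_Q = 2648.0    # density of quartz, kg/m^3
MU_Q = 2.947e10   # shear modulus of AT-cut quartz, Pa

def sauerbrey_shift(f0_hz, delta_mass_kg, area_m2):
    """Return the frequency shift (Hz) for mass delta_mass on area area_m2."""
    return -2.0 * f0_hz ** 2 * delta_mass_kg / (area_m2 * math.sqrt(RHO_Q * MU_Q))

# Example: a 5 MHz crystal with 1 cm^2 active area loaded with 100 ng
df = sauerbrey_shift(5e6, 100e-12, 1e-4)   # about -5.7 Hz
```

The sub-Hz-per-nanogram sensitivity implied here is why stable driving circuitry matters for precise QCM work.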
Torsion testing for general constitutive relations: Gilles Canova's Master's Thesis
NASA Astrophysics Data System (ADS)
Kocks, U. F.; Stout, M. G.
1999-09-01
Torsion testing is useful but cumbersome: useful as a technique to determine large-strain plastic behaviour in a plane-strain mode; cumbersome because the testing of a solid rod provides reliable data in only the simplest of circumstances, for example, when there is no strain hardening or rate sensitivity. The testing of short thin-walled tubes is widely regarded as the best current technique to determine general constitutive behaviour; a drawback is the requirement for large specimen blanks (say, 3 cm cube) and the complex machining procedure. Gilles Canova proposed an alternative - the testing of a series of solid rods of differing diameters, proving the principle by using 10 such rods for one test. We have undertaken a similar series of tests on just four rods plus one short tube for comparison. We found the results from the two types of torsion test in excellent agreement; however, it was not a critical test, inasmuch as the rate sensitivity of the pure copper used was too small. Had it been greater (as, for example, in aluminium and at higher temperature), the evaluation of perhaps six rods would have provided the constitutive response with regard to both hardening and rate sensitivity at the same time - which would require two short-tube tests (plus duplications). The main drawback of the multiple-rod test is that it requires considerable numerical effort in the evaluation. A closer integration with modelling of the constitutive behaviour would be helpful.
From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities
NASA Astrophysics Data System (ADS)
Kunjwal, Ravi; Spekkens, Robert W.
2018-05-01
The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It significantly extends previous techniques, which worked only for logical proofs (based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment), to the case of statistical proofs (based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics).
Laparoscopic cholecystectomy under segmental thoracic spinal anaesthesia: a feasibility study.
van Zundert, A A J; Stultiens, G; Jakimowicz, J J; Peek, D; van der Ham, W G J M; Korsten, H H M; Wildsmith, J A W
2007-05-01
Laparoscopic surgery is normally performed under general anaesthesia, but regional techniques have been found beneficial, usually in the management of patients with major medical problems. Encouraged by such experience, we performed a feasibility study of segmental spinal anaesthesia in healthy patients. Twenty ASA I or II patients undergoing elective laparoscopic cholecystectomy received a segmental (T10 injection) spinal anaesthetic using 1 ml of bupivacaine 5 mg ml-1 mixed with 0.5 ml of sufentanil 5 microg ml-1. Other drugs were only given (systemically) to manage patient anxiety, pain, nausea, hypotension, or pruritus during or after surgery. The patients were reviewed 3 days postoperatively by telephone. The spinal anaesthetic was performed easily in all patients, although one complained of paraesthesiae which responded to slight needle withdrawal. The block was effective for surgery in all 20 patients, six experiencing some discomfort which was readily treated with small doses of fentanyl, but none requiring conversion to general anaesthesia. Two patients required midazolam for anxiety and two ephedrine for hypotension. Recovery was uneventful and without sequelae, only three patients (all for surgical reasons) not being discharged home on the day of operation. This preliminary study has shown that segmental spinal anaesthesia can be used successfully and effectively for laparoscopic surgery in healthy patients. However, the use of an anaesthetic technique involving needle insertion into the vertebral canal above the level of termination of the spinal cord requires great caution and should be restricted in application until much larger numbers of patients have been studied.
Broadband photonic transport between waveguides by adiabatic elimination
NASA Astrophysics Data System (ADS)
Oukraou, Hassan; Coda, Virginie; Rangelov, Andon A.; Montemezzani, Germano
2018-02-01
We propose an adiabatic method for the robust transfer of light between the two outer waveguides in a three-waveguide directional coupler. Unlike the established technique inherited from stimulated Raman adiabatic passage (STIRAP), the method proposed here is symmetric with respect to an exchange of the left and right waveguides in the structure and permits the transfer in both directions. The technique uses the adiabatic elimination of the middle waveguide together with level crossing and adiabatic passage in an effective two-state system involving only the external waveguides. It requires a strong detuning between the outer and the middle waveguide and does not rely on the adiabatic transfer state (dark state) underlying the STIRAP process. The suggested technique is generalized to an array of N waveguides and verified by numerical beam propagation calculations.
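The adiabatic elimination underlying the scheme can be illustrated with a minimal coupled-mode calculation: when the middle waveguide is strongly detuned, the two outer guides exchange power through an effective coupling C_eff = C^2/delta. The sketch below assumes constant couplings for simplicity, whereas the method described above uses z-varying parameters.

```python
import numpy as np

# Coupled-mode sketch of adiabatic elimination in a three-waveguide coupler.
# Constant couplings C and a large middle-guide detuning delta are assumed
# for illustration; the paper's scheme uses z-dependent parameters.
C, delta = 1.0, 20.0
H = np.array([[0, C, 0],
              [C, delta, C],
              [0, C, 0]], dtype=complex)

def propagate(a0, z, steps=20000):
    """Integrate i da/dz = H a with a fixed-step 4th-order Runge-Kutta."""
    a = a0.astype(complex)
    dz = z / steps
    f = lambda v: -1j * (H @ v)
    for _ in range(steps):
        k1 = f(a); k2 = f(a + dz/2*k1); k3 = f(a + dz/2*k2); k4 = f(a + dz*k3)
        a = a + dz/6 * (k1 + 2*k2 + 2*k3 + k4)
    return a

# After eliminating guide 2, the outer guides couple with C_eff = C**2/delta,
# giving complete power transfer at z = pi/(2*C_eff).
z_transfer = np.pi * delta / (2 * C**2)
powers = np.abs(propagate(np.array([1.0, 0, 0]), z_transfer))**2
# powers[2] approaches 1 while powers[1] stays small throughout
```

The small residual population in the middle guide, of order (C/delta)^2, is the signature of the adiabatic-elimination regime.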
Orthogonal feeding techniques for tapered slot antennas
NASA Technical Reports Server (NTRS)
Lee, Richard Q.; Simons, Rainee N.
1998-01-01
For arrays in the "brick" configuration, there are electrical and mechanical advantages to feeding the antenna from a substrate perpendicular to the antenna substrate. Different techniques have been proposed for exciting patch antennas using such a feed structure. Recently, an aperture-coupled dielectric resonator antenna using a perpendicular feed substrate has been demonstrated to have very good power coupling efficiency. For a two-dimensional rectangular array with tapered slot antenna elements, a power-combining network on a perpendicular substrate is generally required to couple power to or from the array. In this paper, we describe two aperture-coupled techniques for coupling microwave power from a linearly tapered slot antenna (LTSA) to a microstrip feed on a perpendicular substrate. In addition, we present measured results for return losses and radiation patterns.
Sjöstrand, Henrik; Andersson Sundén, E; Conroy, S; Ericsson, G; Gatu Johnson, M; Giacomelli, L; Gorini, G; Hellesen, C; Hjalmarsson, A; Popovichev, S; Ronchi, E; Tardocchi, M; Weiszflog, M
2009-06-01
Burning plasma experiments such as ITER and DEMO require diagnostics capable of withstanding the harsh environment generated by the intense neutron flux and of maintaining stable operating conditions for times longer than present-day systems. For these reasons, advanced control and monitoring (CM) systems will be necessary for the reliable operation of diagnostics. This paper describes the CM system of the upgraded magnetic proton recoil neutron spectrometer installed at the Joint European Torus, focusing in particular on a technique for stabilizing the gain of the photomultipliers coupled to the neutron detectors. The results presented here show that this technique performs well over long time scales. The technique is of general interest for all diagnostics that employ scintillators coupled to photomultiplier tubes.
Flicker Detection, Measurement and Means of Mitigation: A Review
NASA Astrophysics Data System (ADS)
Virulkar, V. B.; Aware, M. V.
2014-04-01
The voltage fluctuations caused by rapid industrial load changes have been a major concern for supply utilities, regulatory agencies and customers. This paper gives a general review of how to examine and assess voltage flicker, the methods used to measure flicker due to rapidly changing loads, and the means for its mitigation. It discusses the effects of supply conditions, compensator response time and compensator capacity on flicker mitigation. A comparison between conventional mitigation techniques and state-of-the-art mitigation techniques is carried out. In many cases the state-of-the-art solutions provide higher performance than conventional mitigation techniques. However, the choice of the most suitable solution depends on the characteristics of the supply at the point of connection, the requirements of the load, and economics.
An assessment of PERT as a technique for schedule planning and control
NASA Technical Reports Server (NTRS)
Sibbers, C. W.
1982-01-01
The PERT technique, including the types of reports that can be computer-generated using the NASA/LaRC PPARS System, is described. An assessment is made of the effectiveness of PERT on various types of efforts as well as for specific purposes, namely, schedule planning, schedule analysis, schedule control, monitoring contractor schedule performance, and management reporting. This assessment is based primarily on the author's knowledge of the usage of PERT by NASA/LaRC personnel since the early 1960s. Both strengths and weaknesses of the technique for various applications are discussed. It is intended to serve as a reference guide for personnel performing project planning and control functions and for technical personnel whose responsibilities either include schedule planning and control or require a general knowledge of the subject.
Physical techniques for delivering microwave energy to tissues.
Hand, J. W.
1982-01-01
Some of the physical aspects of delivering microwave energy to tissues have been discussed. Effective penetration of a few cm may be achieved with external applicators, whilst small coaxial or cylindrical devices can induce localized heating in sites accessible to catheters or to direct invasion. To heat deep tissue sites in general, systems of greater complexity involving a number of applicators with particular phase relationships between them are required. The problems of thermometry in the presence of electromagnetic fields fall outside the scope of this article. Their solution, however, is no less important to the future of clinical hyperthermia than the development of heating techniques. Finally, it should be remembered that physiological parameters such as blood flow have appreciable effects in determining the efficacy of the physical techniques described above. PMID:6950781
Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.
Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana
2012-05-15
Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria were set for the likelihood formulation, the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters and similar model prediction intervals. For the ill-posed water quality model the differences between the results were much wider, and the paper details the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. the number of iterations required to generate the probability distribution of parameters), it was found that SCEM-UA and AMALGAM produce results more quickly than GLUE in terms of the required number of simulations. However, GLUE requires the least modelling skill and is easy to implement. All non-Bayesian methods have problems with the way they accept behavioural parameter sets: GLUE, SCEM-UA and AMALGAM use subjective acceptance thresholds, while MICA usually has problems with its assumption of normally distributed residuals. It is concluded that modellers should select the method most suitable for the system they are modelling (e.g. the complexity of the model's structure, including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study. Copyright © 2012 Elsevier Ltd. All rights reserved.
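Of the compared methods, GLUE is the simplest to sketch: uniform Monte Carlo sampling, an informal likelihood, and a subjective behavioural threshold. The toy linear "runoff" model below is purely illustrative and unrelated to the stormwater model used in the paper.

```python
import numpy as np

# Minimal GLUE sketch on a toy linear model y = a*x + b with synthetic data;
# the behavioural threshold (0.9) is subjective, as the abstract notes.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y_obs = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)   # synthetic observations

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood in GLUE."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

# 1. Monte Carlo sampling from uniform priors
a = rng.uniform(0, 4, 5000)
b = rng.uniform(-2, 4, 5000)
scores = np.array([nse(ai * x + bi, y_obs) for ai, bi in zip(a, b)])

# 2. Keep only "behavioural" parameter sets (subjective threshold)
behavioural = scores > 0.9
weights = scores[behavioural] / scores[behavioural].sum()

# 3. Likelihood-weighted summaries and prediction bounds
a_mean = np.sum(weights * a[behavioural])
preds = np.outer(a[behavioural], x) + b[behavioural][:, None]
lower, upper = np.percentile(preds, [5, 95], axis=0)   # 90% uncertainty band
```

The ease of this rejection-style scheme is the flip side of its weakness noted above: the acceptance threshold directly shapes the resulting uncertainty bounds.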
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
NASA Astrophysics Data System (ADS)
Assawaroongruengchot, Monchai
Perturbation theory is a technique used to estimate changes in performance functionals, such as linear reaction rate ratios and the eigenvalue, caused by small variations in reactor core compositions. Here the perturbation theory algorithm is developed for multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnected neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function obtained by the CP method is mathematically equivalent to the adjoint function obtained by the MOCC method. To speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented.
A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that the result can be used for the multigroup rebalance technique. Next we consider the application of perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have selected this parameter for optimization studies. We consider optimization and adjoint sensitivity techniques for the adjustment of CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficients at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC.
Sensitivity analyses of CVR and eigenvalue are included in the study. In addition the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR-BOC, CVR-EOC and keff-EOC adjustment of a CANDU lattice of which the burnup period is extended from 300 to 450 FPDs. The cases with the central pin containing either Dysprosium or Gadolinium in the natural Uranium are considered in our study. (Abstract shortened by UMI.)
[Current radionuclear methods in the diagnosis of regional myocardial circulation disorders].
Felix, R; Winkler, C
1977-01-29
Among nuclear medicine diagnostic procedures, a distinction can be made between non-invasive and invasive methods. The non-invasive methods serve either to image the still-viable myocardium ("cold spot" technique) or to directly visualize recently infarcted myocardial tissue ("hot spot" technique). These methods have the advantage of simple handling and good reproducibility. Side effects and risks are thus far unknown. Improvement of local resolution should be aimed at in the future and would greatly increase diagnostic and topographic certainty. The invasive procedures always require catheterization of the coronary arteries, which is why they can be performed only together with coronary arteriography. The xenon "wash-out" technique permits, with some restrictions, quantitative measurement of the regional flow rate. The "inflow technique" permits determination of perfusion distribution. The possibilities of the double-radionuclide scintigram are discussed. For measurement of activity distribution, stationary detectors are generally preferred. For the time-activity curves of the xenon "wash-out" technique, single detectors offer certain advantages.
Transparent electrode for optical switch
Goldhar, J.; Henesian, M.A.
1984-10-19
The invention relates generally to optical switches and techniques for applying a voltage to an electro-optical crystal, and more particularly, to transparent electrodes for an optical switch. System architectures for very large inertial confinement fusion (ICF) lasers require active optical elements with apertures on the order of one meter. Large aperture optical switches are needed for isolation of stages, switch-out from regenerative amplifier cavities and protection from target retroreflections.
Spot-shadowing optimization to mitigate damage growth in a high-energy-laser amplifier chain.
Bahk, Seung-Whan; Zuegel, Jonathan D; Fienup, James R; Widmayer, C Clay; Heebner, John
2008-12-10
A spot-shadowing technique to mitigate damage growth in a high-energy laser is studied. Its goal is to minimize the energy loss and undesirable hot spots in intermediate planes of the laser. A nonlinear optimization algorithm solves for the complex fields required to mitigate damage growth in the National Ignition Facility amplifier chain. The method is generally applicable to any large fusion laser.
Applicability of different onboard routing and processing techniques to mobile satellite systems
NASA Technical Reports Server (NTRS)
Craig, A. D.; Marston, P. C.; Bakken, P. M.; Vernucci, A.; Benedicto, J.
1993-01-01
The paper summarizes a study contract recently undertaken for ESA. The study compared the effectiveness of several processing architectures applied to multiple beam, geostationary global and European regional missions. The paper discusses architectures based on transparent SS-FDMA analog, transparent DSP and regenerative processing. Quantitative comparisons are presented and general conclusions are given with respect to suitability of the architectures to different mission requirements.
A sequential anesthesia technique for surgical repair of unilateral vocal fold paralysis.
Rosero, Eric B; Ozayar, Esra; Mau, Ted; Joshi, Girish P
2016-12-01
Thyroplasty with arytenoid adduction, a combined procedure for treatment of unilateral vocal fold paralysis, is typically performed under local anesthesia with sedation to allow for intraoperative voice assessment. However, the need for patient immobility and suppression of laryngeal responses to surgical manipulation can make sedation-analgesia challenging. We describe our first 26 consecutive cases undergoing thyroplasty and arytenoid adduction with a standardized technique consisting of a combination of general anesthesia with tracheal intubation followed by sedation-analgesia. Most patients (69 %) were women, with age of 53 ± 15 years (mean ± SD). Neck surgery was the cause of vocal fold paralysis in 50 % of patients. Initially, general anesthesia was maintained with desflurane and remifentanil with dexmedetomidine added just before tracheal extubation. During the sedation-analgesia phase, patients received infusions of remifentanil and dexmedetomidine. Duration of general anesthesia and sedation-analgesia phases was 162 ± 68.2 and 79 ± 18.3 min, respectively. Mean (SD) wake-up time was 8.0 ± 4.0 min after desflurane discontinuation. Extubation occurred without coughing, bucking, or agitation in 96 % of patients. All the patients were able to phonate appropriately and remained comfortable after emergence. This technique allowed improved surgical conditions with reduced patient discomfort and may be advantageous for other laryngeal and neck surgeries in which intraoperative patient feedback is required.
Survey of glaucoma surgical preferences and post-operative care in the United Kingdom.
Rodriguez-Una, Ignacio; Azuara-Blanco, Augusto; King, Anthony J
2017-04-01
To evaluate the spectrum of glaucoma surgery and the post-operative follow-up regimes undertaken among glaucoma specialists in the United Kingdom. National survey. Seventy-five glaucoma specialists (consultants and fellows). An eight-question survey was emailed to all glaucoma subspecialists members of the United Kingdom and Eire Glaucoma Society. Surgery undertaken, post-operative management, awareness of intervention tariff and handling of the follow-up burden generated through surgery. Almost all the participants (74/75: 99%) routinely performed trabeculectomy, 54 responders (72%) undertook tube surgery and Minimally Invasive Glaucoma Surgery (MIGS) was more frequently undertaken (33.0%) than non-penetrating surgery (23%). In general, for patients with advanced glaucoma requiring a low target intraocular pressure (IOP), the most frequent primary intervention was trabeculectomy (99%), followed by tubes (64%). Similarly, in patients with less advanced glaucoma requiring moderate target IOP, participants preferred trabeculectomy (99%), followed by MIGS (60%). By the first 6 months after the procedure, trabeculectomy and Baerveldt tube implant required a larger number of postoperative visits (9 and 7, respectively), than iStent® and non-penetrating deep sclerectomy (3 and 5, respectively). The majority of participants were not aware of the costs of their interventions. A wide variety of glaucoma surgery techniques are undertaken. Post-operative follow-up regimes are variable between techniques and for surgeons using the same technique. Trabeculectomy requires more follow-up than any other intervention. For patients requiring low IOP, trabeculectomy is the operation of choice for most surgeons. © 2016 Royal Australian and New Zealand College of Ophthalmologists.
Real time software tools and methodologies
NASA Technical Reports Server (NTRS)
Christofferson, M. J.
1981-01-01
Real-time systems are characterized by high-speed processing and throughput as well as asynchronous event processing requirements. These requirements give rise to particular implementations of parallel or pipelined multitasking structures, of intertask or interprocess communication mechanisms, and of message (buffer) routing or switching mechanisms. These mechanisms and structures, along with the data structure, describe the essential character of the system. These common structural elements and mechanisms are identified, and their implementations in the form of routines, tasks or macros - in other words, tools - are formalized. The tools developed support or make available the following: reentrant task creation, generalized message routing techniques, generalized task structures/task families, standardized intertask communication mechanisms, and pipeline and parallel processing architectures in a multitasking environment. Tools development raises some interesting prospects in the areas of software instrumentation and software portability. These issues are discussed following the description of the tools themselves.
Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D
2007-06-01
Diffuse optical tomography (DOT) involves the estimation of tissue optical properties from noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for DOT image reconstruction have a long history, there is less direct evidence on the optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which takes into account the variances and covariances among the individual data points and the optical properties in the image through a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered special cases of this more generalized LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement from GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known from the signal-to-noise properties of the instruments.
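For a linearized inverse problem, the GLS idea reduces to weighted normal equations in which separate weight matrices encode data and parameter variances. A generic sketch follows, with a random matrix standing in for the sensitivity (Jacobian) matrix; this is an assumption for illustration, not the DOT forward model of the paper.

```python
import numpy as np

# Generalized least-squares (GLS) sketch for a linearized inverse problem:
# minimize (y - J x)^T W_d (y - J x) + x^T W_x x, where W_d and W_x weight
# data and parameter variances. J here is a random stand-in for a Jacobian.
rng = np.random.default_rng(1)
J = rng.normal(size=(40, 10))
x_true = rng.normal(size=10)
noise_sd = 0.1 * np.ones(40)
y = J @ x_true + rng.normal(0.0, noise_sd)

W_d = np.diag(1.0 / noise_sd**2)   # inverse data covariance
W_x = 1e-2 * np.eye(10)            # parameter weighting (regularization)

# Normal equations: (J^T W_d J + W_x) x = J^T W_d y
x_gls = np.linalg.solve(J.T @ W_d @ J + W_x, J.T @ W_d @ y)
```

Ordinary and Tikhonov-regularized least squares are recovered as the special cases W_d proportional to the identity, which mirrors the paper's point that common LS variants are special cases of GLS.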
Local and general anesthesia in the laparoscopic preperitoneal hernia repair.
Frezza, E E; Ferzli, G
2000-01-01
The extraperitoneal laparoscopic approach (EXTRA) has been shown to be an effective and safe repair for primary (PIH), recurrent (RIH) and bilateral hernia (BIH). There is very little data examining the merits of laparoscopic repair for hernias under local anesthesia. In this paper, we compare EXTRA performed under both general and local anesthesia. This nonrandomized prospective study was performed selectively on a male population only. Patients with associated pulmonary disease and high risk for general surgery were selected. Patients with recurrence and previous abdominal operations were excluded to decrease confounding variables in the study. A Prolene mesh was used in all patients. Between May 1997 and September 1998, 92 male patients underwent the repair of 107 groin hernias using the EXTRA technique. The procedure was explained to them, and different anesthesia options were given. Fourteen of these repairs were performed under local anesthesia and 93 under general anesthesia. Of the 10 patients who underwent a repair under local anesthesia, there were 8 indirect, 5 direct and 1 pantaloon. The mean age was 53 years. In the general anesthesia group, the types of hernias repaired were 45 indirect, 30 direct and 11 pantaloon. The mean age was 45 years. The mean follow-up was 15 months. Each patient was sent home the same day. Two peritoneal tears were recorded in the first group. The operative time was longer in the local group (47 +/- 11 vs 18 +/- 3). None of the patients required conversion to an open technique or change of anesthesia. No recurrences were found in either group. The average times of return to work and regular activity were 3.5 +/- 1 and 3 +/- 1 days, respectively. There appears to be no significant difference in recurrence and complication rates when EXTRA is performed under local anesthesia as compared to general. Blunt dissection of the preperitoneal space does not trigger pain and does not require lidocaine injection.
The most painful area is the peritoneal reflection over the cord structure. The laparoscopic repair under local anesthesia represents an advantage in the repair of the inguinal hernia, particularly in the population where general anesthesia is contraindicated.
Polyenergetic known-component reconstruction without prior shape models
NASA Astrophysics Data System (ADS)
Zhang, C.; Zbijewski, W.; Zhang, X.; Xu, S.; Stayman, J. W.
2017-03-01
Purpose: Previous work has demonstrated that structural models of surgical tools and implants can be integrated into model-based CT reconstruction to greatly reduce metal artifacts and improve image quality. This work extends a polyenergetic formulation of known-component reconstruction (Poly-KCR) by removing the requirement that a physical model (e.g., CAD drawing) be known a priori, permitting much more widespread application. Methods: We adopt a single-threshold segmentation technique with the help of morphological structuring elements to build a shape model of metal components in a patient scan based on an initial filtered-backprojection (FBP) reconstruction. This shape model is used as an input to Poly-KCR, a formulation of known-component reconstruction that does not require prior knowledge of beam quality or component material composition. An investigation of performance as a function of segmentation threshold is performed in simulation studies, and qualitative comparisons to Poly-KCR with an a priori shape model are made using physical CBCT data of an implanted cadaver and in patient data from a prototype extremities scanner. Results: We find that model-free Poly-KCR (MF-Poly-KCR) provides much better image quality compared to conventional reconstruction techniques (e.g., FBP). Moreover, the performance closely approximates that of Poly-KCR with an a priori shape model. In simulation studies, we find that imaging performance generally follows segmentation accuracy with slight under- or over-estimation based on the shape of the implant. In both simulation and physical data studies we find that the proposed approach can remove most of the blooming and streak artifacts around the component, permitting visualization of the surrounding soft tissues. Conclusion: This work shows that it is possible to perform known-component reconstruction without prior knowledge of the known component.
In conjunction with the Poly-KCR technique that does not require knowledge of beam quality or material composition, very little needs to be known about the metal implant and system beforehand. These generalizations will allow more widespread application of KCR techniques in real patient studies where the information of surgical tools and implants is limited or not available.
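A minimal sketch of the single-threshold segmentation plus morphological dilation step, run on a toy 2D array rather than a real FBP volume; the threshold, pixel values, and 3x3 structuring element are illustrative assumptions, not the paper's settings:

```python
# Single-threshold segmentation of a toy 2D "reconstruction", followed by a
# one-pixel morphological dilation with a 3x3 structuring element.
def segment(image, threshold):
    return [[1 if v >= threshold else 0 for v in row] for row in image]

def dilate(mask):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # A pixel is set if any neighbor under the 3x3 element is set.
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc]:
                        out[r][c] = 1
    return out

# One very bright "metal" pixel in a soft-tissue-like background.
fbp = [[40, 50, 45],
       [55, 3000, 60],
       [42, 48, 52]]
mask = dilate(segment(fbp, 1000))
print(sum(map(sum, mask)))   # 9: the whole 3x3 neighborhood is flagged
```

The dilation pads the segmented component slightly, which mirrors the idea of building a conservative shape model around the thresholded metal voxels.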
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimensions grow, since adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary.
Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
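A minimal sketch of Latin Hypercube Sampling over a two-dimensional parameter space, illustrating the property exploited above: each dimension's range is covered by every stratum exactly once, and the sample count does not need to grow with the number of parameters. The bounds are hypothetical:

```python
import random

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sample: each dimension's range is split into
    n_samples strata and each stratum is used exactly once."""
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)               # random stratum-to-sample pairing
        width = (hi - lo) / n_samples
        for i, s in enumerate(strata):
            # Uniform draw within the assigned stratum.
            samples[i][d] = lo + (s + rng.random()) * width
    return samples

rng = random.Random(0)
pts = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0)], rng)
# Property check: projecting onto either axis hits every stratum once.
strata_x = sorted(int(p[0] * 5) for p in pts)
print(strata_x)   # [0, 1, 2, 3, 4]
```

Adding a third parameter would just add a `bounds` entry; the five samples still cover every stratum of every dimension, which is why LHS keeps the parallel filter count manageable.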
NASA Astrophysics Data System (ADS)
Khusainov, T. A.; Shalashov, A. G.; Gospodchikov, E. D.
2018-05-01
The field structure of quasi-optical wave beams tunneled through the evanescence region in the vicinity of the plasma cutoff in a nonuniform magnetoactive plasma is analyzed. This problem is traditionally associated with the process of linear transformation of ordinary and extraordinary waves. An approximate analytical solution is constructed for a rather general magnetic configuration applicable to spherical tokamaks, optimized stellarators, and other magnetic confinement systems with a constant plasma density on magnetic surfaces. A general technique for calculating the transformation coefficient of a finite-aperture wave beam is proposed, and the physical conditions required for the most efficient transformation are analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, P.
The majority of general-purpose low-temperature handheld radiation thermometers are severely affected by the size-of-source effect (SSE). Calibration of these instruments is pointless unless the SSE is accounted for in the calibration process. Traditional SSE measurement techniques, however, are costly and time consuming, and because the instruments are direct-reading in temperature, traditional SSE results are not easily interpretable, particularly by the general user. This paper describes a simplified method for measuring the SSE, suitable for second-tier calibration laboratories and requiring no additional equipment, and proposes a means of reporting SSE results on a calibration certificate that should be easily understood by the non-specialist user.
NASA Astrophysics Data System (ADS)
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
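The traditional approach described above (fix the exponential constants, then solve for the remaining constants by linear least squares) can be sketched for a one-term Prony series. The synthetic data are invented, and the crude grid scan over candidate time constants is only a stand-in for the report's optimization of all constants at once:

```python
import math

def prony_1term(t, e_inf, e1, tau):
    """One-term Prony series: E(t) = E_inf + E_1 * exp(-t / tau)."""
    return e_inf + e1 * math.exp(-t / tau)

def fit_linear(ts, ys, tau):
    # With tau fixed, E(t) is linear in (e_inf, e1): solve the 2x2
    # normal equations for the regressor x = exp(-t / tau).
    xs = [math.exp(-t / tau) for t in ts]
    n = len(ts)
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    e_inf = (sxx * sy - sx * sxy) / det
    e1 = (n * sxy - sx * sy) / det
    return e_inf, e1

def sse(ts, ys, tau):
    e_inf, e1 = fit_linear(ts, ys, tau)
    return sum((prony_1term(t, e_inf, e1, tau) - y) ** 2
               for t, y in zip(ts, ys))

# Synthetic relaxation data from a known one-term series.
ts = [0.1 * i for i in range(50)]
ys = [prony_1term(t, 2.0, 5.0, 0.8) for t in ts]

# Crude stand-in for the optimizer: scan candidate relaxation times.
best_tau = min((k / 10 for k in range(1, 31)), key=lambda tau: sse(ts, ys, tau))
print(best_tau)   # 0.8 recovers the true relaxation time
```

Replacing the grid scan with a proper optimizer over both the coefficients and the time constants is exactly the step the PRONY program automates.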
Hatipkarasulu, Yilmaz; Gill, James H
2004-04-01
The increasing number of companies providing internet services and auction tools helped popularize the online reverse auction trend for purchasing commodities and services in the last decade. As a result, a number of owners, both public and private, accepted online reverse auctions as the bidding technique for their construction projects. Owners, while trying to minimize their costs for construction projects, are also required to address their ethical responsibilities to their shareholders. In the case of online reverse auctions for construction projects, the ethical issues involved in the bidding technique reflect directly on the owner's ethical and social responsibilities to their shareholders. The goal of this paper is to identify the shareholder ethics and responsibilities in online reverse auctions for construction projects by analyzing the ethical issues for the parties involved in the process. The identification of the ethical issues and responsibilities requires a clear definition and understanding of professional ethics and the roles of the involved parties. In this paper, first, the concept of professional ethics and social responsibility is described in a general form. To illustrate the ethical issues and responsibilities, a sample case of bidding for a construction project using online reverse auction techniques is presented, in which the shareholders were actively involved in questioning the ethical issues. The issues involved in the bidding process and their reflection on the shareholder responsibilities are described and analyzed for each stage of the process. A brief discussion of the overall process is also included to address the general ethical issues involved in online reverse auctions.
Graff, Mario; Poli, Riccardo; Flores, Juan J
2013-01-01
Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes these three limitations. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform our previous performance models in all cases.
Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan
2004-11-19
A static headspace (HS) gas chromatographic method for quantitative determination of residual solvents in a drug substance has been developed according to European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for ethanol, tetrahydrofuran and toluene determination. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision and robustness were determined, and excellent results were obtained.
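A minimal sketch of the single-point standard-addition arithmetic behind such a quantitation scheme, assuming a linear detector response through the origin; the signal ratios and added concentration are invented, and the validated method combines standard addition with internal-standard correction and, typically, multiple addition levels:

```python
def standard_addition(signal_sample, signal_spiked, added_conc):
    """Single-point standard addition: assuming a linear response through
    the origin, the analyte concentration follows from the signal increase
    caused by a known added concentration (same final volume assumed)."""
    return added_conc * signal_sample / (signal_spiked - signal_sample)

# Hypothetical peak-area ratios (analyte / internal standard) for the
# sample alone and the sample spiked with 200 ppm of standard.
conc = standard_addition(0.50, 1.50, 200.0)
print(conc)   # 100.0 ppm
```

The internal standard cancels injection-to-injection variability in the headspace transfer, which is why the ratio, rather than the raw peak area, is used as the signal.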
Fracture control procedures for aircraft structural integrity
NASA Technical Reports Server (NTRS)
Wood, H. A.
1972-01-01
The application of fracture mechanics in the design, analysis, and qualification of aircraft structural systems is reviewed. Recent service experiences are cited. Current trends in high-strength materials application are reviewed with particular emphasis on the manner in which fracture toughness and structural efficiency may affect the material selection process. General fracture control procedures are reviewed in depth with specific reference to the impact of inspectability, structural arrangement, and material on proposed analysis requirements for safe crack growth. The relative impact on allowable design stress is indicated by example. Design criteria, material, and analysis requirements for implementation of fracture control procedures are reviewed together with limitations in currently available data and techniques. A summary of items which require further study and attention is presented.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
NASA Astrophysics Data System (ADS)
Aldeen Yousra, S.; Mazleena, Salleh
2018-05-01
Recent advancements in Information and Communication Technologies (ICT) have increased the demand for cloud services that share users' private data. Data from various organizations are a vital information source for analysis and research. Generally, this sensitive or private information involves medical, census, voter registration, social network, and customer-service data. A primary concern of cloud service providers in data publishing is to hide the sensitive information of individuals. One of the cloud services that fulfills these confidentiality concerns is Privacy Preserving Data Mining (PPDM). The PPDM service in Cloud Computing (CC) enables data publishing with minimized distortion and absolute privacy. In this method, datasets are anonymized via generalization to accomplish the privacy requirements. However, the well-known privacy preserving data mining technique called K-anonymity suffers from several limitations. To surmount those shortcomings, I propose a new heuristic anonymization framework for preserving the privacy of sensitive datasets when publishing on the cloud. The advantages of the K-anonymity, L-diversity and (α, k)-anonymity methods for efficient information utilization and privacy protection are emphasized. Experimental results revealed that the developed technique outperforms K-anonymity, L-diversity, and (α, k)-anonymity.
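A minimal sketch of the K-anonymity notion the framework builds on, with anonymization via generalization; the records, quasi-identifiers, and generalization rules below are illustrative assumptions, not the paper's algorithm:

```python
from collections import Counter

def generalize_age(age, width=10):
    """Generalize an exact age into a bucket like '20-29'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(records, quasi_ids, k):
    """Every combination of quasi-identifier values must occur >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

raw = [
    {"age": 23, "zip": "10001", "disease": "flu"},
    {"age": 27, "zip": "10002", "disease": "cold"},
    {"age": 31, "zip": "10001", "disease": "flu"},
    {"age": 36, "zip": "10002", "disease": "asthma"},
]
print(is_k_anonymous(raw, ["age", "zip"], 2))   # False: every row is unique

# Generalization: bucket ages and truncate ZIP codes.
anon = [{"age": generalize_age(r["age"]), "zip": r["zip"][:3] + "**",
         "disease": r["disease"]} for r in raw]
print(is_k_anonymous(anon, ["age", "zip"], 2))  # True
```

L-diversity and (α, k)-anonymity add further constraints on the sensitive attribute (here `disease`) within each group, which is what the proposed framework combines.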
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method (a frequentist technique), the Generalized Linear Model (GLM), the Nonparametric Bootstrap (another frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
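Of the techniques compared, the Nonparametric Bootstrap is the simplest to sketch: resample the data with replacement, recompute the statistic, and read a confidence interval off the percentiles. The threshold values below are invented, and NASA's actual analysis differs in detail:

```python
import random

def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap: resample with replacement, recompute the
    statistic, and take percentiles of the resulting distribution."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_resamples))
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical group-average threshold levels (dB) from a panel of subjects.
thresholds = [41.2, 39.8, 43.1, 40.5, 42.0, 38.9, 41.7, 40.1]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(thresholds, mean)
print(lo <= mean(thresholds) <= hi)   # True
```

The cost noted in the abstract is visible here: every confidence interval requires thousands of recomputations of the statistic, which is why the Bootstrap takes the longest to calculate.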
Macías, M T; Navarro, T; Lavara, A; Robredo, L M; Sierra, I; Lopez, M A
2003-01-01
The radioisotope techniques used in molecular and cellular biology involve external and internal irradiation risk. The personal dosemeter may be a reasonable indicator for external irradiation. However, it is necessary to control the possible internal contamination associated with the development of these techniques. The aim of this project is to analyse the most usual techniques and to establish programmes of internal monitoring for specific radionuclides (32P, 35S, 14C, 3H, 125I and 131I). To elaborate these programmes it was necessary to analyse the radioisotope techniques. Two models (NRPB and IAEA) have been applied to the more significant techniques, according to the physical and chemical nature of the radionuclides, their potential importance in occupational exposure and the possible injury to the genetic material of the cell. The results allowed the identification of the techniques with possible risk of internal contamination. It was necessary to identify groups of workers that require individual monitoring. The risk groups have been established among the professionals exposed, according to different parameters: the general characteristics of the receptor, the radionuclides used (the same user can work with one, two or three radionuclides at the same time) and the results of the models applied. A control group was also established. The study of possible intakes in these groups has been made by urinalysis and whole-body counting. The theoretical results are coherent with the experimental results, and have allowed a proposal for guidance on individual monitoring. Basically, the document shows: (1) the analysis of the radioisotopic techniques, taking into account the special containment equipment; (2) the establishment of the need for individual monitoring; and (3) the required frequency of measurements in a routine programme.
Minimum Colour Differences Required To Recognise Small Objects On A Colour CRT
NASA Astrophysics Data System (ADS)
Phillips, Peter L.
1985-05-01
Data is required to assist in the assessment, evaluation and optimisation of colour and other displays for both military and general use. A general aim is to develop a mathematical technique to aid optimisation and reduce the amount of expensive hardware development and trials necessary when introducing new displays. The present standards and methods available for evaluating colour differences are known not to apply to the perception of typical objects on a display. Data is required for irregular objects viewed at small angular subtense (<1°) and relating to the recognition of form rather than colour matching. Therefore laboratory experiments have been carried out using a computer-controlled CRT to measure the threshold colour difference that an observer requires between object and background so that he can discriminate a variety of similar objects. Measurements are included for a variety of background and object colourings. The results are presented in the CIE colorimetric system, similar to current standards used by the display engineer. Apart from the characteristic small-field tritanopia, the results show that larger colour differences are required for object recognition than those assumed from conventional colour discrimination data. A simple relationship to account for object size and background colour is suggested to aid visual performance assessments and modelling.
Koompapong, Khuanchai; Sutthikornchai, Chantira; Sukthana, Yowalark
2009-01-01
Cryptosporidium can cause gastrointestinal diseases worldwide, consequently posing public health problems and economic burden. Effective techniques for detecting contaminated oocysts in water are important to prevent and control the contamination. The immunomagnetic separation (IMS) method has been widely employed recently due to its efficiency, but it is costly. The sucrose floatation technique is generally used for separating organisms by using their different specific gravity. It is effective and cheap but time consuming, as well as requiring highly skilled personnel. Water turbidity and parasite load in the water sample are additional factors affecting the recovery rate of those 2 methods. We compared the efficiency of the IMS and sucrose floatation methods to recover spiked Cryptosporidium oocysts in water samples of various turbidity. Cryptosporidium oocyst concentrations of 1, 10^1, 10^2, and 10^3 per 10 µl were spiked into 3 sets of 10 ml-water turbidity (5, 50, and 500 NTU). The recovery rates of the 2 methods were not different. Oocyst load at a concentration < 10^2 per 10 ml yielded unreliable results. Water turbidity at 500 NTU decreased the recovery rate of both techniques. The combination of sucrose floatation and immunofluorescence assay techniques (SF-FA) showed a higher recovery rate than IMS and immunofluorescence assay (IMS-FA). We used this SF-FA to detect Cryptosporidium and Giardia in river water samples and found 9 and 19 out of 30 (30% and 63.3%) positive, respectively. Our results favored the sucrose floatation technique enhanced with immunofluorescence assay for detecting contaminated protozoa in water samples in general laboratories and in the real practical setting. PMID:19967082
SFG synthesis of general high-order all-pass and all-pole current transfer functions using CFTAs.
Tangsrirat, Worapong
2014-01-01
An approach of using the signal flow graph (SFG) technique to synthesize general high-order all-pass and all-pole current transfer functions with current follower transconductance amplifiers (CFTAs) and grounded capacitors is presented. For general nth-order systems, the realized all-pass structure contains at most n + 1 CFTAs and n grounded capacitors, while the all-pole lowpass circuit requires only n CFTAs and n grounded capacitors. The resulting circuits obtained from the synthesis procedure are resistor-less structures and especially suitable for integration. They also exhibit low-input and high-output impedances and convenient electronic controllability through the gm-value of the CFTA. Simulation results using real transistor model parameters ALA400 are also included to confirm the theory.
NASA Astrophysics Data System (ADS)
masini, nicola; Lasaponara, Rosa
2013-04-01
This paper deals with the use of a VHR satellite multitemporal data set to extract cultural landscape changes at the Roman site of Grumentum. Grumentum is an ancient town 50 km south of Potenza, located near the Roman road Via Herculea, which connected Venusia, in the northeast of Basilicata, with Heraclea on the Ionian coast. The first settlement dates back to the 6th century BC; it was resettled by the Romans in the 3rd century BC. Its urban fabric, which evidences a long history from the Republican age to late Antiquity (3rd century BC to 5th century AD), is composed of the typical urban pattern of cardi and decumani. Its excavated ruins include a large amphitheatre, a theatre, the thermae, the Forum, and some temples. Many techniques are available today to capture and record differences between two or more images. In this paper we focus on and apply the two main approaches: (i) unsupervised and (ii) supervised change detection methods. Unsupervised change detection methods are generally based on transforming the two multispectral images into a single-band or multiband image that is further analyzed to identify changes. They typically involve three basic steps: (i) preprocessing; (ii) a pixel-by-pixel comparison; and (iii) identification of changes according to their magnitude and direction (positive/negative). The separation between changed and unchanged classes is then obtained from the magnitude of the resulting spectral change vectors by means of empirical or theoretically well-founded approaches. Supervised change detection methods are generally based on supervised classification, performed on the single dates or on the map obtained as the difference of two dates, and require a suitable training set for the learning process of the classifiers; these algorithms therefore require preliminary knowledge: (i) to generate representative parameters for each class of interest, and (ii) to carry out the training stage. Advantages and disadvantages of the supervised and unsupervised approaches are discussed. Finally, the satellite multitemporal dataset was integrated with aerial photos from historical archives in order to expand the time window of the investigation and capture landscape changes that occurred from the Agrarian Reform, in the 1950s, up to today.
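The unsupervised pipeline described above (image differencing, change-vector magnitude, empirical thresholding) can be sketched in a few lines; the arrays, the planted change, and the threshold rule below are illustrative, not the study's data or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-registered 3-band "images" (rows x cols x bands); image 2 contains
# a small patch whose spectral signature has changed.
img1 = rng.normal(100.0, 2.0, size=(50, 50, 3))
img2 = img1 + rng.normal(0.0, 2.0, size=img1.shape)   # unchanged background
img2[10:15, 20:25, :] += 40.0                          # planted change

# Spectral change vectors and their magnitude (change vector analysis)
delta = img2 - img1
magnitude = np.sqrt((delta ** 2).sum(axis=2))

# Empirical threshold: mean + 3 standard deviations of the magnitude image
threshold = magnitude.mean() + 3.0 * magnitude.std()
change_mask = magnitude > threshold
```

A supervised variant would instead train a classifier on labeled changed/unchanged samples and apply it to the difference image.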
NASA Astrophysics Data System (ADS)
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for the seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. These findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in fluid pressure diffusion.
Savas, Jeannie F; Litwack, Robert; Davis, Kevin; Miller, Thomas A
2004-11-01
It is known that smokers and patients with chronic obstructive pulmonary disease (COPD) experience a higher rate of pulmonary-related complications following abdominal surgery. The impact of anesthetic technique (regional [RA] versus general [GA] versus a combination of both) on the complication rate has not been established. This study examined the outcomes of abdominal surgery performed using RA (epidural or continuous spinal) as the sole anesthetic technique in patients with severe pulmonary impairment (SPI). We reviewed a series of 8 general surgery cases performed using RA alone (T4-T6 sensory level) in patients with SPI, as evidenced by a forced expiratory volume in 1 second (FEV1) less than 50% of predicted and/or a home oxygen requirement. One patient also received postoperative epidural analgesia. FEV1 ranged from 0.3 to 1.84 L; 3 patients required home oxygen therapy, and 5 of the 8 were American Society of Anesthesiology (ASA) class 4. Operations included segmental colectomy (n = 2), open cholecystectomy (n = 1), incisional herniorrhaphy (n = 1), and laparoscopic herniorrhaphy (n = 4). Intraoperative conditions were adequate with RA alone for successful completion of the procedure in all cases. All patients recovered uneventfully except for 1 who developed postoperative pneumonia that resolved with standard therapy. Length of stay was less than 24 hours for 5 of 8 patients. Mortality was 0%. Abdominal surgery can be safely performed using RA alone in selected high-risk patients, making this option an attractive alternative to GA for those with severe pulmonary impairment.
Nelson, Travis
2013-01-01
Early childhood caries presents unique treatment challenges that often require advanced behavior management techniques, such as general anesthesia or procedural sedation. In some cases, use of these pharmacologic adjuncts is undesirable or not possible. The interim therapeutic restoration is a treatment method that, while sometimes employed in such cases, can often produce unsatisfactory results in primary anterior teeth, usually because of insufficient bulk of material and lack of retention. The purpose of this report was to describe a simple alternative technique (resin-modified glass ionomer strip crowns) that may be employed to deliver esthetic anterior restorations to marginally cooperative children in the dental clinic setting, and to report on two cases in which it was successfully used.
Department of Pediatric Dentistry, University of Washington, Seattle, Wash., USA. tmnelson@uw.edu
Multiobjective optimization techniques for structural design
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems: specifically, the parameter optimization of a cantilever beam with a tip mass and of a three-degree-of-freedom vibration isolation system, and the trajectory optimization of a cantilever beam. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It was observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions, with a proper balance of the various objective functions, in all the cases.
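Of the methods listed, the global criterion method is the easiest to sketch on a toy problem (our example, not one of the paper's structures): each objective is first minimized alone to obtain the ideal point, then the design minimizing the distance from that point is taken as the compromise solution.

```python
import numpy as np

# Toy two-objective design problem with conflicting objectives:
#   f1(x) = x**2        (e.g., a weight-like measure)
#   f2(x) = (x - 2)**2  (e.g., a deflection-like measure)
# The global criterion method minimizes the distance of (f1, f2) from the
# ideal point (f1*, f2*) -- here the plain L2 distance, since both ideal
# values are zero.
x = np.linspace(0.0, 2.0, 2001)
f1 = x ** 2
f2 = (x - 2.0) ** 2

f1_star = f1.min()          # 0, attained at x = 0
f2_star = f2.min()          # 0, attained at x = 2

criterion = np.sqrt((f1 - f1_star) ** 2 + (f2 - f2_star) ** 2)
x_opt = x[np.argmin(criterion)]   # compromise design; x = 1 by symmetry
```

The two single-objective optima pull the design toward x = 0 and x = 2; the global criterion balances them at x = 1.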
An implicit-iterative solution of the heat conduction equation with a radiation boundary condition
NASA Technical Reports Server (NTRS)
Williams, S. D.; Curry, D. M.
1977-01-01
For the problem of predicting one-dimensional heat transfer between conducting and radiating mediums by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes: an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary condition. The results of these methods in predicting the surface temperature of a space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique kept the surface temperature bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
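The linearized-boundary formulation can be sketched as follows (our illustrative sketch, not the authors' code; material properties and step sizes are arbitrary). The surface node's T^4 radiation term is expanded about the previous step's temperature so the system stays linear at each step:

```python
import numpy as np

# 1-D slab: node 0 radiates to a 300 K environment, node N-1 is held at 1000 K.
N, dx, dt = 21, 0.01, 10.0          # nodes, m, s   (illustrative values)
k, rho_c = 1.0, 1.0e6               # W/m-K, J/m^3-K
eps_sigma = 0.8 * 5.67e-8           # emissivity * Stefan-Boltzmann constant
T_env = 300.0
T = np.full(N, 1000.0)

r = k * dt / (rho_c * dx ** 2)      # grid Fourier number per step

for _ in range(50):
    A = np.zeros((N, N))
    b = T.copy()
    # Interior nodes: backward-Euler conduction
    for i in range(1, N - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
    # Surface node: half-cell energy balance with radiation linearized
    # about the old temperature:  T^4 ~ 4*T_old^3*T - 3*T_old^4
    c = rho_c * dx / (2.0 * dt)
    A[0, 0] = c + k / dx + 4.0 * eps_sigma * T[0] ** 3
    A[0, 1] = -k / dx
    b[0] = c * T[0] + eps_sigma * (3.0 * T[0] ** 4 + T_env ** 4)
    # Back face: fixed temperature
    A[-1, -1] = 1.0
    b[-1] = 1000.0
    T = np.linalg.solve(A, b)
```

An iterative variant would re-linearize (or re-evaluate) the T^4 term within each step until the surface temperature converges, at the cost of extra solves.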
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honorio, J.; Goldstein, R.
We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, imperfect registration, and subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
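The majority vote idea can be sketched on synthetic data (an illustrative sketch; the data, sizes, and per-feature thresholds are ours, and the paper's threshold-split region selection is not reproduced). Each feature casts an independent vote via a trivial midpoint-threshold classifier, and the votes are pooled:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for per-condition fMRI features: independent
# features, with class 1 shifted by +1 on every feature.
n_train, n_test, n_feat = 40, 40, 20
X_train = rng.normal(0, 1, (n_train, n_feat))
y_train = np.repeat([0, 1], n_train // 2)
X_train[y_train == 1] += 1.0
X_test = rng.normal(0, 1, (n_test, n_feat))
y_test = np.repeat([0, 1], n_test // 2)
X_test[y_test == 1] += 1.0

# One trivial classifier per feature: threshold at the midpoint of the
# two class means, assuming feature independence.
mu0 = X_train[y_train == 0].mean(axis=0)
mu1 = X_train[y_train == 1].mean(axis=0)
thresh = (mu0 + mu1) / 2.0
sign = np.where(mu1 > mu0, 1, -1)        # direction of each feature's vote

# Majority vote across the per-feature decisions
votes = (sign * (X_test - thresh) > 0).astype(int)   # 1 votes for class 1
y_pred = (votes.mean(axis=1) > 0.5).astype(int)
accuracy = (y_pred == y_test).mean()
```

Each individual vote is weak, but pooling many roughly independent weak votes yields a much more accurate group decision, which is the statistical rationale behind the simple design.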
Smartphone Magnification Attachment: Microscope or Magnifying Glass
NASA Astrophysics Data System (ADS)
Hergemöller, Timo; Laumann, Daniel
2017-09-01
Today smartphones and tablets do not merely pervade our daily life, but also play a major role in STEM education in general, and in experimental investigations in particular. Enabling teachers and students to make use of these new techniques in physics lessons requires supplying capable and affordable applications. Our article presents the improvement of a low-cost technique turning smartphones into powerful magnifying glasses or microscopes. Adding only a 3D-printed clip attached to the smartphone's camera and inserting a small glass bead in this clip enables smartphones to take pictures with up to 780x magnification (see Fig. 1). In addition, the construction of the smartphone attachments helps to explain and examine the differences between magnifying glasses and microscopes, and shows that the widespread term "smartphone microscope" for this technique is inaccurate from a physics educational perspective.
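The optics behind the bead attachment can be estimated from thick-lens theory; the formulas and numbers below are our own illustrative estimate, not values taken from the article. For a glass ball lens of refractive index n and diameter D, the effective focal length and the conventional simple-magnifier magnification are

```latex
f \;=\; \frac{n\,D}{4\,(n-1)}, \qquad
M \;\approx\; \frac{250\,\mathrm{mm}}{f}
\quad \text{(referred to the 250 mm near point).}
```

With assumed values n = 1.5 and D = 1 mm this gives f = 0.75 mm and M ≈ 330; sub-millimeter beads push M into the high hundreds, the regime of the 780x quoted above. The end-to-end magnification of the smartphone system additionally depends on the camera optics and on how the sensor image is scaled to the display, so a single "magnification" number should be read with care.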
Research and development of metals for medical devices based on clinical needs
Hanawa, Takao
2012-01-01
The current research and development of metallic materials used for medicine and dentistry is reviewed. First, the general properties required of metals used in medical devices are summarized, followed by the needs for the development of α + β type Ti alloys with large elongation and β type Ti alloys with a low Young's modulus. In addition, nickel-free Ni–Ti alloys and austenitic stainless steels are described. As new topics, we review metals that are bioabsorbable and compatible with magnetic resonance imaging. Surface treatment and modification techniques to improve biofunctions and biocompatibility are categorized, and the related problems are presented at the end of this review. The metal surface may be biofunctionalized by various techniques, such as dry and wet processes. These techniques make it possible to apply metals to scaffolds in tissue engineering. PMID:27877526
Data handling and analysis for the 1971 corn blight watch experiment
NASA Technical Reports Server (NTRS)
Anuta, P. E.; Phillips, T. L.
1973-01-01
The overall corn blight watch experiment data flow is described and the organization of the LARS/Purdue data center is discussed. Data analysis techniques are reviewed in general, and the use of statistical multispectral pattern recognition methods for automatic computer analysis of aircraft scanner data is described. Some of the results obtained are presented, along with the implications of the experiment for the future data communication requirements of earth resources survey systems.
The New ROSIE Reference Manual and User’s Guide
1987-06-01
control structures found in most symbolic languages. Features such as rulesets and the pattern matcher blend with the naturalness of ROSIE's English-like... tasks and does not embody any particular problem-solving techniques or paradigms. Because of its "general-purpose" flavor, it is less structured and... structure. Some operations required special arguments; others performed actions that were considered expedient in a programming language. As the number of
Teaching severely multihandicapped students to put on their own hearing aids.
Tucker, D J; Berry, G W
1980-01-01
Two experiments were conducted with six severely multihandicapped students with hearing impairments to: (a) train the six students to put on their own hearing aids independently, and (b) provide an empirical evaluation of a comprehensive instructional program for putting on a hearing aid by assessing acquisition, maintenance, and generalization of that skill across environments. All six students acquired the skill rapidly, with two students requiring remedial training on one step of the program. Because for two of the original three students the newly learned skill failed initially to generalize to other environments, a second experiment was initiated to assess generalization across environments as well as to replicate the efficiency of the acquisition program. When a variation of the multiple-probe baseline technique was used, the behavior of three additional students generalized to other settings without direct training in those settings. PMID:6444931
Factorization in large-scale many-body calculations
Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.
2013-08-07
One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a 'double factorization', which speeds up basis generation and the set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs on both serial and parallel machines, and discuss the savings in memory due to factorization.
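A toy illustration of factorizing a basis by an additive quantum number (our sketch, not BIGSTICK): proton and neutron Slater determinants are enumerated separately, grouped by their total angular-momentum projection M, and combined pairs with the target total M are generated on the fly instead of being stored as a full product basis.

```python
from itertools import combinations
from collections import defaultdict

# Single-particle m-projections (doubled, so they stay integers) for a
# toy j = 3/2 shell; two protons and two neutrons occupy these states.
sp_m = [-3, -1, 1, 3]

def sector(n_particles):
    """Slater determinants of one species, grouped by total M."""
    groups = defaultdict(list)
    for occ in combinations(range(len(sp_m)), n_particles):
        M = sum(sp_m[i] for i in occ)
        groups[M].append(occ)
    return groups

protons = sector(2)
neutrons = sector(2)
M_target = 0

# Factorized generation: only (Mp, Mn) blocks with Mp + Mn = M_target
# are ever paired; nothing outside those blocks is touched.
factorized = [(p, n)
              for Mp, plist in protons.items()
              for p in plist
              for n in neutrons.get(M_target - Mp, [])]

# Brute-force check over the full product space.
brute = [(p, n)
         for plist in protons.values() for p in plist
         for nlist in neutrons.values() for n in nlist
         if sum(sp_m[i] for i in p) + sum(sp_m[j] for j in n) == M_target]

assert len(factorized) == len(brute)
```

In a real code the same block structure lets matrix elements be reconstructed per (Mp, Mn) block rather than stored, which is where the memory savings come from.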
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
[Anaesthesia for patients with obstructive airway diseases].
Groeben, H; Keller, V; Silvanus, M T
2014-01-01
Obstructive airway diseases such as asthma and chronic obstructive pulmonary disease (COPD) have a high prevalence and are among the four most frequent causes of death. Their course can be significantly influenced by the choice of anesthetic techniques and anesthetic agents. Basically, the severity of the COPD and the degree of bronchial hyperreactivity determine the perioperative anesthetic risk. This risk has to be assessed by a thorough preoperative evaluation, which provides the rationale for deciding on the adequate anesthetic technique. In particular, airway instrumentation can cause severe reflex bronchoconstriction. The use of regional anesthesia alone or in combination with general anesthesia can help to avoid airway irritation and leads to reduced postoperative complications. Prophylactic antiobstructive treatment, volatile anesthetics, propofol, opioids, and an adequate choice of muscle relaxants minimize the anesthetic risk when general anesthesia is required. If, despite all precautions, intraoperative bronchospasm occurs, deepening of anesthesia, repeated administration of beta2-adrenergic agents and parasympatholytics, and a single systemic dose of corticosteroids represent the main treatment options.
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility, and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the added advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure for the accuracy of the estimator.
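As an illustration of the recursive least-squares idea (a generic sketch, not the paper's exact algorithm), the frequency of a noisy complex sinusoid can be acquired by recursively fitting a line to its unwrapped phase, updating the estimate one observation at a time with no batch storage:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy complex sinusoid whose frequency we want to acquire.
fs, f_true, n = 1000.0, 37.0, 400          # sample rate (Hz), freq (Hz), samples
t = np.arange(n) / fs
x = np.exp(1j * (2 * np.pi * f_true * t + 0.7))
x += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Unwrapped phase is (ideally) linear in time: phi(t) = phi0 + 2*pi*f*t.
phi = np.unwrap(np.angle(x))

# Recursive least squares on the model phi = theta[0] + theta[1]*t,
# refining the estimate sample by sample.
theta = np.zeros(2)
P = np.eye(2) * 1e6                        # large initial covariance
for ti, yi in zip(t, phi):
    h = np.array([1.0, ti])                # regressor for this sample
    k = P @ h / (1.0 + h @ P @ h)          # gain vector
    theta += k * (yi - h @ theta)          # innovation update
    P -= np.outer(k, h @ P)                # covariance update

f_est = theta[1] / (2 * np.pi)
```

The covariance matrix P shrinks as observations accumulate, and its trace serves as the kind of running confidence measure the abstract mentions.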
A Review of Endoscopic Simulation: Current Evidence on Simulators and Curricula.
King, Neil; Kunac, Anastasia; Merchant, Aziz M
2016-01-01
Upper and lower endoscopy is an important tool that is being utilized more frequently by general surgeons. Training in therapeutic endoscopic techniques has become a mandatory requirement for general surgery residency programs in the United States. The Fundamentals of Endoscopic Surgery program has been developed to train and assess competency in these advanced techniques. Simulation has been shown to increase the skill and accelerate the learning curve of trainees in other surgical disciplines. Several types of endoscopy simulators are commercially available: mechanical trainers, animal-based models, and virtual reality or computer-based simulators all have their benefits and limitations, but all have been shown to improve trainees' endoscopic skills. Endoscopic simulators will play a critical role as part of a comprehensive curriculum designed to train the next generation of surgeons. We reviewed recent literature related to the various types of endoscopic simulators and their use in an educational curriculum, and discuss the relevant findings. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Ultrasound in regional anaesthesia.
Griffin, J; Nicholls, B
2010-04-01
Ultrasound guidance is rapidly becoming the gold standard for regional anaesthesia. There is an ever-growing weight of evidence, matched with improving technology, showing that the use of ultrasound has significant benefits over conventional techniques, such as nerve stimulation and loss of resistance. The improved safety and efficacy that ultrasound brings to regional anaesthesia will help promote its use and realise the benefits that regional anaesthesia has over general anaesthesia, such as decreased morbidity and mortality, superior postoperative analgesia, cost-effectiveness, fewer postoperative complications and an improved postoperative course. In this review we consider the evidence behind the improved safety and efficacy of ultrasound-guided regional anaesthesia, before discussing its use in pain medicine, paediatrics and the facilitation of neuraxial blockade. The Achilles' heel of ultrasound-guided regional anaesthesia is that anaesthetists are far more familiar with providing general anaesthesia, which in most cases requires skills that are achieved faster and more reliably. To this end, we go on to provide practical advice on ultrasound-guided techniques and the introduction of ultrasound into a department.
High efficiency solar cell processing
NASA Technical Reports Server (NTRS)
Ho, F.; Iles, P. A.
1985-01-01
At the time of writing, cells made by several groups are approaching 19% efficiency. General aspects of the processing required for such cells are discussed. Most processing used for high efficiency cells is derived from space-cell or concentrator cell technology, and recent advances have been obtained from improved techniques rather than from a better understanding of the limiting mechanisms. Theory and modeling are fairly well developed, and adequate to guide further asymptotic increases in the performance of near-conventional cells. There are several competitive cell designs with promise of higher performance (>20%), but for these designs further improvements are required. The available cell processing technology to fabricate high efficiency cells is examined.
Structural analyses for the modification and verification of the Viking aeroshell
NASA Technical Reports Server (NTRS)
Stephens, W. B.; Anderson, M. S.
1976-01-01
The Viking aeroshell is an extremely lightweight flexible shell structure that has undergone thorough buckling analyses in the course of its development. The analytical tools and modeling technique required to reveal the structural behavior are presented. Significant results are given which illustrate the complex failure modes not usually observed in simple models and analyses. Both shell-of-revolution analysis for the pressure loads and thermal loads during entry and a general shell analysis for concentrated tank loads during launch were used. In many cases fixes or alterations to the structure were required, and the role of the analytical results in determining these modifications is indicated.
Benarous, X; Legrand, C; Consoli, S M
2014-05-01
Many situations in everyday medical practice, especially in chronic diseases, require patients to be mobilized for health behavior decisions: daily intake of an antihypertensive drug, undergoing mammography for cancer screening, or adopting new dietary habits in diabetes. The ability to initiate a health behavior depends on several parameters. Some are related to the patient, his personality, and his perception of illness and treatment; others rely directly on the physician, his attitude, and his communication style during the visit, independently of the patient's level of resistance to change. Motivational interviewing (MI) is a communication technique, first developed for patients presenting a substance abuse disorder, that explores their ambivalence, overcomes their resistance, and strengthens their motivation for better self-care. Its general principles and basic techniques can be applied by every practitioner and deserve to be better known, given that the scientific literature provides evidence for generalizing it to a variety of medical conditions, in structured patient education programs as well as in usual follow-up, for which time is generally restricted. This article provides an overview of recent MI applications and argues for its diffusion in everyday medical practice. Copyright © 2013. Published by Elsevier SAS.
Field Tests of the Magnetotelluric Method to Detect Gas Hydrates, Mallik, Mackenzie Delta, Canada
NASA Astrophysics Data System (ADS)
Craven, J. A.; Roberts, B.; Bellefleur, G.; Spratt, J.; Wright, F.; Dallimore, S. R.
2008-12-01
The magnetotelluric method is not generally utilized at extreme latitudes, due primarily to difficulties in making the good electrical contact with the ground required to measure the electric field. As such, the magnetotelluric technique has not previously been investigated for the direct detection of gas hydrates in onshore permafrost environments. We present the results of preliminary field tests at Mallik, Northwest Territories, Canada, which demonstrate that good quality magnetotelluric data can be obtained in this environment using specialized electrodes and buffer amplifiers similar to those utilized by Wannamaker et al. (2004). This result suggests that subsurface images from larger magnetotelluric surveys will usefully complement other techniques for detecting, quantifying and characterizing gas hydrates.
Moros, J; Lorenzo, J A; Laserna, J J
2011-07-01
In general, any standoff sensor for the effective detection of explosives must meet two basic requirements: first, the capacity to detect the response generated from only a small amount of material located several meters away (high sensitivity), and second, the ability to provide easily distinguishable responses for different materials (high specificity). Raman spectroscopy and laser-induced breakdown spectroscopy (LIBS) are two analytical techniques which share similar instrumentation and, at the same time, generate complementary data. These factors have recently been taken into account in the design of sensors for the detection of explosives, and research on the proper integration of the two techniques has been ongoing for some time. A priori, the different operational conditions required by the two techniques oblige the acquisition of each sensor's response through sequential analysis, making it necessary to first define the proper hierarchy of actuation. However, such an approach does not guarantee that the Raman and LIBS responses obtained relate to each other. Nonetheless, the possible advantages arising from the integration of molecular and elemental spectroscopic information come with an obvious underlying requirement: simultaneous data acquisition. In the present paper, the strong and weak points of Raman spectroscopy and LIBS for solving explosives detection problems, in terms of selectivity, sensitivity, and throughput, are critically examined, discussed, and compared in order to assess the options for fusing the responses of the two sensing technologies.
Iron physiological requirements in Chinese adults assessed by the stable isotope labeling technique.
Cai, Jie; Ren, Tongxiang; Zhang, Yuhui; Wang, Zhilin; Gou, Lingyan; Huang, Zhengwu; Wang, Jun; Piao, Jianhua; Yang, Xiaoguang; Yang, Lichen
2018-01-01
Iron is an essential trace mineral in the human body, yet studies of its physiological requirement are very limited, especially in China, and most have been performed with the radioisotope tracer technique, which is harmful to health. This study aimed to obtain, for the first time, the iron physiological requirement of Chinese adults assessed by the stable isotope labeling technique. Forty-four eligible young healthy Chinese adults were randomly recruited from the Bethune Military Medical College (Shijiazhuang, Hebei, China) between January 2010 and March 2011, and 19 subjects were included in the final data analysis. After an adaptive diet and observation period, subjects received 58Fe intravenously. A baseline venous blood sample and general information were collected on day 0. Venous blood samples were also collected on days 14, 30, 60, 100, 120, 150, 240, 330, 425, 515, 605, 767, and 1155. The blood samples were acid digested in a microwave digestion system and then analyzed by MC-ICP-MS and atomic absorption spectroscopy to determine the abundance of Fe isotopes and the total iron concentration, respectively. The circulation rate (the proportion of blood iron to whole-body iron) could be calculated from the intake amount, the background content, and the peak isotope content. Once the abundance changed stably, the iron physiological requirement could be calculated from the iron loss over a period of time. The abundance of 58Fe reached its peak on day 14 and changed stably from day 425. The average circulation rate was 84%, with no significant difference between the 2 genders. The mean iron requirement in females was 1101.68 μg/d, or 20.69 μg/kg·d adjusted by body weight. For males, the mean iron requirement was 959.9 μg/d, or 14.04 μg/kg·d adjusted by body weight.
Our study has obtained data on the iron physiological requirements of Chinese adults using the stable isotope labeling technique, which could provide a basis for adjusting the iron DRIs of the Chinese population in the future. The trial was registered at the Chinese Clinical Trial Registry (No: ChiCTR-TRC-09000581).
Computer assisted audit techniques for UNIX (UNIX-CAATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polk, W.T.
1991-12-31
Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access to systems and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as the efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine the security of passwords, file systems, and network access. In addition, a tool was developed to examine the efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found that improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work, leveraging currently available tools along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
[Rotator cuff repair: single- vs double-row. Clinical and biomechanical results].
Baums, M H; Kostuj, T; Klinger, H-M; Papalia, R
2016-02-01
The goal of rotator cuff repair is high initial mechanical stability as a requirement for adequate biological healing of the tendon-to-bone complex. Notwithstanding the significant increase in publications on rotator cuff repair, controversies remain regarding surgical technique. The aim of this work is to present an overview of recently published biomechanical and clinical results of rotator cuff repair using single- and double-row techniques. The review is based on a selective literature search of PubMed, Embase, and the Cochrane Database on the clinical and biomechanical results of single- and double-row repair. In general, neither the biomechanical nor the clinical evidence supports recommending a double-row concept for the treatment of every rotator cuff tear. Only tears of more than 3 cm seem to benefit, with better results in both imaging and clinical outcome studies compared with single-row techniques. Despite a significant increase in publications on the surgical treatment of rotator cuff tears in recent years, clinical results in the literature have not improved significantly. Clear criteria and algorithms from which the optimal treatment of this entity can be derived are still lacking. Because of cost-effectiveness considerations and the currently vague evidence, double-row techniques cannot be generally recommended for the repair of all rotator cuff tears.
Komal
2018-05-01
Power consumption is increasing day by day. To fulfill the requirement for failure-free power, planning and implementation of an effective and reliable power management system are essential. The phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems, and its reliable performance assures a failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for controllability and observability of power systems utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique has been proposed for this purpose. In GFLT, system components' uncertain failure and repair rates are fuzzified using fuzzy numbers with different shapes, such as triangular, normal, Cauchy, sharp gamma, and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute some important reliability indices. Furthermore, in this study, ranking of critical components of the system using the RAM-Index and sensitivity analysis have also been performed. The developed technique may help improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
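The alpha-cut arithmetic at the core of the lambda-tau method can be sketched for triangular fuzzy failure rates. The numbers below are hypothetical, and only the series-system rule (component failure rates add) is shown:

```python
def tri_alpha_cut(a, m, b, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, m, b), 0 <= alpha <= 1."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def series_failure_rate(components, alpha):
    """Interval-valued failure rate of a series system: component rates add."""
    cuts = [tri_alpha_cut(a, m, b, alpha) for a, m, b in components]
    return (sum(lo for lo, _ in cuts), sum(hi for _, hi in cuts))
```

Repeating this over a grid of alpha values rebuilds the membership function of the system failure rate, from which reliability indices such as MTBF follow.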
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards, JBIG and JPEG-LS; a wavelet-based technique, SPIHT; the general file compression techniques ZIP and BZIP2; our own 2D-LZ technique; and a simple list-of-rectangles representation, RECT. Layouts rasterized both to black-and-white pixels and to 32-level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
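Since the proposed decoder-writer chip centers on LZ77 decompression, a minimal sketch of LZ77 decoding is shown below. The token format (offset, length, literal) is a common textbook variant, not necessarily the one used in the paper:

```python
def lz77_decode(tokens):
    """Decode (offset, length, literal) LZ77 tokens into bytes.

    offset == 0 means no back-reference; literal may be None for a pure copy.
    Copies run byte by byte so overlapping matches (offset < length) work.
    """
    out = bytearray()
    for offset, length, literal in tokens:
        if offset:
            start = len(out) - offset
            for i in range(length):
                out.append(out[start + i])
        if literal is not None:
            out.append(literal)
    return bytes(out)
```

Decoding needs only a sliding window of already-written output, which is why the hardware cost is dominated by the window buffer size rather than by arithmetic.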
Formal Requirements-Based Programming for Complex Systems
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis
2005-01-01
Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.
NASA Technical Reports Server (NTRS)
Locci, Ivan E.; Noebe, Ronald D.
1989-01-01
Advanced composite processing techniques for fiber reinforced metal matrix composites require the flexibility to meet several widespread objectives. The development of uniquely desired matrix microstructures and uniformly arrayed fiber spacing with sufficient bonding between fiber and matrix to transmit load between them without degradation to the fiber or matrix are the minimum requirements necessary of any fabrication process. For most applications these criteria can be met by fabricating composite monotapes which are then consolidated into composite panels or more complicated components such as fiber reinforced turbine blades. Regardless of the end component, composite monotapes are the building blocks from which near net shape composite structures can be formed. The most common methods for forming composite monotapes are the powder cloth, foil/fiber, plasma spray, and arc spray processes. These practices, however, employ rapid solidification techniques in processing of the composite matrix phase. Consequently, rapid solidification processes play a vital and yet generally overlooked role in composite fabrication. The future potential of rapid solidification processing is discussed.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequencies; operating at multiple voltages involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
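The DVFS trade-off can be sketched with the standard dynamic-power model P = C·V²·f. The capacitance and voltage/frequency pairs below are illustrative values, not taken from the paper:

```python
def task_energy(cycles, capacitance, voltage, frequency):
    """Dynamic energy and runtime of a task under DVFS.

    P = C * V^2 * f and t = cycles / f, so E = C * V^2 * cycles:
    lowering V saves energy quadratically, lowering f only stretches time.
    """
    runtime = cycles / frequency
    energy = capacitance * voltage**2 * frequency * runtime
    return energy, runtime
```

A scheduler can therefore trade a longer makespan for quadratic energy savings when QoS deadlines permit, which is exactly the compromise the multi-objective search explores.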
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
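The generating-function idea can be illustrated on the simplest branching process, the Yule (pure-birth) process, whose probability generating function is known in closed form; transition probabilities are the pgf's coefficients, recovered by evaluating it at roots of unity and applying a discrete Fourier transform. This sketch omits the paper's compressed-sensing acceleration:

```python
import numpy as np

def yule_transition_probs(lam, t, kmax):
    """P(X(t) = k | X(0) = 1) for a Yule process with birth rate lam.

    pgf: G(s, t) = s*p / (1 - s*(1 - p)), with p = exp(-lam*t).
    Evaluate G at the n-th roots of unity, then invert with an FFT.
    """
    p = np.exp(-lam * t)
    n = max(64, 1 << (kmax + 1).bit_length())   # padding limits aliasing
    s = np.exp(2j * np.pi * np.arange(n) / n)
    G = s * p / (1 - s * (1 - p))
    coeffs = np.fft.fft(G).real / n             # c_k = (1/n) sum_j G_j e^{-2pi i jk/n}
    return coeffs[:kmax + 1]
```

For the Yule process the answer is known exactly (a geometric distribution), which makes it a convenient check of the inversion step.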
On the energy integral for first post-Newtonian approximation
NASA Astrophysics Data System (ADS)
O'Leary, Joseph; Hill, James M.; Bennett, James C.
2018-07-01
The post-Newtonian approximation for general relativity is widely adopted by the geodesy and astronomy communities. It has been successfully exploited for the inclusion of relativistic effects in practically all geodetic applications and techniques such as satellite/lunar laser ranging and very long baseline interferometry. Presently, the accuracy levels demanded by geodetic techniques require that reference frames, planetary and satellite orbits, and signal propagation be treated within the post-Newtonian regime. For arbitrary scalar W and vector gravitational potentials W^j (j=1,2,3), we present a novel derivation of the energy associated with a test particle in the post-Newtonian regime. The integral so obtained appears not to have been given previously in the literature and is deduced through algebraic manipulation on seeking a Jacobi-like integral associated with the standard post-Newtonian equations of motion. The new integral is independently verified through a variational formulation using the post-Newtonian metric components and is subsequently verified by numerical integration of the post-Newtonian equations of motion.
Kračun, Stjepan Krešimir; Fangel, Jonatan Ulrik; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Vidal-Melgosa, Silvia; Willats, William George Tycho
2017-01-01
Cell walls are an important feature of plant cells and a major component of the plant glycome. They have both structural and physiological functions and are critical for plant growth and development. The diversity and complexity of these structures demand advanced high-throughput techniques to answer questions about their structure, functions and roles in both fundamental and applied scientific fields. Microarray technology provides both the high-throughput and the feasibility aspects required to meet that demand. In this chapter, some of the most recent microarray-based techniques relating to plant cell walls are described together with an overview of related contemporary techniques applied to carbohydrate microarrays and their general potential in glycoscience. A detailed experimental procedure for high-throughput mapping of plant cell wall glycans using the comprehensive microarray polymer profiling (CoMPP) technique is included in the chapter and provides a good example of both the robust and high-throughput nature of microarrays as well as their applicability to plant glycomics.
McLeod, B C
1991-05-01
Therapeutic apheresis is a generic term that refers to removal of abnormal blood cells and plasma constituents. The terms "plasmapheresis," "leukapheresis," and "erythrocytapheresis" describe the specific blood element that is removed. Apheresis therapies can be performed in the ICU to manage a number of neurologic, hematologic, and autoimmune disorders, including myasthenia gravis, Guillain-Barré syndrome, sickle-cell disease, and Goodpasture's syndrome. Apheresis procedures generally require two points of contact with the circulation--one for blood withdrawal and one for return; the withdrawal site should sustain a flow rate of at least 50 mL/min. Although apheresis is generally quite safe, hemodynamic instability, hypocalcemia, and dilutional coagulopathy can occur.
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the concepts are described and numerically demonstrated for incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
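The "no direct solution" idea, driving GMRES purely with matrix-vector products as element-by-element schemes do, can be sketched with SciPy's LinearOperator. The tridiagonal operator below is a hypothetical stand-in for an assembly-free finite element system:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 100
diag = np.linspace(2.0, 4.0, n)

def apply_A(v):
    """Apply A = tridiag(-1, diag, -1) without ever assembling the matrix."""
    out = diag * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=apply_A)
b = np.ones(n)
x, info = gmres(A, b, atol=1e-12)
```

Because GMRES only ever asks for products A·v, the element-level contributions can be accumulated on the fly, which is where the memory savings come from.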
Increasing mathematical problem-solving performance through relaxation training
NASA Astrophysics Data System (ADS)
Sharp, Conni; Coltharp, Hazel; Hurford, David; Cole, Amykay
2000-04-01
Two intact classes of 30 undergraduate students enrolled in the same general education mathematics course were each administered the IPSP Mathematics Problem Solving Test and the Mathematics Anxiety Rating Scale at the beginning and end of the semester. Both groups experienced the same syllabus, lectures, course requirements, and assessment techniques; however, one group received relaxation training during an initial class meeting and during the first 5 to 7 minutes of each subsequent class. The group which had received relaxation training had significantly lower mathematics anxiety and significantly higher mathematics performance at the end of the course. The results suggest that relaxation training may be a useful tool for treating anxiety in undergraduate general education mathematics students.
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
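The frequency-domain MACE design referenced above has the closed form h = D⁻¹X(X⁺D⁻¹X)⁻¹u. A small numerical sketch, with random stand-in training images, is:

```python
import numpy as np

def mace_filter(images, u):
    """MACE filter in the frequency domain: h = D^-1 X (X^H D^-1 X)^-1 u.

    X stacks the 2-D FFTs of the training images as columns; D is the
    diagonal average power spectrum. The constraint X^H h = u pins the
    correlation-origin values while minimizing average correlation energy.
    """
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)
    D = np.mean(np.abs(X) ** 2, axis=1)
    Dinv_X = X / D[:, None]
    S = X.conj().T @ Dinv_X                      # X^H D^-1 X
    h = Dinv_X @ np.linalg.solve(S, u.astype(complex))
    return h                                     # flat frequency-domain filter
```

The defining property, exact origin peaks for the training set, can be checked directly from the constraint.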
Structure-Based Characterization of Multiprotein Complexes
Wiederstein, Markus; Gruber, Markus; Frank, Karl; Melo, Francisco; Sippl, Manfred J.
2014-01-01
Multiprotein complexes govern virtually all cellular processes. Their 3D structures provide important clues to their biological roles, especially through structural correlations among protein molecules and complexes. The detection of such correlations generally requires comprehensive searches in databases of known protein structures by means of appropriate structure-matching techniques. Here, we present a high-speed structure search engine capable of instantly matching large protein oligomers against the complete and up-to-date database of biologically functional assemblies of protein molecules. We use this tool to reveal unseen structural correlations on the level of protein quaternary structure and demonstrate its general usefulness for efficiently exploring complex structural relationships among known protein assemblies. PMID:24954616
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
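For small problems the GCV function that the authors accelerate can be evaluated directly (the Lanczos/Gauss-quadrature machinery only approximates the trace term cheaply). A direct Tikhonov sketch with a made-up test problem:

```python
import numpy as np

def gcv_score(A, y, lam):
    """GCV(lam) = n * ||(I - H)y||^2 / tr(I - H)^2, H = A (A^T A + lam I)^-1 A^T."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

def pick_lambda(A, y, grid):
    """Pick the regularization parameter minimizing GCV over a grid."""
    return min(grid, key=lambda lam: gcv_score(A, y, lam))
```

The direct evaluation costs a dense solve and a trace per candidate lambda, which is exactly the cost the Lanczos/quadrature approximations avoid at scale.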
Lower lip reconstruction with nasolabial flap--going back to basics.
Coutinho, Inês; Ramos, Leonor; Gameiro, Ana Rita; Vieira, Ricardo; Figueiredo, Américo
2015-01-01
Squamous cell carcinoma of the lower lip is frequent, and radical excision sometimes leads to complex defects. Many lip repair techniques are aggressive, requiring general anesthesia and a prolonged postoperative period. The nasolabial flap, while a common flap for the repair of other facial defects, is an under-recognized option for reconstruction of the lower lip. We describe the use of a nasolabial flap for the repair of a large defect of the lower lip in a ninety-year-old man, with good functional results and an acceptable cosmetic outcome. We believe the nasolabial flap is a good alternative for intermediate-to-large lower lip defects in patients with impaired general condition.
Microgravity isolation system design: A modern control synthesis framework
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.
1994-01-01
Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.
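A reduced sketch of the synthesis step: in the unweighted special case, H2 state-feedback design reduces to solving an algebraic Riccati equation. The 1-DOF mass-spring-damper plant and the weights below are hypothetical, standing in for the paper's umbilical-coupled isolation models with frequency weighting omitted:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 1-DOF isolation plant: states [position, velocity], force input.
m, k, c = 1.0, 4.0, 0.2
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# Stand-ins for the state/control weights (no frequency weighting here).
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal state feedback u = -K x
```

The extended synthesis in the paper augments this plant with weighting filters and disturbance models before solving the same kind of Riccati problem.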
Data handling and analysis for the 1971 corn blight watch experiment.
NASA Technical Reports Server (NTRS)
Anuta, P. E.; Phillips, T. L.; Landgrebe, D. A.
1972-01-01
Review of the data handling and analysis methods used in the near-operational test of remote sensing systems provided by the 1971 corn blight watch experiment. The general data analysis techniques and, particularly, the statistical multispectral pattern recognition methods for automatic computer analysis of aircraft scanner data are described. Some of the results obtained are examined, and the implications of the experiment for future data communication requirements of earth resource survey systems are discussed.
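The statistical multispectral pattern recognition mentioned above is classically a per-pixel Gaussian maximum-likelihood classifier. A compact sketch with simulated band values follows; the class statistics are invented for illustration:

```python
import numpy as np

def fit_classes(labeled_spectra):
    """Estimate (mean, covariance) per class from arrays of pixel spectra."""
    return [(X.mean(axis=0), np.cov(X, rowvar=False)) for X in labeled_spectra]

def classify_pixel(x, params):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mu, cov in params:
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        scores.append(-0.5 * (logdet + d @ np.linalg.solve(cov, d)))
    return int(np.argmax(scores))
```

Applied pixel by pixel over scanner bands, this is the kind of automatic computer analysis used to map blight severity classes.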
Biosafety principles and practices for the veterinary diagnostic laboratory.
Kozlovac, Joseph; Schmitt, Beverly
2015-01-01
Good biosafety and biocontainment programs and practices are critical components of the successful operation of any veterinary diagnostic laboratory. In this chapter we provide information and guidance on critical biosafety management program elements, facility requirements, protective equipment, and procedures necessary to ensure that the laboratory worker and the environment are adequately protected in the challenging work environment of the veterinary diagnostic laboratory in general and provide specific guidance for those laboratories employing molecular diagnostic techniques.
Understanding the Role of Context in the Interpretation of Complex Battlespace Intelligence
2006-01-01
(Level 2 fusion) There remains a significant need for higher levels of information fusion, such as those required for generic situation awareness ... information in a set of reports, 2) general background knowledge (e.g., doctrine, techniques, practices), plus 4) known situation-specific information (e.g., ...)
Jia, Shaoyang; Pennington, M. R.
2016-12-12
In this paper, we derive the gauge covariance requirement imposed on the QED fermion-photon three-point function within the framework of a spectral representation for fermion propagators. When satisfied, this requirement ensures that solutions to the fermion propagator Schwinger-Dyson equation (SDE) in any covariant gauge with arbitrary numbers of spacetime dimensions are consistent with the Landau-Khalatnikov-Fradkin transformation (LKFT). The general result has been verified in the special cases of three and four dimensions. Additionally, we present the condition that ensures the vacuum polarization is independent of the gauge parameter. Finally, as an illustration, we show how the gauge technique dimensionally regularized in four dimensions does not satisfy the covariance requirement.
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Berkowitz, Brian
2015-03-01
We develop continuous-time random walk (CTRW) equations governing the transport of two species that annihilate when in proximity to one another. In comparison with catalytic or spontaneous transformation reactions that have been previously considered in concert with CTRW, both species have spatially variant concentrations that require consideration. We develop two distinct formulations. The first treats transport and reaction microscopically, potentially capturing behavior at sharp fronts, but at the cost of being strongly nonlinear. The second, mesoscopic, formulation relies on a separation-of-scales technique we develop to separate microscopic-scale reaction and upscaled transport. This simplifies the governing equations and allows treatment of more general reaction dynamics, but requires stronger smoothness assumptions of the solution. The mesoscopic formulation is easily tractable using an existing solution from the literature (we also provide an alternative derivation), and the generalized master equation (GME) for particles undergoing A +B →0 reactions is presented. We show that this GME simplifies, under appropriate circumstances, to both the GME for the unreactive CTRW and to the advection-dispersion-reaction equation. An additional major contribution of this work is on the numerical side: to corroborate our development, we develop an indirect particle-tracking-partial-integro-differential-equation (PIDE) hybrid verification technique which could be applicable widely in reactive anomalous transport. Numerical simulations support the mesoscopic analysis.
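A toy version of the particle-tracking verification idea: walkers of species A and B on a periodic 1-D domain annihilate pairwise when they come within a reaction radius. For brevity this uses fixed time steps rather than CTRW waiting times, and all parameters are invented:

```python
import numpy as np

def simulate_ab_annihilation(n0=200, L=100.0, radius=0.5, steps=300, seed=3):
    """Track A and B walker counts under pairwise A + B -> 0 annihilation."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0, L, n0)
    B = rng.uniform(0, L, n0)
    counts = []
    for _ in range(steps):
        A = (A + rng.normal(0, 1.0, A.size)) % L
        B = (B + rng.normal(0, 1.0, B.size)) % L
        if A.size and B.size:
            d = np.abs(A[:, None] - B[None, :])
            d = np.minimum(d, L - d)                 # periodic distance
            kill_a, used_b = [], set()
            for i in range(A.size):                  # greedy nearest-B pairing
                j = int(d[i].argmin())
                if d[i, j] < radius and j not in used_b:
                    kill_a.append(i)
                    used_b.add(j)
            A = np.delete(A, kill_a)
            B = np.delete(B, sorted(used_b))
        counts.append((A.size, B.size))
    return counts
```

A full CTRW verification would draw heavy-tailed waiting times per particle and compare the ensemble concentrations against the mesoscopic GME solution.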
Molecular-Based Optical Measurement Techniques for Transition and Turbulence in High-Speed Flow
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Danehy, Paul M.; Cutler, Andrew D.
2013-01-01
High-speed laminar-to-turbulent transition and turbulence affect the control of flight vehicles, the heat transfer rate to a flight vehicle's surface, the material selected to protect such vehicles from high heating loads, the ultimate weight of a flight vehicle due to the presence of thermal protection systems, the efficiency of fuel-air mixing processes in high-speed combustion applications, etc. Gaining a fundamental understanding of the physical mechanisms involved in the transition process will lead to the development of predictive capabilities that can identify transition location and its impact on parameters like surface heating. Currently, there is no general theory that can completely describe the transition-to-turbulence process. However, transition research has led to the identification of the predominant pathways by which this process occurs. For a truly physics-based model of transition to be developed, the individual stages in the paths leading to the onset of fully turbulent flow must be well understood. This requires that each pathway be computationally modeled and experimentally characterized and validated. This may also lead to the discovery of new physical pathways. This document is intended to describe molecular based measurement techniques that have been developed, addressing the needs of the high-speed transition-to-turbulence and high-speed turbulence research fields. In particular, we focus on techniques that have either been used to study high speed transition and turbulence or techniques that show promise for studying these flows. This review is not exhaustive. In addition to the probe-based techniques described in the previous paragraph, several other classes of measurement techniques that are, or could be, used to study high speed transition and turbulence are excluded from this manuscript. 
For example, surface measurement techniques such as pressure and temperature paint, phosphor thermography, skin friction measurements and photogrammetry (for model attitude and deformation measurement) are excluded to limit the scope of this report. Other physical probes such as heat flux gauges, total temperature probes are also excluded. We further exclude measurement techniques that require particle seeding though particle based methods may still be useful in many high speed flow applications. This manuscript details some of the more widely used molecular-based measurement techniques for studying transition and turbulence: laser-induced fluorescence (LIF), Rayleigh and Raman Scattering and coherent anti-Stokes Raman scattering (CARS). These techniques are emphasized, in part, because of the prior experience of the authors. Additional molecular based techniques are described, albeit in less detail. Where possible, an effort is made to compare the relative advantages and disadvantages of the various measurement techniques, although these comparisons can be subjective views of the authors. Finally, the manuscript concludes by evaluating the different measurement techniques in view of the precision requirements described in this chapter. Additional requirements and considerations are discussed to assist with choosing an optical measurement technique for a given application.
Qu, Yongzhi; He, David; Yoon, Jae; Van Hecke, Brandon; Bechhoefer, Eric; Zhu, Junda
2014-01-01
In recent years, acoustic emission (AE) sensors and AE-based techniques have been developed and tested for gearbox fault diagnosis. In general, AE-based techniques require much higher sampling rates than vibration analysis-based techniques for gearbox fault diagnosis. Therefore, it is questionable whether an AE-based technique would give a better or at least the same performance as the vibration analysis-based techniques using the same sampling rate. To answer the question, this paper presents a comparative study for gearbox tooth damage level diagnostics using AE and vibration measurements, the first known attempt to compare the gearbox fault diagnostic performance of AE- and vibration analysis-based approaches using the same sampling rate. Partial tooth cut faults are seeded in a gearbox test rig and experimentally tested in a laboratory. Results have shown that the AE-based approach has the potential to differentiate gear tooth damage levels in comparison with the vibration-based approach. While vibration signals are easily affected by mechanical resonance, the AE signals show more stable performance. PMID:24424467
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, I.G.; Joseph, D.; Lal, M.
1995-10-01
A wide range of ferroalloys are used to facilitate the addition of different alloying elements to molten steel. High-carbon ferroalloys are produced on a tonnage basis by carbothermic smelting in an electric furnace, and an aluminothermic route is generally adopted for small-scale production of low-carbon varieties. The physicochemical principles of carbothermy and aluminothermy have been well documented in the literature. However, limited technical data are reported on the production of individual low-carbon ferroalloys from their selected resources. The authors demonstrate here the application of an energy dispersive X-ray fluorescence (EDXRF) technique in meeting the analytical requirements of a thermite smelting campaign, carried out with the aim of preparing low-carbon, low-nitrogen Fe-Ni, Fe-Cr, and Fe-Ti alloys from indigenously available nickel-bearing spent catalyst, mineral chromite, and ilmenite/rutile, respectively. They chose the EDXRF technique to meet the analytical requirements because of its capability to analyze samples of ores, minerals, metals, and alloys in different forms, such as powder, sponge, as-smelted, or as-cast, and to obtain rapid multielement analyses with ease. Rapid analyses of thermite feed and product by this technique have aided in the appropriate alteration of the charge constituents to obtain optimum charge consumption.
Improving Focal Photostimulation of Cortical Neurons with Pre-derived Wavefront Correction
Choy, Julian M. C.; Sané, Sharmila S.; Lee, Woei M.; Stricker, Christian; Bachor, Hans A.; Daria, Vincent R.
2017-01-01
Recent progress in neuroscience to image and investigate brain function has been made possible by impressive developments in optogenetic and opto-molecular tools. Such research requires advances in optical techniques for the delivery of light through brain tissue with high spatial resolution. The tissue causes distortions to the wavefront of the incoming light which broadens the focus and consequently reduces the intensity and degrades the resolution. Such effects are detrimental in techniques requiring focal stimulation. Adaptive wavefront correction has been demonstrated to compensate for these distortions. However, iterative derivation of the corrective wavefront introduces time constraints that limit its applicability to probe living cells. Here, we demonstrate that we can pre-determine and generalize a small set of Zernike modes to correct for aberrations of the light propagating through specific brain regions. A priori identification of a corrective wavefront is a direct and fast technique that improves the quality of the focus without the need for iterative adaptive wavefront correction. We verify our technique by measuring the efficiency of two-photon photolysis of caged neurotransmitters along the dendrites of a whole-cell patched neuron. Our results show that encoding the selected Zernike modes on the excitation light can improve light propagation through brain slices of rats as observed by the neuron's evoked excitatory post-synaptic potential in response to localized focal uncaging at the spines of the neuron's dendrites. PMID:28507508
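The pre-derived correction described above amounts to summing a small, fixed set of weighted Zernike polynomials into a single phase map. A minimal Python sketch of that composition, using hypothetical coefficients for two low-order modes (defocus and astigmatism) rather than any measured brain-tissue calibration:

```python
import numpy as np

def zernike_defocus(rho, theta):
    # Z(2, 0): defocus, sqrt(3) * (2*rho^2 - 1) in Noll normalization
    return np.sqrt(3) * (2 * rho**2 - 1)

def zernike_astigmatism(rho, theta):
    # Z(2, 2): vertical astigmatism, sqrt(6) * rho^2 * cos(2*theta)
    return np.sqrt(6) * rho**2 * np.cos(2 * theta)

def corrective_phase(mode_coeffs, n=64):
    """Sum a pre-derived set of Zernike modes into one phase map
    defined on the unit pupil (zero outside it)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    phase = np.zeros((n, n))
    for mode, coeff in mode_coeffs:
        phase += coeff * mode(rho, theta)
    phase[rho > 1.0] = 0.0   # restrict to the pupil
    return phase

# hypothetical coefficients standing in for a per-brain-region calibration
phase = corrective_phase([(zernike_defocus, 0.3), (zernike_astigmatism, -0.1)])
```

In practice such a phase map would be encoded on the wavefront-shaping element before the objective; the point of the a priori approach is that the coefficient list is fixed per brain region, so no per-site iterative correction is needed.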
A seismic refraction technique used for subsurface investigations at Meteor Crater, Arizona
NASA Technical Reports Server (NTRS)
Ackermann, H. D.; Godson, R. H.; Watkins, J. S.
1975-01-01
A seismic refraction technique for interpreting the subsurface shape and velocity distribution of an anomalous surface feature such as an impact crater is described. The method requires the existence of a relatively deep refracting horizon and combines data obtained from both standard shallow refraction spreads and distant offset shots by using the deep refractor as a source of initial arrivals. Results obtained from applying the technique to Meteor crater generally agree with the known structure of the crater deduced by other investigators and provide new data on an extensive fractured zone surrounding the crater. The breccia lens is computed to extend roughly 190 m below the crater floor, about 30 m less than the value deduced from early drilling data. Rocks around the crater are fractured as distant as 900 m from the rim crest and to a depth of at least 800 m beneath the crater floor.
NASA Astrophysics Data System (ADS)
Navarrete, Álvaro; Wang, Wenyuan; Xu, Feihu; Curty, Marcos
2018-04-01
The experimental characterization of multi-photon quantum interference effects in optical networks is essential in many applications of photonic quantum technologies, which include quantum computing and quantum communication as two prominent examples. However, such characterization often requires technologies which are beyond our current experimental capabilities, and today's methods suffer from errors due to the use of imperfect sources and photodetectors. In this paper, we introduce a simple experimental technique to characterize multi-photon quantum interference by means of practical laser sources and threshold single-photon detectors. Our technique is based on well-known methods in quantum cryptography which use decoy settings to tightly estimate the statistics provided by perfect devices. As an illustration of its practicality, we use this technique to obtain a tight estimation of both the generalized Hong-Ou-Mandel dip in a beamsplitter with six input photons and the three-photon coincidence probability at the output of a tritter.
Advances in paper-based sample pretreatment for point-of-care testing.
Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng
2017-06-01
In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, high-cost, time-consuming and equipment-dependent sample pretreatment techniques are generally required for raw sample processing, which is impractical in low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We then highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.
Single-molecule imaging of cytoplasmic dynein in vivo.
Ananthanarayanan, Vaishnavi; Tolić, Iva M
2015-01-01
While early fluorescence microscopy experiments employing fluorescent probes afforded snapshots of the cell, the power of live-cell microscopy is required to understand complex dynamics in biological processes. The first successful cloning of green fluorescent protein in the 1990s paved the way for development of approaches that we now utilize for visualization in a living cell. In this chapter, we discuss a technique to observe fluorescently tagged single molecules in fission yeast. With a few simple modifications to the established total internal reflection fluorescence microscopy, cytoplasmic dynein molecules in the cytoplasm and on the microtubules can be visualized and their intracellular dynamics can be studied. We illustrate a technique to study motor behavior, which is not apparent in conventional ensemble studies of motors. In general, this technique can be employed to study single-molecule dynamics of fluorescently tagged proteins in the cell interior. Copyright © 2015 Elsevier Inc. All rights reserved.
A Clinicopathological Study of Various Oral Cancer Diagnostic Techniques
Ulaganathan, G.; Mohamed Niazi, K. Thanvir; Srinivasan, Soundarya; Balaji, V. R.; Manikandan, D.; Hameed, K. A. Shahul; Banumathi, A.
2017-01-01
Oral cancer is one of the most commonly occurring malignant tumors of the head and neck regions, with higher incidence and mortality rates in developed countries than in developing countries. Generally, the survival rate of cancer patients increases when the disease is diagnosed at an early stage and followed by prompt treatment and therapy. Recently, cancer diagnosis and therapy design for a specific cancer patient have been performed with advanced computer-aided techniques. The response to cancer therapy can be continuously monitored to ensure the effectiveness of the treatment process, which requires diagnostic results as quickly as possible to improve quality and patient care. This paper gives an overview of oral cancer occurrence, its different types, and various diagnostic techniques. In addition, a brief introduction is given to the stages of immunoanalysis, including tissue image preparation, whole-slide imaging, and microscopic image analysis. PMID:29284926
Kilic, Ali; Denney, Brad; de la Torre, Jorge
2018-05-31
Generally, reconstruction of knee defects with exposed bone, joint, tendon, and/or hardware requires a vascularized muscle flap for coverage. Although there are several surgical options for knee defect reconstruction, the pedicled gastrocnemius muscle remains the workhorse flap. While this flap is commonly used and its technique is well described, the literature lacks a step-by-step, illustrated account of harvesting and insetting the gastrocnemius flap. The purpose of this article is to describe in detail the technique of reconstructing knee defects with a pedicled gastrocnemius muscle flap, and to present the demographics and surgical results of 21 patients who underwent knee reconstruction with a pedicled gastrocnemius muscle flap and split-thickness skin grafting.
Feline dental radiography and radiology: A primer.
Niemiec, Brook A
2014-11-01
Information crucial to the diagnosis and treatment of feline oral diseases can be ascertained using dental radiography and the inclusion of this technology has been shown to be the best way to improve a dental practice. Becoming familiar with the techniques required for dental radiology and radiography can, therefore, be greatly beneficial. Novices to dental radiography may need some time to adjust and become comfortable with the techniques. If using dental radiographic film, the generally recommended 'E' or 'F' speeds may be frustrating at first, due to their more specific exposure and image development requirements. Although interpreting dental radiographs is similar to interpreting a standard bony radiograph, there are pathologic states that are unique to the oral cavity and several normal anatomic structures that may mimic pathologic changes. Determining which teeth have been imaged also requires a firm knowledge of oral anatomy as well as the architecture of dental films/digital systems. This article draws on a range of dental radiography and radiology resources, and the benefit of the author's own experience, to review the basics of taking and interpreting intraoral dental radiographs. A simplified method for positioning the tubehead is explained and classic examples of some common oral pathologies are provided. © ISFM and AAFP 2014.
Captive care and welfare considerations for beavers.
Campbell-Palmer, Róisín; Rosell, Frank
2015-01-01
Beavers (Castor spp.) tend not to be a commonly held species and little published material exists relating to their captive care. We review published material and discuss husbandry issues taking into account the requirements of wild beavers. As social mammals with complex chemical communication systems and with such an ability to modify their environments, studies of wild counterparts suggest the captive requirements of beavers may actually be more sophisticated than generally perceived. Common field techniques may have practical application in the captive setting. Their widespread utilisation in conservation, including reintroductions, translocations and habitat management, also requires components of captive care. As welfare science advances there is increasing pressure on captive collections to improve standards and justify the keeping of animals. Conservation science is increasingly challenged to address individual welfare standards. Further research focusing on the captive care of beavers is required. © 2015 Wiley Periodicals, Inc.
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
Advanced paediatric conscious sedation: an alternative to dental general anaesthetic in the U.K.
Hand, Darren; Averley, Paul; Lyne, John; Girdler, Nick
2011-01-01
Child dental anxiety is widespread, and it is not always possible to treat children using traditional methods such as behavioural management, local anaesthesia and even relative analgesia. In such cases a dental general anaesthetic (DGA) is the only option available to facilitate dental treatment in anxious children. This study describes an advanced conscious sedation protocol which allows invasive treatment to be carried out in anxious children. It incorporates the use of titrated intravenous midazolam and fentanyl and inhalation agents, sevoflurane and nitrous oxide/oxygen, which is administered by a Consultant Anaesthetist. The aim is to produce an evidence-based study which can offer a sedation technique as a safe and effective alternative to a DGA. Retrospective audit. 267 clinical records were audited retrospectively from a specialist sedation-based clinic, for children aged 5-15 years old. The subjects all underwent invasive dental procedures with this technique between August and November 2008 as an alternative to a DGA. 262/267 (98%) of the subjects were treated safely and successfully and without the loss of verbal communication using this technique. This included many treatments requiring four quadrant dentistry, with both restorations and extractions as necessary being carried out in one visit. 5 subjects (2%) did not tolerate treatment and had to be referred for a DGA. No medical emergencies occurred. Based on the evidence for this group of patients, this advanced conscious sedation technique offers a safe and effective alternative to DGA. This technique must be carried out in an appropriate environment by an appropriately trained and experienced team who are able to comply with the recommendations for "alternative" sedation techniques.
Worthington, Jo; Taylor, Hilary; Abrams, Paul; Brookes, Sara T; Cotterill, Nikki; Noble, Sian M; Page, Tobias; Swami, K Satchi; Lane, J Athene; Hashim, Hashim
2017-04-17
Transurethral resection of the prostate (TURP) has been the standard operation for benign prostatic obstruction (BPO) for 40 years, with approximately 25,000 procedures performed annually, and has remained largely unchanged. It is generally a successful operation, but has well-documented risks for the patient. Thulium laser transurethral vaporesection of the prostate (ThuVARP) vaporises and resects the prostate using a surgical technique similar to TURP. The small amount of study data currently available suggests that ThuVARP may have certain advantages over TURP, including reduced blood loss and shorter hospital stay, earlier return to normal activities, and shorter duration of catheterisation. A multicentre, pragmatic, randomised, controlled, parallel-group trial of ThuVARP versus standard TURP in men with BPO. Four hundred and ten men suitable for prostate surgery were randomised to receive either ThuVARP or TURP at four university teaching hospitals, and three district general hospitals. The key aim of the trial is to determine whether ThuVARP is equivalent to TURP judged on both the patient-reported International Prostate Symptom Score (IPSS) and the maximum urine flow rate (Qmax) at 12 months post-surgery. The general population has an increased life expectancy. As men get older their prostates enlarge, potentially causing BPO, which often requires surgery. Therefore, as the population ages, more prostate operations are needed to relieve obstruction. There is hence sustained interest in the condition and increasing need to find safer techniques than TURP. Various laser techniques have become available but none are widely used in the NHS because of lengthy training required for surgeons or inferior performance on clinical outcomes. Promising initial evidence from one RCT shows that ThuVARP has equivalent clinical effectiveness when compared to TURP, as well as other potential advantages. 
As ThuVARP uses a technique similar to that used in TURP, the learning curve is short, potentially making it also very quickly generalisable. This randomised study is designed to provide the high-quality evidence, in an NHS setting, with a range of patient-reported, clinical and cost-effectiveness outcomes, which will underpin and inform future NICE guidance. ISRCTN registry, ISRCTN00788389 . Registered on 20 September 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berra, P.B.; Chung, S.M.; Hachem, N.I.
This article presents techniques for managing a very large data/knowledge base to support multiple inference mechanisms for logic programming. Because evaluation of goals can require accessing data from the extensional database, or EDB, in very general ways, one must often resort to indexing on all fields of the extensional database facts. This presents a formidable management problem in that the index data may be larger than the EDB itself. This problem becomes even more serious in the case of very large data/knowledge bases (hundreds of gigabytes), since considerably more hardware will be required to process and store the index data. In order to reduce the amount of index data considerably without losing generality, the authors form a surrogate file, which is a hashing transformation of the facts. Superimposed code words (SCW), concatenated code words (CCW), and transformed inverted lists (TIL) are possible structures for the surrogate file. Since these transformations are quite regular and compact, the authors consider possible computer architectures for the processing of the surrogate file.
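Of the three surrogate-file structures named, superimposed code words are the simplest to sketch: each field value hashes to a sparse bit mask, the masks are OR-ed into one code word per fact, and a query matches any fact whose code word contains all the query bits (with occasional false drops that must be verified against the EDB). A small illustration in Python, with made-up facts and hypothetical width/weight parameters:

```python
import hashlib

def field_codeword(value, width=64, weight=3):
    """Hash one field value to a bit mask with `weight` bits set."""
    mask = 0
    h = hashlib.sha256(str(value).encode()).digest()
    for i in range(weight):
        bit = int.from_bytes(h[4 * i:4 * i + 4], "big") % width
        mask |= 1 << bit
    return mask

def superimposed_codeword(fact, width=64):
    """OR the per-field masks into one surrogate code word for the fact."""
    cw = 0
    for field in fact:
        cw |= field_codeword(field, width)
    return cw

def candidates(goal_fields, surrogate_file, width=64):
    """Indices of facts whose code word contains every query bit.
    False drops are possible, so matches are verified against the EDB."""
    query = 0
    for f in goal_fields:
        query |= field_codeword(f, width)
    return [i for i, cw in enumerate(surrogate_file) if cw & query == query]

edb = [("supplier", "s1", "paris"), ("supplier", "s2", "london")]
sf = [superimposed_codeword(fact) for fact in edb]
hits = candidates(["paris"], sf)
```

Because superimposition never discards bits, there are no false negatives; widening the code word or lowering the per-field weight trades index size against the false-drop rate.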
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints to bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations, to provide for the first time a global optimization based rank-list of distillation configurations.
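Global optimization over bilinear inequalities typically rests on bounding each product of variables over a box; the standard device for that is the McCormick envelope, sketched below. This is an illustration of the general idea under assumed bounds, not the paper's exact reformulation of the Underwood system:

```python
def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Convex lower and concave upper envelopes for the bilinear
    term w = x*y over the box [xL, xU] x [yL, yU]."""
    lo = max(xL * y + x * yL - xL * yL, xU * y + x * yU - xU * yU)
    hi = min(xU * y + x * yL - xU * yL, xL * y + x * yU - xL * yU)
    return lo, hi

# at (x, y) = (1.5, 2.0) on the box [0, 2] x [1, 3], w = 3.0 is bracketed
lo, hi = mccormick_bounds(1.5, 2.0, 0.0, 2.0, 1.0, 3.0)
```

Tightening the variable ranges, as feasibility- and optimality-based range reduction does, directly tightens these envelopes, which is one reason range reduction speeds convergence to the global optimum.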
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge
2017-11-11
Grid-based perception techniques in the automotive sector, which fuse information from different sensors into a robust perception of the environment, are proliferating in the industry. However, one of the main drawbacks of these techniques is the prohibitively high computing performance traditionally required, a challenge for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter: one for a General Purpose Graphics Processing Unit (GPGPU) and the other for a Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
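At its core, an occupancy filter maintains an independent Bayesian belief per grid cell, usually in log-odds form so that sensor updates reduce to additions. A minimal 1-D sketch of that update in Python (the full Bayesian Occupancy Filter also estimates per-cell velocity distributions, which this illustration omits):

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal 1-D occupancy grid: each cell keeps a log-odds belief,
    updated independently from inverse-sensor-model probabilities."""
    def __init__(self, n_cells, prior=0.5):
        self.l0 = logodds(prior)
        self.l = [self.l0] * n_cells

    def update(self, cell, p_occupied):
        # standard binary Bayes filter update in log-odds form
        self.l[cell] += logodds(p_occupied) - self.l0

    def belief(self, cell):
        # convert log-odds back to an occupancy probability
        return 1.0 - 1.0 / (1.0 + math.exp(self.l[cell]))

grid = OccupancyGrid(10)
for _ in range(3):            # three consistent "occupied" measurements
    grid.update(4, 0.8)
```

The additive, per-cell independence of this update is what makes the algorithm so amenable to massively parallel targets such as GPGPUs and FPGAs: every cell can be updated concurrently.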
A study of poultry processing plant noise control techniques
NASA Technical Reports Server (NTRS)
Wyvill, J. C.; Morrison, W. G., Jr.
1981-01-01
A number of techniques can be used to reduce noise in poultry processing plants. In general, covering the ceiling with a noise-absorbing medium is a practical first step. Once the reflected noise levels are abated, treatment of specific identifiable noise sources can take place. The development, flammability, and mechanical properties of acoustic panels to be vertically suspended from the ceiling are discussed, as well as the covers needed to comply with USDA cleanability requirements. The isolation of drive motors and pumps from large expansive areas, the muffling of pneumatic devices, and the insulation of ice chutes are methods of source quieting. Proper maintenance of machinery and vibration monitoring are also needed to reduce hearing damage risk and to improve worker productivity and employee/supervisor relations.
A BAYESIAN APPROACH TO DERIVING AGES OF INDIVIDUAL FIELD WHITE DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Erin M.; Von Hippel, Ted; Van Dyk, David A., E-mail: ted.vonhippel@erau.edu, E-mail: dvandyke@imperial.ac.uk
2013-09-20
We apply a self-consistent and robust Bayesian statistical approach to determine the ages, distances, and zero-age main sequence (ZAMS) masses of 28 field DA white dwarfs (WDs) with ages of approximately 4-8 Gyr. Our technique requires only quality optical and near-infrared photometry to derive ages with <15% uncertainties, generally with little sensitivity to our choice of modern initial-final mass relation. We find that age, distance, and ZAMS mass are correlated in a manner that is too complex to be captured by traditional error propagation techniques. We further find that the posterior distributions of age are often asymmetric, indicating that the standard approach to deriving WD ages can yield misleading results.
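Asymmetric posteriors of this kind arise naturally whenever a nonlinear model links age to the observables, which is why simple error propagation falls short. A toy one-dimensional illustration in Python, with an invented logarithmic cooling relation and invented numbers in place of real WD models and photometry:

```python
import numpy as np

ages = np.linspace(1.0, 12.0, 500)        # candidate ages in Gyr
obs, sigma = 21.5, 0.05                   # "observed" magnitude and error

def model_mag(age):
    # toy cooling curve: the star fades roughly logarithmically with age
    return 19.0 + 1.8 * np.log(age)

# Gaussian likelihood on a grid, flat prior over the grid
log_like = -0.5 * ((obs - model_mag(ages)) / sigma) ** 2
post = np.exp(log_like - log_like.max())
dx = ages[1] - ages[0]
post /= post.sum() * dx                   # normalize to a density

# asymmetric credible interval read straight off the posterior CDF
cdf = np.cumsum(post) * dx
lo, med, hi = np.interp([0.16, 0.5, 0.84], cdf, ages)
```

Because the toy cooling relation is nonlinear in age, equal magnitude errors map to unequal age errors; reading quantiles off the posterior CDF captures that asymmetry, where symmetric error propagation would not.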
A unified framework for building high performance DVEs
NASA Astrophysics Data System (ADS)
Lei, Kaibin; Ma, Zhixia; Xiong, Hua
2011-10-01
A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
Transfer-arm evaporator cell for rapid loading and deposition of organic thin films.
Greiner, M T; Helander, M G; Wang, Z B; Lu, Z H
2009-12-01
Described herein is a transfer-arm evaporator cell (TAE-cell), which allows for rapid loading of materials into vacuum for low-temperature sublimation deposition of thin films. This design can be incorporated with an existing analysis system for convenient in situ thin film characterization. This evaporator is especially well suited for photoemission characterization of organic semiconductor interfaces. Photoemission is one of the most important techniques for characterizing such interfaces; however, it generally requires in situ sample preparation. The ease with which materials can be loaded and evaporated with this design increases the throughput of in situ photoemission characterization, and broadens the research scope of the technique. Here, we describe the design, operation, and performance of the TAE-cell.
Nonlinear models for estimating GSFC travel requirements
NASA Technical Reports Server (NTRS)
Buffalano, C.; Hagan, F. J.
1974-01-01
A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each major type of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.
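As an illustration of the kind of model fitting involved, a power-law travel model of the form trips = a * budget**b can be fitted by linearizing in log space and regressing. The data and coefficients below are invented for the sketch, not GSFC figures:

```python
import numpy as np

# hypothetical data: project budget (in $M) vs. annual trips required
budget = np.array([1.0, 2.5, 4.0, 6.0, 9.0, 12.0])
trips = np.array([12.0, 25.0, 35.0, 46.0, 62.0, 76.0])

# fit the nonlinear model trips = a * budget**b by linearizing:
# log(trips) = log(a) + b * log(budget) is a least-squares line in log space
b, log_a = np.polyfit(np.log(budget), np.log(trips), 1)
a = np.exp(log_a)

def predict(x):
    """Estimated trips for a project of budget x ($M)."""
    return a * x**b
```

Linearization works here because the scatter is roughly multiplicative; a direct nonlinear least-squares fit (e.g. Gauss-Newton) would be preferred when it is not.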
Human Protein and Amino Acid Requirements.
Hoffer, L John
2016-05-01
Human protein and amino acid nutrition encompasses a wide, complex, frequently misunderstood, and often contentious area of clinical research and practice. This tutorial explains the basic biochemical and physiologic principles that underlie our current understanding of protein and amino acid nutrition. The following topics are discussed: (1) the identity, measurement, and essentiality of nutritional proteins; (2) the definition and determination of minimum requirements; (3) nutrition adaptation; (4) obligatory nitrogen excretion and the minimum protein requirement; (5) minimum versus optimum protein intakes; (6) metabolic responses to surfeit and deficient protein intakes; (7) body composition and protein requirements; (8) labile protein; (9) N balance; (10) the principles of protein and amino acid turnover, including an analysis of the controversial indicator amino acid oxidation technique; (11) general guidelines for evaluating protein turnover articles; (12) amino acid turnover versus clearance; (13) the protein content of hydrated amino acid solutions; (14) protein requirements in special situations, including protein-catabolic critical illness; (15) amino acid supplements and additives, including monosodium glutamate and glutamine; and (16) a perspective on the future of protein and amino acid nutrition research. In addition to providing practical information, this tutorial aims to demonstrate the importance of rigorous physiologic reasoning, stimulate intellectual curiosity, and encourage fresh ideas in this dynamic area of human nutrition. In general, references are provided only for topics that are not well covered in modern textbooks. © 2016 American Society for Parenteral and Enteral Nutrition.
NASA Astrophysics Data System (ADS)
Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia
2017-04-01
The accurate determination of stress drop, seismic efficiency and how source parameters scale with earthquake size is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). We find for most of the events a non-selfsimilar behavior: empirical source spectra that require an ω^-γ source model with γ > 2 to be well fitted, and a small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces changes with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in fluid pressure diffusion.
Theoretical molecular studies of astrophysical interest
NASA Technical Reports Server (NTRS)
Flynn, George
1991-01-01
When work under this grant began in 1974 there was a great need for state-to-state collisional excitation rates for interstellar molecules observed by radio astronomers. These were required to interpret observed line intensities in terms of local temperatures and densities, but, owing to lack of experimental or theoretical values, estimates then being used for this purpose ranged over several orders of magnitude. A problem of particular interest was collisional excitation of formaldehyde; Townes and Cheung had suggested that the relative size of different state-to-state rates (propensity rules) was responsible for the anomalous absorption observed for this species. We believed that numerical molecular scattering techniques (in particular the close coupling or coupled channel method) could be used to obtain accurate results, and that these would be computationally feasible since only a few molecular rotational levels are populated at the low temperatures thought to prevail in the observed regions. Such calculations also require detailed knowledge of the intermolecular forces, but we thought that those could also be obtained with sufficient accuracy by theoretical (quantum chemical) techniques. Others, notably Roy Gordon at Harvard, had made progress in solving the molecular scattering equations, generally using semi-empirical intermolecular potentials. Work done under this grant generalized Gordon's scattering code, and introduced the use of theoretical interaction potentials obtained by solving the molecular Schroedinger equation. Earlier work had considered only the excitation of a diatomic molecule by collisions with an atom, and we extended the formalism to include excitation of more general molecular rotors (e.g., H2CO, NH2, and H2O) and also collisions of two rotors (e.g., H2-H2).
Frawley, Geoff; Bell, Graham; Disma, Nicola; Withington, Davinia E.; de Graaff, Jurgen C.; Morton, Neil S.; McCann, Mary Ellen; Arnup, Sarah J.; Bagshaw, Oliver; Wolfler, Andrea; Bellinger, David; Davidson, Andrew J.
2015-01-01
Background Awake regional anesthesia (RA) is a viable alternative to general anesthesia (GA) for infants undergoing lower abdominal surgery. Benefits include a lower incidence of postoperative apnea and avoidance of anesthetic agents that may increase neuroapoptosis and worsen neurocognitive outcomes. The General Anesthesia compared to Spinal anesthesia (GAS) study compares neurodevelopmental outcomes following awake RA or GA in otherwise healthy infants. Our aim was to describe success and failure rates of RA in this study and report factors associated with failure. Methods This was a nested cohort study within a prospective, randomized, controlled, observer-blind, equivalence trial. Seven hundred twenty-two infants ≤60 weeks postmenstrual age, scheduled for herniorrhaphy under anesthesia, were randomly assigned to receive RA (spinal, caudal epidural, or combined spinal-caudal anesthetic) or GA with sevoflurane. The data of 339 infants in whom spinal or combined spinal-caudal anesthetic was attempted were analyzed. Possible predictors of failure were assessed, including patient factors, technique, experience of site and anesthetist, and type of local anesthetic. Results RA was sufficient for the completion of surgery in 83.2% of patients. Spinal anesthesia was successful in 86.9% of cases and combined spinal-caudal anesthetic in 76.1%. Thirty-four patients required conversion to GA, and an additional 23 (6.8%) required brief sedation. Bloody tap on the first attempt at lumbar puncture was the only risk factor significantly associated with block failure (OR = 2.46). Conclusions The failure rate of spinal anesthesia was low. Variability in application of combined spinal-caudal anesthetic limited attempts to compare the success of this technique to spinal alone. PMID:26001028
The stonehenge technique: a new method of crystal alignment for coherent bremsstrahlung experiments
NASA Astrophysics Data System (ADS)
Livingston, Kenneth
2005-08-01
In the coherent bremsstrahlung technique a thin diamond crystal oriented correctly in an electron beam can produce photons with a high degree of linear polarization.1 The crystal is mounted on a goniometer to control its orientation, and it is necessary to measure the angular offsets a) between the crystal axes and the goniometer axes and b) between the goniometer and the electron beam axis. A method for measuring these offsets and aligning the crystal was developed by Lohman et al., and has been used successfully in Mainz.2 However, recent attempts to investigate new crystals have shown that this approach has limitations which become more serious at higher beam energies, where more accurate setting of the crystal angles, which scale with 1/Ebeam, is required (e.g., the recent installation of the coherent bremsstrahlung facility at JLab, with Ebeam = 6 GeV). This paper describes a new, more general alignment technique which overcomes these limitations. The technique is based on scans in which the horizontal and vertical rotation axes of the goniometer are adjusted in a series of steps so that the normal to the crystal describes a cone of a given angle. For each step in the scan, the photon energy spectrum is measured using a tagging spectrometer, and the offsets between the electron beam and the crystal lattice are inferred from the resulting 2D plot. Using this method, it is possible to align the crystal with the beam quickly, and hence to set any desired orientation of the crystal relative to the beam. This is essential for any experiment requiring linearly polarized photons produced via coherent bremsstrahlung, and is also required for a systematic study of the channeling radiation produced by the electron beam incident on the crystal.
Learning inverse kinematics: reduced sampling through decomposition into virtual robots.
de Angulo, Vicente Ruiz; Torras, Carme
2008-12-01
We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture, and thus it is completely general. Parametrized self-organizing maps are particularly adequate for this type of learning, and permit comparing results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.
Planning Complex Projects Automatically
NASA Technical Reports Server (NTRS)
Henke, Andrea L.; Stottler, Richard H.; Maher, Timothy P.
1995-01-01
Automated Manifest Planner (AMP) computer program applies combination of artificial-intelligence techniques to assist both expert and novice planners, reducing planning time by orders of magnitude. Gives planners flexibility to modify plans and constraints easily, without need for programming expertise. Developed specifically for planning space shuttle missions 5 to 10 years ahead, with modifications, applicable in general to planning other complex projects requiring scheduling of activities depending on other activities and/or timely allocation of resources. Adaptable to variety of complex scheduling problems in manufacturing, transportation, business, architecture, and construction.
NASA Astrophysics Data System (ADS)
Proux, Denys; Segond, Frédérique; Gerbier, Solweig; Metzger, Marie Hélène
Hospital Acquired Infections (HAI) are a real burden for doctors and risk surveillance experts. The impact on patients' health and the related healthcare cost is very significant and a major concern, even for rich countries. Furthermore, the data required to evaluate the threat are generally not available to experts, which prevents fast reaction. However, recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Pattern Detection in documents, and Decision Support Systems now make it possible to address this problem.
Ultrasonic nondestructive evaluation, microstructure, and mechanical property interrelations
NASA Technical Reports Server (NTRS)
Vary, A.
1984-01-01
Ultrasonic techniques for mechanical property characterizations are reviewed and conceptual models are advanced for explaining and interpreting the empirically based results. At present, the technology is generally empirically based and is emerging from the research laboratory. Advancement of the technology will require establishment of theoretical foundations for the experimentally observed interrelations among ultrasonic measurements, mechanical properties, and microstructure. Conceptual models are applied to ultrasonic assessment of fracture toughness to illustrate an approach for predicting correlations found among ultrasonic measurements, microstructure, and mechanical properties.
Intercommunications in Real Time, Redundant, Distributed Computer System
NASA Technical Reports Server (NTRS)
Zanger, H.
1980-01-01
An investigation into the applicability of fiber optic communication techniques to real time avionic control systems, in particular the total automatic flight control system used for the VSTOL aircraft is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.
Investigation of charge coupled device correlation techniques
NASA Technical Reports Server (NTRS)
Lampe, D. R.; Lin, H. C.; Shutt, T. J.
1978-01-01
Analog Charge Transfer Devices (CTDs) offer unique advantages to signal processing systems, which often have large development costs, making it desirable to define those devices which can be developed for general systems use. Such devices are best identified and developed early to give systems designers some interchangeable subsystem blocks not requiring additional individual development for each new signal processing system. The objective of this work is to describe a discrete analog signal processing device with reasonably broad system use and to implement its design, fabrication, and testing.
Cervical Radiculopathy due to Cervical Degenerative Diseases : Anatomy, Diagnosis and Treatment
Kim, Kyoung-Tae
2010-01-01
A cervical radiculopathy is the most common symptom of cervical degenerative disease and its natural course is generally favorable. With a precise diagnosis using appropriate tools, the majority of patients will respond well to conservative treatment. Cervical radiculopathy with persistent radicular pain after conservative treatment and progressive or profound motor weakness may require surgery. Options for surgical management are extensive. Each technique has strengths and weaknesses, so the choice will depend on the patient's clinical profile and the surgeon's judgment. PMID:21430971
Construction of 3-D Audio Systems: Background, Research, and General Requirements
2008-10-01
silicone rubber tube. This tube was placed such that its tip was 1 to 2 mm from the listener’s eardrum and was held in position using a customised ...but implemented different techniques to create the customised shells used to keep the probe microphone in place. They chose to use the same moulding...why this is not a convenient procedure for commercial applications. This is particularly true where the application is aimed at a mass market where
An application of fractional integration to a long temperature series
NASA Astrophysics Data System (ADS)
Gil-Alana, L. A.
2003-11-01
Some recently proposed techniques of fractional integration are applied to a long UK temperature series. The tests are valid under general forms of serial correlation and do not require estimation of the fractional differencing parameter. The results show that central England temperatures have increased about 0.23 °C per 100 years in recent history. Attempting to summarize the conclusions for each of the months, we are left with the impression that the highest increase has occurred during the months from October to March.
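As an illustration of the machinery behind such fractional integration analyses (this is a generic sketch of the fractional difference operator, not the authors' actual test statistic), the operator (1 − L)^d expands into binomial weights that can be computed recursively:

```python
def frac_diff_weights(d, n):
    # Binomial expansion of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    # For integer d the weights truncate to the usual differencing coefficients;
    # for fractional d they decay hyperbolically (long memory).
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(series, d):
    # Apply (1 - L)^d to a series, truncating the filter at the sample start
    w = frac_diff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]
```

For d = 1 this reduces to ordinary first differencing; values of 0 < d < 1 model the kind of persistent, mean-reverting behavior that such temperature series can exhibit.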
Superconducting thin-film gyroscope readout for Gravity Probe-B
NASA Technical Reports Server (NTRS)
Lockhart, James M.; Cheung, W. Stephen; Gill, Dale K.
1987-01-01
The high-resolution gyroscope readout system for the Stanford Gravity Probe-B experiment, whose purpose is to measure two general relativistic precessions of gyroscopes in earth orbit, is described. In order to achieve the required resolution in angle (0.001 arcsec), the readout system combines high-precision mechanical fabrication and measurement techniques with superconducting thin-film technology, ultralow magnetic fields, and SQUID detectors. The system design, performance limits achievable with current technology, and the results of fabrication and laboratory testing to date are discussed.
Properties of finite difference models of non-linear conservative oscillators
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1988-01-01
Finite-difference (FD) approaches to the numerical solution of the differential equations describing the motion of a nonlinear conservative oscillator are investigated analytically. A generalized formulation of the Duffing and modified Duffing equations is derived and analyzed using several FD techniques, and it is concluded that, although it is always possible to construct FD models of conservative oscillators which are themselves conservative, caution is required to avoid numerical solutions which do not accurately reflect the properties of the original equation.
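The conservation issue discussed above can be demonstrated with a sketch (not one of the specific FD models analyzed in the report): a symplectic Störmer-Verlet discretization of the Duffing equation x'' + x + λx³ = 0 keeps the discrete energy bounded over long integrations, which is the property a naive one-sided scheme typically fails to preserve.

```python
def duffing_verlet(x0, v0, lam, dt, steps):
    # Stormer-Verlet (velocity Verlet) integration of x'' + x + lam*x^3 = 0
    def accel(x):
        return -x - lam * x ** 3
    xs, vs = [x0], [v0]
    x, v = x0, v0
    a = accel(x)
    for _ in range(steps):
        x = x + dt * v + 0.5 * dt * dt * a   # position update
        a_new = accel(x)
        v = v + 0.5 * dt * (a + a_new)       # velocity update with averaged force
        a = a_new
        xs.append(x)
        vs.append(v)
    return xs, vs

def energy(x, v, lam):
    # First integral of the continuous Duffing oscillator
    return 0.5 * v * v + 0.5 * x * x + 0.25 * lam * x ** 4
```

With this scheme the energy oscillates within an O(dt²) band around its initial value instead of drifting, mirroring the report's point that an FD model can be made conservative but must be chosen with care.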
Safety Precautions and Operating Procedures in an (A)BSL-4 Laboratory: 2. General Practices.
Mazur, Steven; Holbrook, Michael R; Burdette, Tracey; Joselyn, Nicole; Barr, Jason; Pusl, Daniela; Bollinger, Laura; Coe, Linda; Jahrling, Peter B; Lackemeyer, Matthew G; Wada, Jiro; Kuhn, Jens H; Janosko, Krisztina
2016-10-03
Work in a biosafety level 4 (BSL-4) containment laboratory requires time and great attention to detail. The same work that is done in a BSL-2 laboratory with non-high-consequence pathogens will take significantly longer in a BSL-4 setting. This increased time requirement is due to a multitude of factors that are aimed at protecting the researcher from laboratory-acquired infections, the work environment from potential contamination and the local community from possible release of high-consequence pathogens. Inside the laboratory, movement is restricted due to air hoses attached to the mandatory full-body safety suits. In addition, disinfection of every item that is removed from Class II biosafety cabinets (BSCs) is required. Laboratory specialists must be trained in the practices of the BSL-4 laboratory and must show high proficiency in the skills they are performing. The focus of this article is to outline proper procedures and techniques to ensure laboratory biosafety and experimental accuracy using a standard viral plaque assay as an example procedure. In particular, proper techniques to work safely in a BSL-4 environment when performing an experiment will be visually emphasized. These techniques include: setting up a Class II BSC for experiments, proper cleaning of the Class II BSC when finished working, waste management and safe disposal of waste generated inside a BSL-4 laboratory, and the removal of inactivated samples from inside a BSL-4 laboratory to the BSL-2 laboratory.
Generating High-Brightness Ion Beams for Inertial Confinement Fusion
NASA Astrophysics Data System (ADS)
Cuneo, M. E.
1997-11-01
The generation of high current density ion beams with applied-B ion diodes showed promise in the late 1980s as an efficient, rep-rate, focusable driver for inertial confinement fusion. These devices use several Tesla insulating magnetic fields to restrict electron motion across anode-cathode gaps of order 1-2 cm, while accelerating ions to generate ≈ 1 kA/cm^2, 5-15 MeV beams. These beams have been used to heat hohlraums to about 65 eV. However, meeting the ICF driver requirements for low-divergence and high-brightness lithium ion beams has been more technically challenging than initially thought. Experimental and theoretical work over the last 5 years shows that high-brightness beams meeting the requirements for inertial confinement fusion are possible. The production of these beams requires the simultaneous integration of at least four conditions: 1) rigorous vacuum cleaning techniques for control of undesired anode, cathode, ion source and limiter plasma formation from electrode contaminants to control impurity ions and impedance collapse; 2) carefully tailored insulating magnetic field geometry for uniform beam generation; 3) high magnetic fields (V_crit/V > 2) and other techniques to control the electron sheath and the onset of a high divergence electromagnetic instability that couples strongly to the ion beam; and 4) an active, pre-formed, uniform lithium plasma for low source divergence which is compatible with the above electron-sheath control techniques. These four conditions have never been simultaneously present in any lithium beam experiment, but simulations and experimental tests of individual conditions have been done. The integration of these conditions is a goal of the present ion beam generation program at Sandia. This talk will focus on the vacuum cleaning techniques for ion diodes and pulsed power devices in general, including experimental results obtained on the SABRE and PBFA-II accelerators over the last 3 years.
The current status of integration of the other key physics and technologies required to demonstrate high-brightness ion beams will also be presented.
Development and verification of global/local analysis techniques for laminated composites
NASA Technical Reports Server (NTRS)
Thompson, Danniella Muheim; Griffin, O. Hayden, Jr.
1991-01-01
A two-dimensional to three-dimensional global/local finite element approach was developed, verified, and applied to a laminated composite plate of finite width and length containing a central circular hole. The resulting stress fields for axial compression loads were examined for several symmetric stacking sequences and hole sizes. Verification was based on comparison of the displacements and the stress fields with accepted trends from previous free-edge investigations and a complete three-dimensional finite element solution of the plate. The laminates in the compression study included symmetric cross-ply, angle-ply and quasi-isotropic stacking sequences. The entire plate was selected as the global model and analyzed with two-dimensional finite elements. Displacements along a region identified as the global/local interface were applied in a kinematically consistent fashion to independent three-dimensional local models. Local areas of interest in the plate included a portion of the straight free edge near the hole, and the immediate area around the hole. Interlaminar stress results obtained from the global/local analyses compare well with previously reported trends, and some new conclusions about interlaminar stress fields in plates with different laminate orientations and hole sizes are presented for compressive loading. The effectiveness of the global/local procedure in reducing the computational effort required to solve these problems is clearly demonstrated by comparing the computer time required to formulate and solve the linear, static system of equations for the global and local analyses with that required for a complete three-dimensional formulation for a cross-ply laminate. Specific processors used during the analyses are described in general terms. The application of this global/local technique is not limited to a specific software system; it was developed and described in as general a manner as possible.
Traumatic injury to the portal vein.
Mattox, K L; Espada, R; Beall, A R
1975-01-01
Traumatic injuries to the upper abdominal vasculature pose difficult management problems related to both exposure and associated injuries. Among those injuries that are more difficult to manage are those involving the portal vein. While occurring rarely, portal vein injuries require specific therapeutic considerations. Between January, 1968, and July, 1974, over 2000 patients were treated operatively for abdominal trauma at the Ben Taub General Hospital. Among these patients, 22 had injury to the portal vein. Seventeen portal vein injuries were secondary to gunshot wounds, 3 to stab wounds, and 2 to blunt trauma. Associated injuries to the inferior vena cava, pancreas, liver and bile ducts were common. Three patients had associated abdominal aortic injuries, two with acute aorto-caval fistulae. Nine patients died from failure to control hemorrhage. Eleven were long-term survivors, including two who required pancreaticoduodenectomy as well as portal venorrhaphy. Late complications were rare. The operative approach to patients with traumatic injuries to multiple organs in the upper abdomen, including the portal vein, requires aggressive management and predetermined sequential methods of repair. In spite of innumerable associated injuries, portal vein injuries can be successfully managed in a significant number of patients using generally available surgical techniques and several adjunctive maneuvers. PMID:1130870
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'Black Box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generalization of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As demonstration of the potential of the methodologies, several solutions, involving reacting and perfect gas flows, will be presented. Included is a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques will be discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide for an efficient and general tool for the design and analysis of propulsion systems.
A General Method for Targeted Quantitative Cross-Linking Mass Spectrometry.
Chavez, Juan D; Eng, Jimmy K; Schweppe, Devin K; Cilia, Michelle; Rivera, Keith; Zhong, Xuefei; Wu, Xia; Allen, Terrence; Khurgel, Moshe; Kumar, Akhilesh; Lampropoulos, Athanasios; Larsson, Mårten; Maity, Shuvadeep; Morozov, Yaroslav; Pathmasiri, Wimal; Perez-Neut, Mathew; Pineyro-Ruiz, Coriness; Polina, Elizabeth; Post, Stephanie; Rider, Mark; Tokmina-Roszyk, Dorota; Tyson, Katherine; Vieira Parrine Sant'Ana, Debora; Bruce, James E
2016-01-01
Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as x-ray crystallography, NMR, and cryo-electron microscopy [1]. The extension of traditional quantitative proteomics methods with chemical cross-linking can provide information on the structural dynamics of protein structures and protein complexes. The identification and quantitation of cross-linked peptides remains challenging for the general community, requiring specialized expertise and ultimately limiting more widespread adoption of the technique. We describe a general method for targeted quantitative mass spectrometric analysis of cross-linked peptide pairs. We report the adaptation of the widely used, open-source software package Skyline for the analysis of quantitative XL-MS data as a means for data analysis and sharing of methods. We demonstrate the utility and robustness of the method with a cross-laboratory study and present data that is supported by and validates previously published data on quantified cross-linked peptide pairs. This advance provides an easy-to-use resource so that any lab with access to an LC-MS system capable of performing targeted quantitative analysis can quickly and accurately measure dynamic changes in protein structure and protein interactions.
Tao, Feifei; Ngadi, Michael
2018-06-13
Conventional methods for determining fat content and fatty acid (FA) composition are generally based on solvent extraction and gas chromatography, respectively; these techniques are time-consuming, laborious, destructive to samples, and require the use of hazardous solvents. These disadvantages make them unsuitable for large-scale detection or for application on the production lines of meat factories. In this context, the great necessity of developing rapid and nondestructive techniques for fat and FA analyses has been highlighted. Measurement techniques based on near-infrared spectroscopy, Raman spectroscopy, nuclear magnetic resonance and hyperspectral imaging have provided interesting and promising results for fat and FA prediction in a variety of foods. Thus, the goal of this article is to give an overview of the current research progress in application of the four important techniques for fat and FA analyses of muscle foods, including pork, beef, lamb, chicken meat, fish and fish oil. The measurement techniques are described in terms of their working principles, features, and application advantages. Research advances for these techniques for specific foods are summarized in detail and the factors influencing their modeling results are discussed. Perspectives on the current situation, future trends and challenges associated with the measurement techniques are also discussed.
Field of Psychiatry: Current Trends and Future Directions: An Indian Perspective.
Dave, Kishore P
2016-01-01
Attempting to predict the future is dangerous. This is particularly true in medical science, where change is a result of chance discoveries. Currently, practicing psychiatrists are aware of deficiencies in psychiatric practice. However, we have a number of genuine reasons for optimism and excitement. Genetics, novel treatment approaches, new investigative techniques, large-scale treatment trials, and research in general medicine and neurology will give better insights into psychiatric disorders and their management. Psychiatric services in rural India can be reached by telemedicine. There are some perceived threats which require solving and remedying. Subspecialties in psychiatry are the need of the hour. There is also a requirement for common practice guidelines. The Mental Health Care Bill, 2013, requires suitable amendments before it is passed in the Indian Parliament. Research in psychiatry is yet to be developed, as adequate resources are not available.
Optical Processing Techniques For Pseudorandom Sequence Prediction
NASA Astrophysics Data System (ADS)
Gustafson, Steven C.
1983-11-01
Pseudorandom sequences are series of apparently random numbers generated, for example, by linear or nonlinear feedback shift registers. An important application of these sequences is in spread spectrum communication systems, in which, for example, the transmitted carrier phase is digitally modulated rapidly and pseudorandomly and in which the information to be transmitted is incorporated as a slow modulation in the pseudorandom sequence. In this case the transmitted information can be extracted only by a receiver that uses for demodulation the same pseudorandom sequence used by the transmitter, and thus this type of communication system has a very high immunity to third-party interference. However, if a third party can predict in real time the probable future course of the transmitted pseudorandom sequence given past samples of this sequence, then interference immunity can be significantly reduced. In this application, effective pseudorandom sequence prediction techniques should be (1) applicable in real time to rapid (e.g., megahertz) sequence generation rates, (2) applicable to both linear and nonlinear pseudorandom sequence generation processes, and (3) applicable to error-prone past sequence samples of limited number and continuity. Certain optical processing techniques that may meet these requirements are discussed in this paper. In particular, techniques based on incoherent optical processors that perform general linear transforms or (more specifically) matrix-vector multiplications are considered. Computer simulation examples are presented which indicate that significant prediction accuracy can be obtained using these transforms for simple pseudorandom sequences. However, the useful prediction of more complex pseudorandom sequences will probably require the application of more sophisticated optical processing techniques.
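The linear feedback shift registers mentioned above can be sketched in a few lines. This minimal Fibonacci LFSR is an illustration under the common textbook tap convention (taps given as the exponents of the feedback polynomial), not a model of any particular spread-spectrum system:

```python
def lfsr(seed, taps, nbits, length):
    # Fibonacci LFSR: output the least significant bit each step, then
    # shift right and feed back the XOR (mod-2 sum) of the tapped stages.
    # Taps are the exponents of the feedback polynomial; e.g. taps (4, 3)
    # with nbits=4 implements x^4 + x^3 + 1, which is primitive and yields
    # the maximal period 2^4 - 1 = 15.
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out
```

Because the feedback is linear, 2n consecutive output bits of an n-stage LFSR suffice in principle to recover the taps and predict the remainder of the sequence, which is exactly the vulnerability the prediction techniques in this paper exploit.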
Laser Doppler flowmetry for measurement of laminar capillary blood flow in the horse
NASA Astrophysics Data System (ADS)
Adair, Henry S., III
1998-07-01
Current methods for in vivo evaluation of digital hemodynamics in the horse include angiography, scintigraphy, Doppler ultrasound, electromagnetic flow and isolated extracorporeal pump-perfused digit preparations. These techniques are either non-quantifiable, do not allow for continuous measurement, require destruction of the horse, or are invasive, inducing non-physiologic variables. In vitro techniques have also been reported for the evaluation of the effects of vasoactive agents on the digital vessels. The in vitro techniques are non-physiologic and have evaluated the vasculature proximal to the coronary band. Lastly, many of these techniques require general anesthesia or euthanasia of the animal. Laser Doppler flowmetry is a non-invasive, continuous measure of capillary blood flow. Laser Doppler flowmetry has been used to measure capillary blood flow in many tissues. The principle of this method is to measure the Doppler shift, that is, the frequency change that light undergoes when reflected by moving objects, such as red blood cells. Laser Doppler flowmetry records a continuous measurement of the red cell motion in the outer layer of the tissue under study, with little or no influence on physiologic blood flow. This output value constitutes the flux of red cells and is reported as capillary perfusion units. No direct information concerning oxygen, nutrient or waste metabolite exchange in the surrounding tissue is obtained. The relationship between the flowmeter output signal and the flux of red blood cells is linear. The principles of laser Doppler flowmetry will be discussed and the technique for laminar capillary blood flow measurements will be presented.
Fibla, Juan J; Molins, Laureano; Moradiellos, Javier; Rodríguez, Pedro; Heras, Félix; Canalis, Emili; Bolufer, Sergio; Martínez, Pablo; Aragón, Javier; Arroyo, Andrés; Pérez, Javier; León, Pablo; Canela, Mercedes
2016-01-01
Although the Nuss technique revolutionized the surgical treatment of pectus excavatum, its use has not become widespread in our country. The aim of this study was to analyze the current use of this technique in a sample of Thoracic Surgery Departments in Spain. Observational retrospective multicentric study analyzing the main epidemiological aspects and clinical results of ten years' experience using the Nuss technique. Between 2001 and 2010 a total of 149 patients were operated on (mean age 21.2 years), 74% male. Initial aesthetic results were excellent or good in 93.2%, mild in 4.1% and bad in 2.7%. After initial surgery there were complications in 45 patients (30.6%). The most frequent were wound seroma, bar displacement, stabilizer break, pneumothorax, haemothorax, wound infection, pneumonia, pericarditis and cardiac tamponade that required urgent bar removal. Postoperative pain appeared in all patients. In 3 cases (2%) it was so intense that it required bar removal. After a mean follow-up of 39.2 months, bar removal had been performed in 72 patients (49%), being difficult in 5 cases (7%). After a 1.6-year follow-up period good results persisted in 145 patients (98.7%). The Nuss technique in adults has had good results in Spanish Thoracic Surgery Departments; however, its use has not been generalized. The risk of complications must be taken into account and its indication must be properly evaluated. The possibility of previous conservative treatment is being analyzed in several departments at present. Copyright © 2015 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.
Sugiura, Yoshinori; Sugiura, Tomoko
2015-08-01
While research based on the emotion dysregulation model indicates a positive relationship between intense emotions and generalized anxiety disorder (GAD) symptoms, emotion-focused intervention involves the use of techniques to enhance emotional experiences, based on the notion that GAD patients are engaging in avoidance strategies. To reveal the conditions under which intense emotions lead to reduced GAD symptoms, we designed a longitudinal study to monitor changes in GAD symptoms among students (N = 129) over 3 months. Our focus was on possible moderators of the effect of emotional intensity. Results indicated that when fear of emotions and negative appraisals about problem solving were low, negative emotional intensity reduced later GAD symptoms. Moreover, under the condition of high responsibility to continue thinking, emotional intensity tended to reduce later GAD symptoms. Results suggest that reduced fear of emotions and reduced negative appraisals about problem solving may enhance the use of emotional processing techniques (e.g., emotional exposure). The interaction between responsibility to continue thinking and emotional intensity requires further examination. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
An introduction to digital modulation and OFDM techniques
NASA Astrophysics Data System (ADS)
Maddocks, M. C. D.
This report differs from most BBC Research Department reports in that it does not contain details of a specific project undertaken at Kingswood Warren. While there has been a continuing development of aspects of digital modulation systems by BBC research engineers over many years, the purpose of this report is to be tutorial. That is, digital transmission techniques need to be explained in a general way if full advantage is to be obtained from other reports concerning digital broadcasting transmission systems. There are, however, references to other specialized publications if particular details are required. The text of this report is based on a paper which was prepared for an Institution of Electrical Engineers' vacation school on new broadcast standards and systems. It discusses, at a general level, the various issues and trade-offs that must be considered in the design of a digital modulation system for broadcast use. It particularly concentrates on giving a simple description of the use and benefits of OFDM systems. The particular issues can be applied to various future broadcast systems which are under development at the BBC and as part of collaborative work in international projects.
NASA Technical Reports Server (NTRS)
Khorram, S.
1977-01-01
Results are presented of a study intended to develop a general location-specific remote-sensing procedure for watershed-wide estimation of water loss to the atmosphere by evaporation and transpiration. The general approach involves a stepwise sequence of required information definition (input data), appropriate sample design, mathematical modeling, and evaluation of results. More specifically, the remote sensing-aided system developed to evaluate evapotranspiration employs a basic two-stage two-phase sample of three information resolution levels. Based on the discussed design, documentation, and feasibility analysis to yield timely, relatively accurate, and cost-effective evapotranspiration estimates on a watershed or subwatershed basis, work is now proceeding to implement this remote sensing-aided system.
Multitime correlation functions in nonclassical stochastic processes
NASA Astrophysics Data System (ADS)
Krumm, F.; Sperling, J.; Vogel, W.
2016-06-01
A general method is introduced for verifying multitime quantum correlations through the characteristic function of the time-dependent P functional that generalizes the Glauber-Sudarshan P function. Quantum correlation criteria are derived which identify quantum effects for an arbitrary number of points in time. The Magnus expansion is used to visualize the impact of the required time ordering, which becomes crucial in situations when the interaction problem is explicitly time dependent. We show that the latter affects the multi-time-characteristic function and, therefore, the temporal evolution of the nonclassicality. As an example, we apply our technique to an optical parametric process with a frequency mismatch. The resulting two-time-characteristic function yields full insight into the two-time quantum correlation properties of such a system.
Optimal parameter estimation with a fixed rate of abstention
NASA Astrophysics Data System (ADS)
Gendra, B.; Ronco-Bonvehi, E.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.
2013-07-01
The problems of optimally estimating a phase, a direction, and the orientation of a Cartesian frame (or trihedron) with general pure states are addressed. Special emphasis is put on estimation schemes that allow for inconclusive answers or abstention. It is shown that such schemes enable drastic improvements, up to the extent of attaining the Heisenberg limit in some cases, and the required amount of abstention is quantified. A general mathematical framework to deal with the asymptotic limit of many qubits or large angular momentum is introduced and used to obtain analytical results for all the relevant cases under consideration. Parameter estimation with abstention is also formulated as a semidefinite programming problem, for which very efficient numerical optimization techniques exist.
Structure-based characterization of multiprotein complexes.
Wiederstein, Markus; Gruber, Markus; Frank, Karl; Melo, Francisco; Sippl, Manfred J
2014-07-08
Multiprotein complexes govern virtually all cellular processes. Their 3D structures provide important clues to their biological roles, especially through structural correlations among protein molecules and complexes. The detection of such correlations generally requires comprehensive searches in databases of known protein structures by means of appropriate structure-matching techniques. Here, we present a high-speed structure search engine capable of instantly matching large protein oligomers against the complete and up-to-date database of biologically functional assemblies of protein molecules. We use this tool to reveal unseen structural correlations on the level of protein quaternary structure and demonstrate its general usefulness for efficiently exploring complex structural relationships among known protein assemblies. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande
2018-04-01
Partial fingerprint identification technology, which is mainly used in devices with small sensor areas such as cellphones, USB flash drives and computers, has attracted more attention in recent years owing to its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation. We propose a new fingerprint matching technique which uses ridges as features to deal with partial fingerprint images and combines a modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm can effectively meet the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.
Some methods for blindfolded record linkage.
Churches, Tim; Christen, Peter
2004-06-28
The linkage of records which refer to the same entity in separate data collections is a common requirement in public health and biomedical research. Traditionally, record linkage techniques have required that all the identifying data in which links are sought be revealed to at least one party, often a third party. This necessarily invades personal privacy and requires complete trust in the intentions of that party and their ability to maintain security and confidentiality. Dusserre, Quantin, Bouzelat and colleagues have demonstrated that it is possible to use secure one-way hash transformations to carry out follow-up epidemiological studies without any party having to reveal identifying information about any of the subjects - a technique which we refer to as "blindfolded record linkage". A limitation of their method is that only exact comparisons of values are possible, although phonetic encoding of names and other strings can be used to allow for some types of typographical variation and data errors. A method is described which permits the calculation of a general similarity measure, the n-gram score, without having to reveal the data being compared, albeit at some cost in computation and data communication. This method can be combined with public key cryptography and automatic estimation of linkage model parameters to create an overall system for blindfolded record linkage. The system described offers good protection against misdeeds or security failures by any one party, but remains vulnerable to collusion between or simultaneous compromise of two or more parties involved in the linkage operation. In order to reduce the likelihood of this, the use of last-minute allocation of tasks to substitutable servers is proposed. Proof-of-concept computer programmes written in the Python programming language are provided to illustrate the similarity comparison protocol. 
Although the protocols described in this paper are not unconditionally secure, they do suggest the feasibility, with the aid of modern cryptographic techniques and high speed communication networks, of a general purpose probabilistic record linkage system which permits record linkage studies to be carried out with negligible risk of invasion of personal privacy.
Some methods for blindfolded record linkage
Churches, Tim; Christen, Peter
2004-01-01
Background The linkage of records which refer to the same entity in separate data collections is a common requirement in public health and biomedical research. Traditionally, record linkage techniques have required that all the identifying data in which links are sought be revealed to at least one party, often a third party. This necessarily invades personal privacy and requires complete trust in the intentions of that party and their ability to maintain security and confidentiality. Dusserre, Quantin, Bouzelat and colleagues have demonstrated that it is possible to use secure one-way hash transformations to carry out follow-up epidemiological studies without any party having to reveal identifying information about any of the subjects – a technique which we refer to as "blindfolded record linkage". A limitation of their method is that only exact comparisons of values are possible, although phonetic encoding of names and other strings can be used to allow for some types of typographical variation and data errors. Methods A method is described which permits the calculation of a general similarity measure, the n-gram score, without having to reveal the data being compared, albeit at some cost in computation and data communication. This method can be combined with public key cryptography and automatic estimation of linkage model parameters to create an overall system for blindfolded record linkage. Results The system described offers good protection against misdeeds or security failures by any one party, but remains vulnerable to collusion between or simultaneous compromise of two or more parties involved in the linkage operation. In order to reduce the likelihood of this, the use of last-minute allocation of tasks to substitutable servers is proposed. Proof-of-concept computer programmes written in the Python programming language are provided to illustrate the similarity comparison protocol. 
Conclusion Although the protocols described in this paper are not unconditionally secure, they do suggest the feasibility, with the aid of modern cryptographic techniques and high speed communication networks, of a general purpose probabilistic record linkage system which permits record linkage studies to be carried out with negligible risk of invasion of personal privacy. PMID:15222890
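The n-gram comparison at the heart of the protocol can be illustrated with a much simplified, single-party sketch in Python (the paper's actual protocol splits these steps across parties; the salted SHA-256 hash, the salt value, and the function names here are illustrative assumptions, not the authors' implementation):

```python
import hashlib

def ngrams(s, n=2):
    """Break a string into its set of overlapping n-grams."""
    s = s.lower().strip()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def hashed(grams, salt=b"shared-secret"):
    """One-way hash each n-gram so the raw values need not be revealed."""
    return {hashlib.sha256(salt + g.encode()).hexdigest() for g in grams}

def dice_score(a, b, n=2):
    """Dice similarity computed on the hashed n-gram sets alone."""
    ha, hb = hashed(ngrams(a, n)), hashed(ngrams(b, n))
    if not ha or not hb:
        return 0.0
    return 2 * len(ha & hb) / (len(ha) + len(hb))

print(dice_score("churches", "church"))  # 0.8 despite the spelling difference
```

Unlike the exact-match hashing of Dusserre and colleagues, comparing hashed n-gram sets yields a graded score, which is what makes probabilistic linkage over typographical variants possible.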
An audit of the use of intra-septal local anaesthesia in a dental practice in the South of England.
Doman, Stephen M
2011-04-01
The aim of this audit was to evaluate the efficacy, when used by the author, of the intra-septal local anaesthetic technique for cavity preparation in mandibular molar and premolar teeth. One hundred and thirteen consecutive patients who required local anaesthesia (LA) for cavity preparation in lower molar and premolar teeth in a general dental practice took part in the audit sample. Articaine 4%, with 1:100,000 adrenaline (epinephrine), was administered using the intra-septal technique. Visual analogue scales (VAS) were used to record pain experienced on injection and the quality of anaesthesia obtained. Any side-effects reported were recorded. The standards set were that at least 70% should find the administration of the LA pain-free and that at least 80% should experience no pain during cavity preparation. Sixty-nine (62%) patients reported the injection technique to be completely pain-free and a further 23 (20%) reported very minor pain on injection. Eighty (71%) patients reported pain-free treatment and 18 (16%) experienced very minor pain during treatment. No side-effects were reported. Patients aged under 40 years and those who had cavities prepared in first premolar teeth appeared more likely to experience pain during cavity preparation. The intra-septal injection technique requires no specialist equipment, is easily administered, rapid in onset and provides a level of anaesthesia equivalent to that produced by an inferior dental nerve block and with fewer side-effects. The injection is relatively painless to administer.
622-Mbps Orthogonal Frequency Division Multiplexing (OFDM) Digital Modem Implemented
NASA Technical Reports Server (NTRS)
Kifle, Muli; Bizon, Thomas P.; Nguyen, Nam T.; Tran, Quang K.; Mortensen, Dale J.
2002-01-01
Future generation space communications systems feature significantly higher data rates and relatively smaller frequency spectrum allocations than systems currently deployed. This requires the application of bandwidth- and power-efficient signal transmission techniques. There are a number of approaches to implementing such techniques, including analog, digital, mixed-signal, single-channel, or multichannel systems. In general, the digital implementations offer more advantages; however, a fully digital implementation is very difficult because of the very high clock speeds required. Multichannel techniques are used to reduce the sampling rate. One such technique, multicarrier modulation, divides the data into a number of low-rate channels that are stacked in frequency. Orthogonal frequency division multiplexing (OFDM), a form of multicarrier modulation, is being proposed for numerous systems, including mobile wireless and digital subscriber link communication systems. In response to this challenge, NASA Glenn Research Center's Communication Technology Division has developed an OFDM digital modem (modulator and demodulator) with an aggregate information throughput of 622 Mbps. The basic OFDM waveform is constructed by dividing an incoming data stream into four channels, each using either 16-ary quadrature amplitude modulation (16-QAM) or 8-phase shift keying (8-PSK). An efficient implementation for an OFDM architecture is being achieved using the combination of a discrete Fourier transform (DFT) at the transmitter to digitally stack the individual carriers, inverse DFT at the receiver to perform the frequency translations, and a polyphase filter to facilitate the pulse shaping.
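The DFT-based carrier stacking described above can be sketched in a much simplified form with NumPy. This is a generic IFFT/FFT round trip over an ideal channel; the carrier count, QPSK mapping, and cyclic-prefix length are assumptions for illustration, not the Glenn modem's four-channel 16-QAM/8-PSK design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_carriers = 64

# Random QPSK symbols, one per subcarrier (a stand-in for the modem's
# 16-QAM / 8-PSK mappings).
bits = rng.integers(0, 4, n_carriers)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Transmitter: an inverse FFT stacks the orthogonal subcarriers in time.
tx = np.fft.ifft(symbols)

# A cyclic prefix (here 16 samples) guards against multipath.
cp = 16
frame = np.concatenate([tx[-cp:], tx])

# Receiver: strip the prefix and FFT back to the frequency domain.
rx = np.fft.fft(frame[cp:])

print(np.allclose(rx, symbols))  # ideal channel: True
```

The orthogonality of the subcarriers is exactly what lets a single FFT separate them again at the receiver with no inter-carrier filtering.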
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2018-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometric distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
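The midpoint scheme among those compared can be illustrated with a minimal sketch. This shows only the 1-to-4 facet splitting; the step that actually reduces chordal error in practice, projecting the new midpoint vertices back onto the original CAD surface, is omitted here and the facet coordinates are arbitrary:

```python
def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tri):
    """Split one triangle into four using its edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def refine(mesh, levels=1):
    """Apply 1-to-4 midpoint subdivision `levels` times to a facet list."""
    for _ in range(levels):
        mesh = [small for tri in mesh for small in subdivide(tri)]
    return mesh

facet = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(len(refine(facet, 3)))  # each level quadruples the count: 4**3 = 64
```

The quadrupling of the facet count per level is why global refinement inflates STL file size so quickly, motivating the paper's comparison of selective schemes.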
Beland, Laurent Karim; Osetskiy, Yury N.; Stoller, Roger E.; ...
2015-02-07
Here, we present a comparison of the Kinetic Activation–Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly Kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation–Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16,000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible (e.g., it can handle amorphous systems) and more rigorous than SEAKMC's, while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not accessible without localizing calculations.
Matching of electron beams for conformal therapy of target volumes at moderate depths.
Zackrisson, B; Karlsson, M
1996-06-01
The basic requirements for conformal electron therapy are an accelerator with a wide range of energies and field shapes. The beams should be well characterised in a full 3-D dose planning system which has been verified for the geometries of the current application. Differences in the basic design of treatment units have been shown to have a large influence on beam quality and dosimetry. Modern equipment can deliver electron beams of good quality with a high degree of accuracy. A race-track microtron with minimised electron scattering and a multi-leaf collimator (MLC) for electron collimating will facilitate the isocentric technique as a general treatment technique for electrons. This will improve the possibility of performing combined electron field techniques in order to conform the dose distribution with no or minimal use of a bolus. Furthermore, the isocentric technique will facilitate multiple field arrangements that decrease the problems with distortion of the dose distribution due to inhomogeneities, etc. These situations are demonstrated by clinical examples where isocentric, matched electron fields for treatment of the nose, thyroid and thoracic wall have been used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandes, Justin L.; Rappaport, Carey M.; Sheen, David M.
2011-05-01
The cylindrical millimeter-wave imaging technique, developed at Pacific Northwest National Laboratory (PNNL) and commercialized by L-3 Communications/Safeview in the ProVision system, is currently being deployed in airports and other high-security locations to meet person-borne weapon and explosive detection requirements. While this system is efficient and effective in its current form, there are a number of areas in which detection performance may be improved through different reconstruction algorithms and sensing configurations. PNNL and Northeastern University have teamed together to investigate higher-order imaging artifacts produced by the current cylindrical millimeter-wave imaging technique using full-wave forward modeling and laboratory experimentation. Based on imaging results and scattered-field visualizations using the full-wave forward model, a new imaging system is proposed. The new system combines a multistatic sensor configuration with the generalized synthetic aperture focusing technique (GSAFT). Initial results show an improved ability to image areas of the body where target shading and specular and higher-order reflections render images produced by the monostatic system difficult to interpret.
Gelfusa, M; Gaudio, P; Malizia, A; Murari, A; Vega, J; Richetta, M; Gonzalez, S
2014-06-01
Recently, surveying large areas in an automatic way, for early detection of both harmful chemical agents and forest fires, has become a strategic objective of defence and public health organisations. The Lidar and Dial techniques are widely recognized as a cost-effective alternative to monitor large portions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi Event Locator, is applied to the problem of automatically identifying the time location of peaks in Lidar and Dial measurements for environmental physics applications. This analysis technique improves various aspects of the measurements, ranging from the resilience to drift in the laser sources to the increase of the system sensitivity. The method is also fully general, purely software, and can therefore be applied to a large variety of problems without any additional cost. The potential of the proposed technique is exemplified with the help of data of various instruments acquired during several experimental campaigns in the field.
Do, Thi Kieu Tiên; Hadji-Minaglou, Francis; Antoniotti, Sylvain; Fernandez, Xavier
2014-01-17
Chemical investigations on secondary metabolites in natural products chemistry require efficient isolation techniques for characterization purposes as well as for the evaluation of their biological properties. In the case of phytochemical studies, the performance of the techniques is critical (resolution and yield) since the products generally present a narrow range of polarity and physicochemical properties. Several techniques are currently available, but HPLC (preparative and semipreparative) is the most widely used. To compare the performance of semipreparative HPLC and HPTLC for the isolation of secondary metabolites in different types of extracts, we have chosen carvone from spearmint essential oil (Mentha spicata L.), resveratrol from Fallopia multiflora (Thunb.) Haraldson, and rosmarinic acid from rosemary (Rosmarinus officinalis L.) extracts. The comparison was based on the chromatographic separation, the purity and quantity of isolated compounds, the solvent consumption, and the duration and cost of the isolation operations. The results showed that semipreparative HPTLC can in some cases offer advantages over conventional semipreparative HPLC. Copyright © 2013 Elsevier B.V. All rights reserved.
Is cepstrum averaging applicable to circularly polarized electric-field data?
NASA Astrophysics Data System (ADS)
Tunnell, T.
1990-04-01
In FY 1988 a cepstrum averaging technique was developed to eliminate the ground reflections from charged particle beam (CPB) electromagnetic pulse (EMP) data. The work was done for the Los Alamos National Laboratory Project DEWPOINT at SST-7. The technique averages the cepstra of horizontally and vertically polarized electric field data (i.e., linearly polarized electric field data). This cepstrum averaging technique was programmed into the FORTRAN codes CEP and CEPSIM. Steve Knox, the principal investigator for Project DEWPOINT, asked the authors to determine if the cepstrum averaging technique could be applied to circularly polarized electric field data. The answer is, Yes, but some modifications may be necessary. There are two aspects to this answer that we need to address, namely, the Yes and the modifications. First, regarding the Yes, the technique is applicable to elliptically polarized electric field data in general: circular polarization is a special case of elliptical polarization. Secondly, regarding the modifications, greater care may be required in computing the phase in the calculation of the complex logarithm. The calculation of the complex logarithm is the most critical step in cepstrum-based analysis. This memorandum documents these findings.
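The complex-logarithm step the memo flags as most delicate can be sketched with NumPy, where `np.unwrap` supplies the continuous phase. The echo signal below is an illustrative stand-in for a ground reflection, not DEWPOINT data, and the pulse parameters are arbitrary:

```python
import numpy as np

def complex_cepstrum(x):
    """IFFT of the complex log of the spectrum; the phase must be
    unwrapped so that log|X| + j*phase is continuous (the critical step)."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

# A pulse plus a delayed, scaled copy produces a cepstral peak at the delay.
n = 256
t = np.arange(n)
pulse = np.exp(-0.05 * t) * np.sin(0.4 * t)
delay, a = 40, 0.5
echo = np.concatenate([np.zeros(delay), pulse[:-delay]])
ceps = complex_cepstrum(pulse + a * echo)
peak = int(np.argmax(ceps[10:n // 2])) + 10
print(peak)  # a peak at (or near) the 40-sample echo delay
```

Because the echo appears as an isolated cepstral peak, averaging cepstra over records (as CEP and CEPSIM do) reinforces the reflection signature while the direct-path contributions vary.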
Adaptive wall technology for minimization of wall interferences in transonic wind tunnels
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.
1988-01-01
Modern experimental techniques to improve free air simulations in transonic wind tunnels by use of adaptive wall technology are reviewed. Considered are the significant advantages of adaptive wall testing techniques with respect to wall interferences, Reynolds number, tunnel drive power, and flow quality. The application of these testing techniques relies on making the test section boundaries adjustable and using a rapid wall adjustment procedure. A historical overview shows how the disjointed development of these testing techniques, since 1938, is closely linked to available computer support. An overview of Adaptive Wall Test Section (AWTS) designs shows a preference for use of relatively simple designs with solid adaptive walls in 2- and 3-D testing. Operational aspects of AWTS's are discussed with regard to production type operation where adaptive wall adjustments need to be quick. Both 2- and 3-D data are presented to illustrate the quality of AWTS data over the transonic speed range. Adaptive wall technology is available for general use in 2-D testing, even in cryogenic wind tunnels. In 3-D testing, more refinement of the adaptive wall testing techniques is required before more widespread use can be planned.
General technique for discrete retardation-modulation polarimetry
NASA Technical Reports Server (NTRS)
Saxena, Indu
1993-01-01
The general theory and rigorous solutions of the Stokes parameters of light of a new technique in time-resolved ellipsometry are outlined. In this technique the phase of the linear retarder is stepped over three discrete values over a time interval for which the Stokes vector is determined. The technique has an advantage over synchronous detection techniques, as it can be implemented as a digitizable system.
Infrastructure stability surveillance with high resolution InSAR
NASA Astrophysics Data System (ADS)
Balz, Timo; Düring, Ralf
2017-02-01
The construction of new infrastructure in largely unknown and difficult environments, as is necessary for the construction of the New Silk Road, can lead to decreased stability along the construction site, increasing landslide risk and deformation caused by surface motion. This generally requires a thorough pre-analysis and consecutive surveillance of the deformation patterns to ensure the stability and safety of the infrastructure projects. Interferometric SAR (InSAR) and the derived techniques of multi-baseline InSAR are very powerful tools for large-area observation of surface deformation patterns. With InSAR and derived techniques, the topographic height and the surface motion can be estimated for large areas, making it an ideal tool for supporting the planning, construction, and safety surveillance of new infrastructure elements in remote areas.
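The core InSAR relation, converting an unwrapped interferometric phase change into line-of-sight displacement, is simple enough to state directly. The 5.6 cm C-band wavelength and the sign convention below are assumed example values, not parameters from any particular mission discussed here:

```python
import math

# Repeat-pass InSAR measures two-way travel, hence the factor of 4*pi.
wavelength = 0.056  # metres; assumed C-band (e.g., Sentinel-1) wavelength

def los_displacement(dphi):
    """Unwrapped phase change (radians) to line-of-sight motion (metres)."""
    return -wavelength * dphi / (4 * math.pi)

# Half a phase cycle maps to a quarter wavelength of line-of-sight motion.
print(round(los_displacement(math.pi) * 1000, 1))  # millimetres
```

This millimetre-per-fringe sensitivity is what makes multi-baseline InSAR attractive for surveying slow deformation along remote construction corridors.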
CD process control through machine learning
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2016-10-01
For the specific requirements of the 14nm and 20nm site applications a new CD map approach was developed at the AMTC. This approach relies on a well-established machine learning technique called recursive partitioning. Recursive partitioning is a powerful technique which creates a decision tree by successively testing whether the quantity of interest can be explained by one of the supplied covariates. The test performed is generally a statistical test with a pre-supplied significance level. Once the test indicates a significant association between the variable of interest and a covariate, a split is performed at a threshold value which minimizes the variation within the newly attained groups. This partitioning recurs until either no significant association can be detected or the resulting subgroup size falls below a pre-supplied level.
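A minimal sketch of the splitting step, assuming a numeric response and a single covariate, and substituting a simple group-size stopping rule for the formal significance test described above:

```python
import statistics

def best_split(xs, ys):
    """Threshold on x minimizing total within-group variation of y."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if len(left) < 2 or len(right) < 2:
            continue
        cost = (len(left) * statistics.pvariance(left)
                + len(right) * statistics.pvariance(right))
        if best is None or cost < best[0]:
            best = (cost, t)
    return best

def partition(xs, ys, min_size=10, depth=0):
    """Recursively split until groups get too small (stand-in for the test)."""
    split = best_split(xs, ys)
    if split is None or len(ys) < 2 * min_size:
        print("  " * depth + f"leaf: n={len(ys)}, mean={statistics.mean(ys):.2f}")
        return
    _, t = split
    print("  " * depth + f"split at x < {t}")
    left = [(x, y) for x, y in zip(xs, ys) if x < t]
    right = [(x, y) for x, y in zip(xs, ys) if x >= t]
    partition(*zip(*left), min_size, depth + 1)
    partition(*zip(*right), min_size, depth + 1)

# A step in y at x = 10 is recovered as the first split.
partition(list(range(20)), [0.0] * 10 + [1.0] * 10)
```

Production implementations (e.g., CART or conditional inference trees) replace the size rule with the significance test the text describes, which also guards against overfitting to noise in the CD data.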
NASA Technical Reports Server (NTRS)
Miller, James G.
1990-01-01
An ultrasonic measurement system employed in the experimental interrogation of the anisotropic properties (through the measurement of the elastic stiffness constants) of the uniaxial graphite-epoxy composites is presented. The continuing effort for the development of improved visualization techniques for physical parameters is discussed. The background is set for the understanding and visualization of the relationship between the phase and energy/group velocity for propagation in high-performance anisotropic materials by investigating the general requirements imposed by the classical wave equation. The consequences are considered when the physical parameters of the anisotropic material are inserted into the classical wave equation by a linear elastic model. The relationship is described between the phase velocity and the energy/group velocity three dimensional surfaces through graphical techniques.
Preparation of PEMFC Electrodes from Milligram-Amounts of Catalyst Powder
Yarlagadda, Venkata; McKinney, Samuel E.; Keary, Cristin L.; ...
2017-06-03
Development of electrocatalysts with higher activity and stability is one of the highest priorities in enabling cost-competitive hydrogen-air fuel cells. Although the rotating disk electrode (RDE) technique is widely used to study new catalyst materials, it has often been shown to be an unreliable predictor of catalyst performance in actual fuel cell operation. Fabrication of membrane electrode assemblies (MEA) for evaluation which are more representative of actual fuel cells generally requires relatively large amounts (>1 g) of catalyst material, which are often not readily available in early stages of development. In this study, we present two MEA preparation techniques using as little as 30 mg of catalyst material, providing methods to conduct more meaningful MEA-based tests using research-level catalyst amounts.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio
2017-01-01
Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137
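The cell update common to Bayesian occupancy grids can be sketched in a few lines. This shows only log-odds fusion of inverse-sensor-model probabilities; the full Bayesian Occupancy Filter compared in the paper additionally tracks velocity distributions per cell, which is omitted here, and the reading values are arbitrary:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(log_odds, p_meas):
    """Fuse one inverse-sensor-model reading into a cell's log-odds."""
    return log_odds + logit(p_meas)

def occupancy(log_odds):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# One cell read as likely occupied three times, then once as likely free.
cell = 0.0  # prior log-odds: p = 0.5
for reading in (0.7, 0.7, 0.7, 0.3):
    cell = update_cell(cell, reading)
print(round(occupancy(cell), 3))  # net evidence of occupancy: 0.845
```

Because the update is an independent addition per cell, the whole grid parallelizes naturally, which is precisely what makes GPGPU and FPGA implementations attractive targets for comparison.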
The general 2-D moments via integral transform method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; ...
2016-02-10
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints as bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations, to provide for the first time a global optimization based rank-list of distillation configurations.
Inductive System Health Monitoring
NASA Technical Reports Server (NTRS)
Iverson, David L.
2004-01-01
The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS uses nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. IMS is able to monitor the system by comparing real time operational data with these classes. We present a description of the learning and monitoring methods used by IMS and summarize some recent IMS results.
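As a rough illustration of the approach the abstract describes (not NASA's actual IMS implementation), nominal data vectors can be grouped into classes summarized as per-parameter ranges, and monitoring then flags any vector that lies too far from every learned class. The function names and the tolerance parameter below are hypothetical:

```python
import numpy as np

def box_distance(x, c):
    """Euclidean distance from vector x to the hyperbox c = [lo, hi]
    (zero if x lies inside the box)."""
    lo, hi = c
    return float(np.linalg.norm(np.maximum(0, np.maximum(lo - x, x - hi))))

def build_knowledge_base(nominal, tol):
    """Greedy sketch of learning: each nominal vector either falls within
    `tol` of an existing class (whose hyperbox is expanded to contain it)
    or seeds a new class."""
    classes = []
    for x in nominal:
        for c in classes:
            if box_distance(x, c) <= tol:
                c[0] = np.minimum(c[0], x)
                c[1] = np.maximum(c[1], x)
                break
        else:
            classes.append([x.copy(), x.copy()])
    return classes

def is_anomalous(x, classes, tol):
    """Monitoring: flag x if it is far from every learned class."""
    return all(box_distance(x, c) > tol for c in classes)
```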
Staniec, Kamil; Habrych, Marcin
2016-07-19
The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network has been presented, with a distinction of three segments: local (based on ZigBee, Ethernet, and ModBus techniques), core (based on cellular technologies), and storage/application. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar power. As follows from measurements of the energy efficiency of these sources, they should be applied with care, and a detailed power budget should be performed, since their real performance may turn out to be far from expected. This, in turn, may negatively affect the operation of chemical sensors implemented in the network in particular, as they often require additional heating.
Pattern Recognition Using Artificial Neural Network: A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANN and identify research topics and applications which are at the forefront of this exciting and challenging field.
Computationally efficient stochastic optimization using multiple realizations
NASA Astrophysics Data System (ADS)
Bayer, P.; Bürger, C. M.; Finkel, M.
2008-02-01
The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given number of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire number of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
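The stack-ordering idea can be sketched as follows: evaluate a candidate design against realizations in their current stack order, promote any realization the candidate violates to the front of the stack (so future candidates hit "critical" realizations first), and abort early once the reliability allowance is exhausted. This is a hypothetical minimal sketch, not the authors' exact algorithm:

```python
def evaluate_with_stack(candidate, stack, violates, max_failures):
    """Evaluate `candidate` against an ordered stack of realizations.
    Each violated realization is promoted to the front of the stack,
    and evaluation aborts as soon as more than `max_failures`
    realizations are violated.  Returns (feasible, model_runs_used)."""
    failures, runs = 0, 0
    for r in list(stack):          # iterate over a snapshot
        runs += 1
        if violates(candidate, r):
            failures += 1
            stack.remove(r)
            stack.insert(0, r)     # promote the critical realization
            if failures > max_failures:
                return False, runs
    return True, runs
```

The savings come from the promotion step: once a critical realization sits at the front, an infeasible candidate is rejected after very few model runs.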
Investigation of direct solar-to-microwave energy conversion techniques
NASA Technical Reports Server (NTRS)
Chatterton, N. E.; Mookherji, T. K.; Wunsch, P. K.
1978-01-01
Identification of alternative methods of producing microwave energy from solar radiation, for purposes of directing power to the Earth from space, is investigated. Specifically, methods of converting optical radiation into microwave radiation by the most direct means are investigated. Approaches based on demonstrated device functioning and basic phenomenologies are developed. No system concept was developed that is competitive with current baseline concepts. The most direct methods of conversion appear to require an initial step of producing coherent laser radiation. Other methods generally require the production of electron streams for use in solid-state or cavity-oscillator systems. Further development is suggested as worthwhile for the proposed devices and for concepts utilizing a free-electron stream for the intra-space-station power transport mechanism.
A study and experiment plan for digital mobile communication via satellite
NASA Technical Reports Server (NTRS)
Jones, J. J.; Craighill, E. J.; Evans, R. G.; Vincze, A. D.; Tom, N. N.
1978-01-01
The viability of mobile communications is examined within the context of a frequency-division multiple-access, single-channel-per-carrier satellite system emphasizing digital techniques to serve a large population of users. The intent is to provide mobile users with a grade of service consistent with the requirements for remote, rural (perhaps emergency) voice communications, but which approaches toll-quality speech. A traffic model is derived on which to base the determination of the maximum number of satellite channels required to provide the anticipated level of service. Various voice digitization and digital modulation schemes are reviewed, along with a general link analysis of the mobile system. Demand-assignment multiple-access considerations and analysis tradeoffs are presented. Finally, the complete configuration is described.
Laser diode combining for free space optical communication
NASA Technical Reports Server (NTRS)
Mecherle, G. Stephen
1986-01-01
The maximization of photon delivery to a distant collector in free space optical communications systems calls for a laser diode-combining technique employing wavelength and/or polarization as the bases of its operation. Design considerations for such a combiner encompass high throughput efficiency, diffraction-limited angular divergence, and reasonable volume constraints. Combiners are presently found to require a generalized Strehl ratio concept which includes relative source misalignment; diffraction grating combiners may have a limited number of laser sources which can meet spectral requirements. Methods for the incorporation of a combiner into a communication system are compared. Power combining is concluded to be the best tradeoff of performance and complexity for all systems, except those that are severely limited by either background radiation or component bandwidth.
Arc-evaporated carbon films: optical properties and electron mean free paths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, M.W.; Arakawa, E.T.; Dolfini, S.M.
1984-01-01
This paper describes briefly a method which can be used to calculate inelastic mean free paths for electrons with energies in the range of interest for the interpretation of surface phenomena. This method requires a knowledge of the optical properties of the material for the photon energies associated with the oscillator strength of the valence electrons. However, in general it is easier to obtain accurate values of the required properties than it is to measure the electron attenuation lengths in the energy region of interest. This technique, demonstrated here for arc-evaporated carbon, can be used for any material for which the optical properties can be measured over essentially the whole energy range corresponding to the valence electron response.
Contribution of concentrator photovoltaic installations to grid stability and power quality
NASA Astrophysics Data System (ADS)
del Toro García, Xavier; Roncero-Sánchez, Pedro; Torres, Alfonso Parreño; Vázquez, Javier
2012-10-01
Large-scale integration of Photovoltaic (PV) generation systems, including Concentrator Photovoltaic (CPV) technologies, will require the contribution and support of these technologies to the management and stability of the grid. New regulations and grid codes for PV installations in countries such as Spain have recently included dynamic voltage control support during faults. The PV installation must stay connected to the grid during voltage dips and inject reactive power in order to enhance the stability of the system. The existing PV inverter technologies based on the Voltage-Source Converter (VSC) are in general well suited to provide advanced grid-support characteristics. Nevertheless, new advanced control schemes and monitoring techniques will be necessary to meet the most demanding requirements.
Perturbations and 3R in carbon management.
Pant, Deepak; Sharma, Virbala; Singh, Pooja; Kumar, Manoj; Giri, Anand; Singh, M P
2017-02-01
Perturbations in various carbon pools, such as the biological, geological, oceanic, and missing carbon sinks, affect global carbon data but are generally neglected or ignored in routine calculations. These natural and anthropogenic events need to be considered before projecting a sustainable carbon management plan. Such plans have both general and experimental aspects. General plans should focus on (a) minimizing emissions; (b) maximizing environmentally sound reuse, reduction, and recycling; (c) effective treatment; and (d) converting carbon into valuable products with atom economy. Experimental carbon management plans involve various biological and chemical techniques with limitations in terms of research maturity and economic feasibility. Chemical options have the benefits of higher productivity and a wider product range, but they suffer from higher energy requirements and environmental unfriendliness. In contrast, biological options are more selective and less energy intensive, but their productivity is very low. Hence, there is a need for hybrid processes in which the benefits of both options, biological and chemical, can be reaped. In view of the above, this review aims to highlight the various perturbations in the global carbon cycle and their effects; to examine the currently practiced options of carbon management, specifically in light of the 3R principle; and to propose new hybrid methods, formed by compatible combinations of chemical and biological processes, to develop better and safer carbon management. These methods are hypothetical, so they may require further research and validation, but they may provide a comprehensive base for developing such management methods.
Preparing the NDE engineers of the future: Education, training, and diversity
NASA Astrophysics Data System (ADS)
Holland, Stephen D.
2017-02-01
As quantitative NDE has matured and entered the mainstream, it has created an industry need for engineers who can select, evaluate, and qualify NDE techniques to satisfy quantitative engineering requirements. NDE as a field is cross-disciplinary with major NDE techniques relying on a broad spectrum of physics disciplines including fluid mechanics, electromagnetics, mechanical waves, and high energy physics. An NDE engineer needs broad and deep understanding of the measurement physics across modalities, a general engineering background, and familiarity with shop-floor practices and tools. While there are a wide range of certification and training programs worldwide for NDE technicians, there are few programs aimed at engineers. At the same time, substantial demographic shifts are underway with many experienced NDE engineers and technicians nearing retirement, and with new generations coming from much more diverse backgrounds. There is a need for more and better education opportunities for NDE engineers. Both teaching and learning NDE engineering are inherently challenging because of the breadth and depth of knowledge required. At the same time, sustaining the field in a more diverse era will require broadening participation of previously underrepresented groups. The QNDE 2016 conference in Atlanta, GA included a session on NDE education, training, and diversity. This paper summarizes the outcomes and discussion from this session.
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimate by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
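A loose sketch of the "functional form plus additive noise" idea: fit a low-order deformation model to each coordinate of a marker trajectory over the observation interval by least squares and subtract it, leaving the residual modeling error. The paper's activity-dependent models are more elaborate; the polynomial form and all names here are hypothetical:

```python
import numpy as np

def remove_deformation(t, marker_pos, degree=2):
    """Fit a polynomial "deformation model" to each coordinate of a
    marker trajectory (shape (N, 3)) over the observation interval t
    and return the residual after subtracting the fitted model."""
    corrected = np.empty_like(marker_pos)
    for k in range(marker_pos.shape[1]):
        coeffs = np.polyfit(t, marker_pos[:, k], degree)
        corrected[:, k] = marker_pos[:, k] - np.polyval(coeffs, t)
    return corrected
```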
Phase estimation for magnetic resonance imaging near metal prostheses
NASA Astrophysics Data System (ADS)
Bones, Philip J.; King, Laura J.; Millane, Rick P.
2015-09-01
Magnetic resonance imaging (MRI) has the potential to be the best technique for assessing complications in patients with metal orthopedic implants. The presence of fat can obscure definition of the other soft tissues in MRI images, so fat suppression is often required. However, the performance of existing fat suppression techniques is inadequate near implants, due to very significant magnetic field perturbations induced by the metal. The three-point Dixon technique is potentially a method of choice as it is able to suppress fat in the presence of inhomogeneities, but the success of this technique depends on being able to accurately calculate the phase shift. This is generally done using phase unwrapping and/or iterative reconstruction algorithms. Most current phase unwrapping techniques assume that the phase function is slowly varying and phase differences between adjacent points are limited to less than π radians in magnitude. Much greater phase differences can be present near metal implants. We present our experience with two phase unwrapping techniques which have been adapted to use prior knowledge of the implant. The first method identifies phase discontinuities before recovering the phase along paths through the image. The second method employs a transform to find the least squares solution to the unwrapped phase. Simulation results indicate that the methods show promise.
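The classic assumption the abstract mentions, that true neighbouring phase differences stay below π in magnitude, underlies Itoh's one-dimensional unwrapping algorithm, sketched below. This is illustrative only; the adapted, prior-knowledge-based methods in the paper go beyond it precisely because the assumption fails near metal:

```python
import math

def unwrap_1d(phases):
    """Itoh's 1-D phase unwrapping: wrap each neighbouring difference
    into (-pi, pi] and accumulate.  Valid only when the true difference
    between adjacent samples is below pi in magnitude."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))   # wrap into (-pi, pi]
        out.append(out[-1] + d)
    return out
```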
Roumeliotis, M; Long, K; Phan, T; Graham, D; Quirk, S
2018-06-05
The aim of this study was to understand the international standard practice for radiation therapy treatment techniques and clinical priorities for institutions including the internal mammary lymph nodes (IMLNs) in the target volume for patients with synchronous bilateral breast cancer. An international survey was developed to include questions that would provide awareness of favored treatment techniques, treatment planning and delivery resource requirements, and the clinical priorities that may lead to the utilization of preferred treatment techniques. Of the 135 respondents, 82 indicated that IMLNs are regularly included in the target volume for radiation therapy (IMLN inclusion) when the patient is otherwise generally indicated for regional nodal irradiation. Of the 82 respondents who regularly include IMLNs, five were excluded because they do not treat this population synchronously. Among the remaining 77 respondents, the institutional standard of care varied significantly, though VMAT (34%) and combined static photon and electron fields (21%) were the most commonly utilized techniques. Respondents preferentially selected target volume coverage (70%) as the most important clinical priority, followed by normal tissue sparing (25%). The results of the survey indicate that IMLN inclusion in radiation therapy has not yet been comprehensively adopted. Furthermore, no consensus on best practice for radiation therapy treatment techniques has been reached.
Investigation of laser Doppler anemometry in developing a velocity-based measurement technique
NASA Astrophysics Data System (ADS)
Jung, Ki Won
2009-12-01
Acoustic properties, such as the characteristic impedance and the complex propagation constant, of porous materials have traditionally been characterized with pressure-based measurement techniques using microphones. Although microphone techniques have evolved since their introduction, the most general form employs two microphones to characterize the acoustic field of one continuous medium. The shortcomings of determining the acoustic field from only two microphones can be overcome by using numerous microphones; however, the use of many microphones requires a careful and intricate calibration procedure. This dissertation uses laser Doppler anemometry (LDA) to establish a new measurement technique that resolves the issues microphone techniques have. First, it is based on a single sensor, so calibration is unnecessary when only the overall ratio of the acoustic field is required for the characterization of a system; this includes measurements of the characteristic impedance and the complex propagation constant. Second, it can handle multiple positional measurements without calibrating the signal at each position. Third, it can measure the three-dimensional components of velocity, even in a system with complex geometry. Fourth, it is flexibly adaptable and not restricted to a certain type of apparatus, provided the apparatus is transparent. LDA is known to possess several disadvantages, such as the requirement of a transparent apparatus, high cost, and the need for seeding particles. The technique, based on LDA combined with a curve-fitting algorithm, is validated through measurements on three systems. First, the complex propagation constant of air is measured in a rigidly terminated cylindrical pipe, which has very low dissipation. Second, the radiation impedance of an open-ended pipe is measured.
These two parameters can be characterized by the ratio of acoustic field measured at multiple locations. Third, the power dissipated in a variable RLC load is measured. The three experiments validate the LDA technique proposed. The utility of the LDA method is then extended to the measurement of the complex propagation constant of the air inside a 100 ppi reticulated vitreous carbon (RVC) sample. Compared to measurements in the available studies, the measurement with the 100 ppi RVC sample supports the LDA technique in that it can achieve a low uncertainty in the determined quantity. This dissertation concludes with using the LDA technique for modal decomposition of the plane wave mode and the (1,1) mode that are driven simultaneously. This modal decomposition suggests that the LDA technique surpasses microphone-based techniques, because they are unable to determine the acoustic field based on an acoustic model with unconfined propagation constants for each modal component.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takao, S; Matsuzaki, Y; Matsuura, T
Purpose: The spot-scanning technique has been utilized to achieve conformal dose distributions for large and complicated tumors. This technique generally does not require patient-specific devices such as apertures and compensators. The commercially available spot-scanning proton therapy (SSPT) systems, however, cannot deliver proton beams to regions shallower than 4 g/cm2. Therefore, some range compensation device is required to treat superficial tumors with SSPT. This study shows a dosimetric comparison of the following treatment techniques: (i) with a tabletop bolus, (ii) with a nozzle-mounted applicator, and (iii) without any devices, using the intensity-modulated proton therapy (IMPT) technique. Methods: The applicator, composed of a combination of a mini-ridge filter and a range shifter, has been manufactured by Hitachi, Ltd., and the tabletop bolus was made by .decimal, Inc. Both devices have been clinically implemented in our facility. Three patients with liver tumors close to the skin surface were examined in this study. Each treatment plan was optimized so that the prescription dose of 76 Gy(RBE) or 66 Gy(RBE) would be delivered to 99% of the clinical target volume in 20 fractions. Three beams were used for the tabletop bolus plan and the IMPT plan, whereas two beams were used in the applicator plan because the available gantry angles were limited due to potential collision with the patient and couch. The normal liver, colon, and skin were considered as organs at risk (OARs). Results: The target heterogeneity index (HI = D5/D95) was 1.03 on average for each planning technique. The mean dose to the normal liver was considerably less than 20 Gy(RBE) in all cases. The dose to the skin could be reduced by 20 Gy(RBE) on average in the IMPT plan compared to the applicator plan. Conclusion: It has been confirmed that all treatment techniques met the dosimetric criteria for the OARs and could be implemented clinically.
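The heterogeneity index quoted in the results can be computed from a voxel dose distribution as sketched below (a hypothetical illustration, not the planning system's code; D5 and D95 denote the minimum doses received by the hottest 5% and 95% of the volume, i.e. the 95th and 5th dose percentiles):

```python
import numpy as np

def heterogeneity_index(dose):
    """HI = D5 / D95 computed from a flat array of voxel doses."""
    d5 = np.percentile(dose, 95)   # dose exceeded by only 5% of voxels
    d95 = np.percentile(dose, 5)   # dose exceeded by 95% of voxels
    return d5 / d95
```

A perfectly uniform target gives HI = 1; the reported average of 1.03 indicates a nearly homogeneous target dose.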
SSAGES: Software Suite for Advanced General Ensemble Simulations.
Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J
2018-01-28
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulation packages. SSAGES allows facile application of a variety of enhanced sampling techniques, including adaptive biasing force, string methods, and forward flux sampling, that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
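As one concrete example of the enhanced-sampling family such suites implement, a metadynamics-style bias accumulates Gaussian hills along a collective variable, discouraging revisits to already-sampled states. The toy class below is a hypothetical sketch of that idea only; it is not SSAGES's API, and the parameter names are invented:

```python
import math

class MetadynamicsBias:
    """Toy hill-deposition bias on a single collective variable (CV)."""
    def __init__(self, height=0.1, width=0.2):
        self.height, self.width, self.centers = height, width, []

    def deposit(self, cv):
        """Drop a Gaussian hill at the current CV value."""
        self.centers.append(cv)

    def bias(self, cv):
        """Total bias energy at a CV value: sum of deposited hills."""
        return sum(self.height * math.exp(-(cv - c) ** 2 /
                                          (2 * self.width ** 2))
                   for c in self.centers)
```

In a real run the deposit step is called periodically during dynamics, and the negative of the converged bias estimates the free-energy surface along the CV.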
A workstation-based evaluation of a far-field route planner for helicopters
NASA Technical Reports Server (NTRS)
Warner, David N., Jr.; Moran, Francis J.
1991-01-01
Helicopter flight missions at very low, nap-of-the-earth altitudes place a heavy workload on the pilot. To aid in reducing this workload, Ames Research Center has been investigating various types of automated route planners. As part of an automated preflight mission planner, a route-planner algorithm aids in selecting the overall (far-field) route to be flown. During the mission, the route planner can be used to replan a new route in case of unexpected threats or a change in mission requirements. An evaluation of a candidate route-planning algorithm based on dynamic programming techniques is described. This algorithm meets most of the requirements for route planning, both preflight and during the mission. In general, the requirements are to minimize the distance and/or fuel and the deviation from a flight time schedule, and the route must be flyable within the constraints of available fuel and time.
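A far-field planner of the kind evaluated here can be sketched as a dynamic-programming (Dijkstra-style) search over a grid whose per-cell costs fold distance and threat penalties together. This is a hypothetical minimal version for illustration, not the Ames algorithm:

```python
import heapq

def plan_route(cost, start, goal):
    """Minimum-cost route over a grid of per-cell traversal costs
    (e.g. distance plus a threat penalty), 4-connected moves.
    Returns the total cost of the cheapest start-to-goal route."""
    rows, cols = len(cost), len(cost[0])
    best = {start: cost[start[0]][start[1]]}
    heap = [(best[start], start)]
    while heap:
        c, (r, k) = heapq.heappop(heap)
        if (r, k) == goal:
            return c
        if c > best.get((r, k), float("inf")):
            continue                      # stale heap entry
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols:
                nc = c + cost[nr][nk]
                if nc < best.get((nr, nk), float("inf")):
                    best[(nr, nk)] = nc
                    heapq.heappush(heap, (nc, (nr, nk)))
    return float("inf")
```

Replanning around an unexpected threat amounts to raising the affected cell costs and rerunning the search.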
Hay, L.; Knapp, L.
1996-01-01
Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components, including simulation modelling, spatial analysis, and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS, and an SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.
Networking and AI systems: Requirements and benefits
NASA Technical Reports Server (NTRS)
1988-01-01
The price/performance benefits of network systems are well documented. The ability to share expensive resources sold timesharing for mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized into open-system requirements for hardware, software, applications, and tools. The ability to interconnect a variety of vendor products has led to the specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance, and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising for both performance and scalability.
Pedreira, Denise A L; Zanon, Nelci; de Sá, Renato A M; Acacio, Gregório L; Ogeda, Edilson; Belem, Teresa M L O U; Chmait, Ramen H; Kontopoulos, Eftichia; Quintero, Ruben A
2014-11-01
To report our preliminary clinical experience in the antenatal correction of open spina bifida (OSB) using a fetoscopic approach and a simplified closure technique. Four fetuses with lumbosacral defects were operated on in utero at 25 to 27 weeks. Surgeries were performed percutaneously under general anesthesia using three trocars and partial carbon dioxide insufflation. After dissection of the neural placode, the surrounding skin was closed over a cellulose patch using a single continuous stitch. Surgical closure was successful in three of the four cases. All successful cases showed improvement of the hindbrain herniation, and no neonatal neurosurgical repair was required in two cases. Delivery occurred between 31 and 33 weeks, and no fetal or neonatal deaths occurred. Ventriculoperitoneal shunting was not needed in two of the three successful cases. Our preliminary experience suggests that definitive fetoscopic repair of OSB is feasible using our innovative surgical technique. A phase I trial of the fetoscopic correction of OSB with this technique is currently being conducted.
The Role of 3 Tesla MRA in the Detection of Intracranial Aneurysms
Kapsalaki, Eftychia Z.; Rountas, Christos D.; Fountas, Kostas N.
2012-01-01
Intracranial aneurysms constitute a common pathological entity, affecting approximately 1–8% of the general population. Their early detection is essential for their prompt treatment. Digital subtraction angiography is considered the imaging method of choice. However, other noninvasive methodologies such as CTA and MRA have been employed in the investigation of patients with suspected aneurysms. MRA is a noninvasive angiographic modality requiring no radiation exposure. However, its sensitivity and diagnostic accuracy were initially inadequate. Several MRA techniques have been developed for overcoming all these drawbacks and for improving its sensitivity. 3D TOF MRA and contrast-enhanced MRA are the most commonly employed techniques. The introduction of 3 T magnetic field further increased MRA's sensitivity, allowing detection of aneurysms smaller than 3 mm. The development of newer MRA techniques may provide valuable information regarding the flow characteristics of an aneurysm. Meticulous knowledge of MRA's limitations and pitfalls is of paramount importance for avoiding any erroneous interpretation of its findings. PMID:22292121
Anticipation of the landing shock phenomenon in flight simulation
NASA Technical Reports Server (NTRS)
Mcfarland, Richard E.
1987-01-01
An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with significant probability, these initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground strike problem was solved by a technique, described here, called anticipation equations. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.
Strange quark contribution to the nucleon
NASA Astrophysics Data System (ADS)
Darnell, Dean F.
The strangeness contribution to the electric and magnetic properties of the nucleon has been under investigation experimentally for many years. Lattice Quantum Chromodynamics (LQCD) gives theoretical predictions of these measurements by implementing the continuum gauge theory on a discrete, mathematical Euclidean space-time lattice which provides a cutoff removing the ultra-violet divergences. In this dissertation we will discuss effective methods using LQCD that will lead to a better determination of the strangeness contribution to the nucleon properties. Strangeness calculations are demanding technically and computationally. Sophisticated techniques are required to carry them to completion. In this thesis, new theoretical and computational methods for this calculation such as twisted mass fermions, perturbative subtraction, and General Minimal Residual (GMRES) techniques which have proven useful in the determination of these form factors will be investigated. Numerical results of the scalar form factor using these techniques are presented. These results give validation to these methods in future calculations of the strange quark contribution to the electric and magnetic form factors.
Continuous welding of unidirectional fiber reinforced thermoplastic tape material
NASA Astrophysics Data System (ADS)
Schledjewski, Ralf
2017-10-01
Continuous welding techniques like thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality. The effects of material imperfections and process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. Solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on target hardware are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.
Botulinum toxin type B micromechanosensor
Liu, W.; Montana, Vedrana; Chapman, Edwin R.; Mohideen, U.; Parpura, Vladimir
2003-01-01
Botulinum neurotoxin (BoNT) types A, B, E, and F are toxic to humans; early and rapid detection is essential for adequate medical treatment. Presently available tests for detection of BoNTs, although sensitive, require hours to days. We report a BoNT-B sensor whose properties allow detection of BoNT-B within minutes. The technique relies on the detection of an agarose bead detachment from the tip of a micromachined cantilever resulting from BoNT-B action on its substratum, the synaptic protein synaptobrevin 2, attached to the beads. The mechanical resonance frequency of the cantilever is monitored for the detection. To suspend the bead off the cantilever we use synaptobrevin's molecular interaction with another synaptic protein, syntaxin 1A, that was deposited onto the cantilever tip. Additionally, this bead detachment technique is general and can be used in any displacement reaction, such as in receptor-ligand pairs, where the introduction of one chemical leads to the displacement of another. The technique is of broad interest and will find uses outside toxicology. PMID:14573702
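As a rough illustration of why bead detachment is visible in the cantilever's mechanical resonance, the sketch below models the cantilever as a simple harmonic oscillator. The spring constant and masses are invented placeholder values, not figures from the paper.

```python
import math

def resonance_hz(k, m_eff):
    """Resonance frequency (Hz) of a cantilever modeled as a harmonic oscillator."""
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

# Illustrative values only (not from the paper): a soft cantilever with an
# attached agarose bead that adds mass until BoNT-B cleaves the tether.
k = 0.05            # spring constant, N/m
m_cant = 5e-12      # effective cantilever mass, kg
m_bead = 2e-12      # added bead mass, kg

f_loaded = resonance_hz(k, m_cant + m_bead)   # bead attached
f_free = resonance_hz(k, m_cant)              # bead detached by toxin action

# Detachment removes mass, so the resonance frequency jumps upward;
# monitoring the resonance is therefore a proxy for toxin activity.
shift = f_free - f_loaded
print(f_loaded, f_free, shift)
```

With these placeholder numbers the detachment produces a kilohertz-scale upward jump, which is easy to resolve on a spectrum analyzer.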
MALDI mass spectrometry imaging, from its origins up to today: the state of the art.
Francese, Simona; Dani, Francesca R; Traldi, Pietro; Mastrobuoni, Guido; Pieraccini, Giuseppe; Moneti, Gloriano
2009-02-01
Mass Spectrometry (MS) has a number of features, namely sensitivity, high dynamic range, high resolution, and versatility, which make it a very powerful analytical tool for a wide spectrum of applications spanning all the life science fields. Among all the MS techniques, MALDI imaging mass spectrometry (MALDI MSI) is currently one of the most exciting, both for its rapid technological improvements and for its great potential in high-impact bioscience fields. Here, MALDI MSI general principles are described, along with technical and instrumental details as well as application examples. Imaging MS instruments and imaging mass spectrometric techniques other than MALDI are presented, along with examples of their use. As well as reporting MSI successes in several bioscience fields, an attempt is made to take stock of what has been achieved so far with this technology and to discuss the analytical and technological advances required for MSI to be applied as a routine technique in clinical diagnostics, clinical monitoring and drug discovery.
Hamiltonian Analysis of Subcritical Stochastic Epidemic Dynamics
2017-01-01
We extend a technique of approximation of the long-term behavior of a supercritical stochastic epidemic model, using the WKB approximation and a Hamiltonian phase space, to the subcritical case. The limiting behavior of the model and approximation are qualitatively different in the subcritical case, requiring a novel analysis of the limiting behavior of the Hamiltonian system away from its deterministic subsystem. This yields a novel, general technique of approximation of the quasistationary distribution of stochastic epidemic and birth-death models and may lead to techniques for analysis of these models beyond the quasistationary distribution. For a classic SIS model, the approximation found for the quasistationary distribution is very similar to published approximations but not identical. For a birth-death process without depletion of susceptibles, the approximation is exact. Dynamics on the phase plane similar to those predicted by the Hamiltonian analysis are demonstrated in cross-sectional data from trachoma treatment trials in Ethiopia, in which declining prevalences are consistent with subcritical epidemic dynamics. PMID:28932256
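For a finite population, the quasistationary distribution being approximated can also be computed exactly by linear algebra, which gives a benchmark against which WKB-type approximations can be checked. The sketch below does this for a subcritical SIS chain; the rates and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def sis_qsd(N, beta, gamma):
    """Quasistationary distribution of the stochastic SIS model on states
    n = 1..N (n = 0 is absorbing), computed as the dominant left eigenvector
    of the generator restricted to the transient states."""
    Q = np.zeros((N, N))
    for i, n in enumerate(range(1, N + 1)):
        up = beta * n * (N - n) / N      # infection rate, n -> n+1
        down = gamma * n                 # recovery rate,  n -> n-1
        if n < N:
            Q[i, i + 1] = up
        if n > 1:
            Q[i, i - 1] = down
        Q[i, i] = -(up + down)           # diagonal includes absorption from n = 1
    # Left eigenvector of Q for the eigenvalue with largest real part.
    vals, vecs = np.linalg.eig(Q.T)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Subcritical case (beta < gamma): the QSD piles up near n = 1.
q = sis_qsd(N=50, beta=0.8, gamma=1.0)
print(q[:5])
```

In the subcritical regime the exact distribution is monotonically decreasing from n = 1, which is the qualitative behavior any approximation of the quasistationary distribution must reproduce.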
Piédrola Maroto, David; Franco Sánchez, Javier; Reyes Eldblom, Robin; Monje Vega, Elena; Conde Jiménez, Manuel; Ortiz Rueda, Manuel
2008-01-01
To evaluate the benefits and disadvantages of the endoscopic endonasal versus transcanalicular approaches using the diode laser, and to compare their clinical outcomes. A total of 127 patients were operated on, 80 of them with the endonasal approach (Group I) and 47 with the transcanalicular technique (Group II). Epiphora improved completely in 67 patients in Group I (83.7%), while the other 13 (16.2%) continued to present the same symptoms. In Group II, a successful result was achieved in 39 patients (82.9%), and 8 (17%) had to be re-operated because of persistent epiphora. The surgical outcomes are similar with both laser techniques. The main advantages of the diode laser are that it does not require general anaesthesia, intra- and peri-operative morbidity is lower, no nasal packing is needed, and additional interventions are easier to perform if the procedure fails. The only real disadvantage of laser procedures is the high cost.
Chen, Qi; Zhou, Huanping; Song, Tze-Bin; Luo, Song; Hong, Ziruo; Duan, Hsin-Sheng; Dou, Letian; Liu, Yongsheng; Yang, Yang
2014-07-09
Improving the performance of polycrystalline thin-film devices requires delicate control of their grain structures. As one of the most promising candidates among current thin-film photovoltaic techniques, the organic/inorganic hybrid perovskites generally inherit a polycrystalline nature and exhibit compositional/structural dependence in their optoelectronic properties. Here, we demonstrate a controllable passivation technique for perovskite films, which enables their compositional change and allows substantial enhancement in the corresponding device performance. By releasing the organic species during annealing, a PbI2 phase is formed at perovskite grain boundaries and at the relevant interfaces. The consequent passivation effects and underlying mechanisms are investigated with complementary characterizations, including scanning electron microscopy (SEM), X-ray diffraction (XRD), time-resolved photoluminescence decay (TRPL), scanning Kelvin probe microscopy (SKPM), and ultraviolet photoemission spectroscopy (UPS). This controllable self-induced passivation technique represents an important step toward understanding the polycrystalline nature of hybrid perovskite thin films and contributes to the development of perovskite solar cells.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
Independent Component Analysis (ICA) is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has recently been used for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e., an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. Unlike other rotation techniques (RT), this rotation uses no localization criterion; only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution appears able to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
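The PCA-then-rotation structure described in this abstract can be demonstrated in a few lines: whiten mixed signals with PCA, then apply a FastICA-style orthogonal update, which is exactly a rotation (orthogonal transform) of the whitened representation. This is a generic numpy sketch with synthetic sources, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources, linearly mixed. PCA can only
# decorrelate the mixtures; ICA supplies the remaining rotation.
n = 5000
S = rng.uniform(-1, 1, size=(2, n))          # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # mixing matrix
X = A @ S

# Step 1: PCA whitening (second-order statistics only).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = np.diag(d ** -0.5) @ E.T @ X             # whitened (PCA) representation

# Step 2: symmetric FastICA with tanh nonlinearity; W stays orthogonal,
# so Y = W Z is a pure rotation of the PCA solution, fixed by independence.
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)          # re-orthogonalize
    W = U @ Vt
Y = W @ Z                                    # estimated independent components

# Each recovered component should match one true source up to sign/scale.
C = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(C)
```

With the seeded data, each row of the correlation matrix C has one entry close to 1, showing that the independence criterion alone picks out the rotation that separates the sources.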
Inquiry-based experiments for large-scale introduction to PCR and restriction enzyme digests.
Johanson, Kelly E; Watt, Terry J
2015-01-01
Polymerase chain reaction and restriction endonuclease digest are important techniques that should be included in all Biochemistry and Molecular Biology laboratory curricula. These techniques are frequently taught at an advanced level, requiring many hours of student and faculty time. Here we present two inquiry-based experiments that are designed for introductory laboratory courses and combine both techniques. In both approaches, students must determine the identity of an unknown DNA sequence, either a gene sequence or a primer sequence, based on a combination of PCR product size and restriction digest pattern. The experimental design is flexible and can be adapted based on available instructor preparation time and resources, and both approaches can accommodate large numbers of students. We implemented these experiments in our courses with a combined total of 584 students and have an 85% success rate. Overall, students demonstrated an increase in their understanding of the experimental topics, ability to interpret the resulting data, and proficiency in general laboratory skills.
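A minimal sketch of the identification logic the students perform: given a template, two primers, and one enzyme, compute the amplicon size and the digest pattern. All sequences, primers, and the EcoRI choice below are invented for illustration; they are not the course's actual materials.

```python
# Hypothetical enzyme for illustration: EcoRI, recognition site GAATTC,
# cutting between G and A (offset 1 into the site).
ECORI_SITE = "GAATTC"

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def pcr_product(template, fwd, rev):
    """Return the amplicon delimited by the forward primer and the reverse
    complement of the reverse primer, or None if either site is absent."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))
    if start == -1 or end == -1 or end < start:
        return None
    return template[start:end + len(rev)]

def digest_sizes(seq, site=ECORI_SITE, cut_offset=1):
    """Fragment lengths after cutting at every occurrence of the site."""
    cuts, i = [], seq.find(site)
    while i != -1:
        cuts.append(i + cut_offset)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# Invented 53-nt template with two EcoRI sites between the primer sites.
template = ("ATGCCGTTA"        # forward primer site (9 nt)
            + "GAATTC"          # EcoRI site
            + "GGCATTACC"       # 9 nt spacer
            + "GAATTC"          # second EcoRI site
            + "ATTGCCGGATTAC"   # 13 nt spacer
            + "GGATCCTTGA")     # reverse primer binding site (10 nt)
fwd, rev = "ATGCCGTTA", "TCAAGGATCC"

amp = pcr_product(template, fwd, rev)
print(len(amp), digest_sizes(amp))   # amplicon size and digest pattern
```

The (amplicon size, fragment sizes) pair is the fingerprint the students match against candidate sequences.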
Frequency-noise measurements of optical frequency combs by multiple fringe-side discriminator
Coluccelli, Nicola; Cassinerio, Marco; Gambetta, Alessio; Laporta, Paolo; Galzerano, Gianluca
2015-01-01
The frequency noise of an optical frequency comb is routinely measured through the heterodyne beat of one comb tooth against a stable continuous-wave laser. After frequency-to-voltage conversion, the beatnote is sent to a spectrum analyzer to retrieve the power spectral density of the frequency noise. Because narrow-linewidth continuous-wave lasers are available only at certain wavelengths, heterodyning the comb tooth can be challenging. We present a new technique for direct characterization of the frequency noise of an optical frequency comb, requiring no supplementary reference lasers and easily applicable in all spectral regions from the terahertz to the ultraviolet. The technique is based on the combination of a low finesse Fabry-Perot resonator and the so-called “fringe-side locking” method, usually adopted to characterize the spectral purity of single-frequency lasers, here generalized to optical frequency combs. The effectiveness of this technique is demonstrated with an Er-fiber comb source across the wavelength range from 1 to 2 μm. PMID:26548900
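The fringe-side idea, generically: operate on the side of a resonator fringe, measure the local slope of transmission versus optical frequency, and use that slope to convert transmitted-power fluctuations back into frequency fluctuations. A toy numerical sketch with illustrative parameters (not the paper's setup):

```python
import numpy as np

# Airy-type transmission of a low-finesse Fabry-Perot resonator.
FSR = 1.0e9          # free spectral range, Hz (illustrative)
F_coef = 10.0        # coefficient of finesse (low finesse)

def transmission(nu):
    return 1.0 / (1.0 + F_coef * np.sin(np.pi * nu / FSR) ** 2)

# Fringe-side operating point: partway up the fringe, where transmission
# depends approximately linearly on optical frequency.
nu0 = 0.12 * FSR                 # operating point on the fringe side
dnu = 1.0                        # small step for the numerical slope, Hz
slope = (transmission(nu0 + dnu) - transmission(nu0 - dnu)) / (2 * dnu)

# A small frequency excursion maps to a transmitted-power change through
# the slope; inverting that map turns intensity noise into frequency noise.
delta_nu_true = 1e4              # 10 kHz excursion
delta_T = transmission(nu0 + delta_nu_true) - transmission(nu0)
delta_nu_est = delta_T / slope
print(slope, delta_nu_est)
```

The recovered excursion matches the applied one to well under a percent here, because the excursion is tiny compared with the fringe width; the same discriminator calibration applied tooth-by-tooth is what the generalization to combs requires.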
Non-contact method for characterization of small size thermoelectric modules.
Manno, Michael; Yang, Bao; Bar-Cohen, Avram
2015-08-01
Conventional techniques for characterization of thermoelectric performance require bringing measurement equipment into direct contact with the thermoelectric device, which is increasingly error prone as device size decreases. Therefore, the novel work presented here describes a non-contact technique, capable of accurately measuring the maximum ΔT and maximum heat pumping of mini to micro sized thin film thermoelectric coolers. The non-contact characterization method eliminates the measurement errors associated with using thermocouples and traditional heat flux sensors to test small samples and large heat fluxes. Using the non-contact approach, an infrared camera, rather than thermocouples, measures the temperature of the hot and cold sides of the device to determine the device ΔT, and a laser is used to heat the cold side of the thermoelectric module to characterize its heat pumping capacity. As a demonstration of the general applicability of the non-contact characterization technique, testing of a thin film thermoelectric module is presented and the results agree well with those published in the literature.
Canavese, Federico; Botnari, Alexei; Dimeglio, Alain; Samba, Antoine; Pereira, Bruno; Gerst, Adeline; Granier, Marie; Rousset, Marie; Dubousset, Jean
2016-02-01
Juvenile scoliosis (JS), among the different types of spinal deformity, remains a challenge for orthopedic surgeons. The elongation, derotation and flexion (EDF) casting technique uses a custom-made thoracolumbar cast based on a three-dimensional correction concept. The primary objective of the present study was to measure changes on plain radiographs of patients with JS treated with the EDF plaster technique. The second aim was to evaluate the effectiveness of the EDF plaster technique performed under general anesthesia (GA) with neuromuscular blocking drugs, i.e. curare, on the radiological curve correction. A retrospective comparative case series study was performed that included forty-four skeletally immature patients. Three patient groups were selected. Group 1: EDF cast applied with patients awake and no anesthesia; Group 2: EDF cast applied under GA without neuromuscular blocking drugs; Group 3: EDF cast applied under GA with neuromuscular blocking drugs. All patients were treated with two serial EDF casts, each worn for two and a half months. All measurements were taken from the radiographic exams. Cobb's angle, RVAD and Nash and Moe grade of rotation were assessed before and after applying the cast. Thirty-four (77.3%) patients were followed up at least 24 months after removal of the last EDF cast. Eighteen patients (3 males, 15 females) were included in Group 1, 12 (2 males, 10 females) in Group 2 and 14 (5 males, 9 females) in Group 3. Serial EDF casting was more effective at initial curve reduction and in preventing curve progression when applied under GA with neuromuscular blocking drugs, i.e. curare. RVAD and Nash and Moe score improved significantly in all groups of patients treated according to the principles of the EDF technique. During the follow-up period, six patients required surgery in Group 1 (6/18; 33.3%), 3 patients in Group 2 (3/12; 25%) and 2 patients in Group 3 (2/14; 15%).
Preliminary results show EDF casting is effective in controlling the curve in both frontal (Cobb's angle) and transverse plane (rib vertebral angle and apical vertebral rotation degree).
Electronics for Piezoelectric Smart Structures
NASA Technical Reports Server (NTRS)
Warkentin, D. J.; Tani, J.
1997-01-01
This paper briefly presents work addressing some of the basic considerations for the electronic components used in smart structures incorporating piezoelectric elements. After general remarks on the application of piezoelectric elements to the problem of structural vibration control, three main topics are described. Work to date on the development of techniques for embedding electronic components within structural parts is presented, followed by a description of the power flow and dissipation requirements of those components. Finally, current work on the development of electronic circuits for use in an 'active wall' for acoustic noise control is introduced.
Nursing concerns and hospital product sterilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rock, R.B. Jr.; Anderson, N.A.
Nurses and other health care professionals must be aware of the rationale and methodology for in-hospital health care product standardization, including consideration of the hospital standardization committee composition, pilot-study prerequisites, and general evaluation criteria. They must be familiar with the techniques of product sterilization, their effectiveness, and the materials required to maintain sterile product shelf-life until a product is used. Hospital standardization committees can assist in the product-use decision-making process. Product evaluation criteria should include considerations pertaining to cost, quality, service, and comparison to similar products.
Green's function calculations for semi-infinite carbon nanotubes
NASA Astrophysics Data System (ADS)
John, D. L.; Pulfrey, D. L.
2006-02-01
In the modeling of nanoscale electronic devices, the non-equilibrium Green's function technique is gaining increasing popularity. One complication in this method is the need for computation of the self-energy functions that account for the interactions between the active portion of a device and its leads. In the one-dimensional case, these functions may be computed analytically. In higher dimensions, a numerical approach is required. In this work, we generalize earlier methods that were developed for tight-binding Hamiltonians, and present results for the case of a carbon nanotube.
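For the one-dimensional case mentioned in the abstract, the analytic self-energy can be checked against an iterative scheme (here a Sancho-Rubio-style decimation) of the kind that generalizes to higher dimensions. A sketch for a single-orbital nearest-neighbor tight-binding lead, with illustrative parameters:

```python
import cmath

def sigma_analytic(E, t=1.0, eps=0.0, eta=1e-6):
    """Retarded self-energy of a semi-infinite 1D tight-binding lead:
    Sigma = t^2 * g_surface, with the branch chosen so Im(g) <= 0."""
    z = E + 1j * eta - eps
    root = cmath.sqrt(z * z - 4 * t * t)
    g = (z - root) / (2 * t * t)
    if g.imag > 0:                      # pick the retarded branch
        g = (z + root) / (2 * t * t)
    return t * t * g

def sigma_decimation(E, t=1.0, eps=0.0, eta=1e-6, tol=1e-10):
    """Same quantity via iterative decimation, the numerical route that
    generalizes to higher-dimensional leads with no closed form."""
    z = E + 1j * eta
    a = b = complex(t)
    es = eb = complex(eps)
    for _ in range(200):
        if abs(a) < tol and abs(b) < tol:
            break
        g = 1.0 / (z - eb)
        es = es + a * g * b             # surface energy renormalization
        eb = eb + a * g * b + b * g * a # bulk energy renormalization
        a, b = a * g * a, b * g * b     # effective couplings decay to zero
    return t * t / (z - es)             # Sigma = t^2 * surface Green's fn

E = 0.5
print(sigma_analytic(E), sigma_decimation(E))
```

Inside the band the two routes agree to numerical precision, and the imaginary part of the self-energy is negative, as required for a retarded quantity (at the band center it reduces to the textbook value Sigma = -i t).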
Polymeric materials science in the microgravity environment
NASA Technical Reports Server (NTRS)
Coulter, Daniel R.
1989-01-01
The microgravity environment presents some interesting possibilities for the study of polymer science. Properties of polymeric materials depend heavily on their processing history and environment. Thus, there seem to be some potentially interesting and useful new materials that could be developed. The requirements for studying polymeric materials are in general much less rigorous than those developed for studying metals, for example. Many of the techniques developed for working with other materials, including heat sources, thermal control hardware and noncontact temperature measurement schemes should meet the needs of the polymer scientist.
Meta-analysis in applied ecology.
Stewart, Gavin
2010-02-23
This overview examines research synthesis in applied ecology and conservation. Vote counting and pooling unweighted averages are widespread despite the superiority of syntheses based on weighted combination of effects. Such analyses allow exploration of methodological uncertainty in addition to consistency of effects across species, space and time, but exploring heterogeneity remains controversial. Meta-analyses are required to generalize in ecology, and to inform evidence-based decision-making, but the more sophisticated statistical techniques and registers of research used in other disciplines must be employed in ecology to fully realize their benefits.
Anyanwu, A. C.; Hossain, S.; Williams, A.; Montgomery, A. C.
1998-01-01
Asymmetrical excision of sacrococcygeal pilonidal sinus has been shown to be associated with low recurrence rates. We report our experience with an asymmetric technique--the Karydakis operation. Of 28 patients who had the operation over a 4-year period, no recurrences were observed in 27 patients available for follow-up (median follow-up 3 years). Three patients had complications requiring surgical intervention. The operation is easy to teach and learn and is worth considering by both specialist and non-specialist surgeons. PMID:9682644
NASA Technical Reports Server (NTRS)
Megie, G.; Menzies, R. T.
1980-01-01
An analysis of the potential capabilities of a spectrally diversified DIAL technique for monitoring atmospheric species is presented assuming operation from an earth-orbiting platform. Emphasis is given to the measurement accuracies and spatial and temporal resolutions required to meet present atmospheric science objectives. The discussion points out advantages of spectral diversity to perform comprehensive studies of the atmosphere; in general it is shown that IR systems have an advantage in lower atmospheric measurements, while UV systems are superior for middle and upper atmospheric measurements.
Human-computer dialogue: Interaction tasks and techniques. Survey and categorization
NASA Technical Reports Server (NTRS)
Foley, J. D.
1983-01-01
Interaction techniques are described. Six basic interaction tasks, the requirements for each task, requirements related to interaction techniques, and the hardware prerequisites affecting device selection are discussed.
Optimal Output Trajectory Redesign for Invertible Systems
NASA Technical Reports Server (NTRS)
Devasia, S.
1996-01-01
Given a desired output trajectory, inversion-based techniques find the input-state trajectories required to exactly track the output. These inversion-based techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories, which are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as an optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory. The prefilter can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures; the command preshaping is computed off-line. Further, minimization of optimal quadratic cost functions has also been previously used to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori.
Typically, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs to the system are, however, difficult to determine for non-minimum phase systems like flexible structures. One approach to solve this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system. Thus we effectively perform output redesign. These redesigns are, however, dependent on the choice of the tracking controllers. Thus the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem not only allows for the minimization of residual vibration as in available techniques but also allows for the optimal reduction of vibrations during the maneuver, e.g., the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.
Maintenance Audit through Value Analysis Technique: A Case Study
NASA Astrophysics Data System (ADS)
Carnero, M. C.; Delgado, S.
2008-11-01
The increase in competitiveness, technological changes and the increase in the requirements of quality and service have forced a change in the design and application of maintenance, as well as the way in which it is considered within the managerial strategy. There are numerous maintenance activities that must be carried out in a service company. As a result, the maintenance function is often outsourced as a whole. Nevertheless, delegating this subject to specialized personnel does not exempt the company from responsibilities, but rather leads to the need for control of each maintenance activity. In order to achieve this control and to evaluate the efficiency and effectiveness of the company, it is essential to carry out an audit that diagnoses the problems that could develop. In this paper a maintenance audit applied to a service company is developed. The methodology applied is based on expert systems. The expert system uses, by means of rules, the SMART weighting technique and value analysis to obtain the weightings between the decision functions and between the alternatives. The expert system applies numerous rules and relations between different variables associated with the specific maintenance functions to obtain the maintenance state by sections and the general maintenance state of the enterprise. The contributions of this paper are related to the development of a maintenance audit in a service enterprise, in which maintenance is not generally considered a strategic subject, and to the integration of decision-making tools such as the SMART weighting technique with value analysis techniques, typical in the design of new products, in the area of rule-based expert systems.
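A SMART-style aggregation of the kind the audit uses can be sketched generically: swing weights are normalized, and each maintenance section's criterion scores are combined into a weighted state. The criteria names and numbers below are invented for illustration, not values from the case study.

```python
# Hedged sketch of SMART-style weighting: swing weights (larger = more
# important) are normalized to sum to 1, then used for a weighted score.

def smart_weights(swings):
    """Normalize swing weights so they sum to one."""
    total = sum(swings.values())
    return {k: v / total for k, v in swings.items()}

def weighted_score(scores, weights):
    """Weighted aggregate of criterion scores for one maintenance section."""
    return sum(weights[k] * scores[k] for k in weights)

# Invented criteria and swing weights for illustration.
swings = {"preventive plan": 40, "spare parts": 20, "documentation": 10}
weights = smart_weights(swings)

# Invented 0-10 scores for one maintenance section of the audited company.
section_scores = {"preventive plan": 6, "spare parts": 8, "documentation": 4}
state = weighted_score(section_scores, weights)
print(round(state, 3))
```

Repeating this per section, and then weighting the section states themselves, yields the general maintenance state of the enterprise that the expert system reports.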
Pattern optimizing verification of self-align quadruple patterning
NASA Astrophysics Data System (ADS)
Yamato, Masatoshi; Yamada, Kazuki; Oyama, Kenichi; Hara, Arisa; Natori, Sakurako; Yamauchi, Shouhei; Koike, Kyohei; Yaegashi, Hidetami
2017-03-01
Lithographic scaling continues to advance by extending the life of 193nm immersion technology, and spacer-type multi-patterning is undeniably the driving force behind this trend. Multi-patterning techniques such as self-aligned double patterning (SADP) and self-aligned quadruple patterning (SAQP) have come to be used in memory devices, and they have also been adopted in logic devices to create constituent patterns in the formation of 1D layout designs. Multi-patterning has consequently become an indispensable technology in the fabrication of all advanced devices. In general, items that must be managed when using multi-patterning include critical dimension uniformity (CDU), line edge roughness (LER), and line width roughness (LWR). Recently, moreover, there has been increasing focus on judging and managing pattern resolution performance from a more detailed perspective and on making a right/wrong judgment from the perspective of edge placement error (EPE). To begin with, pattern resolution performance in spacer-type multi-patterning is affected by the process accuracy of the core (mandrel) pattern. Improving the controllability of CD and LER of the mandrel is most important, and to reduce LER, an appropriate smoothing technique should be carefully selected. In addition, the atomic layer deposition (ALD) technique is generally used to meet the need for high accuracy in forming the spacer film. Advances in scaling are accompanied by stricter requirements in the controllability of fine processing. In this paper, we first describe our efforts in improving controllability by selecting the most appropriate materials for the mandrel pattern and spacer film. Then, based on the materials selected, we present experimental results on a technique for improving etching selectivity.
Achondroplasia: anaesthetic challenges for caesarean section.
Dubiel, L; Scott, G A; Agaram, R; McGrady, E; Duncan, A; Litchfield, K N
2014-08-01
Pregnancy in women with achondroplasia presents major challenges for anaesthetists and obstetricians. We report the case of a woman with achondroplasia who underwent general anaesthesia for an elective caesarean section. She was 99 cm in height and her condition was further complicated by severe kyphoscoliosis and previous back surgery. She was reviewed in the first trimester at the anaesthetic high-risk clinic. A multidisciplinary team was convened to plan her peripartum care. Because of increasing dyspnoea, caesarean section was performed at 32 weeks of gestation. She received a general anaesthetic using a modified rapid-sequence technique with remifentanil and rocuronium. The intraoperative period was complicated by desaturation and high airway pressures. The woman's postoperative care was complicated by respiratory compromise requiring high dependency care.
Dodin, I. Y.; Zhmoginov, A. I.; Ruiz, D. E.
2017-02-24
Applications of variational methods are typically restricted to conservative systems. Some extensions to dissipative systems have been reported too but require ad hoc techniques such as the artificial doubling of the dynamical variables. We propose a different approach. Here, we show that for a broad class of dissipative systems of practical interest, variational principles can be formulated using constant Lagrange multipliers and Lagrangians nonlocal in time, which allow treating reversible and irreversible dynamics on the same footing. A general variational theory of linear dispersion is formulated as an example. Particularly, we present a variational formulation for linear geometrical optics in a general dissipative medium, which is allowed to be nonstationary, inhomogeneous, anisotropic, and exhibit both temporal and spatial dispersion simultaneously.
Generation of development environments for the Arden Syntax.
Bång, M.; Eriksson, H.
1997-01-01
Providing appropriate development environments for specialized languages requires a significant development and maintenance effort. Specialized environments are therefore expensive when compared to their general-language counterparts. The Arden Syntax for Medical Logic Modules (MLM) is a standardized language for representing medical knowledge. We have used PROTEGE-II, a knowledge-engineering environment, to generate a number of experimental development environments for the Arden Syntax. MEDAILLE is the resulting MLM editor, which provides a user-friendly environment that allows users to create and modify MLM definitions. Although MEDAILLE is a generated editor, it offers functionality similar to that of MLM editors developed using traditional programming techniques, while requiring less programming effort. We discuss how developers can use PROTEGE-II to generate development environments for other standardized languages and for general programming languages. PMID:9357639
Revision of loop colostomy under regional anaesthesia and sedation.
Ng, Oriana; Thong, Sze Ying; Chia, Claramae Shulyn; Teo, Melissa Ching Ching
2015-05-01
Patients presenting for emergency abdominal procedures often have medical issues that cause both general anaesthesia and central neuraxial blockade to pose significant risks. Regional anaesthetic techniques are often used adjunctively for abdominal procedures under general anaesthesia, but there is limited published data on procedures done under peripheral nerve or plexus blocks. We herein report the case of a patient with recent pulmonary embolism and supraventricular tachycardia who required colostomy refashioning. Ultrasonography-guided regional anaesthesia was administered using a combination of ilioinguinal-iliohypogastric, rectus sheath and transversus abdominis plane blocks. This was supplemented with propofol and dexmedetomidine sedation as well as intermittent fentanyl and ketamine boluses to cover for visceral stimulation. We discuss the anatomical rationale for the choice of blocks and compare the anaesthetic conduct with similar cases that were previously reported.
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
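The exact-distribution problem described above can be illustrated with the brute-force baseline the algorithm improves upon: enumerate every joint realization of the discrete edge-length distributions and accumulate the probability of each resulting shortest-path length. This is a minimal sketch with hypothetical edge data; the paper's conditional-factoring algorithm exists precisely to avoid this complete enumeration by decomposing the network into smaller subnetworks first.

```python
from itertools import product

# Toy directed network s -> t with two paths (s-a-t and s-b-t).
# Each edge carries a discrete length distribution: (value, probability) pairs.
edges = {
    ("s", "a"): [(1, 0.5), (3, 0.5)],
    ("s", "b"): [(2, 1.0)],
    ("a", "t"): [(2, 0.5), (4, 0.5)],
    ("b", "t"): [(2, 0.5), (5, 0.5)],
}

def shortest_path_distribution(edges):
    """Exact distribution of the s->t shortest path length by complete enumeration."""
    keys = list(edges)
    dist = {}
    for combo in product(*(edges[k] for k in keys)):
        lengths = {k: v for k, (v, _) in zip(keys, combo)}
        prob = 1.0
        for _, p in combo:
            prob *= p
        sp = min(lengths[("s", "a")] + lengths[("a", "t")],
                 lengths[("s", "b")] + lengths[("b", "t")])
        dist[sp] = dist.get(sp, 0.0) + prob
    return dist

d = shortest_path_distribution(edges)
# For this network: {3: 0.25, 4: 0.375, 5: 0.25, 7: 0.125}
```

Complete enumeration is exponential in the number of stochastic edges, which is why factoring the network into conditionally independent pieces matters for realistic problem sizes.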
Microbioassay of Antimicrobial Agents
Simon, Harold J.; Yin, E. Jong
1970-01-01
A previously described agar-diffusion technique for microbioassay of antimicrobial agents has been modified to increase sensitivity of the technique and to extend the range of antimicrobial agents to which it is applicable. This microtechnique requires only 0.02 ml of an unknown test sample for assay, and is capable of measuring minute concentrations of antibiotics in buffer, serum, and urine. In some cases, up to a 20-fold increase in sensitivity is gained relative to other published standardized methods and the error of this method is less than ±5%. Buffer standard curves have been established for this technique, concurrently with serum standard curves, yielding information on antimicrobial serum-binding and demonstrating linearity of the data points compared to the estimated regression line for the microconcentration ranges covered by this technique. This microassay technique is particularly well suited for pediatric research and for other investigations where sample volumes are small and quantitative accuracy is desired. Dilution of clinical samples to attain concentrations falling within the range of this assay makes the technique readily adaptable and suitable for general clinical pharmacological studies. The microassay technique has been standardized in buffer solutions and in normal human serum pools for the following antimicrobials: ampicillin, methicillin, penicillin G, oxacillin, cloxacillin, dicloxacillin, cephaloglycin, cephalexin, cephaloridine, cephalothin, erythromycin, rifamycin amino methyl piperazine, kanamycin, neomycin, streptomycin, colistin, polymyxin B, doxycycline, minocycline, oxytetracycline, tetracycline, and chloramphenicol. PMID:4986725
New levels of language processing complexity and organization revealed by granger causation.
Gow, David W; Caplan, David N
2012-01-01
Granger causation analysis of high spatiotemporal resolution reconstructions of brain activation offers a new window on the dynamic interactions between brain areas that support language processing. Premised on the observation that causes both precede and uniquely predict their effects, this approach provides an intuitive, model-free means of identifying directed causal interactions in the brain. It requires the analysis of all non-redundant potentially interacting signals, and has shown that even "early" processes such as speech perception involve interactions of many areas, playing out over hundreds of milliseconds, in a strikingly large network that extends well beyond traditional left-hemisphere perisylvian cortex. In this paper we describe this technique and review several general findings that reframe the way we think about language processing and brain function in general. These include the extent and complexity of language processing networks, the central role of interactive processing dynamics, the role of processing hubs where the input from many distinct brain regions is integrated, and the degree to which task requirements and stimulus properties influence processing dynamics and inform our understanding of "language-specific" localized processes.
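The core premise, that a cause both precedes and uniquely predicts its effect, reduces in the simplest bivariate linear case to comparing the residual error of predicting a signal from its own past against predicting it from its own past plus the past of a candidate cause. A minimal sketch on synthetic data follows; the signals and coefficients are hypothetical, and the paper applies this logic to high-resolution brain activation reconstructions, not toy time series.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()  # x drives y

def rss(design, target):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

target = y[1:]
restricted = np.column_stack([np.ones(n - 1), y[:-1]])            # past of y only
unrestricted = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])  # plus past of x

rss_r = rss(restricted, target)
rss_u = rss(unrestricted, target)
gc_index = np.log(rss_r / rss_u)  # > 0 suggests x Granger-causes y
```

Real analyses condition on all other recorded signals (as the abstract stresses) and test significance formally; this one-lag, two-signal version only shows the prediction-improvement idea.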
Zawadzki, Paweł J; Perkowski, Konrad; Starościak, Bohdan; Baltaza, Wanda; Padzik, Marcin; Pionkowski, Krzysztof; Chomicz, Lidia
2016-12-23
This study presents the results of comparative investigations aimed at determining the microbiota that can occur in the oral environment in different human populations. The objective of the research was to identify pathogenic oral microbiota, a potential cause of health complications in patients of different population groups. The study included 95 patients requiring dental or surgical treatment; the microbiota of their oral cavity environment were assessed as risk factors for local and general infections. On clinical assessment, differences in oral cavity conditions occurred between patients with malformations of the masticatory system, kidney allograft recipients and individuals without indications for surgical procedures. The presence of various pathogenic and opportunistic bacterial strains in the oral cavities was revealed by direct microscopy and in vitro culture techniques. Colonization of the oral cavities of patients requiring surgical treatment by potentially pathogenic bacteria constitutes a threat of their spread and of the development of general infections. Assessment of infectious oral cavity microbiota should be performed as a preventive measure against peri-surgical complications.
CAVE3: A general transient heat transfer computer code utilizing eigenvectors and eigenvalues
NASA Technical Reports Server (NTRS)
Palmieri, J. V.; Rathjen, K. A.
1978-01-01
The method of solution is a hybrid analytical-numerical technique that utilizes eigenvalues and eigenvectors. The method is inherently stable, permitting large time steps even with the best of conductors and the finest of mesh sizes, which can provide a factor-of-five reduction in machine time compared to conventional explicit finite difference methods when structures with small time constants are analyzed over long time periods. This code will find utility in analyzing hypersonic missile and aircraft structures, which fall naturally into this class. The code is completely general in that problems involving any geometry, boundary conditions and materials can be analyzed. This is made possible by requiring the user to establish the thermal network conductances between nodes. Dynamic storage allocation is used to minimize core storage requirements. This report is primarily a user's manual for the CAVE3 code. Input and output formats are presented and explained. Sample problems are included which illustrate the usage of the code as well as establish the validity and accuracy of the method.
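The eigenvalue/eigenvector approach can be sketched for a linear thermal network dT/dt = A·T: diagonalizing A gives the exact solution T(t) = V·exp(Λt)·V⁻¹·T₀, which remains stable for arbitrarily large time steps, unlike an explicit finite-difference march. A minimal two-node illustration with hypothetical conductance and capacitance values follows; CAVE3 itself handles general user-defined networks, not this toy case.

```python
import numpy as np

# Two-node conduction network (hypothetical values): C * dT/dt = G * (T_other - T)
G, C = 5.0, 2.0
A = (G / C) * np.array([[-1.0, 1.0], [1.0, -1.0]])
T0 = np.array([100.0, 0.0])

def solve_eig(A, T0, t):
    """Exact T(t) = V exp(lambda*t) V^-1 T0 -- stable for any step size t."""
    lam, V = np.linalg.eigh(A)  # A is symmetric in this sketch
    return (V * np.exp(lam * t)) @ np.linalg.solve(V, T0)

# A single large step reaches the long-time answer directly: both nodes
# relax to the mean temperature, 50.0, with no stability restriction on t.
T = solve_eig(A, T0, t=10.0)
```

An explicit Euler scheme for this problem would require dt < 2C/(2G) = 0.4 to stay stable, which is the small-time-constant penalty the eigenvector method avoids.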
Improvement of structural models using covariance analysis and nonlinear generalized least squares
NASA Technical Reports Server (NTRS)
Glaser, R. J.; Kuo, C. P.; Wada, B. K.
1992-01-01
The next generation of large, flexible space structures will be too light to support their own weight, requiring a system of structural supports for ground testing. The authors have proposed multiple boundary-condition testing (MBCT), using more than one support condition to reduce uncertainties associated with the supports. MBCT would revise the mass and stiffness matrix, analytically qualifying the structure for operation in space. The same procedure is applicable to other common test conditions, such as empty/loaded tanks and subsystem/system level tests. This paper examines three techniques for constructing the covariance matrix required by nonlinear generalized least squares (NGLS) to update structural models based on modal test data. The methods range from a complicated approach used to generate the simulation data (i.e., the correct answer) to a diagonal matrix based on only two constants. The results show that NGLS is very insensitive to assumptions about the covariance matrix, suggesting that a workable NGLS procedure is possible. The examples also indicate that the multiple boundary condition procedure more accurately reduces errors than individual boundary condition tests alone.
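The generalized least squares step at the heart of NGLS weights residuals by the inverse of the assumed covariance matrix: β = (XᵀC⁻¹X)⁻¹XᵀC⁻¹y. A minimal linear sketch with synthetic data is shown below; NGLS iterates a linearized version of this step on modal-test residuals, and all matrices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "measured" data y depending linearly on model parameters beta.
X = rng.standard_normal((20, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true                 # noise-free for the sketch

C = np.diag(np.full(20, 0.1))     # assumed measurement covariance
W = np.linalg.inv(C)              # weight matrix C^-1

# Generalized least squares estimate: (X^T W X)^-1 X^T W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With noise-free data the estimate is exact for any positive-definite choice of C, which loosely echoes the paper's finding that the update is insensitive to the covariance assumption; with real noisy modal data, C only affects the weighting, not the form of the estimator.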
Interactive archives of scientific data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.
1994-01-01
A focus on qualitative methods of presenting data shows that visualization provides a mechanism for browsing independent of the source of data and is an effective alternative to traditional image-based browsing of image data. To be generally applicable, such visualization methods, however, must be based upon an underlying data model with support for a broad class of data types and structures. Interactive, near-real-time browsing for data sets of interesting size today requires a browse server of considerable power. A symmetric multi-processor with very high internal and external bandwidth demonstrates the feasibility of this concept. Although this technology is likely to be available on the desktop within a few years, the increase in the size and complexity of archived data will continue to exceed the capacity of 'workstation' systems. Hence, a higher class of performance, especially in bandwidth, will generally be required for on-demand browsing. A few experiments with differing digital compression techniques indicate that an MPEG-1 implementation within the context of a high-performance browse server (i.e., parallelized) is a practical method of converting a browse product to a form suitable for network or CD-ROM distribution.
Rui, Jia-bai; Zheng, Chuan-xian; Zeng, Qing-tang
2002-12-01
Objective. To test and demonstrate an embryonic form of our future space station ECLSS, which will also serve as an advanced research and test ground facility. Method. The following functions of the system were tested and demonstrated: integrated solid amine CO2 collection and concentration, Sabatier CO2 reduction, urine processing by thermoelectric integrated membrane evaporation, solid polymer water electrolysis O2 generation, concentrated ventilation, temperature and humidity control, the measurement and control system, and other non-regenerative techniques. All of these were demonstrated in a sealed adiabatic module and passed the proof tests. Result. The principal technical requirements of the system and of each regenerative subsystem were met. Integration of the overall system and each subsystem was successful, and a partially closed loop of system integration was basically realized. Conclusion. The reasonableness of the project design was verified, and the major system technical requirements were satisfied. Compatibility and harmonization between the overall system and each subsystem were good, the system operated normally, and the parameters measured were correct.
Root canal treatment and general health: a review of the literature.
Murray, C A; Saunders, W P
2000-01-01
The focal infection theory was prominent in the medical literature during the early 1900s and curtailed the progress of endodontics. This theory proposed that microorganisms, or their toxins, arising from a focus of circumscribed infection within a tissue could disseminate systemically, resulting in the initiation or exacerbation of systemic illness or the damage of a distant tissue site. For example, during the focal infection era rheumatoid arthritis (RA) was identified as having a close relationship with dental health. The theory was eventually discredited because there was only anecdotal evidence to support its claims and few scientifically controlled studies. There has been a renewed interest in the influence that foci of infection within the oral tissues may have on general health. Some current research suggests a possible relationship between dental health and cardiovascular disease and published case reports have cited dental sources as causes for several systemic illnesses. Improved laboratory procedures employing sophisticated molecular biological techniques and enhanced culturing techniques have allowed researchers to confirm that bacteria recovered from the peripheral blood during root canal treatment originated in the root canal. It has been suggested that the bacteraemia, or the associated bacterial endotoxins, subsequent to root canal treatment, may cause potential systemic complications. Further research is required, however, using current sampling and laboratory methods from scientifically controlled population groups to determine if a significant relationship between general health and periradicular infection exists.
Transbronchial cryobiopsy in interstitial lung disease: experience in 106 cases – how to do it
Bango-Álvarez, Antonio; Torres-Rivas, Hector; Fernández-Fernández, Luis; Prieto, Amador; Sánchez, Inmaculada; Gil, Maria; Pando-Sandoval, Ana
2017-01-01
Transbronchial biopsy using forceps (TBB) is the first diagnostic technique performed on patients with interstitial lung disease (ILD). However, the small size of the samples and the presence of artefacts in the tissue obtained make the yield variable. Our objectives were 1) to attempt to reproduce transbronchial cryobiopsy under the same conditions with which we performed conventional TBB, that is, in the bronchoscopy unit without intubating the patient and without fluoroscopy or general anaesthesia; 2) to describe the method used for its execution; and 3) to analyse the diagnostic yield and its complications. We carried out a prospective study that included 106 patients with clinical and radiological features suggestive of ILD who underwent cryo-transbronchial lung biopsy (cryo-TBB) under moderate sedation without endotracheal intubation, general anaesthesia or use of fluoroscopy. We performed the procedure using two flexible bronchoscopes connected to two video processors, which we alternated until obtaining the number of desired samples. A definitive diagnosis was obtained in 91 patients (86%). As for complications, there were five pneumothoraces (4.7%) and in no case was there severe haemorrhage or exacerbation of the underlying interstitial disease. Cryo-TBB following our method is a minimally invasive, rapid, safe and economic technique that can be performed in a bronchoscopy suite under moderate sedation without the need for intubating the patient or using fluoroscopy and without requiring general anaesthesia. PMID:28344982
PREFERRED SURGICAL TECHNIQUE USED BY ORTHOPEDISTS IN ACUTE ACROMIOCLAVICULAR DISLOCATION
NISHIMI, ALEXANDRE YUKIO; ARBEX, DEMETRIO SIMÃO; MARTINS, DIOGO LUCAS CAMPOS; GUSMÃO, CARLOS VINICIUS BUARQUE DE; BONGIOVANNI, ROBERTO RANGEL; PASCARELLI, LUCIANO
2016-01-01
ABSTRACT Objective: To determine whether training on shoulder and elbow surgery influences the orthopedist surgeons' preferred technique to address acute acromioclavicular joint dislocation (ACD). Methods: A survey was conducted with shoulder and elbow specialists and general orthopedists on their preferred technique to address acute ACD. Results: Thirty specialists and forty-five general orthopedists joined the study. Most specialists preferred the endobutton technique, while most general orthopedists preferred the modified Phemister procedure for coracoclavicular ligament repair using anchors. We found no difference between specialists and general orthopedists in the number of tunnels used to repair the coracoclavicular ligament; preferred method for wire insertion through the clavicular tunnels; buried versus unburied Kirschner wire insertion for acromioclavicular temporary fixation; and time for its removal; and regarding the suture thread used for deltotrapezoidal fascia closure. Conclusion: Training on shoulder and elbow surgery influences the surgeons' preferred technique to address acute ACD. Level of Evidence V, Expert Opinion. PMID:28149190
Resummed memory kernels in generalized system-bath master equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu
2014-08-07
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
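Padé resummation can be illustrated on the simplest case: a [1/1] approximant built from the first three coefficients of a power series. For the geometric series of 1/(1−x) the truncated sum is poor near x = 1, while the rational Padé form recovers the exact function; the abstract's caution is that for memory kernels such a rational resummation can also introduce spurious singularities, which is why the exponential (Landau-Zener) resummation is preferred there. This is a generic sketch, not the paper's kernels.

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Padé approximant (a0 + a1*x)/(1 + b1*x) matched to c0 + c1*x + c2*x^2."""
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# Geometric series of 1/(1-x): coefficients 1, 1, 1.
f = pade_1_1(1.0, 1.0, 1.0)
x = 0.9
partial = 1.0 + x + x**2   # truncated sum: 2.71, far from the true value
resummed = f(x)            # 10.0, the exact 1/(1-x) -- recovered from 3 terms
```

The pole of the [1/1] form at x = −1/b1 is exactly the kind of resummation-induced singularity the paper reports for kernels in the strong-coupling regime; here it coincides with the true pole, but in general it need not.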
Laparoscopic entry: a review of Canadian general surgical practice
Compeau, Christopher; McLeod, Natalie T.; Ternamian, Artin
2011-01-01
Background Laparoscopic surgery has gained popularity over open conventional surgery as it offers benefits to both patients and health care practitioners. Although the overall risk of complications during laparoscopic surgery is recognized to be lower than during laparotomy, inadvertent serious complications still occur. Creation of the pneumoperitoneum and placement of laparoscopic ports remain a critical first step during endoscopic surgery. It is estimated that up to 50% of laparoscopic complications are entry-related, and most injury-related litigations are trocar-related. We sought to evaluate the current practice of laparoscopic entry among Canadian general surgeons. Methods We conducted a national survey to identify general surgeon preferences for laparoscopic entry. Specifically, we sought to survey surgeons using the membership database from the Canadian Association of General Surgeons (CAGS) with regards to entry methods, access instruments, port insertion sites and patient safety profiles. Laparoscopic cholecystectomy was used as a representative general surgical procedure. Results The survey was completed by 248 of 1000 (24.8%) registered members of CAGS. Respondents included both community and academic surgeons, with and without formal laparoscopic fellowship training. The demographic profile of respondents was consistent nationally. A substantial proportion of general surgeons (> 80%) prefer the open primary entry technique, use the Hasson trocar and cannula and favour the periumbilical port site, irrespective of patient weight or history of peritoneal adhesions. One-third of surgeons surveyed use Veress needle insufflation in their surgical practices. More than 50% of respondents witnessed complications related to primary laparoscopic trocar insertion. Conclusion General surgeons in Canada use the open primary entry technique, with the Hasson trocar and cannula applied periumbilically to establish a pneumoperitoneum for laparoscopic surgery. 
This surgical approach is remarkably consistent nationally, although considerably variable across other surgical subspecialties. Peritoneal entry remains an important patient safety issue that requires ongoing evaluation and study to ensure translation into safe contemporary clinical practice. PMID:21774882
A hazard control system for robot manipulators
NASA Technical Reports Server (NTRS)
Carter, Ruth Chiang; Rad, Adrian
1991-01-01
A robot for space applications will be required to complete a variety of tasks in an uncertain, harsh environment. This fact presents unusual and highly difficult challenges to ensuring the safety of astronauts and keeping the equipment they depend on from becoming damaged. The systematic approach being taken to control hazards that could result from introducing robotics technology in the space environment is described. First, system safety management and engineering principles, techniques, and requirements are discussed as they relate to Shuttle payload design and operation in general. The concepts of hazard, hazard category, and hazard control, as defined by the Shuttle payload safety requirements, are explained. Next, it is shown how these general safety management and engineering principles are being implemented on an actual project. An example is presented of a hazard control system for controlling one of the hazards identified for the Development Test Flight (DTF-1) of NASA's Flight Telerobotic Servicer, a teleoperated space robot. How these schemes can be applied to terrestrial robots is discussed as well. The same software monitoring and control approach will ensure the safe operation of a slave manipulator under teleoperated or autonomous control in undersea, nuclear, or manufacturing applications where the manipulator is working in the vicinity of humans or critical hardware.
One- and two-dimensional dopant/carrier profiling for ULSI
NASA Astrophysics Data System (ADS)
Vandervorst, W.; Clarysse, T.; De Wolf, P.; Trenkler, T.; Hantschel, T.; Stephenson, R.; Janssens, T.
1998-11-01
Dopant/carrier profiles constitute the basis of semiconductor device operation and thus play a decisive role in transistor performance. They are subject to the same scaling laws as the other constituents of a modern semiconductor device and continuously evolve towards shallower and more complex configurations. This evolution has increased the demands on profiling techniques, in particular in terms of resolution and quantification, such that constant reevaluation and improvement of the tools is required. As no single technique provides all the necessary information (dopant distribution, electrical activation, ...) with the requested spatial and depth resolution, the present paper attempts to provide an assessment of those tools which can be considered the main metrology technologies for ULSI applications. For 1D dopant profiling, secondary ion mass spectrometry (SIMS) has progressed towards a generally accepted tool meeting the requirements. For 1D carrier profiling, spreading resistance profiling and microwave surface impedance profiling are envisaged as the best choices, but extra development is required to promote them to routinely applicable methods. As no main metrology tool exists for 2D dopant profiling, the main emphasis is on 2D carrier profiling tools based on scanning probe microscopy. Scanning spreading resistance microscopy (SSRM) and scanning capacitance microscopy (SCM) are the preferred methods, although neither of them yet meets all the requirements. Complementary information can be extracted from nanopotentiometry, which samples device operation in more detail. Concurrent use of carrier profiling tools, nanopotentiometry, analysis of device characteristics and simulations is required to provide a complete characterization of deep submicron devices.
Anaesthesia Management for Awake Craniotomy: Systematic Review and Meta-Analysis
Stevanovic, Ana; Rossaint, Rolf; Veldeman, Michael; Bilotta, Federico; Coburn, Mark
2016-01-01
Background Awake craniotomy (AC) plays an expanded role in functional neurosurgery. Yet, evidence for optimal anaesthesia management remains limited. We aimed to summarise the latest clinical evidence on AC anaesthesia management and to explore the relationship between AC failures and the anaesthesia techniques used. Methods Two authors independently performed a systematic search of English articles in the PubMed and EMBASE databases, 1/2007-12/2015. The search included randomised controlled trials (RCTs), observational trials, and case reports (n>4 cases) that reported the anaesthetic approach for AC and at least one of our pre-specified outcomes: intraoperative seizures, hypoxia, arterial hypertension, nausea and vomiting, neurological dysfunction, conversion into general anaesthesia and failure of AC. Random effects meta-analysis was used to estimate event rates for four outcomes. The relationship with anaesthesia technique was explored using logistic meta-regression, calculating odds ratios (OR) and 95% confidence intervals [95%CI]. Results We included forty-seven studies. Eighteen reported the asleep-awake-asleep technique (SAS), twenty-seven monitored anaesthesia care (MAC), one reported both and one used the awake-awake-awake technique (AAA). Proportions of AC failures, intraoperative seizures, new neurological dysfunction and conversion into general anaesthesia (GA) were 2% [95%CI:1–3], 8% [95%CI:6–11], 17% [95%CI:12–23] and 2% [95%CI:2–3], respectively. Meta-regression of the SAS and MAC techniques did not reveal any relevant differences between outcomes explained by the technique, except for conversion into GA. The estimated OR comparing SAS to MAC was 0.98 [95%CI:0.36–2.69] for AC failures, 1.01 [95%CI:0.52–1.88] for seizures, 1.66 [95%CI:1.35–3.70] for new neurological dysfunction and 2.17 [95%CI:1.22–3.85] for conversion into GA. The latter result has to be interpreted cautiously. 
It is based on one retrospective high-risk of bias study and significance was abolished in a sensitivity analysis of only prospectively conducted studies. Conclusion SAS and MAC techniques were feasible and safe, whereas data for AAA technique are limited. Large RCTs are required to prove superiority of one anaesthetic regime for AC. PMID:27228013
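The random-effects pooling of event rates used in meta-analyses like this one can be sketched with a standard DerSimonian-Laird estimator on the logit scale. The per-study counts below are hypothetical and are not taken from the review.

```python
import math

# Hypothetical per-study data: (events, total) for one binary outcome.
studies = [(2, 100), (5, 120), (1, 80), (4, 150)]

def pooled_event_rate(studies):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    y, v = [], []
    for e, n in studies:
        e, n = e + 0.5, n + 1.0               # continuity correction
        p = e / n
        y.append(math.log(p / (1 - p)))       # logit-transformed proportion
        v.append(1 / e + 1 / (n - e))         # within-study variance
    w = [1 / vi for vi in v]                  # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # heterogeneity Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]      # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1 / (1 + math.exp(-mu))            # back-transform to a proportion

rate = pooled_event_rate(studies)
```

The pooled rate necessarily falls within the range of the individual study proportions; the logistic meta-regression the review uses additionally models technique (SAS vs MAC) as a study-level covariate on this scale.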
Wood, Michael
2011-01-01
To add to the evidence base for safe and effective paediatric conscious sedation techniques in primary dental care. To consider the safety and effectiveness of an alternative sedation technique for facilitating dental treatment in anxious children, thereby avoiding dental general anaesthesia. Leagrave Dental Sedation Clinic. A primary care-based general and referral clinic for anxious patients, special care dentistry and oral surgery. This is a prospective service evaluation of 114 selected anxious children requiring invasive dental treatment. Each child was administered 0.25 mg/kg intranasal midazolam (INM), using a concentrated 40 mg/ml midazolam in 2% lignocaine solution. Successful completion of intended dental treatment with a child who is co-operative and who meets the UK accepted definition of conscious sedation. 57% of the children found the administration of the new formulation acceptable. Of the 114 patients who received INM, 104 completed the treatment (91%). The 10 children who could not complete the treatment with INM were converted to intravenous sedation and treatment was completed successfully at the same appointment. During treatment there was no desaturation, and only one patient desaturated briefly in the recovery area. Parents rated the technique acceptable in 76% of cases and would have the procedure repeated in 83% of cases. Parents rated this technique 8.3 out of 10, with only 5 parents awarding a score of less than 7 out of 10. Side effects included blurred vision, sneezing, headaches and restlessness, with one patient having post-operative nausea and vomiting. In selected cases intranasal sedation provides a safe and effective alternative to dental GA in short invasive procedures limited to one or two quadrants in children. Other techniques, e.g. oral and intravenous sedation, appear to have a much higher acceptability of administration. 
This technique may be useful if inhalation sedation, oral sedation or intravenous sedation is considered and the child is still unco-operative, either as a technique on its own or to facilitate cannulation for intravenous sedation. It is recommended that this technique should only be used by dentists skilled in intravenous paediatric sedation with midazolam with the appropriate staff training and equipment at their disposal.
47 CFR 8.13 - General pleading requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false General pleading requirements. 8.13 Section 8.13 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRESERVING THE OPEN INTERNET § 8.13 General pleading requirements. (a) General pleading requirements. All written submissions, both...