How generation affects source memory.
Geghman, Kindiya D; Multhaup, Kristi S
2004-07-01
Generation effects (better memory for self-produced items than for provided items) typically occur in item memory. Jurica and Shimamura (1999) reported a negative generation effect in source memory, but their procedure did not test participants on the items they had generated. In Experiment 1, participants answered questions and read statements made by a face on a computer screen. The target word was unscrambled, or letters were filled in. Generation effects were found for target recall and source recognition (which person did which task). Experiment 2 extended these findings to a condition in which the external sources were two different faces. Generation had a positive effect on source memory, supporting an overlap in the underlying mechanisms of item and source memory.
NASA Astrophysics Data System (ADS)
Zhang, Yi; Xu, Yue; Ma, Kun
2016-08-01
In this paper, the variable-coefficient Kadomtsev-Petviashvili (vcKP) equation with self-consistent sources is presented by two different methods, one is the source generation procedure, the other is the Pfaffianization procedure, and the solutions for the two new coupled systems are given through Grammian-type Pfaffian determinants.
Development of dynamic calibration methods for POGO pressure transducers. [for space shuttle
NASA Technical Reports Server (NTRS)
Hilten, J. S.; Lederer, P. S.; Vezzetti, C. F.; Mayo-Wells, J. F.
1976-01-01
Two dynamic pressure sources are described for the calibration of pogo pressure transducers used to measure oscillatory pressures generated in the propulsion system of the space shuttle. Rotation of a mercury-filled tube in a vertical plane at frequencies below 5 Hz generates sinusoidal pressures up to 48 kPa, peak-to-peak; vibrating the same mercury-filled tube sinusoidally in the vertical plane extends the frequency response from 5 Hz to 100 Hz at pressures up to 140 kPa, peak-to-peak. The sinusoidal pressure fluctuations can be generated by both methods in the presence of high pressures (bias) up to 55 MPa. Calibration procedures are given in detail for the use of both sources. The dynamic performance of selected transducers was evaluated using these procedures; the results of these calibrations are presented. Calibrations made with the two sources near 5 Hz agree to within 3% of each other.
New Source Performance Standards
ERIC Educational Resources Information Center
Jenkins, Richard E.; McCutchen, Gary D.
1972-01-01
This feature article outlines the concept and procedures followed in establishing performance standards for new emission sources and summarizes the standards that have been established to date. Five source catagories are enumerated: fossil fuel-fired steam generators, municipal incinerators, Portland cement plants, nitric acid plants, and sulfuric…
Natural language generation of surgical procedures.
Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R
1999-01-01
A number of compositional Medical Concept Representation systems are being developed. Although these provide for a detailed conceptual representation of the underlying information, they have to be translated back to natural language for used by end-users and applications. The GALEN programme has been developing one such representation and we report here on a tool developed to generate natural language phrases from the GALEN conceptual representations. This tool can be adapted to different source modelling schemes and to different destination languages or sublanguages of a domain. It is based on a multilingual approach to natural language generation, realised through a clean separation of the domain model from the linguistic model and their link by well defined structures. Specific knowledge structures and operations have been developed for bridging between the modelling 'style' of the conceptual representation and natural language. Using the example of the scheme developed for modelling surgical operative procedures within the GALEN-IN-USE project, we show how the generator is adapted to such a scheme. The basic characteristics of the surgical procedures scheme are presented together with the basic principles of the generation tool. Using worked examples, we discuss the transformation operations which change the initial source representation into a form which can more directly be translated to a given natural language. In particular, the linguistic knowledge which has to be introduced--such as definitions of concepts and relationships is described. We explain the overall generator strategy and how particular transformation operations are triggered by language-dependent and conceptual parameters. Results are shown for generated French phrases corresponding to surgical procedures from the urology domain.
Modular Engine Noise Component Prediction System (MCP) Technical Description and Assessment Document
NASA Technical Reports Server (NTRS)
Herkes, William H.; Reed, David H.
2005-01-01
This report describes an empirical prediction procedure for turbofan engine noise. The procedure generates predicted noise levels for several noise components, including inlet- and aft-radiated fan noise, and jet-mixing noise. This report discusses the noise source mechanisms, the development of the prediction procedures, and the assessment of the accuracy of these predictions. Finally, some recommendations for future work are presented.
An experimental study was conducted to determine the reliability of the Method 5 procedure for providing particulate emission data from an oil-fired steam generator. The study was concerned with determining whether any 'false' particulate resulted from the collection process of f...
Patwardhan, Anil
2010-01-01
The cut and sew Cox's maze III procedure is time-consuming. Therefore, various energy sources have been used for ablation, to replace the cut and sew technique. We have used the bipolar radiofrequency output of a standard electrosurgical generator with re-usable forceps for replicating the Cox's maze III procedure from August 1996. In addition, re-usable nitrous oxide based cryoprobe has been used at the atrioventricular valve annuli and on the coronary sinus. The 85.7% cure rate at one year compares with that for surgical ablation of atrial fibrillation using alternative energy sources.
NASA Technical Reports Server (NTRS)
Tibbetts, J. G.
1979-01-01
Methods for predicting noise at any point on an aircraft while the aircraft is in a cruise flight regime are presented. Developed for use in laminar flow control (LFC) noise effects analyses, they can be used in any case where aircraft generated noise needs to be evaluated at a location on an aircraft while under high altitude, high speed conditions. For each noise source applicable to the LFC problem, a noise computational procedure is given in algorithm format, suitable for computerization. Three categories of noise sources are covered: (1) propulsion system, (2) airframe, and (3) LFC suction system. In addition, procedures are given for noise modifications due to source soundproofing and the shielding effects of the aircraft structure wherever needed. Sample cases, for each of the individual noise source procedures, are provided to familiarize the user with typical input and computed data.
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can be provide the high capacity data manipulation required by the most complex real time models.
Generation and characterization of diesel exhaust in a facility for controlled human exposures
An idling medium-duty diesel truck operated on ultralow sulfur diesel fuel was used as an emission source to generate diesel exhaust for controlled human exposure. Repeat tests were conducted on the Federal Test Procedure using a chassis dynamometer to demonstrate the reproducibi...
Conditioning Procedure for Spent Cs-137 Sealed Sources in Egypt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed, Y.T.; Hasan, M.A.; Lasheen, Y.F.
2006-07-01
It is the duty of the Hot Laboratories and Waste Management Center, Egyptian Atomic Energy Authority to mange the radioactive waste generated from any user for radioactive materials in Egypt. The most hazardous or dangerous radioactive waste we collect is spent radioactive sealed sources that have to be managed safely to protect human, workers and environment from any undue burden for radiation. Through the Integrated Management Program Of Radioactive Sealed Sources In Egypt, IMPRSS all spent Cs-137 sources with low activity will be retrievable conditioned in 200 L drum with special lead shield to keep the surface dose rate lowermore » than 200 merm/h according to US regulations and IAEA guidelines. Using this procedure the EAEA will condition about 243 sources in 9 drums. (authors)« less
Destination memory for self-generated actions.
El Haj, Mohamad
2016-10-01
There is a substantial body of literature showing memory enhancement for self-generated information in normal aging. The present paper investigated this outcome for destination memory or memory for outputted information. In Experiment 1, younger adults and older adults had to place (self-generated actions) and observe an experimenter placing (experiment-generated actions) items into two different destinations (i.e., a black circular box and a white square box). On a subsequent recognition task, the participants had to decide into which box each item had originally been placed. These procedures showed better destination memory for self- than experimenter-generated actions. In Experiment 2, destination and source memory were assessed for self-generated actions. Younger adults and older adults had to place items into the two boxes (self-generated actions), take items out of the boxes (self-generated actions), and observe an experimenter taking items out of the boxes (experiment-generated actions). On a subsequent recognition task, they had to decide into which box (destination memory)/from which box (source memory) each item had originally been placed/taken. For both populations, source memory was better than destination memory for self-generated actions, and both were better than source memory for experimenter-generated actions. Taken together, these findings highlight the beneficial effect of self-generation on destination memory in older adults.
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. The processing performed by SAP and of the routines, COMMON blocks, and files used by SAP are described. The system generation procedure for SAP is also presented.
ERIC Educational Resources Information Center
Applied Management Sciences, Inc., Silver Spring, MD.
This report presents results of a project to assess the adequacy of existing data sources on the supply of 21 allied health occupations in order to develop improved data collection strategies and improved procedures for estimation of manpower needs. Following an introduction, chapter 2 provides a discussion of the general phases of the project and…
Price Transparency in the Online Age.
Kaplan, Jonathan L; Mills, Parker H
2016-05-01
Plastic surgeons are sometimes hesitant to provide their pricing information online, due to several concerns. However, if implemented right, price transparency can be used as a lead generation tool that provides consumers with the pricing information they want and gives the physician the consumer's contact information for follow-up. This study took place during the author's first year in private practice in a new city. An interactive price transparency platform (ie, cost estimator) was integrated into his website, allowing consumers to submit a "wishlist" of procedures to check pricing on these procedures of interest. However, the consumer must submit their contact information to receive the desired breakdown of costs that are tailored based on the author's medical fees. During that first year, without any advertising expenditure, the author's website received 412 wishlists from 208 unique consumers. Consumers (17.8%) that submitted a wishlist came in for a consultation and 62% of those booked a procedure. The average value of a booked procedure was over US $4000 and cumulatively, all of the leads from this one lead source in that first year generated over US $92,000 in revenue. When compared with non-price-aware patients, price-aware patients were 41% more likely to book a procedure. Price transparency led to greater efficiency and reduced consultations that ended in "sticker shock." When prudently integrated into a medical practice, price transparency can be a great lead generation source for patients that are (1) paying out of pocket for medically necessary services due to a high-deductible health plan or (2) paying for services not typically covered by insurance, such as cosmetic services.
Specific and non-specific match effects in negative priming.
Labossière, Danielle I; Leboe-McGowan, Jason P
2018-01-01
The negative priming effect occurs when withholding a response to a stimulus impairs generation of subsequent responding to a same or a related stimulus. Our goal was to use the negative priming procedure to obtain insights about the memory representations generated by ignoring vs. attending/responding to a prime stimulus. Across three experiments we observed that ignoring a prime stimulus tends to generate higher identity-independent, non-specific repetition effects, owing to an overlap in the coarse perceptual form of a prime distractor and a probe target. By contrast, attended repetition effects generate predominantly identity-specific sources of facilitation. We use these findings to advocate for using laboratory phenomena to illustrate general principles that can be of practical use to non-specialists. In the case of the negative priming procedure, we propose that the procedure provides a useful means for investigating attention/memory interactions, even if the specific cause (or causes) of negative priming effects remain unresolved. Copyright © 2017 Elsevier B.V. All rights reserved.
Effect of conductor geometry on source localization: Implications for epilepsy studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlitt, H.; Heller, L.; Best, E.
1994-07-01
We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we mustmore » first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. One the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.« less
The U.S. Environmental Protection Industry: The Technical Document (1995)
This 1995 report provides a detailed explanation of the data sources and procedures used to generate in the estimates presented in The U.S. Environmental Protection Industry: A Proposed Framework for Assessment.
Freight Transportation Energy Use : Volume 3. Freight Network and Operations Database.
DOT National Transportation Integrated Search
1979-07-01
The data sources, procedures, and assumptions used to generate the TSC national freight network and operations database are documented. National rail, highway, waterway, and pipeline networks are presented, and estimates of facility capacity, travel ...
A DATABASE FOR TRACKING TOXICOGENOMIC SAMPLES AND PROCEDURES
Reproductive toxicogenomic studies generate large amounts of toxicological and genomic data. On the toxicology side, a substantial quantity of data accumulates from conventional endpoints such as histology, reproductive physiology and biochemistry. The largest source of genomics...
The Choreography of Accountability
ERIC Educational Resources Information Center
Webb, P. Taylor
2006-01-01
The prevailing performance discourse in education claims school improvements can be achieved through transparent accountability procedures. The article identifies how teachers generate performances of their work in order to satisfy accountability demands. By identifying sources of teachers' knowledge that produce choreographed performances, I…
Legitimacy and Justice Perceptions
ERIC Educational Resources Information Center
Mueller, Charles W.; Landsman, Miriam J.
2004-01-01
Consistent with the theoretical argument of Hegtvedt and Johnson, we empirically examine the relationship between collectivity-generated legitimacy of reward procedures and individual-level justice perceptions about reward distributions. Using data from a natural setting, we find that collectivity sources of validity (authorization and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodgson, A.T.; Apte, M.G.; Shendell, D.G.
Detailed studies of a new manufactured house and four new industrialized relocatable school classrooms were conducted to determine the emission sources of formaldehyde and other VOCs and to identify and implement source reduction practices. Procedures were developed to generate VOC emission factors that allowed reasonably accurate predictions of indoor air VOC concentrations. Based on the identified sources of formaldehyde and other aldehydes, practices were developed to reduce the concentrations of these compounds in new house construction. An alternate ceiling panel reduced formaldehyde concentrations in the classrooms. Overall, the classrooms had relatively low VOC concentrations.
Trends and drivers of the aesthetic market during a turbulent economy.
Wilson, Stelios C; Soares, Marc A; Reavey, Patrick L; Saadeh, Pierre B
2014-06-01
Aesthetic procedures are significant sources of revenue for plastic surgeons. With the popularity of nonsurgical aesthetic procedures, many plastic surgeons question how to best tailor their aesthetic practice. Revenue generated from surgical and minimally invasive aesthetic procedures performed in the United States between 2000 and 2011 was calculated from the American Society of Plastic Surgeons' annual reports. Regression analysis was performed against six commonly cited economic indicators. In 2011, revenue from minimally invasive procedures increased from $3.0 billion to $5.7 billion (90 percent growth), whereas revenue from surgical procedures decreased from $6.6 billion to $6.0 billion (10 percent decline). Between 2000 and 2011, minimally invasive procedure market share grew from 30 percent to nearly 50 percent. Linear regression analysis revealed significant correlations between surgical procedure revenue and indicators of macroeconomic climate: Dow Jones Industrial Average (R = 0.72; p < 0.01), Standard & Poor's 500 Index (R = 0.64, p < 0.05), and unemployment rate (R = -0.81; p < 0.001). Minimally invasive procedure revenue was significantly correlated with indicators related to microeconomic decision trends: disposable income per capita (R = 0.93; p < 0.001), real gross domestic product per capita (R = 0.88; p < 0.001), and home price index (R = 0.63; p < 0.05). No economic indicator in this study was found to be significantly correlated with both surgical and minimally invasive revenue. Despite economic turbulence, minimally invasive procedures are the most rapidly growing source of revenue and are poised to be the dominant source of revenue in the aesthetic market.
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimatation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
Collecting and Animating Online Satellite Images.
ERIC Educational Resources Information Center
Irons, Ralph
1995-01-01
Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…
Is it safe to use local anesthesia with adrenaline in hand surgery? WALANT technique.
Pires Neto, Pedro José; Moreira, Leonardo de Andrade; Las Casas, Priscilla Pires de
2017-01-01
In the past it was taught that local anesthetic should not be used with adrenaline for procedures in the extremities. This dogma is transmitted from generation to generation. Its truth has not been questioned, nor the source of the doubt. In many situations the benefit of use was not understood, because it was often thought that it was not necessary to prolong the anesthetic effect, since the procedures were mostly of short duration. After the disclosure of studies of Canadian surgeons, came to understand that the benefits went beyond the time of anesthesia. The WALANT technique allows a surgical field without bleeding, possibility of information exchange with the patient during the procedure, reduction of waste material, reduction of costs, and improvement of safety. Thus, after passing through the initial phase of the doubts in the use of this technique, the authors verified its benefits and the patients' satisfaction in being able to immediately return home after the procedures.
Noniterative three-dimensional grid generation using parabolic partial differential equations
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1985-01-01
A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.
Next-Generation MDAC Discrimination Procedure Using Multi-Dimensional Spectral Analyses
2007-09-01
explosions near the Lop Nor, Novaya Zemlya, Semipalatinsk , Nevada, and Indian test sites . We have computed regional phase spectra and are correcting... test sites as mainly due to differences in explosion P and S corner frequencies. Fisk (2007) used source model fits to estimate Pn, Pg, and Lg corner...frequencies for Nevada Test Site (NTS) explosions and found that Lg corner frequencies exhibit similar scaling with source size as for Pn and Pg
General purpose computer program for interacting supersonic configurations: Programmer's manual
NASA Technical Reports Server (NTRS)
Crill, W.; Dale, B.
1977-01-01
The program ISCON (Interacting Supersonic Configuration) is described. The program is in support of the problem to generate a numerical procedure for determining the unsteady dynamic forces on interacting wings and tails in supersonic flow. Subroutines are presented along with the complete FORTRAN source listing.
Mapping algorithm for freeform construction using non-ideal light sources
NASA Astrophysics Data System (ADS)
Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.
2015-09-01
Using conventional mapping algorithms for the construction of illumination freeform optics' arbitrary target pattern can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target and the light intensity of an image point is corresponding to the area of the freeform surface element who generates the image point. For sources with a pronounced extension and ray divergence, e.g. an LED with a small source-freeform-distance, the image points are blurred and the blurred patterns might be different between different points. Besides, due to Fresnel losses and vignetting, the relationship between light intensity of image points and area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end the method of steepest decent procedures are used to adapt the mapping goal. A structured target pattern for a optics system with an ideal source is computed applying corresponding linear optimization matrices. Special weighting factor and smoothing factor are included in the procedures to achieve certain edge conditions and to ensure the manufacturability of the freefrom surface. The corresponding linear optimization matrices, which are the lighting distribution patterns of each of the freeform surface elements, are gained by conventional raytracing with a realistic source. Nontrivial source geometries, like LED-irregularities due to bonding or source fine structures, and a complex ray divergence behavior can be easily considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations, with a realistic source, the initial mapping goal can be achieved by the optics system providing a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discussed the ability and limitation of the this mothed. It is also presented that a homogeneous LED-illumination system design, in where, with a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and short working distance applying a relatively large LED source. It is shown that the lighting distribution patterns from the freeform surface elements can be significantly different from the others. The generation of a structured target pattern, applying weighting factor and smoothing factor, are discussed. Finally, freeform designs for much more complex sources like clusters of LED-sources are presented.
Financing future power generation in Italy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esposito, P.
1998-07-01
Under Italian law, independent power generation fueled by renewable and so-called ``assimilated'' sources must be given incentives. To implement this provision, a resolution known as ``CIP 6'' and a decree setting forth the procedure to sell such electricity to ENEL were issued. CIP 6 has recently been revoked and new incentives have been announced. In the meantime, CIP 6 continues to apply to various projects which have been approved but not yet constructed.
Structural Group-based Auditing of Missing Hierarchical Relationships in UMLS
Chen, Yan; Gu, Huanying(Helen); Perl, Yehoshua; Geller, James
2009-01-01
The Metathesaurus of the UMLS was created by integrating various source terminologies. The inter-concept relationships were either integrated into the UMLS from the source terminologies or specially generated. Due to the extensive size and inherent complexity of the Metathesaurus, the accidental omission of some hierarchical relationships was inevitable. We present a recursive procedure which allows a human expert, with the support of an algorithm, to locate missing hierarchical relationships. The procedure starts with a group of concepts with exactly the same (correct) semantic type assignments. It then partitions the concepts, based on child-of hierarchical relationships, into smaller, singly rooted, hierarchically connected subgroups. The auditor only needs to focus on the subgroups with very few concepts and their concepts with semantic type reassignments. The procedure was evaluated by comparing it with a comprehensive manual audit and it exhibits a perfect error recall. PMID:18824248
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Sekiguchi, H.
2011-12-01
We propose a prototype of the procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model which is based on the empirical scaling relationships for intraslab earthquakes and involve the correspondence between the SMGA (strong motion generation area, Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained the empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo) as follows, with assuming power of 2/3 dependency of S and Sa on M0, S (km**2) = 6.57×10**(-11)×Mo**(2/3) (Nm) (1) Sa (km**2) = 1.04 ×10**(-11)×Mo**(2/3) (Nm) (2). Iwata and Asano (2011) also pointed out that the position and the size of SMGA approximately corresponds to the asperity area for several intraslab events. Based on the empirical relationships, we gave a procedure for constructing source models of intraslab earthquakes for strong motion prediction. [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Square rupture area and asperities are assumed. [4] The source mechanism is assumed to be the same as that of small events in the source region. [5] Plural scenarios including variety of the number of asperities and rupture starting points are prepared. We apply this procedure by simulating strong ground motions for several observed events for confirming the methodology.
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in spite of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating separately subduction and background (crustal) earthquakes allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
Auditory event perception: the source-perception loop for posture in human gait.
Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J
2008-01-01
There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.
X-ray absorption radiography for high pressure shock wave studies
NASA Astrophysics Data System (ADS)
Antonelli, L.; Atzeni, S.; Batani, D.; Baton, S. D.; Brambrink, E.; Forestier-Colleoni, P.; Koenig, M.; Le Bel, E.; Maheut, Y.; Nguyen-Bui, T.; Richetta, M.; Rousseaux, C.; Ribeyre, X.; Schiavi, A.; Trela, J.
2018-01-01
The study of laser compressed matter, both warm dense matter (WDM) and hot dense matter (HDM), is relevant to several research areas, including materials science, astrophysics, inertial confinement fusion. X-ray absorption radiography is a unique tool to diagnose compressed WDM and HDM. The application of radiography to shock-wave studies is presented and discussed. In addition to the standard Abel inversion to recover a density map from a transmission map, a procedure has been developed to generate synthetic radiographs using density maps produced by the hydrodynamics code DUED. This procedure takes into account both source-target geometry and source size (which plays a non negligible role in the interpretation of the data), and allows to reproduce transmission data with a good degree of accuracy.
HangOut: generating clean PSI-BLAST profiles for domains with long insertions.
Kim, Bong-Hyun; Cong, Qian; Grishin, Nick V
2010-06-15
Profile-based similarity search is an essential step in structure-function studies of proteins. However, inclusion of non-homologous sequence segments into a profile causes its corruption and results in false positives. Profile corruption is common in multidomain proteins, and single domains with long insertions are a significant source of errors. We developed a procedure (HangOut) that, for a single domain with specified insertion position, cleans erroneously extended PSI-BLAST alignments to generate better profiles. HangOut is implemented in Python 2.3 and runs on all Unix-compatible platforms. The source code is available under the GNU GPL license at http://prodata.swmed.edu/HangOut/. Supplementary data are available at Bioinformatics online.
Modelling of auctioning mechanism for solar photovoltaic capacity
NASA Astrophysics Data System (ADS)
Poullikkas, Andreas
2016-10-01
In this work, a modified optimisation model for the integration of renewable energy sources for power-generation (RES-E) technologies in power-generation systems on a unit commitment basis is developed. The purpose of the modified optimisation procedure is to account for RES-E capacity auctions for different solar photovoltaic (PV) capacity electricity prices. The optimisation model developed uses a genetic algorithm (GA) technique for the calculation of the required RES-E levy (or green tax) in the electricity bills. Also, the procedure enables the estimation of the level of the adequate (or eligible) feed-in-tariff to be offered to future RES-E systems, which do not participate in the capacity auctioning procedure. In order to demonstrate the applicability of the optimisation procedure developed the case of PV capacity auctioning for commercial systems is examined. The results indicated that the required green tax, in order to promote the use of RES-E technologies, which is charged to the electricity customers through their electricity bills, is reduced with the reduction in the final auctioning price. This has a significant effect related to the reduction of electricity bills.
Source positions from VLBI combined solution
NASA Astrophysics Data System (ADS)
Bachmann, S.; Thaller, D.; Engelhardt, G.
2014-12-01
The IVS Combination Center at BKG is primarily responsible for combined Earth Orientation Parameter (EOP) products and the generation of a terrestrial reference frame based on VLBI observations (VTRF). The procedure is based on the combination of normal equations provided by six IVS Analysis Centers (AC). Since more and more ACs also provide source positions in the normal equations - beside EOPs and station coordinates - an estimation of these parameters is possible and should be investigated. In the past, the International Celestial Reference Frame (ICRF) was not generated as a combined solution from several individual solutions, but was based on a single solution provided by one AC. The presentation will give an overview on the combination strategy and the possibilities for combined source position determination. This includes comparisons with existing catalogs, quality estimation and possibilities of rigorous combination of EOP, TRF and CRF in one combination process.
McBride, Dawn M; Anne Dosher, Barbara
2002-09-01
Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex activity , based on the evidence ; it consists of information from multiple sources : medical record text , sound recordings , images and videos generated by a large number of devices . Medical imaging is one of the most important sources of information since they offer comprehensive support of medical procedures for diagnosis and follow-up . However , the amount of information generated by image capturing gadgets quickly exceeds storage availability in radiology services , generating additional costs in devices with greater storage capacity . Besides , the current trend of developing applications in cloud computing has limitations, even though virtual storage is available from anywhere, connections are made through internet . In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs . In this paper we present a review of compression techniques used for image storage , and a critical analysis of them from the point of view of their use in clinical settings.
Pollution prevention and control procedure case study: an application for petroleum refineries.
Rodríguez, Encarnación; Martínez, Jose-Luis
2005-06-01
There is global environmental concern about the pollution from industries and other organizations that should not only be controlled but also prevented. Many alternatives are available to those in charge of environmental protection, but they should be able to draw on a systematic procedure to help implement prevention and control measures. At present, there are three immediate tasks: defining the objective of any environmental study, identifying the potential pollution sources, and selecting alternatives to these sources. However, it is necessary to evaluate these alternatives by using as large a number of criteria as possible and making them cumulative so as to enable the classification and selection of the best available techniques for each pollution source. The petroleum refining industry plays an important role in the developed economies and also has a potential for pollution generation that must be controlled. The best solution for all (i.e., petroleum companies, the public, and the environment) is pollution prevention, because this option will protect all of them and will also reduce costs in terms of lower raw materials consumption as well as reducing potential fines. The procedure we have presented in this article has been applied successfully.
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
40 CFR 246.200-5 - Recommended procedures: Methods of separation and collection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... designed to recover high grades of office paper at the source of generation, i.e., the desk, are the... recommended system is the desk-top system because it is designed to maximize recovery of high value material... desk-top system has been designed to minimize these problems. (d) The precise method of separation and...
Multi-Sensor Triangulation of Multi-Source Spatial Data
NASA Technical Reports Server (NTRS)
Habib, Ayman; Kim, Chang-Jae; Bang, Ki-In
2007-01-01
The introduced methodologies are successful in: a) Ising LIDAR features for photogrammetric geo-refererncing; b) Delivering a geo-referenced imagery of the same quality as point-based geo-referencing procedures; c) Taking advantage of the synergistic characteristics of spatial data acquisition systems. The triangulation output can be used for the generation of 3-D perspective views.
NASA Technical Reports Server (NTRS)
Wey, Thomas; Liu, Nan-Suey
2013-01-01
This paper summarizes the procedures of generating a polyhedral mesh derived from hanging-node elements as well as presents sample results from its application to the numerical solution of a single element lean direct injection (LDI) combustor using an open-source version of the National Combustion Code (NCC).
ERIC Educational Resources Information Center
Grammatikopoulos, Vasilis; Zachopoulou, Evridiki; Tsangaridou, Niki; Liukkonen, Jarmo; Pickup, Ian
2008-01-01
The body of research relating to assessment in education suggests that professional developers and seminar administrators have generally paid little attention to evaluation procedures. Scholars have also been critical of evaluations which use a single data source and have favoured the use of a multiple method design to generate a complete picture…
Price, Elliott J.; Wilkin, Paul; Sarasan, Viswambharan; Fraser, Paul D.
2016-01-01
Yams (Dioscorea spp.) are a multispecies crop with production in over 50 countries generating ~50 MT of edible tubers annually. The long-term storage potential of these tubers is vital for food security in developing countries. Furthermore, many species are important sources of pharmaceutical precursors. Despite these attributes as staple food crops and sources of high-value chemicals, Dioscorea spp. remain largely neglected in comparison to other staple tuber crops of tropical agricultural systems such as cassava (Manihot esculenta) and sweet potato (Ipomoea batatas). To date, studies have focussed on the tubers or rhizomes of Dioscorea, neglecting the foliage as waste. In the present study metabolite profiling procedures, using GC-MS approaches, have been established to assess biochemical diversity across species. The robustness of the procedures was shown using material from the phylogenetic clades. The resultant data allowed separation of the genotypes into clades, species and morphological traits with a putative geographical origin. Additionally, we show the potential of foliage material as a renewable source of high-value compounds. PMID:27385275
Mr-Moose: An advanced SED-fitting tool for heterogeneous multi-wavelength datasets
NASA Astrophysics Data System (ADS)
Drouart, G.; Falkendal, T.
2018-04-01
We present the public release of Mr-Moose, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from an heterogeneous dataset (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, Mr-Moose handles upper-limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly-versatile fitting tool fro handling increasing source complexity when combining multi-wavelength datasets with fully customisable filter/model databases. The complete control of the user is one advantage, which avoids the traditional problems related to the "black box" effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of Python and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially-generated datasets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA and VLA data) in the context of extragalactic SED fitting, makes Mr-Moose a particularly-attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets
NASA Astrophysics Data System (ADS)
Drouart, G.; Falkendal, T.
2018-07-01
We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets with fully customisable filter/model data bases. The complete control of the user is one advantage, which avoids the traditional problems related to the `black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of PYTHON and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
Inflight IFR procedures simulator
NASA Technical Reports Server (NTRS)
Parker, L. C. (Inventor)
1984-01-01
An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which generates predetermined simulated signals corresponding to signals normally received from remote sources upon being activated. A computer is connected to the signal synthesizer and causes the signal synthesizer to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and navigational instruments and communication devices for selectively connecting instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication device. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected for supply electrical signals for the computer indicating the heading of the airplane. The computer upon receiving signals from the pressure transducer and compass, computes the signals that are fed to the signal synthesizer which, in turn, generates simulated navigational signals.
Users guide for the Water Resources Division bibliographic retrieval and report generation system
Tamberg, Nora
1983-01-01
The WRDBIB Retrieval and Report-generation system has been developed by applying Multitrieve (CSD 1980, Reston) software to bibliographic data files. The WRDBIB data base includes some 9 ,000 records containing bibliographic citations and descriptors of WRD reports released for publication during 1968-1982. The data base is resident in the Reston Multics computer and may be accessed by registered Multics users in the field. The WRDBIB Users Guide provides detailed procedures on how to run retrieval programs using WRDBIB library files, and how to prepare custom bibliographic reports and author indexes. Users may search the WRDBIB data base on the following variable fields as described in the Data Dictionary: Authors, organizational source, title, citation, publication year, descriptors, and the WRSIC (accession) number. The Users Guide provides ample examples of program runs illustrating various retrieval and report generation aspects. Appendices include Multics access and file manipulation procedures; a ' Glossary of Selected Terms'; and a complete ' Retrieval Session ' with step-by-step outlines. (USGS)
Active System for Electromagnetic Perturbation Monitoring in Vehicles
NASA Astrophysics Data System (ADS)
Matoi, Adrian Marian; Helerea, Elena
Nowadays electromagnetic environment is rapidly expanding in frequency domain and wireless services extend in terms of covered area. European electromagnetic compatibility regulations refer to limit values regarding emissions, as well as procedures for determining susceptibility of the vehicle. Approval procedure for a series of cars is based on determining emissions/immunity level for a few vehicles picked randomly from the entire series, supposing that entire vehicle series is compliant. During immunity assessment, the vehicle is not subjected to real perturbation sources, but exposed to electric/magnetic fields generated by laboratory equipment. Since current approach takes into account only partially real situation regarding perturbation sources, this paper proposes an active system for determining electromagnetic parameters of vehicle's environment, that implements a logical diagram for measurement, satisfying the imposed requirements. This new and original solution is useful for EMC assessment of hybrid and electrical vehicles.
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
Bellmann, Barbara; Lin, Tina; Ruppersberg, Peter; Zettwitz, Marit; Guttmann, Selma; Tscholl, Verena; Nagel, Patrick; Roser, Mattias; Landmesser, Ulf; Rillig, Andreas
2018-05-09
The optimal ablation approach for the treatment of persistent atrial fibrillation (AF) is still under debate; however, the identification and elimination of AF sources is thought to play a key role. Currently available technologies for the identification of AF sources are not able to differentiate between active rotors or focal impulse (FI) and passive circular turbulences as generated by the interaction of a wave front with a functional obstacle such as fibrotic tissue. This study introduces electrographic flow (EGF) mapping as a novel technology for the identification and characterization of AF sources in humans. Twenty-five patients with AF (persistent: n = 24, long-standing persistent: n = 1; mean age 70.0 ± 8.3 years, male: n = 17) were included in this prospective study. Focal impulse and Rotor-Mapping (FIRM) was performed in addition to pulmonary vein isolation using radiofrequency in conjunction with a 3D-mapping-system. One-minute epochs were exported from the EP-recording-system and re-analyzed using EGF mapping after the procedure. 44 potential AF sources (43 rotors and one FI) were identified with FIRM and 39 of these rotors were targeted for ablation. EGF mapping verified 40 of these patterns and identified 24/40 (60%) as active sources while 16/40 (40%) were classified as passive circular turbulences. Four rotors were not identified by EGF mapping. EGF is the first method to identify active AF sources during AF ablation procedures in humans and discriminate them from passive rotational phenomena, which occur if the excitation wavefront passes conduction bariers. EGF mapping may allow improved guidance of AF ablation procedures.
The modified semi-discrete two-dimensional Toda lattice with self-consistent sources
NASA Astrophysics Data System (ADS)
Gegenhasi
2017-07-01
In this paper, we derive the Grammian determinant solutions to the modified semi-discrete two-dimensional Toda lattice equation, and then construct the semi-discrete two-dimensional Toda lattice equation with self-consistent sources via source generation procedure. The algebraic structure of the resulting coupled modified differential-difference equation is clarified by presenting its Grammian determinant solutions and Casorati determinant solutions. As an application of the Grammian determinant and Casorati determinant solution, the explicit one-soliton and two-soliton solution of the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources are given. We also construct another form of the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources which is the Bäcklund transformation for the semi-discrete two-dimensional Toda lattice equation with self-consistent sources.
CFD Analysis of Turbo Expander for Cryogenic Refrigeration and Liquefaction Cycles
NASA Astrophysics Data System (ADS)
Verma, Rahul; Sam, Ashish Alex; Ghosh, Parthasarathi
Computational Fluid Dynamics analysis has emerged as a necessary tool for designing of turbomachinery. It helps to understand the various sources of inefficiency through investigation of flow physics of the turbine. In this paper, 3D turbulent flow analysis of a cryogenic turboexpander for small scale air separation was performed using Ansys CFX®. The turboexpander has been designed following assumptions based on meanlineblade generation procedure provided in open literature and good engineering judgement. Through analysis of flow field, modifications and further analysis required to evolve a more robust design procedure, have been suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Noravian, H.; Or, C.
1997-12-31
This paper extends the analytical procedure described in another paper in these proceedings to analyze a variety of compact and light-weight OSC-designed radioisotope-heated generators. Those generators employed General Purpose Heat Source (GPHS) modules and a converter containing sixteen AMTEC cells of OSC`s revised five-tube design with enhanced cell wall reflectivity described in a companion paper in these proceedings. OSC found that the performance of the generator is primarily a function of the thermal insulation between the outside of the generator`s 16 cells and the inside of its wall. After examining a variety of insulation options, it was found that themore » generator`s performance is optimized by employing a hybrid insulation system, in which the space between the cells is filled with fibrous Min-K insulation, and the generator walls are lined with tapered (i.e., graded-length) multifoil insulation. The OSC design results in a very compact generator, with eight AMTEC cells on each end of the heat source stack. The choice of the five-tube cells makes it possible to expand the BASE tube diameter without increasing the cell diameter. This is important because the eight cells mate well with the stacked GPHS modules. The OSC generator design includes a compliant heat source support and preload arrangement, to hold the heat source modules together during launch, and to maintain thermal contact conductance at the generator`s interfaces despite creep relaxation of its housing. The BOM and EOM (up to 15 years) performances of the revised generators were analyzed for two and three GPHS modules, both for fresh fuel and for aged fuel left over from a spare RTG (Radioisotope Thermoelectric Generator) fueled in 1982. The resulting power outputs were compared with JPL`s latest EOM power demand goals for the Pluto Express and Europa Orbiter missions, and with the generic goals of DOE`s Advanced Radioisotope Power System (ARPS) study. The OSC AMTEC designs yielded system efficiencies three to four times as high as present-generation RTGs.« less
1993-12-01
72 D. MINES AND THE MILITARY-TECHNOLOGICAL REVOLUTION ...................................... 74 E. CUSTOMIZING THE TDD PROLIFERATION MARKET M...Data Storage & Peripherals - Systems Managmnt Technologies 4. Passive Sensors - Sensors and Signal Processing 5. Photonics - Electronic and...a reproducible procedure to allow customization of the model, provides the "guts" of the method. 18 Third, because they are not optimized for
Spectral Topography Generation for Arbitrary Grids
NASA Astrophysics Data System (ADS)
Oh, T. J.
2015-12-01
A new topography generation tool utilizing spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filling by Jonathan de Ferranti) is used and for land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data is coarsened to a intermediate global 2 minute lat-lon mesh. Then, spectral transformation to the wave space and inverse transformation with wavenumber truncation is performed for isotropic topography smoothness control. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2 minute lat-lon topography. Gibbs phenomenon in the water region can be removed by overwriting ocean masked target coordinate grids with interpolated values from the intermediate 2 minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally-derived, smooth topography with isotropic resolution and minimum damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested on the KIAPS Integrated Model (KIM).
The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm
NASA Astrophysics Data System (ADS)
Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero
1999-10-01
We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.
Broadband Fan Noise Generated by Small Scale Turbulence
NASA Technical Reports Server (NTRS)
Glegg, Stewart A. L.
1998-01-01
This report describes the development of prediction methods for broadband fan noise from aircraft engines. First, experimental evidence of the most important source mechanisms is reviewed. It is found that there are a number of competing source mechanism involved and that there is no single dominant source to which noise control procedures can be applied. Theoretical models are then developed for: (1) ducted rotors and stator vanes interacting with duct wall boundary layers, (2) ducted rotor self noise, and (3) stator vanes operating in the wakes of rotors. All the turbulence parameters required for these models are based on measured quantities. Finally the theoretical models are used to predict measured fan noise levels with some success.
NASA Technical Reports Server (NTRS)
Palosz, W.
2003-01-01
Residual gases present in closed ampoules may affect different crystal growth processes. Their presence may affect techniques requiring low pressures and affect the crystal quality in different ways. For that reason a good understanding and control of formation of residual gases may be important for an optimum design and meaningful interpretation of crystal growth experiments. Our extensive experimental and theoretical study includes degassing of silica glass and generation of gases from various source materials. Different materials processing conditions, like outgassing under vacuum, annealing in hydrogen, resublimation, different material preparation procedures, multiple annealings, different processing times, and others were applied and their effect on the amount and composition of gas were analyzed. The experimental results were interpreted based on theoretical calculations on diffusion in silica glass and source materials and thermochemistry of the system. Procedures for a reduction of the amount of gas are also discussed.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Procedural Modeling for Rapid-Prototyping of Multiple Building Phases
NASA Astrophysics Data System (ADS)
Saldana, M.; Johanson, C.
2013-02-01
RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
Naber, Christoph K; Ghanem, Alexander; Abizaid, Alexander A; Wolf, Alexander; Sinning, Jan-Malte; Werner, Nikos; Nickenig, Georg; Schmitz, Thomas; Grube, Eberhard
2012-05-15
We describe the first-in-human experience with a novel cerebral embolic protection device used during transcatheter aortic valve implantation (TAVI). One current challenge of TAVI is the reduction of procedural stroke. Procedural mobilisation of debris is a known source of cerebral embolisation. Mechanical protection by transient filtration of cerebral blood flow might reduce the embolic burden during TAVI. We aimed to evaluate the feasibility and safety of the Claret CE Pro™ cerebral protection device in patients undergoing TAVI. Patients scheduled for TAVI were prospectively enrolled at three centres. The Claret CE Pro™ (Claret Medical, Inc. Santa Rosa, CA, USA) cerebral protection device was placed via the right radial/brachial artery prior to TAVI and was removed after the procedure. The primary endpoint was technical success rate. Secondary endpoints encompassed procedural and 30-day stroke rates, as well as device-related complications. Deployment of the Claret CE Pro™ cerebral protection device was intended for use in 40 patients, 35 devices were implanted into the aortic arch. Technical success rate with delivery of the proximal and distal filter was 60% for the first generation device and 87% for the second-generation device. Delivery times for the first-generation device were 12.4±12.1 minutes and 4.4 ± 2.5 minutes for the second-generation device (p<0.05). The quantity of contrast used related to the Claret CE Pro System was 19.6 ± 3.8 ml. Captured debris was documented in at least 19 of 35 implanted devices (54.3%). No procedural transient ischaemic attacks, minor strokes or major strokes occurred. Thirty-day follow-up showed one minor stroke occurring 30 days after the procedure, and two major strokes both occurring well after the patient had completed TAVI. The use of the Claret CE Pro™ system is feasible and safe. Capture of debris in more than half of the patients provides evidence for the potential to reduce the procedural cerebral embolic burden utilising this dedicated filter system during TAVI.
Portable light source unit for simulating fires having an adjustable aperture
NASA Technical Reports Server (NTRS)
Youngquist, Robert C. (Inventor); Moerk, John S. (Inventor); Strobel, James P. (Inventor)
1997-01-01
A portable, hand held light source unit is employed to check operation of fire detectors, such as hydrogen fire detectors. The unit emits radiation in a narrow band of wavelengths which are generated by the type of fire to be tested, but not by other light sources such as the sun or incandescent lamps. The unit can test fire detectors at different distances, and of different sensitivities. The intensity of the radiation emitted by the unit is adjustable for this purpose by means of a rotatable disk having a plurality of different sized apertures for selective placement between the light source and an output lens. The disk can also be rotated to a calibration position which causes a microprocessor circuit in the unit to initiate a calibration procedure. During this procedure, the lamp intensity is measured by a photodetector contained within the unit, and the microprocessor adjusts the lamp current to insure that its intensity remains within a preset range of values. A green and a red LED are mounted on the unit which indicate to an operator whether the calibration is successful, as well as the condition of the unit's battery power supply.
Improved techniques for thermomechanical testing in support of deformation modeling
NASA Technical Reports Server (NTRS)
Castelli, Michael G.; Ellis, John R.
1992-01-01
The feasibility of generating precise thermomechanical deformation data to support constitutive model development was investigated. Here, the requirement is for experimental data that is free from anomalies caused by less than ideal equipment and procedures. A series of exploratory tests conducted on Hastelloy X showed that generally accepted techniques for strain controlled tests were lacking in at least three areas. Specifically, problems were encountered with specimen stability, thermal strain compensation, and temperature/mechanical strain phasing. The source of these difficulties was identified and improved thermomechanical testing techniques to correct them were developed. These goals were achieved by developing improved procedures for measuring and controlling thermal gradients and by designing a specimen specifically for thermomechanical testing. In addition, innovative control strategies were developed to correctly proportion and phase the thermal and mechanical components of strain. Subsequently, the improved techniques were used to generate deformation data for Hastelloy X over the temperature range, 200 to 1000 C.
Calibration strategy and optics for ARGOS at the LBT
NASA Astrophysics Data System (ADS)
Schwab, Christian; Peter, Diethard; Aigner, Simon
2010-07-01
Effective calibration procedures play an important role for the efficiency and performance of astronomical instrumentation. We report on the calibration scheme for ARGOS, the Laser Guide Star (LGS) facility at the LBT. An artificial light source is used to feign the real laser beacons and perform extensive testing of the system, independent of the time of day and weather conditions, thereby greatly enhancing the time available for engineering. Fibre optics and computer generated holograms (CGHs) are used to generate the necessary wavefront. We present the optomechanical design, and discuss the expected accuracy, as well as tolerances in assembly and alignment.
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2015-04-01
Producing reliable and accurate hydrologic ensemble forecasts are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reduce the total uncertainty in hydrological applications. Currently, National Weather Prediction (NWP) models are developing ensemble forecasts for various temporal ranges. It is proven that raw products from NWP models are biased in mean and spread. Given the above state, there is a need for methods that are able to generate reliable ensemble forecasts for hydrological applications. One of the common techniques is to apply statistical procedures in order to generate ensemble forecast from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and single-value precipitation forecast. However, one of the assumptions of the current method is fitting Gaussian distribution to the marginal distributions of observed and modeled climate variable. Here, we have described and evaluated a Bayesian approach based on Copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions are known as the multivariate joint distribution of univariate marginal distributions, which are presented as an alternative procedure in capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in USA using the monthly precipitation forecasts from Climate Forecast System (CFS) with 0.5x0.5 Deg. spatial resolution to reproduce the observations. The verification is conducted on a different period and the superiority of the procedure is compared with Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in USA.
Reepolmaha, Somporn; Limtrakarn, Wiroj; Uthaisang-Tanechpongtamb, Wanlaya; Dechaumphai, Pramote
2010-01-01
The purpose of this study was to estimate and compare the temperatures of two different anterior chamber solutions at the corneal endothelial level during phacoemulsification. An ophthalmic viscosurgical device (OVD) and balanced salt solution (BSS) were compared using the finite element method (FEM). The thermal properties of an OVD (IAL-F) and BSS were studied in an experimental setting. A computer-aided design model of ocular anatomy was created in two dimensions. The phaco needle was considered to be the only source of heat generation. Then, the FEM was used to demonstrate the transient temperature distribution in the two ocular models at 10, 20, 30, 40, 50 and 60 s. In these models, the anterior chamber was filled with IAL-F (IAL-F model) or BSS (BSS model). The heat generation rate of the phaco needle was 0.0004 cal/s/mm(2). The maximum corneal endothelial temperatures for the two models at 60 s were 52.67 and 41.57 degrees C, respectively. The experimental IAL-F model showed fewer changes in temperature for any given time and location. At larger distances from the heat source, less temperature variation was detected. Phacoemulsification is a potential heat-generating procedure performed between the delicate anterior chamber structures. During this procedure, IAL-F protects the endothelium against heat better than BSS. Copyright 2009 S. Karger AG, Basel.
Venkata Mohan, S; Lalit Babu, V; Sarma, P N
2008-01-01
Influence of different pretreatment methods applied on anaerobic mixed inoculum was evaluated for selectively enriching the hydrogen (H(2)) producing mixed culture using dairy wastewater as substrate. The experimental data showed the feasibility of molecular biohydrogen generation utilizing dairy wastewater as primary carbon source through metabolic participation. However, the efficiency of H(2) evolution and substrate removal efficiency were found to be dependent on the type of pretreatment procedure adopted on the parent inoculum. Among the studied pretreatment methods, chemical pretreatment (2-bromoethane sulphonic acid sodium salt (0.2 g/l); 24 h) procedure enabled higher H(2) yield along with concurrent substrate removal efficiency. On the contrary, heat-shock pretreatment (100 degrees C; 1 h) procedure resulted in relatively low H(2) yield. Compared to control experiments all the adopted pretreatment methods documented higher H(2) generation efficiency. In the case of combination experiments, integration of pH (pH 3; adjusted with ortho-phosphoric acid; 24 h) and chemical pretreatment evidenced higher H(2) production. Data envelopment analysis (DEA), a frontier analysis technique model was successfully applied to enumerate the relative efficiency of different pretreatment methods studied by considered pretreatment procedures as input and cumulative H(2) production rate and substrate degradation rate as corresponding two outputs.
Probabilistic tsunami hazard analysis: Multiple sources and global applications
Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël; Parsons, Thomas E.; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie
2017-01-01
Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should be in principle evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
Probabilistic Tsunami Hazard Analysis: Multiple Sources and Global Applications
NASA Astrophysics Data System (ADS)
Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël.; Parsons, Tom; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie
2017-12-01
Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should be in principle evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
Cost drivers in total hip arthroplasty: effects of procedure volume and implant selling price.
Kelly, Michael P; Bozic, Kevin J
2009-01-01
Total hip arthroplasty (THA), though a highly effective procedure for patients with end-stage hip disease, has become increasingly costly, both because of increasing procedure volume and because of the introduction and widespread use of new technologies. Data regarding procedure volume and procedure costs for THA were obtained from the National Inpatient Sample and other published sources for the years 1995 through 2005. Procedure volume increased 61% over the period studied. When adjusted for inflation, using the medical consumer price index, the average selling price of THA implants increased 24%. The selling price of THA implants as a percentage of total procedure costs increased from 29% to 60% during the period under study. The increasing cost of THA in the United States is a result of both increased procedure volume and increased cost of THA implants. No long-term outcome studies related to use of new implant technologies are available, and short-term results have been similar to those obtained with previous generations of THA implants. This study reinforces the need for a US total joint arthroplasty registry and for careful clinical and economic analyses of new technologies in orthopedics.
Source-Device-Independent Ultrafast Quantum Random Number Generation.
Marangon, Davide G; Vallone, Giuseppe; Villoresi, Paolo
2017-02-10
Secure random numbers are a fundamental element of many applications in science, statistics, cryptography and more in general in security protocols. We present a method that enables the generation of high-speed unpredictable random numbers from the quadratures of an electromagnetic field without any assumption on the input state. The method allows us to eliminate the numbers that can be predicted due to the presence of classical and quantum side information. In particular, we introduce a procedure to estimate a bound on the conditional min-entropy based on the entropic uncertainty principle for position and momentum observables of infinite dimensional quantum systems. By the above method, we experimentally demonstrated the generation of secure true random bits at a rate greater than 1.7 Gbit/s.
NASA Astrophysics Data System (ADS)
Jones, A. A.; Holt, R. M.
2017-12-01
Image capturing in flow experiments has been used for fluid mechanics research since the early 1970s. Interactions of fluid flow between the vadose zone and permanent water table are of great interest because this zone is responsible for all recharge waters, pollutant transport and irrigation efficiency for agriculture. Griffith, et al. (2011) developed an approach where constructed reproducible "geologically realistic" sand configurations are deposited in sandfilled experimental chambers for light-transmitted flow visualization experiments. This method creates reproducible, reverse graded, layered (stratified) thin-slab sand chambers for point source experiments visualizing multiphase flow through porous media. Reverse-graded stratification of sand chambers mimic many naturally occurring sedimentary deposits. Sandfilled chambers use light as nonintrusive tools for measuring water saturation in two-dimensions (2-D). Homogeneous and heterogeneous sand configurations can be produced to visualize the complex physics of the unsaturated zone. The experimental procedure developed by Griffith, et al. (2011) was designed using now outdated and obsolete equipment. We have modernized this approach with new Parker Deadel linear actuator and programed projects/code for multiple configurations. We have also updated the Roper CCD software and image processing software with the latest in industry standards. Modernization of transmitted-light source, robotic equipment, redesigned experimental chambers, and newly developed analytical procedures have greatly reduced time and cost per experiment. We have verified the ability of the new equipment to generate reproducible heterogeneous sand-filled chambers and demonstrated the functionality of the new equipment and procedures by reproducing several gravity-driven fingering experiments conducted by Griffith (2008).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitworth, J.; Pearson, M.; Feldman, A.
2006-07-01
The Offsite Source Recovery (OSR) Project at Los Alamos National Laboratory is now shipping transuranic (TRU) waste containers to the Waste Isolation Pilot Plant (WIPP) in New Mexico for disposal. Sealed source waste disposal has become possible in part because OSR personnel were able to obtain Environmental Protection Agency (EPA) and DOE-CBFO approval for an alternative radiological characterization procedure relying on acceptable knowledge (AK) and modeling, rather than on non-destructive assay (NDA) of each container. This is the first successful qualification of an 'alternate methodology' under the radiological characterization requirements of the WIPP Waste Acceptance Criteria (WAC) by any TRUmore » waste generator site. This paper describes the approach OSR uses to radiologically characterize its sealed source waste and the process by which it obtained certification of this approach. (authors)« less
Seismic noise frequency dependent P and S wave sources
NASA Astrophysics Data System (ADS)
Stutzmann, E.; Schimmel, M.; Gualtieri, L.; Farra, V.; Ardhuin, F.
2013-12-01
Seismic noise in the period band 3-10 sec is generated in the oceans by the interaction of ocean waves. Noise signal is dominated by Rayleigh waves but body waves can be extracted using a beamforming approach. We select the TAPAS array deployed in South Spain between June 2008 and September 2009 and we use the vertical and horizontal components to extract noise P and S waves, respectively. Data are filtered in narrow frequency bands and we select beam azimuths and slownesses that correspond to the largest continuous sources per day. Our procedure automatically discard earthquakes which are localized during short time durations. Using this approach, we detect many more noise P-waves than S-waves. Source locations are determined by back-projecting the detected slowness/azimuth. P and S waves are generated in nearby areas and both source locations are frequency dependent. Long period sources are dominantly in the South Atlantic and Indian Ocean whereas shorter period sources are rather in the North Atlantic Ocean. We further show that the detected S-waves are dominantly Sv-waves. We model the observed body waves using an ocean wave model that takes into account all possible wave interactions including coastal reflection. We use the wave model to separate direct and multiply reflected phases for P and S waves respectively. We show that in the South Atlantic the complex source pattern can be explained by the existence of both coastal and pelagic sources whereas in the North Atlantic most body wave sources are pelagic. For each detected source, we determine the equivalent source magnitude which is compared to the model.
48 CFR 715.370 - Alternative source selection procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... selection procedures. 715.370 Section 715.370 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 715.370 Alternative source selection procedures. The following selection procedures may be used, when...
Quantum-key-distribution protocol with pseudorandom bases
NASA Astrophysics Data System (ADS)
Trushechkin, A. S.; Tregubov, P. A.; Kiktenko, E. O.; Kurochkin, Y. V.; Fedorov, A. K.
2018-01-01
Quantum key distribution (QKD) offers a way for establishing information-theoretical secure communications. An important part of QKD technology is a high-quality random number generator for the quantum-state preparation and for post-processing procedures. In this work, we consider a class of prepare-and-measure QKD protocols, utilizing additional pseudorandomness in the preparation of quantum states. We study one of such protocols and analyze its security against the intercept-resend attack. We demonstrate that, for single-photon sources, the considered protocol gives better secret key rates than the BB84 and the asymmetric BB84 protocols. However, the protocol strongly requires single-photon sources.
AtomicJ: An open source software for analysis of force curves
NASA Astrophysics Data System (ADS)
Hermanowicz, Paweł; Sarna, Michał; Burda, Kvetoslava; Gabryś, Halina
2014-06-01
We present an open source Java application for analysis of force curves and images recorded with the Atomic Force Microscope. AtomicJ supports a wide range of contact mechanics models and implements procedures that reduce the influence of deviations from the contact model. It generates maps of mechanical properties, including maps of Young's modulus, adhesion force, and sample height. It can also calculate stacks, which reveal how sample's response to deformation changes with indentation depth. AtomicJ analyzes force curves concurrently on multiple threads, which allows for high speed of analysis. It runs on all popular operating systems, including Windows, Linux, and Macintosh.
[From idea to standard care-a field report].
Herberz, Chantal; Steidl, Ralph; Werner, Pascal; Hagen, Julia
2018-03-01
Digital health products and services have started to fundamentally change healthcare and prevention. Products intended for a medical use require CE-marking and potentially certification (ISO 13485) of the company. Startups play an important role in the development of new digital products and services. Two startups share their experience with these processes. Becoming a part of standard care and hence being reimbursed is a challenge for startups. For this reason, startups pursue alternative sources of income, too. The statutory health insurance's procedures for assessing new products and services are perceived as long. Startups are required to provide evidence of the benefit of their product at an early stage in the procedure. This requires time-consuming and costly studies. Startups would therefore appreciate support in generating this evidence, e. g. through adequate procedures for testing.
Simpao, Allan; Heitz, James W; McNulty, Stephen E; Chekemian, Beth; Brenn, B Randall; Epstein, Richard H
2011-02-01
Residents in anesthesia training programs throughout the world are required to document their clinical cases to help ensure that they receive adequate training. Current systems involve self-reporting, are subject to delayed updates and misreported data, and do not provide a practicable method of validation. Anesthesia information management systems (AIMS) are being used increasingly in training programs and are a logical source for verifiable documentation. We hypothesized that case logs generated automatically from an AIMS would be sufficiently accurate to replace the current manual process. We based our analysis on the data reporting requirements of the American College of Graduate Medical Education (ACGME). We conducted a systematic review of ACGME requirements and our AIMS record, and made modifications after identifying data element and attribution issues. We studied 2 methods (parsing of free text procedure descriptions and CPT4 procedure code mapping) to automatically determine ACGME case categories and generated AIMS-based case logs and compared these to assignments made by manual inspection of the anesthesia records. We also assessed under- and overreporting of cases entered manually by our residents into the ACGME website. The parsing and mapping methods assigned cases to a majority of the ACGME categories with accuracies of 95% and 97%, respectively, as compared with determinations made by 2 residents and 1 attending who manually reviewed all procedure descriptions. Comparison of AIMS-based case logs with reports from the ACGME Resident Case Log System website showed that >50% of residents either underreported or overreported their total case counts by at least 5%. The AIMS database is a source of contemporaneous documentation of resident experience that can be queried to generate valid, verifiable case logs. The extent of AIMS adoption by academic anesthesia departments should encourage accreditation organizations to support uploading of AIMS-based case log files to improve accuracy and to decrease the clerical burden on anesthesia residents.
Jha, Nayansi; Ryu, Jae Jun
2017-01-01
The generation of reactive oxygen and nitrogen species (RONS) has been found to occur during inflammatory procedures, during cell ischemia, and in various crucial developmental processes such as cell differentiation and along cell signaling pathways. The most common sources of intracellular RONS are the mitochondrial electron transport system, NADH oxidase, and cytochrome P450. In this review, we analyzed the extracellular and intracellular sources of reactive species, their cell signaling pathways, the mechanisms of action, and their positive and negative effects in the dental field. In dentistry, ROS can be found—in lasers, photosensitizers, bleaching agents, cold plasma, and even resin cements, all of which contribute to the generation and prevalence of ROS. Nonthermal plasma has been used as a source of ROS for biomedical applications and has the potential for use with dental stem cells as well. There are different types of dental stem cells, but their therapeutic use remains largely untapped, with the focus currently on only periodontal ligament stem cells. More research is necessary in this area, including studies about ROS mechanisms with dental cells, along with the utilization of reactive species in redox medicine. Such studies will help to provide successful treatment modalities for various diseases. PMID:29204250
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
Advecting Procedural Textures for 2D Flow Animation
NASA Technical Reports Server (NTRS)
Kao, David; Pang, Alex; Moran, Pat (Technical Monitor)
2001-01-01
This paper proposes the use of specially generated 3D procedural textures for visualizing steady state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques and arbitrary textures will introduce some undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking pattern from the boundary of the flowfield, (c) crowding of advected textures near an attracting spiral or sink, and (d) absent or lack of textures in some regions of the flow. This paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.
Recommended OSC design and analysis of AMTEC power system for outer-planet missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Noravian, H.; Or, C.
1999-01-01
The paper describes OSC designs and analyses of AMTEC cells and radioisotope power systems for possible application to NASA{close_quote}s Europa Orbiter and Pluto Kuiper Express missions, and compares their predicted performance with JPL{close_quote}s preliminary mission goals. The latest cell and generator designs presented here were the culmination of studies covering a wide variety of generator configurations and operating parameters. The many steps and rationale leading to OSC{close_quote}s design evolution and materials selection were discussed in earlier publications and will not be repeated here except for a description of OSC{close_quote}s latest design, including a recent heat source support scheme and cellmore » configuration that have not been described in previous publications. As shown, that heat source support scheme eliminates all contact between the heat source and the AMTEC (Alkali Metal Thermal-to-Electrical Conversion) cells, which simplifies the generator{close_quote}s structural design as well as its fabrication and assembly procedure. An additional purpose of the paper is to describe a revised cell design and fabrication procedure which represent a major departure from previous OSC designs. Previous cells had a uniform diameter, but in the revised design the cell wall beyond the BASE tubes has a greatly reduced diameter. The paper presents analytical performance predictions which show that the revised ({open_quotes}chimney{close_quotes}) cell design yields substantially higher efficiencies than the previous (cylindrical) design. This makes it possible to meet and substantially exceed the JPL-stipulated EOM power goal with four instead of six General Purpose Heat Source (GPHS) modules, resulting in a one-third reduction in the heat source mass, cost, and fuel requirements. OSC{close_quote}s performance predictions were based on its techniques for the coupled thermal, electrical, and fluid flow analyses of AMTEC generators. Those analytical techniques have been partially validated by tests of prototypic test assemblies designed by OSC, built by AMPS, and tested by AFRL. The analytical results indicate that the OSC power system design, operating within the stipulated evaporator and clad temperature limits and well within its mass goals, can yield EOM power outputs and system efficiencies that substantially exceed the JPL-specified goals for the Europa and Pluto missions. However, those results only account for radioisotope decay. Other degradation mechanisms are still under study, and their short-and long-term effects must be quantified and understood before final conclusions about the adequacy and competitiveness of the AMTEC system can be drawn. {copyright} {ital 1999 American Institute of Physics.}« less
Residual Gases in Crystal Growth Systems
NASA Technical Reports Server (NTRS)
Palosz, W.
2003-01-01
Residual gases present in closed ampoules may affect different crystal growth processes. That seems to be particularly true under microgravity conditions where, due to weightlessness of the melt, the gases may lead to detached solidification and/or formation of voids and bubbles, as observed in the past. For that reason a good understanding and control of formation of residual gases is important for an optimum design and meaningful interpretation of crystal growth experiments. Our extensive experimental and theoretical studies of the subject, summarized in this paper, include degassing of silica glass and generation of gases from different source materials. Different materials processing conditions, like outgassing under vacuum, annealing in hydrogen, resublimation, different material preparation procedures, multiple annealings, different processing times, and others were applied and their effect on the amount and composition of gas were analyzed. The experimental results were interpreted based on theoretical calculations on diffusion in silica glass and source materials and thermochemistry of the system. Procedures for a reduction of the amount of gas are also discussed.
Major System Source Evaluation and Selection Procedures.
1987-04-02
A-RIBI I" MAJOR SYSTEM SOURCE EVALUATION AND SELECTION PROCEDURES / (U) BUSINESS MANAGEMENT RESEARCH ASSOCIATES INC ARLINGTON VA 02 APR 6? ORMC-5...BRMC-85-5142-1 0 I- MAJOR SYSTEM SOURCE EVALUATION AND SELECTION PROCEDURES o I Business Management Research Associates, Inc. 1911 Jefferson Davis...FORCE SOURCE EVALUATION AND SELECTI ON PROCEDURES Prepared by Business Management Research Associates, Inc., 1911 Jefferson Davis Highway, Arlington
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
Airborne transmission and precautions: facts and myths.
Seto, W H
2015-04-01
Airborne transmission occurs only when infectious particles of <5 μm, known as aerosols, are propelled into the air. The prevention of such transmission is expensive, requiring N95 respirators and negative pressure isolation rooms. This lecture first discussed whether respiratory viral infections are airborne with reference to published reviews of studies before 2008, comparative trials of surgical masks and N95 respirators, and relevant new experimental studies. However, the most recent experimental study, using naturally infected influenza volunteers as the source, showed negative results from all the manikins that were exposed. Modelling studies by ventilation engineers were then summarized to explain why these results were not unexpected. Second, the systematic review commissioned by the World Health Organization on what constituted aerosol-generating procedures was summarized. From the available evidence, endotracheal intubation either by itself or combined with other procedures (e.g. cardiopulmonary resuscitation or bronchoscopy) was consistently associated with increased risk of transmission by the generation of aerosols. Copyright © 2014. Published by Elsevier Ltd.
Performance of OSC's initial Amtec generator design, and comparison with JPL's Europa Orbiter goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Noravian, H.; Or, C.
1998-07-01
The procedure for the analysis (with overpotential correction) of multitube AMTEC (Alkali Metal Thermal-to-Electrical Conversion) cells described in Paper IECEC 98-243 was applied to a wide range of multicell radioisotope space power systems. System design options consisting of one or two generators, each with 2, 3, or 4 stacked GPHS (General Purpose Heat Source) modules, identical to those used on previous NASA missions, were analyzed and performance-mapped. The initial generators analyzed by OSC had 8 AMTEC cells on each end of the heat source stack, with five beta-alumina solid electrolyte (BASE) tubes per cell. The heat source and converters inmore » the Orbital generator designs are embedded in a thermal insulation system consisting of Min-K fibrous insulation surrounded by graded-length molybdenum multifoils. Detailed analyses in previous Orbital studies found that such an insulation system could reduce extraneous heat losses to about 10%. For the above design options, the present paper presents the system mass and performance (i.e., the EOM system efficiency and power output and the BOM evaporator and clad temperatures) for a wide range of heat inputs and load voltages, and compares the results with JPL's preliminary goals for the Europa Orbiter mission to be launched in November 2003. The analytical results showed that the initial 16-cell generator designs resulted in either excessive evaporator and clad temperatures and/or insufficient power outputs to meet the JPL-specified mission goals. The computed performance of modified OSC generators with different numbers of AMTEC cells, cell diameters, cell lengths, cell materials, BASE tube lengths, and number of tubes per cell are described in Paper IECEC.98.245 in these proceedings.« less
Liquid by-products from fish canning industry as sustainable sources of ω3 lipids.
Monteiro, Ana; Paquincha, Diogo; Martins, Florinda; Queirós, Rui P; Saraiva, Jorge A; Švarc-Gajić, Jaroslava; Nastić, Nataša; Delerue-Matos, Cristina; Carvalho, Ana P
2018-08-01
Fish canning industry generates large amounts of liquid wastes, which are discarded, after proper treatment to remove the organic load. However, alternative treatment processes may also be designed in order to target the recovery of valuable compounds; with this procedure, these wastewaters are converted into liquid by-products, becoming an additional source of revenue for the company. This study evaluated green and economically sustainable methodologies for the extraction of ω3 lipids from fish canning liquid by-products. Lipids were extracted by processes combining physical and chemical parameters (conventional and pressurized extraction processes), as well as chemical and biological parameters. Furthermore, LCA was applied to evaluate the environmental performance and costs indicators for each process. Results indicated that extraction with high hydrostatic pressure provides the highest amounts of ω3 polyunsaturated fatty acids (3331,5 mg L -1 effluent), apart from presenting the lowest environmental impact and costs. The studied procedures allow to obtain alternative, sustainable and traceable sources of ω3 lipids for further applications in food, pharmaceutical and cosmetic industries. Additionally, such approach contributes towards the organic depuration of canning liquid effluents, therefore reducing the overall waste treatment costs. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Next Generation of HLA Image Products
NASA Astrophysics Data System (ADS)
Gaffney, N. I.; Casertano, S.; Ferguson, B.
2012-09-01
We present the re-engineered pipeline based on existing and improved algorithms with the aim of improving processing quality, cross-instrument portability, data flow management, and software maintenance. The Hubble Legacy Archive (HLA) is a project to add value to the Hubble Space Telescope data archive by producing and delivering science-ready drizzled data products and source lists derived from these products. Initially, ACS, NICMOS, and WFCP2 data were combined using instrument-specific pipelines based on scripts developed to process the ACS GOODS data and a separate set of scripts to generate source extractor and DAOPhot source lists. The new pipeline, initially designed for WFC3 data, isolates instrument-specific processing and is easily extendable to other instruments and to generating wide-area mosaics. Significant improvements have been made in image combination using improved alignment, source detection, and background equalization routines. It integrates improved alignment procedures, better noise model, and source list generation within a single code base. Wherever practical, PyRAF based routines have been replaced with non-IRAF based python libraries (e.g. NumPy and PyFITS). The data formats have been modified to handle better and more consistent propagation of information from individual exposures to the combined products. A new exposure layer stores the effective exposure time for each pixel in the sky which is key in properly interpreting combined images from diverse data that were not initially planned to be mosaiced. We worked to improve the validity of the metadata within our FITS headers for these products relative to standard IRAF/PyRAF processing. Any keywords that pertain to individual exposures have been removed from the primary and extension headers and placed in a table extension for more direct and efficient perusal. This mechanism also allows for more detailed information on the processing of individual images to be stored and propagated providing a more hierarchical metadata storage system than key value pair FITS headers provide. In this poster we will discuss the changes to the pipeline processing and source list generation and the lessons learned which may be applicable to other archive projects as well as discuss our new metadata curation and preservation process.
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co{sup 60} source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps andmore » the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)« less
Wen, Nainan; Chia, Stella C; Hao, Xiaoming
2015-01-01
This study examines portrayals of cosmetic surgery on YouTube, where we found a substantial number of cosmetic surgery videos. Most of the videos came from cosmetic surgeons who appeared to be aggressively using social media in their practices. Except for videos that explained cosmetic surgery procedures, most videos in our sample emphasized the benefits of cosmetic surgery, and only a small number of the videos addressed the involved risks. We also found that tactics of persuasive communication-namely, related to message source and message sensation value (MSV)-have been used in Web-based social media to attract viewers' attention and interests. Expert sources were used predominantly, although typical-consumer sources tended to generate greater viewer interest in cosmetic surgery than other types of message sources. High MSV, moreover, was found to increase a video's popularity.
NASA Astrophysics Data System (ADS)
Chan, Chang-Ching; Bolgar, Mark S.; Miller, Scott A.; Attygalle, Athula B.
2011-01-01
A source that couples the desorption ionization by charge exchange (DICE) and desorption electrospray ionization (DESI) techniques together was demonstrated to broaden the range of compounds that can be analyzed in a single mass spectrometric experiment under ambient conditions. A tee union was used to mix the spray reagents into a partially immiscible blend before this mixture was passed through a conventional electrospray (ES) probe capillary. Using this technique, compounds that are ionized more efficiently by the DICE method and those that are ionized better with the DESI procedure could be analyzed simultaneously. For example, hydroquinone, which is not detected when subjected to DESI-MS in the positive-ion generation mode, or the sodium adduct of guaifenesin, which is not detected when examined by DICE-MS, could both be detected in one experiment when the two techniques were combined. The combined technique was able to generate the molecular ion, proton and metal adduct from the same compound. When coupled to a tandem mass spectrometer, the combined source enabled the generation of product ion spectra from the molecular ion and the [M + H]+ or [M + metal]+ ions of the same compound without the need to physically change the source from DICE to DESI. The ability to record CID spectra of both the molecular ion and adduct ions in a single mass spectrometric experiment adds a new dimension to the array of mass spectrometric methods available for structural studies.
Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach
NASA Astrophysics Data System (ADS)
Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan
2018-01-01
The detailed analysis of sound generation in human phonation is severely limited as the accessibility to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility to locate aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique employs a 2D-limited flow field, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.
NASA Astrophysics Data System (ADS)
Granovskii, Mikhail; Dincer, Ibrahim; Rosen, Marc A.
Published data from various sources are used to perform economic and environmental comparisons of four types of vehicles: conventional, hybrid, electric and hydrogen fuel cell. The production and utilization stages of the vehicles are taken into consideration. The comparison is based on a mathematical procedure, which includes normalization of economic indicators (prices of vehicles and fuels during the vehicle life and driving range) and environmental indicators (greenhouse gas and air pollution emissions), and evaluation of an optimal relationship between the types of vehicles in the fleet. According to the comparison, hybrid and electric cars exhibit advantages over the other types. The economic efficiency and environmental impact of electric car use depends substantially on the source of the electricity. If the electricity comes from renewable energy sources, the electric car is advantageous compared to the hybrid. If electricity comes from fossil fuels, the electric car remains competitive only if the electricity is generated on board. It is shown that, if electricity is generated with an efficiency of about 50-60% by a gas turbine engine connected to a high-capacity battery and an electric motor, the electric car becomes advantageous. Implementation of fuel cells stacks and ion conductive membranes into gas turbine cycles permits electricity generation to increase to the above-mentioned level and air pollution emissions to decrease. It is concluded that the electric car with on-board electricity generation represents a significant and flexible advance in the development of efficient and ecologically benign vehicles.
78 FR 29672 - Small Generator Interconnection Agreements and Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-21
...] Small Generator Interconnection Agreements and Procedures AGENCY: Federal Energy Regulatory Commission... 7524). The regulations revised the pro forma Small Generator Interconnection Procedures (SGIP) and pro forma Small Generator Interconnection Agreement (SGIA) originally set forth in Order No. 2006. DATES...
NASA Technical Reports Server (NTRS)
Pond, C. R.; Texeira, P. D.
1985-01-01
A laser angle measurement system was designed and fabricated for NASA Langley Research Center. The instrument is a fringe counting interferometer that monitors the pitch attitude of a model in a wind tunnel. A laser source and detector are mounted above the model. Interference fringes are generated by a small passive element on the model. The fringe count is accumulated and displayed by a processor in the wind tunnel control room. This report includes optical and electrical schematics, system maintenance and operation procedures.
DeBeck, Kora; Wood, Evan; Qi, Jiezhi; Fu, Eric; McArthur, Doug; Montaner, Julio; Kerr, Thomas
2011-01-01
Background Income generation opportunities available to people who use illicit drugs have been associated with street disorder. Among a cohort of injection drug users (IDU) we sought to examine street-based income generation practices and willingness to forgo these sources of income if other low-threshold work opportunities were made available. Methods Data were derived from a prospective community recruited cohort of IDU. We assessed the prevalence of engaging in disorderly street-based income generation activities, including sex work, drug dealing, panhandling, and recycling/salvaging/vending. Using multivariate logistic regressions based on Akaike information criterion and the best subset selection procedure, we identified factors associated with disorderly income generation activities, and assessed willingness to forgo these sources of income during the period of November 2008 to July 2009. Results Among our sample of 874 IDU, 418 (48%) reported engaging in a disorderly income generation activity in the previous six months. In multivariate analyses, engaging in disorderly income generation activities was independently associated with high intensity stimulant use, as well as binge drug use, having encounters with police, being a victim of violence, sharing used syringes, and injecting in public areas. Among those engaged in disorderly income generation, 198 (47%) reported a willingness to forgo these income sources if given opportunities for low-threshold employment, with sex workers being most willing to engage in alternative employment. Conclusion Engagement in disorderly street-based income generation activities was associated with high intensity stimulant drug use and various markers of risk. We found that a high proportion of illicit drug users were willing to cease engagement in these activities if they had options for causal low-threshold employment. These findings indicate that there is a high demand for low-threshold employment that may offer important opportunities to reduce drug-related street disorder and associated harms. PMID:21684142
Debeck, Kora; Wood, Evan; Qi, Jiezhi; Fu, Eric; McArthur, Doug; Montaner, Julio; Kerr, Thomas
2011-09-01
Income generation opportunities available to people who use illicit drugs have been associated with street disorder. Among a cohort of injection drug users (IDU) we sought to examine street-based income generation practices and willingness to forgo these sources of income if other low-threshold work opportunities were made available. Data were derived from a prospective community recruited cohort of IDU. We assessed the prevalence of engaging in disorderly street-based income generation activities, including sex work, drug dealing, panhandling, and recycling/salvaging/vending. Using multivariate logistic regressions based on Akaike information criterion and the best subset selection procedure, we identified factors associated with disorderly income generation activities, and assessed willingness to forgo these sources of income during the period of November 2008 to July 2009. Among our sample of 874 IDU, 418 (48%) reported engaging in a disorderly income generation activity in the previous six months. In multivariate analyses, engaging in disorderly income generation activities was independently associated with high intensity stimulant use, as well as binge drug use, having encounters with police, being a victim of violence, sharing used syringes, and injecting in public areas. Among those engaged in disorderly income generation, 198 (47%) reported a willingness to forgo these income sources if given opportunities for low-threshold employment, with sex workers being most willing to engage in alternative employment. Engagement in disorderly street-based income generation activities was associated with high intensity stimulant drug use and various markers of risk. We found that a high proportion of illicit drug users were willing to cease engagement in these activities if they had options for causal low-threshold employment. These findings indicate that there is a high demand for low-threshold employment that may offer important opportunities to reduce drug-related street disorder and associated harms. Copyright © 2011 Elsevier B.V. All rights reserved.
Miniaci, M; Gliozzi, A S; Morvan, B; Krushynska, A; Bosia, F; Scalerandi, M; Pugno, N M
2017-05-26
The appearance of nonlinear effects in elastic wave propagation is one of the most reliable and sensitive indicators of the onset of material damage. However, these effects are usually very small and can be detected only using cumbersome digital signal processing techniques. Here, we propose and experimentally validate an alternative approach, using the filtering and focusing properties of phononic crystals to naturally select and reflect the higher harmonics generated by nonlinear effects, enabling the realization of time-reversal procedures for nonlinear elastic source detection. The proposed device demonstrates its potential as an efficient, compact, portable, passive apparatus for nonlinear elastic wave sensing and damage detection.
The Orbital precession around oblate spheroids
NASA Astrophysics Data System (ADS)
Montanus, J. M. C.
2006-07-01
An exact series will be given for the gravitational potential generated by an oblate gravitating source. To this end the corresponding Epstein-Hubbell type elliptic integral is evaluated. The procedure is based on the Legendre polynomial expansion method and on combinatorial techniques. The result is of interest for gravitational models based on the linearity of the gravitational potential. The series approximation for such potentials is of use for the analysis of orbital motions around a nonspherical source. It can be considered advantageous that the analysis is purely algebraic. Numerical approximations are not required. As an important example, the expression for the orbital precession will be derived for an object orbiting around an oblate homogeneous spheroid.
Evaluation of S190A radiometric exposure test data
NASA Technical Reports Server (NTRS)
Lockwood, H. E.; Goodding, R. A.
1974-01-01
The S190A preflight radiometric exposure test data generated as part of preflight and system test of KM-002 Sequence 29 on flight camera S/N 002 was analyzed. The analysis was to determine camera system transmission using available data which included: (1) films exposed to a calibrated light source subject; (2) filter transmission data; (3) calibrated light source data; (4) density vs. log10 exposure curves for the films; and (5) spectral sensitometric data for the films. The procedure used is outlined, and includes the data and a transmission matrix as a function of field position for nine measured points on each station-film-filter-aperture-shutter speed combination.
Autopoiesis + extended cognition + nature = can buildings think?
Dollens, Dennis
2015-01-01
To incorporate metabolic, bioremedial functions into the performance of buildings and to balance generative architecture's dominant focus on computational programming and digital fabrication, this text first discusses hybridizing Maturana and Varela's biological theory of autopoiesis with Andy Clark's hypothesis of extended cognition. Doing so establishes a procedural protocol to research biological domains from which design could source data/insight from biosemiotics, sensory plants, and biocomputation. I trace computation and botanic simulations back to Alan Turing's little-known 1950s Morphogenetic drawings, reaction-diffusion algorithms, and pioneering artificial intelligence (AI) in order to establish bioarchitecture's generative point of origin. I ask provocatively, Can buildings think? as a question echoing Turing's own, "Can machines think?" PMID:26478784
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lichius, Alexander; Bidard, Frédérique; Buchholz, Franziska
2015-04-20
Trichoderma reesei is the main industrial source of cellulases and hemicellulases required for the hydrolysis of biomass to simple sugars, which can then be used in the production of biofuels and biorefineries. The highly productive strains in use today were generated by classical mutagenesis. As byproducts of this procedure, mutants were generated that turned out to be unable to produce cellulases. In order to identify the mutations responsible for this inability, we sequenced the genome of one of these strains, QM9136, and compared it to that of its progenitor T. reesei QM6a.
WIPP waste characterization program sampling and analysis guidance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
The Waste Isolation Pilot Plant (WIPP) Waste Characterization Program Sampling and Analysis Guidance Manual (Guidance Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Quality Assurance Program Plan (QAPP) for the WIPP Experimental-Waste Characterization Program (the Program). This Guidance Manual includes all of the sampling and testing methodologies accepted by the WIPP Project Office (DOE/WPO) for use in implementing the Program requirements specified in the QAPP. This includes methods for characterizing representative samples of transuranic (TRU) wastesmore » at DOE generator sites with respect to the gas generation controlling variables defined in the WIPP bin-scale and alcove test plans, as well as waste container headspace gas sampling and analytical procedures to support waste characterization requirements under the WIPP test program and the Resource Conservation and Recovery Act (RCRA). The procedures in this Guidance Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site specific procedures. The use of these procedures is intended to provide the necessary sensitivity, specificity, precision, and comparability of analyses and test results. The solutions to achieving specific program objectives will depend upon facility constraints, compliance with DOE Orders and DOE facilities' operating contractor requirements, and the knowledge and experience of the TRU waste handlers and analysts. With some analytical methods, such as gas chromatography/mass spectrometry, the Guidance Manual procedures may be used directly. With other methods, such as nondestructive/destructive characterization, the Guidance Manual provides guidance rather than a step-by-step procedure.« less
Recent progress on monolithic fiber amplifiers for next generation of gravitational wave detectors
NASA Astrophysics Data System (ADS)
Wellmann, Felix; Booker, Phillip; Hochheim, Sven; Theeg, Thomas; de Varona, Omar; Fittkau, Willy; Overmeyer, Ludger; Steinke, Michael; Weßels, Peter; Neumann, Jörg; Kracht, Dietmar
2018-02-01
Single-frequency fiber amplifiers in MOPA configuration operating at 1064 nm (Yb3+) and around 1550 nm (Er3+ or Er3+:Yb3+) are promising candidates to fulfill the challenging requirements of laser sources of the next generation of interferometric gravitational wave detectors (GWDs). Most probably, the next generation of GWDs is going to operate not only at 1064 nm but also at 1550 nm to cover a broader range of frequencies in which gravitational waves are detectable. We developed an engineering fiber amplifier prototype at 1064 nm emitting 215 W of linearly-polarized light in the TEM00 mode. The system consists of three modules: the seed source, the pre-amplifier, and the main amplifier. The modular design ensures reliable long-term operation, decreases system complexity and simplifies repairing and maintenance procedures. It also allows for the future integration of upgraded fiber amplifier systems without excessive downtimes. We also developed and characterized a fiber amplifier prototype at around 1550 nm that emits 100 W of linearly-polarized light in the TEM00 mode. This prototype uses an Er3+:Yb3+ codoped fiber that is pumped off-resonant at 940 nm. The off-resonant pumping scheme improves the Yb3+-to-Er3+ energy transfer and prevents excessive generation of Yb3+-ASE.
NASA Astrophysics Data System (ADS)
Regonda, Satish Kumar; Seo, Dong-Jun; Lawrence, Bill; Brown, James D.; Demargne, Julie
2013-08-01
We present a statistical procedure for generating short-term ensemble streamflow forecasts from single-valued, or deterministic, streamflow forecasts produced operationally by the U.S. National Weather Service (NWS) River Forecast Centers (RFCs). The resulting ensemble streamflow forecast provides an estimate of the predictive uncertainty associated with the single-valued forecast to support risk-based decision making by the forecasters and by the users of the forecast products, such as emergency managers. Forced by single-valued quantitative precipitation and temperature forecasts (QPF, QTF), the single-valued streamflow forecasts are produced at a 6-h time step nominally out to 5 days into the future. The single-valued streamflow forecasts reflect various run-time modifications, or "manual data assimilation", applied by the human forecasters in an attempt to reduce error from various sources in the end-to-end forecast process. The proposed procedure generates ensemble traces of streamflow from a parsimonious approximation of the conditional multivariate probability distribution of future streamflow given the single-valued streamflow forecast, QPF, and the most recent streamflow observation. For parameter estimation and evaluation, we used a multiyear archive of the single-valued river stage forecast produced operationally by the NWS Arkansas-Red River Basin River Forecast Center (ABRFC) in Tulsa, Oklahoma. As a by-product of parameter estimation, the procedure provides a categorical assessment of the effective lead time of the operational hydrologic forecasts for different QPF and forecast flow conditions. To evaluate the procedure, we carried out hindcasting experiments in dependent and cross-validation modes. The results indicate that the short-term streamflow ensemble hindcasts generated from the procedure are generally reliable within the effective lead time of the single-valued forecasts and well capture the skill of the single-valued forecasts. For smaller basins, however, the effective lead time is significantly reduced by short basin memory and reduced skill in the single-valued QPF.
Calibration of the ROSAT HRI Spectral Response
NASA Technical Reports Server (NTRS)
Prestwich, Andrea
1998-01-01
The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases-with time and also is a function of position on the detector. To complicate matters further, the satellite is "wobbled", possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT HRI from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.
Calibration of the ROSAT HRI Spectral Response
NASA Technical Reports Server (NTRS)
Prestwich, Andrea H.; Silverman, John; McDowell, Jonathan; Callanan, Paul; Snowden, Steve
2000-01-01
The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is 'wobbled', possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT High Resolution Imager (HRI) from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC (an x ray spectral fitting package) response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how, the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.
A pulsed injection parahydrogen generator and techniques for quantifying enrichment.
Feng, Bibo; Coffey, Aaron M; Colon, Raul D; Chekmenev, Eduard Y; Waddell, Kevin W
2012-01-01
A device is presented for efficiently enriching parahydrogen by pulsed injection of ambient hydrogen gas. Hydrogen input to the generator is pulsed at high pressure to a catalyst chamber making thermal contact with the cold head of a closed-cycle cryocooler maintained between 15 and 20K. The system enables fast production (0.9 standard liters per minute) and allows for a wide range of production targets. Production rates can be systematically adjusted by varying the actuation sequence of high-pressure solenoid valves, which are controlled via an open source microcontroller to sample all combinations between fast and thorough enrichment by varying duration of hydrogen contact in the catalyst chamber. The entire enrichment cycle from optimization to quantification and storage kinetics are also described. Conversion of the para spin-isomer to orthohydrogen in borosilicate tubes was measured at 8 min intervals over a period of 64 h with a 12 T NMR spectrometer. These relaxation curves were then used to extract initial enrichment by exploiting the known equilibrium (relaxed) distribution of spin isomers with linear least squares fitting to a single exponential decay curve with an estimated error less than or equal to 1%. This procedure is time-consuming, but requires only one sample pressurized to atmosphere. Given that tedious matching to external references are unnecessary with this procedure, we find it to be useful for periodic inspection of generator performance. The equipment and procedures offer a variation in generator design that eliminate the need to meter flow while enabling access to increased rates of production. These tools for enriching and quantifying parahydrogen have been in steady use for 3 years and should be helpful as a template or as reference material for building and operating a parahydrogen production facility. Copyright © 2011 Elsevier Inc. All rights reserved.
A Pulsed Injection Parahydrogen Generator and Techniques for Quantifying Enrichment
Feng, Bibo; Coffey, Aaron M.; Colon, Raul D.; Chekmenev, Eduard Y.; Waddell, Kevin W.
2012-01-01
A device is presented for efficiently enriching parahydrogen by pulsed injection of ambient hydrogen gas. Hydrogen input to the generator is pulsed at high pressure to a catalyst chamber making thermal contact with the cold head of a closed cycle cryostat maintained between 15 and 20 K. The system enables fast production (0.9 standard liters per minute) and allows for a wide range of production targets. Production rates can be systematically adjusted by varying the actuation sequence of high-pressure solenoid valves, which are controlled via an open source microcontroller to sample all combinations between fast and thorough enrichment by varying duration of hydrogen contact in the catalyst chamber. The entire enrichment cycle from optimization to quantification and storage kinetics are also described. Conversion of the para spin-isomer to orthohydrogen in borosilicate tubes was measured at 8 minute intervals over a period of 64 hours with a 12 Tesla NMR spectrometer. These relaxation curves were then used to extract initial enrichment by exploiting the known equilibrium (relaxed) distribution of spin isomers with linear least squares fitting to a single exponential decay curve with an estimated error less than or equal to 1 %. This procedure is time-consuming, but requires only one sample pressurized to atmosphere. Given that tedious matching to external references are unnecessary with this procedure, we find it to be useful for periodic inspection of generator performance. The equipment and procedures offer a variation in generator design that eliminate the need to meter flow while enabling access to increased rates of production. These tools for enriching and quantifying parahydrogen have been in steady use for 3 years and should be helpful as a template or as reference material for building and operating a parahydrogen production facility. PMID:22188975
A scoping review on bio-aerosols in healthcare and the dental environment.
Zemouri, Charifa; de Soet, Hans; Crielaard, Wim; Laheij, Alexa
2017-01-01
Bio-aerosols originate from different sources and their potentially pathogenic nature may form a hazard to healthcare workers and patients. So far no extensive review on existing evidence regarding bio-aerosols is available. This study aimed to review evidence on bio-aerosols in healthcare and the dental setting. The objectives were 1) What are the sources that generate bio-aerosols?; 2) What is the microbial load and composition of bio-aerosols and how were they measured?; and 3) What is the hazard posed by pathogenic micro-organisms transported via the aerosol route of transmission? Systematic scoping review design. Searched in PubMed and EMBASE from inception to 09-03-2016. References were screened and selected based on abstract and full text according to eligibility criteria. Full text articles were assessed for inclusion and summarized. The results are presented in three separate objectives and summarized for an overview of evidence. The search yielded 5,823 studies, of which 62 were included. Dental hand pieces were found to generate aerosols in the dental settings. Another 30 sources from human activities, interventions and daily cleaning performances in the hospital also generate aerosols. Fifty-five bacterial species, 45 fungi genera and ten viruses were identified in a hospital setting and 16 bacterial and 23 fungal species in the dental environment. Patients with certain risk factors had a higher chance to acquire Legionella in hospitals. Such infections can lead to irreversible septic shock and death. Only a few studies found that bio-aerosol generating procedures resulted in transmission of infectious diseases or allergic reactions. Bio-aerosols are generated via multiple sources such as different interventions, instruments and human activity. Bio-aerosols compositions reported are heterogeneous in their microbiological composition dependent on the setting and methodology. Legionella species were found to be a bio-aerosol dependent hazard to elderly and patients with respiratory complaints. But all aerosols can be can be hazardous to both patients and healthcare workers.
Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix
Pigni, M. T.; Croft, S.; Gauld, I. C.
2016-04-25
Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties on 1) 17,18O( α,n) reaction cross sections and 2) uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross-sections in order to derive a set of resonance parameters andmore » a related covariance matrix that is then used to calculate the energydependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by the fit of stopping power data in the -energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information relative to the evaluated 17,18O( α,n) reaction cross sections, as well as uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed output (α,n) source responses, provide the mean values and standard deviations of the calculated responses reflecting the uncertainties in nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experiment thick target (α,n) yields for uranium oxide.« less
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the biand tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changedmore » through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for bi-exponential fitting function and 0.2% for tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for bi-exponential fitting function and 0.08% for tri-exponential fitting function. Conclusion: PSO is a powerful method in searching coefficients for bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.« less
Problem Formulation and Alternative Generation in the Decision Making Process
1988-06-30
Organizatio N00014-86-K-0678 Sc. ADDRESS(City, State, and ZIP Code) 10 SOURCE OF FUNDING NUMBERS p4000ub20/7-4-86 PROGRAM PROJECT TASK WORK UNIT ELEMENT...procedure will work satisfactorily (not optimally) as long as the organism has ample time to carry Ity Codesi and/or DIst 4pu cial3p Problem...among which the priorities are worked out. Neither problems nor opportunities can be considered for the agenda unless they are noticed, and except for
Laser angle measurement system
NASA Technical Reports Server (NTRS)
Pond, C. R.; Texeira, P. D.; Wilbert, R. E.
1980-01-01
The design and fabrication of a laser angle measurement system is described. The instrument is a fringe counting interferometer that monitors the pitch attitude of a model in a wind tunnel. A laser source and detector are mounted above the mode. Interference fringes are generated by a small passive element on the model. The fringe count is accumulated and displayed by a processor in the wind tunnel control room. Optical and electrical schematics, system maintenance and operation procedures are included, and the results of a demonstration test are given.
Kim, Jeong Hwan; Park, Si-Nae; Suh, Hwal
2007-02-28
The purpose of current experiment is the generation of insulin-producing human mesenchymal stem cells as therapeutic source for the cure of type 1 diabetes. Type 1 diabetes is generally caused by insulin deficiency accompanied by the destruction of islet beta-cells. In various trials for the treatment of type 1 diabetes, cell-based gene therapy using stem cells is considered as one of the most useful candidate for the treatment. In this experiment, human mesenchymal stem cells were transduced with AAV which is containing furin-cleavable human preproinsulin gene to generate insulin-producing cells as surrogate beta-cells for the type 1 diabetes therapy. In the rAAV production procedure, rAAV was generated by transfection of AD293 cells. Human mesenchymal stems cells were transduced using rAAV with a various multiplicity of infection. Transduction of recombinant AAV was also tested using beta-galactosidse expression. Cell viability was determined by using MTT assay to evaluate the toxicity of the transduction procedure. Expression and production of Insulin were tested using reverse transcriptase-polymerase chain reaction and immunocytochemistry. Secretion of human insulin and C-peptide from the cells was assayed using enzyme-linked immunosorbent assay. Production of insulin and C-peptide from the test group represented a higher increase compared to the control group. In this study, we examined generation of insulin-producing cells from mesenchymal stem cells by genetic engineering for diabetes therapy. This work might be valuable to the field of tissue engineering for diabetes treatment.
77 FR 21808 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... and open source records and commercial database. EXEMPTIONS CLAIMED FOR THE SYSTEM: The Attorney... notification procedures, the record access procedures, the contesting record procedures, the record source..., confidential sources, and victims of crimes. The offenses and alleged offenses associated with the individuals...
Double-side illuminated titania nanotubes for high volume hydrogen generation by water splitting
NASA Astrophysics Data System (ADS)
Mohapatra, Susanta K.; Mahajan, Vishal K.; Misra, Mano
2007-11-01
A sonoelectrochemical anodization method is proposed to synthesize TiO2 nanotubular arrays on both sides of a titanium foil (TiO2/Ti/TiO2). Highly ordered TiO2 nanotubular arrays of 16 cm2 area with uniform surface distribution can be obtained using this anodization procedure. These double-sided TiO2/Ti/TiO2 materials are used as both photoanode (carbon-doped titania nanotubes) and cathode (Pt nanoparticles dispersed on TiO2 nanotubes; PtTiO2/Ti/PtTiO2) in a specially designed photoelectrochemical cell to generate hydrogen by water splitting at a rate of 38 ml h-1. The nanomaterials are characterized by FESEM, HRTEM, STEM, EDS, FFT, SAED and XPS techniques. The present approach can be used for large-scale hydrogen generation using renewable energy sources.
EnRICH: Extraction and Ranking using Integration and Criteria Heuristics.
Zhang, Xia; Greenlee, M Heather West; Serb, Jeanne M
2013-01-15
High throughput screening technologies enable biologists to generate candidate genes at a rate that, due to time and cost constraints, cannot be studied by experimental approaches in the laboratory. Thus, it has become increasingly important to prioritize candidate genes for experiments. To accomplish this, researchers need to apply selection requirements based on their knowledge, which necessitates qualitative integration of heterogeneous data sources and filtration using multiple criteria. A similar approach can also be applied to putative candidate gene relationships. While automation can assist in this routine and imperative procedure, flexibility of data sources and criteria must not be sacrificed. A tool that can optimize the trade-off between automation and flexibility to simultaneously filter and qualitatively integrate data is needed to prioritize candidate genes and generate composite networks from heterogeneous data sources. We developed the java application, EnRICH (Extraction and Ranking using Integration and Criteria Heuristics), in order to alleviate this need. Here we present a case study in which we used EnRICH to integrate and filter multiple candidate gene lists in order to identify potential retinal disease genes. As a result of this procedure, a candidate pool of several hundred genes was narrowed down to five candidate genes, of which four are confirmed retinal disease genes and one is associated with a retinal disease state. We developed a platform-independent tool that is able to qualitatively integrate multiple heterogeneous datasets and use different selection criteria to filter each of them, provided the datasets are tables that have distinct identifiers (required) and attributes (optional). With the flexibility to specify data sources and filtering criteria, EnRICH automatically prioritizes candidate genes or gene relationships for biologists based on their specific requirements. Here, we also demonstrate that this tool can be effectively and easily used to apply highly specific user-defined criteria and can efficiently identify high quality candidate genes from relatively sparse datasets.
RESEARCH MISCONDUCT POLICIES OF SCIENTIFIC JOURNALS
RESNIK, DAVID B.; PEDDADA, SHYAMAL; BRUNSON, WINNON
2014-01-01
The purpose of this study was to gather information on the misconduct policies of scientific journals. We contacted editors from a random sample of 399 journals drawn from the ISI Web of Knowledge database. We received 197 responses (49.4% response rate): 54.8% had a policy, and 47.7% had a formal (written) policy; 28.9% had a policy that only outlined procedures for handling misconduct, 15.7% had a policy that only defined misconduct, 10.2% had a policy that included both a definition and procedures; 26.9% of journals had a policy that was generated by the publisher, 13.2% had a policy that was generated by the journal, and 14.7% had a policy that was generated by another source, such as a professional association. We analyzed the relationship between having a policy and impact factor, field of science, publishing house, and nationality. Impact factor was the only variable with a statistically significant association with having a policy. Impact factor was slightly positively associated with whether or not the publisher had a policy, with an odds ratio of 1.49 (P < .0004) per 10 units increase in the impact factor, with a 95% confidence interval (1.20, 1.88). Our research indicates that more than half of scientific journals have developed misconduct policies, but that most of these policies do not define research misconduct and most of these policies were not generated by the journal. PMID:19757231
Developments and challenges in biodiesel production from microalgae: A review.
Taparia, Tanvi; Mvss, Manjari; Mehrotra, Rajesh; Shukla, Paritosh; Mehrotra, Sandhya
2016-09-01
The imminent depletion of fossil fuels and the surging global demand for renewable energy have led to the search for nonconventional energy sources. After a few decades of trial and error, the world is now testing the sources of the third generation of fossil fuels, which contain for most parts microalgae. With more than 80% oil content, being adaptable in growth parameters and highly versatile, microalgae are highly promising sources of biofuels in the present time. The present article makes a sweeping attempt to highlight the various methods employed for cultivation of microalgae, techniques to harvest and extract biomass from huge algal cultures, as well as their downstream production and processing procedures. The advantages, limitations, and challenges faced by each of them have been described to some extent. Major concerns pertaining to biofuels are supposed to be their environmental sustainability and economic viability along with their cost effectiveness. This would require a great deal of empirical data on existing systems and a great deal of optimization to generate a more robust one. We have concluded our article with a SWOT analysis of using algae for biodiesel production in a tabulated form. © 2015 International Union of Biochemistry and Molecular Biology, Inc.
Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images are usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research, we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the location of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, panoramic images are processed into 720° panoramas, and these panoramas which can be used directly as panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating 720° panorama. These parameters are focal length, principle point, and lens radial distortion. The panoramic images can then be processed with closerange photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisaulSFM, a structure from motion software is used to estimate the exterior orientation, and CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble Sketchup was used to build the model, and the 3D point cloud was added to the determining of locations of building objects using plane finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process. A research is being carried out to Increase the degree of automation of these procedures.
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David
2015-08-01
Uncertainty analysis is an important component of dietary exposure assessments in order to understand correctly the strength and limits of its results. Often, standard screening procedures are applied in a first step which results in conservative estimates. If through those screening procedures a potential exceedance of health-based guidance values is indicated, within the tiered approach more refined models are applied. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties to better understand how to best use uncertainty analysis to generate more realistic comprehension of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, parameter and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies that there are general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, M.N.; Marse, T.J.; Williams, P.L.
1998-12-31
In this study initial data were generated to develop laboratory control charts for aquatic toxicity testing using the nematode Caenorhabditis elegans. Tests were performed using two reference toxicants: CdCl{sub 2} and CuCl{sub 2}. All tests were performed for 24 h without a food source and of 48 h with a food source in a commonly used nematode aquatic medium. Each test was replicated 6 times with each replicate having 6 wells per concentration with 10 {+-} 1 worms per well. Probit analysis was used to estimate LC{sub 50} values for each test. The data were used to construct a meanmore » ({bar x}) laboratory control chart for each reference toxicant. The coefficient of variation (CV) for three of the four reference toxicant tests was less than 20%, which demonstrates an excellent degree of reproducibility. These CV values are well within suggested standards for determination of organism sensitivity and overall test system credibility. A standardized procedure for performing 24 h and 48 h aquatic toxicity studies with C. elegans is proposed.« less
Raman-tailored photonic crystal fiber for telecom band photon-pair generation.
Cordier, M; Orieux, A; Gabet, R; Harlé, T; Dubreuil, N; Diamanti, E; Delaye, P; Zaquine, I
2017-07-01
We report on the experimental characterization of a novel nonlinear liquid-filled hollow-core photonic crystal fiber for the generation of photon pairs at a telecommunication wavelength through spontaneous four-wave mixing (SFWM). We show that the optimization procedure in view of this application links the choice of the nonlinear liquid to the design parameters of the fiber, and we give an example of such an optimization at telecom wavelengths. Combining the modeling of the fiber and classical characterization techniques at these wavelengths, we identify for the chosen fiber and liquid combination SFWM phase-matching frequency ranges with no Raman scattering noise contamination. This is a first step toward obtaining a telecom band fibered photon-pair source with a high signal-to-noise ratio.
Recommended design and fabrication sequence of AMTEC test assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Kumar, V.; Noravian, H.
1998-01-01
A series of previous OSC papers described: 1) a novel methodology for the coupled thermal, fluid flow, and electrical analysis of multitube AMTEC (Alkali Metal Thermal-to-Electric Conversion) cells; 2) the application of that methodology to determine the effect of numerous design variations on the cell{close_quote}s performance, leading to selection and performance characterization of an OSC-recommended cell design; and 3) the design, analysis, and characterization of an OSC-generated power system design combining sixteen of the above AMTEC cells with two or three GPHS (General Purpose Heat Source) radioisotope heat source modules, and the applicability of those power systems to future spacemore » missions ({ital e.g.} Pluto Express and Europa Orbiter) under consideration by NASA. The OSC system design studies demonstrated the critical importance of the thermal insulation subsystem, and culminated in a design in which the eight AMTEC cells on each end of the heat source stack are embedded in Min-K fibrous insulation, and the Min-K and the GPHS modules are surrounded by graded-length Mo multifoil insulation. The present paper depicts the OSC-recommended AMTEC cell and generator designs, and identifies the need for an electrically heated (scaled-down but otherwise prototypic) test assembly for the experimental validation of the generator{close_quote}s system performance predictions. It then describes the design of an OSC-recommended test assembly consisting of an electrical heater enclosed in a graphite box to simulate the radioisotope heat source, four series-connected prototypic AMTEC cells of the OSC-recommended configuration, and a prototypic hybrid insulation package consisting of Min-K and graded-length Mo multifoils. Finally, the paper describes and illustrates an OSC-recommended detailed fabrication sequence and procedure for the above cell and test assembly. That fabrication procedure is being implemented by AMPS, Inc. with the support of DOE{close_quote}s Oak Ridge and Mound Laboratories, and the Air Force Phillips Laboratory (AFPL) will test the performance of the assembly over a range of input thermal powers and output voltages. The experimentally measured performance will be compared with the results of OSC analyses of the same insulated test assembly over the same range of operating parameters. {copyright} {ital 1998 American Institute of Physics.}« less
Underground coal mining section data
NASA Technical Reports Server (NTRS)
Gabrill, C. P.; Urie, J. T.
1981-01-01
A set of tables which display the allocation of time for ten personnel and eight pieces of underground coal mining equipment to ten function categories is provided. Data from 125 full shift time studies contained in the KETRON database was utilized as the primary source data. The KETRON activity and delay codes were mapped onto JPL equipment, personnel and function categories. Computer processing was then performed to aggregate the shift level data and generate the matrices. Additional, documented time study data were analyzed and used to supplement the KETRON databased. The source data including the number of shifts are described. Specific parameters of the mines from which there data were extracted are presented. The result of the data processing including the required JPL matrices is presented. A brief comparison with a time study analysis of continuous mining systems is presented. The procedures used for processing the source data are described.
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source) consists of an accelerator part and a reactor part. The accelerator part employs a database, which is calculated by using PHITS, for investigating the effect related to the accelerator such as the changes of beam energy, beam diameter, void generation, and target level. This analysis method using the database may introduce some errors into dynamics calculations sincemore » the neutron source data derived from the database has some errors in fitting or interpolating procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.« less
Towards a better understanding of helicopter external noise
NASA Astrophysics Data System (ADS)
Damongeot, A.; Dambra, F.; Masure, B.
The problem of helicopter external noise generation is studied taking into consideration simultaneously the multiple noise sources: rotor rotational-, rotor broadband -, and engine noise. The main data are obtained during flight tests of the rather quiet AS 332 Super Puma. The flight procedures settled by ICAO for noise regulations are used: horizontal flyover at 90 percent of the maximum speed, approach at minimum power velocity, take-off at best rate of climb. Noise source levels are assessed through narrow band analysis of ground microphone recordings, ground measurements of engine noise and theoretical means. With the perceived noise level unit used throughout the study, relative magnitude of noise sources is shown to be different from that obtained with linear noise unit. A parametric study of the influence of some helicopter parameters on external noise has shown that thickness-tapered, chord-tapered, and swept-back blade tips are good means to reduce the overall noise level in flyover and approach.
Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C
2017-10-17
Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source code program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated both with physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of image recognition tools provided by Sikuli scripting language, while handling of their physical counterparts is achieved using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments of our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, advantages derived from open-source tools assistance could be appreciated, mainly in terms of lesser intervention of operators and cost savings.
NASA Technical Reports Server (NTRS)
Steger, Joseph L.
1989-01-01
Hyperbolic grid generation procedures are described which have been used in external flow simulations about complex configurations. For many practical applications a single well-ordered (i.e., structured) grid can be used to mesh an entire configuration, in other problems, composite or unstructured grid procedures are needed. Although the hyperbolic partial differential equation grid generation procedure has mainly been utilized to generate structured grids, an extension of the procedure to semiunstructured grids is briefly described. Extensions of the methodology are also described using two-dimensional equations.
NASA Technical Reports Server (NTRS)
Steger, Joseph L.
1989-01-01
Hyperbolic grid generation procedures are described which have been used in external flow simulations about complex configurations. For many practical applications a single well-ordered (i.e., structured) grid can be used to mesh an entire configuration, in other problems, composite or unstructured grid procedures are needed. Although the hyperbolic partial differential equation grid generation procedure has mainly been utilized to generate structured grids, extension of the procedure to semiunstructured grids is briefly described. Extensions of the methodology are also described using two-dimensional equations.
Lead telluride as a thermoelectric material for thermoelectric power generation
NASA Astrophysics Data System (ADS)
Dughaish, Z. H.
2002-09-01
The specialized applications of thermoelectric generators are very successful and have motivated a search for materials with an improved figure of merit Z, and also for materials which operate at elevated temperatures. Lead telluride, PbTe, is an intermediate thermoelectric power generator. Its maximum operating temperature is 900 K. PbTe has a high melting point, good chemical stability, low vapor pressure and good chemical strength in addition to high figure of merit Z. Recently, research in thermoelectricity aims to obtain new improved materials for autonomous sources of electrical power in specialized medical, terrestial and space applications and to obtain an unconventional energy source after the oil crises of 1974. Although the efficiency of thermoelectric generators is rather low, typically ∼5%, the other advantages, such as compactness, silent, reliability, long life, and long period of operation without attention, led to a wide range of applications. PbTe thermoelectric generators have been widely used by the US army, in space crafts to provide onboard power, and in pacemakers batteries. The general physical properties of lead telluride and factors affecting the figure of merit have been reviewed. Various possibilities of improving the figure of merit of the material have been given, including effect of grain size on reducing the lattice thermal conductivity λL. Comparison of some transport properties of lead telluride with other thermoelectric materials and procedures of preparing compacts with transport properties very close to the single crystal values from PbTe powder by cold and hot-pressing techniques are discussed.
Norris, David C; Wilson, Andrew
2016-01-01
In a 2014 report on adolescent mental health outcomes in the Moving to Opportunity for Fair Housing Demonstration (MTO), Kessler et al. reported that, at 10- to 15-year follow-up, boys from households randomized to an experimental housing voucher intervention experienced 12-month prevalence of post-traumatic stress disorder (PTSD) at several times the rate of boys from control households. We reanalyze this finding here, bringing to light a PTSD outcome imputation procedure used in the original analysis, but not described in the study report. By bootstrapping with repeated draws from the frequentist sampling distribution of the imputation model used by Kessler et al., and by varying two pseudorandom number generator seeds that fed their analysis, we account for several purely statistical components of the uncertainty inherent in their imputation procedure. We also discuss other sources of uncertainty in this procedure that were not accessible to a formal reanalysis.
Characterizing the discoloration of methylene blue in Fe0/H2O systems.
Noubactep, C
2009-07-15
Methylene blue (MB) was used as a model molecule to characterize the aqueous reactivity of metallic iron in Fe(0)/H(2)O systems. Likely discoloration mechanisms under used experimental conditions are: (i) adsorption onto Fe(0) and Fe(0) corrosion products (CP), (ii) co-precipitation with in situ generated iron CP, (iii) reduction to colorless leukomethylene blue (LMB). MB mineralization (oxidation to CO(2)) is not expected. The kinetics of MB discoloration by Fe(0), Fe(2)O(3), Fe(3)O(4), MnO(2), and granular activated carbon were investigated in assay tubes under mechanically non-disturbed conditions. The evolution of MB discoloration was monitored spectrophotometrically. The effect of availability of CP, Fe(0) source, shaking rate, initial pH value, and chemical properties of the solution were studied. The results present evidence supporting co-precipitation of MB with in situ generated iron CP as main discoloration mechanism. Under high shaking intensities (>150 min(-1)), increased CP generation yields a brownish solution which disturbed MB determination, showing that a too high shear stress induced the suspension of in situ generated corrosion products. The present study clearly demonstrates that comparing results from various sources is difficult even when the results are achieved under seemingly similar conditions. The appeal for an unified experimental procedure for the investigation of processes in Fe(0)/H(2)O systems is reiterated.
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1993-09-01
MEG-Music is a new approach to MEG source localization. MEG-Music is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal sub-space is used in a projection metric that scans the three dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak atmore » unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyzed the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.« less
On recovering distributed IP information from inductive source time domain electromagnetic data
NASA Astrophysics Data System (ADS)
Kang, Seogi; Oldenburg, Douglas W.
2016-10-01
We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: (1) Obtain a 3-D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. (2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. (3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. On test examples we show: (1) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps; (2) the horizontal location of a target body can be well recovered through inversion; (3) the overall geometry of a target body might be recovered but for ATEM data a depth weighting is required in the inversion; (4) we can recover estimates of intrinsic τ and η that may be useful for distinguishing between two chargeable targets.
Positive and negative generation effects in source monitoring.
Riefer, David M; Chien, Yuchin; Reimer, Jason F
2007-10-01
Research is mixed as to whether self-generation improves memory for the source of information. We propose the hypothesis that positive generation effects (better source memory for self-generated information) occur in reality-monitoring paradigms, while negative generation effects (better source memory for externally presented information) tend to occur in external source-monitoring paradigms. This hypothesis was tested in an experiment in which participants read or generated words, followed by a memory test for the source of each word (read or generated) and the word's colour. Meiser and Bröder's (2002) multinomial model for crossed source dimensions was used to analyse the data, showing that source memory for generation (reality monitoring) was superior for the generated words, while source memory for word colour (external source monitoring) was superior for the read words. The model also revealed the influence of strong response biases in the data, demonstrating the usefulness of formal modelling when examining generation effects in source monitoring.
Beckmann, A; Hamm, C; Figulla, H R; Cremer, J; Kuck, K H; Lange, R; Zahn, R; Sack, S; Schuler, G C; Walther, T; Beyersdorf, F; Böhm, M; Heusch, G; Funkat, A K; Meinertz, T; Neumann, T; Papoutsis, K; Schneider, S; Welz, A; Mohr, F W
2012-07-01
Background The increasing prevalence of severe aortic valve defects correlates with the increase of life expectancy. For decades, surgical aortic valve replacement (AVR), under the use of extracorporeal circulation, has been the gold standard for treatment of severe aortic valve diseases. In Germany ~12,000 patients receive isolated aortic valve surgery per year. For some time, percutaneous balloon valvuloplasty has been used as a palliative therapeutic option for very few patients. Currently, alternatives for the established surgical procedures such as transcatheter aortic valve implantation (TAVI) have become available, but there are only limited data from randomized studies or low-volume registries concerning long-time outcome. In Germany, the implementation of this new technology into hospital care increased rapidly in the past few years. Therefore, the German Aortic Valve Registry (GARY) was founded in July 2010 including all available therapeutic options and providing data from a large quantity of patients.Methods The GARY is assembled as a complete survey for all invasive therapies in patients with relevant aortic valve diseases. It evaluates the new therapeutic options and compares them to surgical AVR. The model for data acquisition is based on three data sources: source I, the mandatory German database for external performance measurement; source II, a specific registry dataset; and source III, a follow-up data sheet (generated by phone interview). Various procedures will be compared concerning observed complications, mortality, and quality of life up to 5 years after the initial procedure. Furthermore, the registry will enable a compilation of evidence-based indication criteria and, in addition, also a comparison of all approved operative procedures, such as Ross or David procedures, and the use of different mechanical or biological aortic valve prostheses.Results Since the launch of data acquisition in July 2010, almost all institutions performing aortic valve procedures in Germany joined the registry. By now, 91 sites which perform TAVI in Germany participate and more than 15,000 datasets are already in the registry.Conclusion The implementation of new or innovative medical therapies needs supervision under the conditions of a well-structured scientific project. Up to now relevant data for implementation of TAVI and long-term results are missing. In contrast to randomized controlled trials, GARY is a prospective, controlled, 5-year observational multicenter registry, and a real world investigation with only one exclusion criterion, the absence of patients' written consent. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Method, memory media and apparatus for detection of grid disconnect
Ye, Zhihong [Clifton Park, NY; Du, Pengwei [Troy, NY
2008-09-23
A phase shift procedure for detecting a disconnect of a power grid from a feeder that is connected to a load and a distributed generator. The phase shift procedure compares a current phase shift of the output voltage of the distributed generator with a predetermined threshold and if greater, a command is issued for a disconnect of the distributed generator from the feeder. To extend the range of detection, the phase shift procedure is used when a power mismatch between the distributed generator and the load exceeds a threshold and either or both of an under/over frequency procedure and an under/over voltage procedure is used when any power mismatch does not exceed the threshold.
a Procedural Solution to Model Roman Masonry Structures
NASA Astrophysics Data System (ADS)
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.
Handbook for industrial noise control
NASA Technical Reports Server (NTRS)
1981-01-01
The basic principles of sound, measuring techniques, and instrumentation associated with general purpose noise control are discussed. Means for identifying and characterizing a noise problem so that subsequent work may provide the most efficient and cost effective solution are outlined. A methodology for choosing appropriate noise control materials and the proper implementation of control procedures is detailed. The most significant NASA sponsored contributions to the state of the art development of optimum noise control technologies are described including cases in which aeroacoustics and related research have shed some light on ways of reducing noise generation at its source.
Microwave-Assisted Extraction of Fucoidan from Marine Algae.
Mussatto, Solange I
2015-01-01
Microwave-assisted extraction (MAE) is a technique that can be applied to extract compounds from different natural resources. In this chapter, the use of this technique to extract fucoidan from marine algae is described. The method involves a closed MAE system, ultrapure water as extraction solvent, and suitable conditions of time, pressure, and algal biomass/water ratio. By using this procedure under the specified conditions, the penetration of the electromagnetic waves into the material structure occurs in an efficient manner, generating a distributed heat source that promotes the fucoidan extraction from the algal biomass.
Handbook for industrial noise control
NASA Astrophysics Data System (ADS)
The basic principles of sound, measuring techniques, and instrumentation associated with general purpose noise control are discussed. Means for identifying and characterizing a noise problem so that subsequent work may provide the most efficient and cost effective solution are outlined. A methodology for choosing appropriate noise control materials and the proper implementation of control procedures is detailed. The most significant NASA sponsored contributions to the state of the art development of optimum noise control technologies are described including cases in which aeroacoustics and related research have shed some light on ways of reducing noise generation at its source.
Strategies for automatic processing of large aftershock sequences
NASA Astrophysics Data System (ADS)
Kvaerna, T.; Gibbons, S. J.
2017-12-01
Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
Generating DEM from LIDAR data - comparison of available software tools
NASA Astrophysics Data System (ADS)
Korzeniowska, K.; Lacka, M.
2011-12-01
In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods contributed to the aim of this study: to assess algorithms available in various software tools that are used to classify LIDAR "point cloud" data, through a careful examination of Digital Elevation Models (DEMs) generated from LIDAR data on a base of these algorithms. The works focused on the most important available software tools: both commercial and open source ones. Two sites in a mountain area were selected for the study. The area of each site is 0.645 sq km. DEMs generated with analysed software tools ware compared with a reference dataset, generated using manual methods to eliminate non ground points. Surfaces were analysed using raster analysis. Minimum, maximum and mean differences between reference DEM and DEMs generated with analysed software tools were calculated, together with Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
Development of a primary diffusion source of organic vapors for gas analyzer calibration
NASA Astrophysics Data System (ADS)
Lecuna, M.; Demichelis, A.; Sassi, G.; Sassi, M. P.
2018-03-01
The generation of reference mixtures of volatile organic compounds (VOCs) at trace levels (10 ppt-10 ppb) is a challenge for both environmental and clinical measurements. The calibration of gas analyzers for trace VOC measurements requires a stable and accurate source of the compound of interest. The dynamic preparation of gas mixtures by diffusion is a suitable method for fulfilling these requirements. The estimation of the uncertainty of the molar fraction of the VOC in the mixture is a key step in the metrological characterization of a dynamic generator. The performance of a dynamic generator was monitored over a wide range of operating conditions. The generation system was simulated by a model developed with computational fluid dynamics and validated against experimental data. The vapor pressure of the VOC was found to be one of the main contributors to the uncertainty of the diffusion rate and its influence at 10-70 kPa was analyzed and discussed. The air buoyancy effect and perturbations due to the weighing duration were studied. The gas carrier flow rate and the amount of liquid in the vial were found to play a role in limiting the diffusion rate. The results of sensitivity analyses were reported through an uncertainty budget for the diffusion rate. The roles of each influence quantity were discussed. A set of criteria to minimize the uncertainty contribution to the primary diffusion source (25 µg min-1) were estimated: carrier gas flow rate higher than 37.7 sml min-1, a maximum VOC liquid mass decrease in the vial of 4.8 g, a minimum residual mass of 1 g and vial weighing times of 1-3 min. With this procedure a limit uncertainty of 0.5% in the diffusion rate can be obtained for VOC mixtures at trace levels (10 ppt-10 ppb), making the developed diffusion vials a primary diffusion source with potential to become a new reference material for trace VOC analysis.
A numerical study of nonlinear infrasound propagation in a windy atmosphere.
Sabatini, R; Marsden, O; Bailly, C; Bogey, C
2016-07-01
Direct numerical simulations of the two-dimensional unsteady compressible Navier-Stokes equations are performed to study the acoustic field generated by an infrasonic source in a realistic atmosphere. Some of the main phenomena affecting the propagation of infrasonic waves at large distances from the source are investigated. The effects of thermal and wind-related refraction on the signals recorded at ground level are highlighted, with particular emphasis on the phase shift induced by the presence of caustics in the acoustic field. Nonlinear waveform steepening associated with harmonic generation, and period lengthening, both of which are typical of large source amplitudes, are illustrated, and the importance of thermoviscous absorption in the upper atmosphere is clearly demonstrated. The role of diffraction in the shadow zone, around caustics and at stratospheric altitudes is also pointed out. The Navier-Stokes equations are solved using high-order finite-differences and a Runge-Kutta time integration method both originally developed for aeroacoustic applications, along with an adaptive shock-capturing algorithm which allows high-intensity acoustic fields to be examined. An improvement to the shock detection procedure is also proposed in order to meet the specificities of nonlinear propagation at long range. The modeling as well as the numerical results are reported in detail and discussed.
Development of unauthorized airborne emission source identification procedure
NASA Astrophysics Data System (ADS)
Shtripling, L. O.; Bazhenov, V. V.; Varakina, N. S.; Kupriyanova, N. P.
2018-01-01
The paper presents the procedure for searching sources of unauthorized airborne emissions. To make reasonable regulation decisions on airborne pollutant emissions and to ensure the environmental safety of population, the procedure provides for the determination of a pollutant mass emission value from the source being the cause of high pollution level and the search of a previously unrecognized contamination source in a specified area. To determine the true value of mass emission from the source, the minimum of the mean-root-square mismatch criterion between the computed and measured pollutant concentration in the given location is used.
Hybrid Analysis of Engine Core Noise
NASA Astrophysics Data System (ADS)
O'Brien, Jeffrey; Kim, Jeonglae; Ihme, Matthias
2015-11-01
Core noise, or the noise generated within an aircraft engine, is becoming an increasing concern for the aviation industry as other noise sources are progressively reduced. The prediction of core noise generation and propagation is especially challenging for computationalists since it involves extensive multiphysics including chemical reaction and moving blades in addition to the aerothermochemical effects of heated jets. In this work, a representative engine flow path is constructed using experimentally verified geometries to simulate the physics of core noise. A combustor, single-stage turbine, nozzle and jet are modeled in separate calculations using appropriate high fidelity techniques including LES, actuator disk theory and Ffowcs-Williams Hawkings surfaces. A one way coupling procedure is developed for passing fluctuations downstream through the flowpath. This method effectively isolates the core noise from other acoustic sources, enables straightforward study of the interaction between core noise and jet exhaust, and allows for simple distinction between direct and indirect noise. The impact of core noise on the farfield jet acoustics is studied extensively and the relative efficiency of different disturbance types and shapes is examined in detail.
Scardigno, Domenico; Fanelli, Emanuele; Viggiano, Annarita; Braccio, Giacobbe; Magi, Vinicio
2016-06-01
This article provides the dataset of operating conditions of a hybrid organic Rankine plant generated by the optimization procedure employed in the research article "A genetic optimization of a hybrid organic Rankine plant for solar and low-grade energy sources" (Scardigno et al., 2015) [1]. The methodology used to obtain the data is described. The operating conditions are subdivided into two separate groups: feasible and unfeasible solutions. In both groups, the values of the design variables are given. Besides, the subset of feasible solutions is described in details, by providing the thermodynamic and economic performances, the temperatures at some characteristic sections of the thermodynamic cycle, the net power, the absorbed powers and the area of the heat exchange surfaces.
Ebner, Hubert; Hayn, Dieter; Falgenhauer, Markus; Nitzlnader, Michael; Schleiermacher, Gudrun; Haupt, Riccardo; Erminio, Giovanni; Defferrari, Raffaella; Mazzocco, Katia; Kohler, Jan; Tonini, Gian Paolo; Ladenstein, Ruth; Schreier, Guenter
2016-01-01
Data from two contexts, i.e. the European Unresectable Neuroblastoma (EUNB) clinical trial and results from comparative genomic hybridisation (CGH) analyses from corresponding tumour samples shall be provided to existing repositories for secondary use. Utilizing the European Unified Patient IDentity Management (EUPID) as developed in the course of the ENCCA project, the following processes were applied to the data: standardization (providing interoperability), pseudonymization (generating distinct but linkable pseudonyms for both contexts), and linking both data sources. The applied procedures resulted in a joined dataset that did not contain any identifiers that would allow to backtrack the records to either data sources. This provided a high degree of privacy to the involved patients as required by data protection regulations, without preventing proper analysis.
Guerrero-Barajas, Claudia; Ordaz, Alberto; García-Solares, Selene Montserrat; Garibay-Orijel, Claudio; Bastida-González, Fernando; Zárate-Segura, Paola Berenice
2015-01-01
The importance of microbial sulfate reduction relies on the various applications that it offers in environmental biotechnology. Engineered sulfate reduction is used in industrial wastewater treatment to remove large concentrations of sulfate along with the chemical oxygen demand (COD) and heavy metals. The most common approach to the process is with anaerobic bioreactors in which sulfidogenic sludge is obtained through adaptation of predominantly methanogenic granular sludge to sulfidogenesis. This process may take a long time and does not always eliminate the competition for substrate due to the presence of methanogens in the sludge. In this work, we propose a novel approach to obtain sulfidogenic sludge in which hydrothermal vents sediments are the original source of microorganisms. The microbial community developed in the presence of sulfate and volatile fatty acids is wide enough to sustain sulfate reduction over a long period of time without exhibiting inhibition due to sulfide. This protocol describes the procedure to generate the sludge from the sediments in an upflow anaerobic sludge blanket (UASB) type of reactor. Furthermore, the protocol presents the procedure to demonstrate the capability of the sludge to remove by reductive dechlorination a model of a highly toxic organic pollutant such as trichloroethylene (TCE). The protocol is divided in three stages: (1) the formation of the sludge and the determination of its sulfate reducing activity in the UASB, (2) the experiment to remove the TCE by the sludge, and (3) the identification of microorganisms in the sludge after the TCE reduction. Although in this case the sediments were taken from a site located in Mexico, the generation of a sulfidogenic sludge by using this procedure may work if a different source of sediments is taken since marine sediments are a natural pool of microorganisms that may be enriched in sulfate reducing bacteria. PMID:26555802
Guerrero-Barajas, Claudia; Ordaz, Alberto; García-Solares, Selene Montserrat; Garibay-Orijel, Claudio; Bastida-González, Fernando; Zárate-Segura, Paola Berenice
2015-10-15
The importance of microbial sulfate reduction relies on the various applications that it offers in environmental biotechnology. Engineered sulfate reduction is used in industrial wastewater treatment to remove large concentrations of sulfate along with the chemical oxygen demand (COD) and heavy metals. The most common approach to the process is with anaerobic bioreactors in which sulfidogenic sludge is obtained through adaptation of predominantly methanogenic granular sludge to sulfidogenesis. This process may take a long time and does not always eliminate the competition for substrate due to the presence of methanogens in the sludge. In this work, we propose a novel approach to obtain sulfidogenic sludge in which hydrothermal vents sediments are the original source of microorganisms. The microbial community developed in the presence of sulfate and volatile fatty acids is wide enough to sustain sulfate reduction over a long period of time without exhibiting inhibition due to sulfide. This protocol describes the procedure to generate the sludge from the sediments in an upflow anaerobic sludge blanket (UASB) type of reactor. Furthermore, the protocol presents the procedure to demonstrate the capability of the sludge to remove by reductive dechlorination a model of a highly toxic organic pollutant such as trichloroethylene (TCE). The protocol is divided in three stages: (1) the formation of the sludge and the determination of its sulfate reducing activity in the UASB, (2) the experiment to remove the TCE by the sludge, and (3) the identification of microorganisms in the sludge after the TCE reduction. Although in this case the sediments were taken from a site located in Mexico, the generation of a sulfidogenic sludge by using this procedure may work if a different source of sediments is taken since marine sediments are a natural pool of microorganisms that may be enriched in sulfate reducing bacteria.
Practical issues in quantum-key-distribution postprocessing
NASA Astrophysics Data System (ADS)
Fung, Chi-Hang Fred; Ma, Xiongfeng; Chau, H. F.
2010-01-01
Quantum key distribution (QKD) is a secure key generation method between two distant parties by wisely exploiting properties of quantum mechanics. In QKD, experimental measurement outcomes on quantum states are transformed by the two parties to a secret key. This transformation is composed of many logical steps (as guided by security proofs), which together will ultimately determine the length of the final secret key and its security. We detail the procedure for performing such classical postprocessing taking into account practical concerns (including the finite-size effect and authentication and encryption for classical communications). This procedure is directly applicable to realistic QKD experiments and thus serves as a recipe that specifies what postprocessing operations are needed and what the security level is for certain lengths of the keys. Our result is applicable to the BB84 protocol with a single or entangled photon source.
Response functions for neutron skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-02-01
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimationmore » from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.« less
Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.
Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa
2017-09-01
Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. This represents a step towards a noninvasive tool to efficiently localize arrhythmias in 3-D. © 2017 American Association of Physicists in Medicine.
Update and evaluation of decay data for spent nuclear fuel analyses
NASA Astrophysics Data System (ADS)
Simeonov, Teodosi; Wemple, Charles
2017-09-01
Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
NASA Astrophysics Data System (ADS)
Götz, Joachim; Buckel, Johannes; Heckmann, Tobias
2013-04-01
The analysis of alpine sediment cascades requires the identification, differentiation and quantification of sediment sources, storages, and transport processes. This study deals with the origin of alpine sediment transfer and relates primary talus deposits to corresponding rockwall source areas within the Gradenbach catchment (Schober Mountains, Austrian Alps). Sediment storage landforms are based on a detailed geomorphological map of the catchment which was generated to analyse the sediment transfer system. Mapping was mainly performed in the field and supplemented by post-mapping analysis using LIDAR data and digital orthophotos. A fundamental part of the mapping procedure was to capture additional landform-based information with respect to morphometry, activity and connectivity. The applied procedure provides a detailed inventory of sediment storage landforms including additional information on surface characteristics, dominant and secondary erosion and deposition processes, process activity and sediment storage coupling. We develop the working hypothesis that the present-day surface area ratio between rockfall talus (area as a proxy for volume, backed by geophysical analysis of selected talus cones) and corresponding rockwall source area is a measure of rockfall activity since deglaciation; large talus cones derived from small rockwall catchments indicate high activity, while low activity can be inferred where rockfall from large rock faces has created only small deposits. The surface area ratio of talus and corresponding rockwalls is analysed using a landform-based and a process-based approach. For the landform-based approach, we designed a GIS procedure which derives the (hydrological) catchment area of the contact lines of talus and rockwall landforms in the geomorphological map. The process-based approach simulates rockfall trajectories from steep (>45°) portions of a DEM generated by a random-walk rockfall model. By back-tracing those trajectories that end on a selected talus landform, the 'rockfall contributing area' is delineated; this approach takes account of the stochastic nature of rockfall trajectories and is able to identify, for example, rockfall delivery from one rockwall segment to multiple talus landforms (or from multiple rockfall segments to the same deposit, respectively). Using both approaches, a total of 290 rockwall-talus-subsystems are statistically analysed indicating a constant relationship between rockfall source areas and corresponding areas of talus deposits of almost 1:1. However, certain rockwall-talus-subsystems deviate from this correlation since sediment storage landforms of similar size originate from varying rockwall source areas and vice versa. This varying relationship is assumed to be strongly controlled by morphometric parameters, such as rockwall slope, altitudinal interval, and aspect. The impact of these parameters on the surface area ratio will be finally discussed.
48 CFR 715.370 - Alternative source selection procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Alternative source selection procedures. 715.370 Section 715.370 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 715...
Evaluation of noise pollution level in the operating rooms of hospitals: A study in Iran.
Giv, Masoumeh Dorri; Sani, Karim Ghazikhanlou; Alizadeh, Majid; Valinejadi, Ali; Majdabadi, Hesamedin Askari
2017-06-01
Noise pollution in the operating rooms is one of the remaining challenges. Both patients and physicians are exposed to different sound levels during the operative cases, many of which can last for hours. This study aims to evaluate the noise pollution in the operating rooms during different surgical procedures. In this cross-sectional study, sound level in the operating rooms of Hamadan University-affiliated hospitals (totally 10) in Iran during different surgical procedures was measured using B&K sound meter. The gathered data were compared with national and international standards. Statistical analysis was performed using descriptive statistics and one-way ANOVA, t -test, and Pearson's correlation test. Noise pollution level at majority of surgical procedures is higher than national and international documented standards. The highest level of noise pollution is related to orthopedic procedures, and the lowest one related to laparoscopic and heart surgery procedures. The highest and lowest registered sound level during the operation was 93 and 55 dB, respectively. Sound level generated by equipments (69 ± 4.1 dB), trolley movement (66 ± 2.3 dB), and personnel conversations (64 ± 3.9 dB) are the main sources of noise. The noise pollution of operating rooms are higher than available standards. The procedure needs to be corrected for achieving the proper conditions.
NASA Astrophysics Data System (ADS)
Mlimandago, S.
This research paper have gone out with very simple and easy (several) new concepts in document management for space projects and programs which can be applied anywhere both in the developing and developed countries. These several new concepts are and have been applied in Tanzania, Kenya and Uganda and found out to bear very good results using simple procedures. The intergral project based its documentation management approach from the outset on electronic document sharing and archiving. The main objective of having new concepts was to provide a faster and wider availability of the most current space information to all parties rather than creating a paperless office. Implementation of the new concepts approach required the capturing of documents in an appropriate and simple electronic format at source establishing new procedures for project wide information sharing and the deployment of a new generation of simple procedure - WEB - based tools. Key success factors were the early adoption of Internet technologies and simple procedures for improved information flow new concepts which can be applied anywhere both in the developed and the developing countries.
An investigation of the uniform random number generator
NASA Technical Reports Server (NTRS)
Temple, E. C.
1982-01-01
Most random number generators that are in use today are of the congruential form X(i+1) + AX(i) + C mod M where A, C, and M are nonnegative integers. If C=O, the generator is called the multiplicative type and those for which C/O are called mixed congruential generators. It is easy to see that congruential generators will repeat a sequence of numbers after a maximum of M values have been generated. The number of numbers that a procedure generates before restarting the sequence is called the length or the period of the generator. Generally, it is desirable to make the period as long as possible. A detailed discussion of congruential generators is given. Also, several promising procedures that differ from the multiplicative and mixed procedure are discussed.
Management and display of four-dimensional environmental data sets using McIDAS
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Santek, David; Suomi, Verner E.
1990-01-01
Over the past four years, great strides have been made in the areas of data management and display of 4-D meteorological data sets. A survey was conducted of available and planned 4-D meteorological data sources. The data types were evaluated for their impact on the data management and display system. The requirements were analyzed for data base management generated by the 4-D data display system. The suitability of the existing data base management procedures and file structure were evaluated in light of the new requirements. Where needed, new data base management tools and file procedures were designed and implemented. The quality of the basic 4-D data sets was assured. The interpolation and extrapolation techniques of the 4-D data were investigated. The 4-D data from various sources were combined to make a uniform and consistent data set for display purposes. Data display software was designed to create abstract line graphic 3-D displays. Realistic shaded 3-D displays were created. Animation routines for these displays were developed in order to produce a dynamic 4-D presentation. A prototype dynamic color stereo workstation was implemented. A computer functional design specification was produced based on interactive studies and user feedback.
Inside-out: comparing internally generated and externally generated basic emotions.
Salas, Christian E; Radovic, Darinka; Turnbull, Oliver H
2012-06-01
A considerable number of mood induction (MI) procedures have been developed to elicit emotion in normal and clinical populations. Although external procedures (e.g., film clips, pictures) are widely used, a number of experiments elicit emotion by using self-generated procedures (e.g., recalling an emotional personal episode). However, no study has directly compared the effectiveness of two types of internal versus external MI across multiple discrete emotions. In the present experiment, 40 undergraduate students watched film clips (external procedure) and recalled personal events (internal procedure) inducing 4 basic emotions (fear, anger, joy, sadness) and later completed a self-report questionnaire. Remarkably, both internal and external procedures elicited target emotions selectively, compared with nontarget emotions. When contrasting the intensity of target emotions, both techniques showed no significant differences, with the exception of Joy, which was more intensely elicited by the internal procedure. Importantly, when considering the overall level of intensity, it was always greater in the internal procedure, for each stimulus. A more detailed investigation of the data suggest that recalling personal events (a type of internal procedure) generates more negative and mixed blends of emotions, which might account for the overall higher intensity of the internal mood induction.
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built in Geographic Information Systems (GIS), provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically-based model neglects the need for synthetic design hyetograph and hydrograph estimation that constitute the main source of subjective analysis and uncertainty of standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure as respect to standard event-based approaches.
Initial experiments with a versatile multi-aperture negative-ion source and related improvements
NASA Astrophysics Data System (ADS)
Cavenago, M.
2016-03-01
A relatively compact ion source, named NIO1 (Negative-Ion Optimization 1), with 9 beam apertures for H- extraction is under commissioning, in collaboration between Consorzio RFX and INFN, to provide a test bench for source optimizations, for innovations, and for simulation code validations in support of Neutral Beam Injectors (NBI) optimization. NIO1 installation includes a 60kV high-voltage deck, power supplies for a 130mA ion nominal current, an X-ray shield, and beam diagnostics. Plasma is heated with a tunable 2MHz radiofrequency (rf) generator. Physical aspects of source operation and rf-plasma coupling are discussed. NIO1 tuning procedures and plasma experiments both with air and with hydrogen as filling gas are described, up to a 1.7kW rf power. Transitions to inductively coupled plasma are reported in the case of air (for a rf power of about 0.5kW and a gas pressure below 2Pa), discussing their robust signature in optical emission, and briefly summarized for hydrogen, where more than 1kW rf power is needed.
Self-Consistent Sources Extensions of Modified Differential-Difference KP Equation
NASA Astrophysics Data System (ADS)
Gegenhasi; Li, Ya-Qian; Zhang, Duo-Duo
2018-04-01
In this paper, we investigate a modified differential-difference KP equation which is shown to have a continuum limit into the mKP equation. It is also shown that the solution of the modified differential-difference KP equation is related to the solution of the differential-difference KP equation through a Miura transformation. We first present the Grammian solution to the modified differential-difference KP equation, and then produce a coupled modified differential-difference KP system by applying the source generation procedure. The explicit N-soliton solution of the resulting coupled modified differential-difference system is expressed in compact forms by using the Grammian determinant and Casorati determinant. We also construct and solve another form of the self-consistent sources extension of the modified differential-difference KP equation, which constitutes a Bäcklund transformation for the differential-difference KP equation with self-consistent sources. Supported by the National Natural Science Foundation of China under Grant Nos. 11601247 and 11605096, the Natural Science Foundation of Inner Mongolia Autonomous Region under Grant Nos. 2016MS0115 and 2015MS0116 and the Innovation Fund Programme of Inner Mongolia University No. 20161115
Generating Models of Surgical Procedures using UMLS Concepts and Multiple Sequence Alignment
Meng, Frank; D’Avolio, Leonard W.; Chen, Andrew A.; Taira, Ricky K.; Kangarloo, Hooshang
2005-01-01
Surgical procedures can be viewed as a process composed of a sequence of steps performed on, by, or with the patient’s anatomy. This sequence is typically the pattern followed by surgeons when generating surgical report narratives for documenting surgical procedures. This paper describes a methodology for semi-automatically deriving a model of conducted surgeries, utilizing a sequence of derived Unified Medical Language System (UMLS) concepts for representing surgical procedures. A multiple sequence alignment was computed from a collection of such sequences and was used for generating the model. These models have the potential of being useful in a variety of informatics applications such as information retrieval and automatic document generation. PMID:16779094
The document is a recommended operating procedure, prepare or use in research activities conducted by EPA's Air and Energy Engineering Research Laboratory (AEERL). The procedure applies to the collection of gaseous grab samples from fossil fuel combustion sources for subsequent a...
Code of Federal Regulations, 2014 CFR
2014-07-01
... issuing special procedures for declassification of records pertaining to intelligence activities and intelligence sources or methods, or of classified cryptologic records in NARA's holdings? 1260.26 Section 1260... procedures for declassification of records pertaining to intelligence activities and intelligence sources or...
Code of Federal Regulations, 2012 CFR
2012-07-01
... issuing special procedures for declassification of records pertaining to intelligence activities and intelligence sources or methods, or of classified cryptologic records in NARA's holdings? 1260.26 Section 1260... procedures for declassification of records pertaining to intelligence activities and intelligence sources or...
An interactive multi-block grid generation system
NASA Technical Reports Server (NTRS)
Kao, T. J.; Su, T. Y.; Appleby, Ruth
1992-01-01
A grid generation procedure combining interactive and batch grid generation programs was put together to generate multi-block grids for complex aircraft configurations. The interactive section provides the tools for 3D geometry manipulation, surface grid extraction, boundary domain construction for 3D volume grid generation, and block-block relationships and boundary conditions for flow solvers. The procedure improves the flexibility and quality of grid generation to meet the design/analysis requirements.
Stanislawski, Larry V.; Survila, Kornelijus; Wendel, Jeffrey; Liu, Yan; Buttenfield, Barbara P.
2018-01-01
This paper describes a workflow for automating the extraction of elevation-derived stream lines using open source tools with parallel computing support and testing the effectiveness of procedures in various terrain conditions within the conterminous United States. Drainage networks are extracted from the US Geological Survey 1/3 arc-second 3D Elevation Program elevation data having a nominal cell size of 10 m. This research demonstrates the utility of open source tools with parallel computing support for extracting connected drainage network patterns and handling depressions in 30 subbasins distributed across humid, dry, and transitional climate regions and in terrain conditions exhibiting a range of slopes. Special attention is given to low-slope terrain, where network connectivity is preserved by generating synthetic stream channels through lake and waterbody polygons. Conflation analysis compares the extracted streams with a 1:24,000-scale National Hydrography Dataset flowline network and shows that similarities are greatest for second- and higher-order tributaries.
Matching radio catalogues with realistic geometry: application to SWIRE and ATLAS
NASA Astrophysics Data System (ADS)
Fan, Dongwei; Budavári, Tamás; Norris, Ray P.; Hopkins, Andrew M.
2015-08-01
Cross-matching catalogues at different wavelengths is a difficult problem in astronomy, especially when the objects are not point-like. At radio wavelengths, an object can have several components corresponding, for example, to a core and lobes. Considering not all radio detections correspond to visible or infrared sources, matching these catalogues can be challenging. Traditionally, this is done by eye for better quality, which does not scale to the large data volumes expected from the next-generation of radio telescopes. We present a novel automated procedure, using Bayesian hypothesis testing, to achieve reliable associations by explicit modelling of a particular class of radio-source morphology. The new algorithm not only assesses the likelihood of an association between data at two different wavelengths, but also tries to assess whether different radio sources are physically associated, are double-lobed radio galaxies, or just distinct nearby objects. Application to the Spitzer Wide-Area Infrared Extragalactic and Australia Telescope Large Area Survey CDF-S catalogues shows that this method performs well without human intervention.
Overview of the Mathematical and Empirical Receptor Models Workshop (Quail Roost II)
NASA Astrophysics Data System (ADS)
Stevens, Robert K.; Pace, Thompson G.
On 14-17 March 1982, the U.S. Environmental Protection Agency sponsored the Mathematical and Empirical Receptor Models Workshop (Quail Roost II) at the Quail Roost Conference Center, Rougemont, NC. Thirty-five scientists were invited to participate. The objective of the workshop was to document and compare results of source apportionment analyses of simulated and real aerosol data sets. The simulated data set was developed by scientists from the National Bureau of Standards. It consisted of elemental mass data generated using a dispersion model that simulated transport of aerosols from a variety of sources to a receptor site. The real data set contained the mass, elemental, and ionic species concentrations of samples obtained in 18 consecutive 12-h sampling periods in Houston, TX. Some participants performed additional analyses of the Houston filters by X-ray powder diffraction, scanning electron microscopy, or light microscopy. Ten groups analyzed these data sets using a variety of modeling procedures. The results of the modeling exercises were evaluated and structured in a manner that permitted model intercomparisons. The major conclusions and recommendations derived from the intercomparisons were: (1) using aerosol elemental composition data, receptor models can resolve major emission sources, but additional analyses (including light microscopy and X-ray diffraction) significantly increase the number of sources that can be resolved; (2) simulated data sets that contain up to 6 dissimilar emission sources need to be generated, so that different receptor models can be adequately compared; (3) source apportionment methods need to be modified to incorporate a means of apportioning such aerosol species as sulfate and nitrate formed from SO 2 and NO, respectively, because current models tend to resolve particles into chemical species rather than to deduce their sources and (4) a source signature library may be required to be compiled for each airshed in order to improve the resolving capabilities of receptor models.
Kermanshah, Hamid; Khorsandian, Hossein
2017-01-01
Background: One of the most important factors in restoration failure is microleakage at the restoration interface. Furthermore, a new generation of bonding, Scotchbond Universal (multi-mode adhesive), has been introduced to facilitate the bonding steps. The aim of this study was to compare the microleakage of Class V cavities restored using Scotchbond™ Universal with Scotchbond Multi-Purpose in two procedures. Materials and Methods: Eighteen freshly extracted human molars were used in this study. Thirty-six standardized Class V cavities were prepared on the buccal and lingual surfaces. The teeth were divided into three groups: (1) Group A: Scotchbond Universal with “self-etching” procedure and nanohybrid composite Filtek Z350. (2) Group B: Scotchbond Universal with “total etching” procedure and Filtek Z350. (3) Group C: Scotchbond Multi-Purpose and Filtek Z350. Microleakage at enamel and dentinal margins was evaluated after thermocycling under 5000 cycles by two methods of microleakage assay: swept source optical coherence tomography (OCT) and dye penetration. Wilcoxon's signed-rank test and Kruskal–Wallis test were used to analyze microleakage. Results: In silver nitrate dye penetration method, group A exhibited the minimum microleakage at dentin margins and group C exhibited the minimum microleakage at enamel margins (P < 0.05). Furthermore, in OCT method, group C demonstrated the minimum microleakage at enamel margins (P = 0.047), with no difference in the microleakage rate at dentin margins. Conclusion: Scotchbond Universal with “self-etching” procedure at dentin margin exhibited more acceptable performance compared to the Scotchbond Multi-Purpose with the two methods. PMID:28928782
Tarrass, Faissal; Benjelloun, Meryem; Benjelloun, Omar
2008-07-01
Water is a vital aspect of hemodialysis. During the procedure, large volumes of water are used to prepare dialysate and clean and reprocess machines. This report evaluates the technical and economic feasibility of recycling hemodialysis wastewater for irrigation uses, such as watering gardens and landscape plantings. Water characteristics, possible recycling methods, and production costs of treated water are discussed in terms of the quality of the generated wastewater. A cost-benefit analysis is also performed through comparison of intended cost with that of seawater desalination, which is widely used in irrigation.
Stem Cell Therapies for Treating Diabetes: Progress and Remaining Challenges.
Sneddon, Julie B; Tang, Qizhi; Stock, Peter; Bluestone, Jeffrey A; Roy, Shuvo; Desai, Tejal; Hebrok, Matthias
2018-06-01
Restoration of insulin independence and normoglycemia has been the overarching goal in diabetes research and therapy. While whole-organ and islet transplantation have become gold-standard procedures in achieving glucose control in diabetic patients, the profound lack of suitable donor tissues severely hampers the broad application of these therapies. Here, we describe current efforts aimed at generating a sustainable source of functional human stem cell-derived insulin-producing islet cells for cell transplantation and present state-of-the-art efforts to protect such cells via immune modulation and encapsulation strategies. Copyright © 2018. Published by Elsevier Inc.
Miyazaki, Shinsuke; Watanabe, Tomonori; Kajiyama, Takatsugu; Iwasawa, Jin; Ichijo, Sadamitsu; Nakamura, Hiroaki; Taniguchi, Hiroshi; Hirao, Kenzo; Iesaka, Yoshito
2017-12-01
Atrial fibrillation ablation is associated with substantial risks of silent cerebral events (SCEs) or silent cerebral lesions. We investigated which procedural processes during cryoballoon procedures carried a risk. Forty paroxysmal atrial fibrillation patients underwent pulmonary vein isolation using second-generation cryoballoons with single 28-mm balloon 3-minute freeze techniques. Microembolic signals (MESs) were monitored by transcranial Doppler throughout all procedures. Brain magnetic resonance imaging was obtained pre- and post-procedure in 34 patients (85.0%). Of 158 pulmonary veins, 152 (96.2%) were isolated using cryoablation, and 6 required touch-up radiofrequency ablation. A mean of 5.0±1.2 cryoballoon applications was applied, and the left atrial dwell time was 76.7±22.4 minutes. The total MES counts/procedures were 522 (426-626). Left atrial access and Flexcath sheath insertion generated 25 (11-44) and 34 (24-53) MESs. Using radiofrequency ablation for transseptal access increased the MES count during transseptal punctures. During cryoapplications, MES counts were greatest during first applications (117 [81-157]), especially after balloon stretch/deflations (43 [21-81]). Pre- and post-pulmonary vein potential mapping with Lasso catheters generated 57 (21-88) and 61 (36-88) MESs. Reinsertion of once withdrawn cryoballoons and subsequent applications produced 205 (156-310) MESs. Touch-up ablation generated 32 (19-62) MESs, whereas electric cardioversion generated no MESs. SCEs and silent cerebral lesions were detected in 11 (32.3%) and 4 (11.7%) patients, respectively. The patients with SCEs were older than those without; however, there were no significant factors associated with SCEs. A significant number of MESs and SCE/silent cerebral lesion occurrences were observed during second-generation cryoballoon ablation procedures. MESs were recorded during a variety of steps throughout the procedure; however, the majority occurred during phases with a high probability of gaseous emboli. © 2017 American Heart Association, Inc.
Audio CAPTCHA for SIP-Based VoIP
NASA Astrophysics Data System (ADS)
Soupionis, Yannis; Tountas, George; Gritzalis, Dimitris
Voice over IP (VoIP) introduces new ways of communication, while utilizing existing data networks to provide inexpensive voice communications worldwide as a promising alternative to the traditional PSTN telephony. SPam over Internet Telephony (SPIT) is one potential source of future annoyance in VoIP. A common way to launch a SPIT attack is the use of an automated procedure (bot), which generates calls and produces audio advertisements. In this paper, our goal is to design appropriate CAPTCHA to fight such bots. We focus on and develop audio CAPTCHA, as the audio format is more suitable for VoIP environments and we implement it in a SIP-based VoIP environment. Furthermore, we suggest and evaluate the specific attributes that audio CAPTCHA should incorporate in order to be effective, and test it against an open source bot implementation.
Estimating the costs of VA ambulatory care.
Phibbs, Ciaran S; Bhandari, Aman; Yu, Wei; Barnett, Paul G
2003-09-01
This article reports how we matched Common Procedure Terminology (CPT) codes with Medicare payment rates and aggregate Veterans Affairs (VA) budget data to estimate the costs of every VA ambulatory encounter. Converting CPT codes to encounter-level costs was more complex than a simple match of Medicare reimbursements to CPT codes. About 40 percent of the CPT codes used in VA, representing about 20 percent of procedures, did not have a Medicare payment rate and required other cost estimates. Reconciling aggregated estimated costs to the VA budget allocations for outpatient care produced final VA cost estimates that were lower than projected Medicare reimbursements. The methods used to estimate costs for encounters could be replicated for other settings. They are potentially useful for any system that does not generate billing data, when CPT codes are simpler to collect than billing data, or when there is a need to standardize cost estimates across data sources.
An automatic approach to exclude interlopers from asteroid families
NASA Astrophysics Data System (ADS)
Radović, Viktor; Novaković, Bojan; Carruba, Valerio; Marčeta, Dušan
2017-09-01
Asteroid families are a valuable source of information to many asteroid-related researches, assuming a reliable list of their members could be obtained. However, as the number of known asteroids increases fast it becomes more and more difficult to obtain a robust list of members of an asteroid family. Here, we are proposing a new approach to deal with the problem, based on the well-known hierarchical clustering method. An additional step in the whole procedure is introduced in order to reduce a so-called chaining effect. The main idea is to prevent chaining through an already identified interloper. We show that in this way a number of potential interlopers among family members is significantly reduced. Moreover, we developed an automatic online-based portal to apply this procedure, I.e. to generate a list of family members as well as a list of potential interlopers. The Asteroid Families Portal is freely available to all interested researchers.
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Screening procedure for airborne pollutants emitted from a high-tech industrial complex in Taiwan.
Wang, John H C; Tsai, Ching-Tsan; Chiang, Chow-Feng
2015-11-01
Despite the modernization of computational techniques, atmospheric dispersion modeling remains a complicated task as it involves the use of large amounts of interrelated data with wide variability. The continuously growing list of regulated air pollutants also increases the difficulty of this task. To address these challenges, this study aimed to develop a screening procedure for a long-term exposure scenario by generating a site-specific lookup table of hourly averaged dispersion factors (χ/Q), which could be evaluated by downwind distance, direction, and effective plume height only. To allow for such simplification, the average plume rise was weighted with the frequency distribution of meteorological data so that the prediction of χ/Q could be decoupled from the meteorological data. To illustrate this procedure, 20 receptors around a high-tech complex in Taiwan were selected. Five consecutive years of hourly meteorological data were acquired to generate a lookup table of χ/Q, as well as two regression formulas of plume rise as functions of downwind distance, buoyancy flux, and stack height. To calculate the concentrations for the selected receptors, a six-step Excel algorithm was programmed with four years of emission records and 10 most critical toxics were screened out. A validation check using Industrial Source Complex (ISC3) model with the same meteorological and emission data showed an acceptable overestimate of 6.7% in the average concentration of 10 nearby receptors. The procedure proposed in this study allows practical and focused emission management for a large industrial complex and can therefore be integrated into an air quality decision-making system. Copyright © 2015 Elsevier Ltd. All rights reserved.
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the `best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
How much can a single webcam tell to the operation of a water system?
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Castelletti, Andrea; Fedorov, Roman; Fraternali, Piero
2017-04-01
Recent advances in environmental monitoring are making a wide range of hydro-meteorological data available with a great potential to enhance understanding, modelling and management of environmental processes. Despite this progress, continuous monitoring of highly spatiotemporal heterogeneous processes is not well established yet, especially in inaccessible sites. In this context, the unprecedented availability of user-generated data on the web might open new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatiotemporally dense. In this work, we focus on snow and contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes representing a proxy of the snow-covered area. The operational value of the obtained virtual snow indexes is then assessed for a real-world water-management problem, where we use these indexes for informing the daily control of a regulated lake supplying water for multiple purposes. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance. Our procedure has the potential for complementing traditional snow-related information, minimizing costs and efforts for obtaining the virtual snow indexes and, at the same time, maximizing the portability of the procedure to several locations where such public images are available.
Using crowdsourced web content for informing water systems operations in snow-dominated catchments
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Castelletti, Andrea; Fedorov, Roman; Fraternali, Piero
2016-12-01
Snow is a key component of the hydrologic cycle in many regions of the world. Despite recent advances in environmental monitoring that are making a wide range of data available, continuous snow monitoring systems that can collect data at high spatial and temporal resolution are not well established yet, especially in inaccessible high-latitude or mountainous regions. The unprecedented availability of user-generated data on the web is opening new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatiotemporally dense. In this paper, we contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes representing a proxy of the snow-covered area. Our procedure has the potential for complementing traditional snow-related information, minimizing costs and efforts for obtaining the virtual snow indexes and, at the same time, maximizing the portability of the procedure to several locations where such public images are available. The operational value of the obtained virtual snow indexes is assessed for a real-world water-management problem, the regulation of Lake Como, where we use these indexes for informing the daily operations of the lake. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance.
Interactive 3d Landscapes on Line
NASA Astrophysics Data System (ADS)
Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.
2011-09-01
The paper describes challenges identified while developing browser embedded 3D landscape rendering applications, our current approach and work-flow and how recent development in browser technologies could affect. All the data, even if processed by optimization and decimation tools, result in very huge databases that require paging, streaming and Level-of-Detail techniques to be implemented to allow remote web based real time fruition. Our approach has been to select an open source scene-graph based visual simulation library with sufficient performance and flexibility and adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole Montegrotto Town has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures because it is particularly functional to create extensive and credible urban reconstructions. To create the archaeological sites we used optimized mesh acquired with laser scanning and photogrammetry techniques whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphic software like blender and 3ds Max. At the final stage, semi-automatic tools have been developed and used up to prepare and clusterise 3D models and scene graph routes for web publishing. Vegetation generators have also been used with the goal of populating the virtual scene to enhance the user perceived realism during the navigation experience. After the description of 3D modelling and optimization techniques, the paper will focus and discuss its results and expectations.
Adjoint Inversion for Extended Earthquake Source Kinematics From Very Dense Strong Motion Data
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Somala, S.; Lapusta, N.
2010-12-01
Addressing key open questions about earthquake dynamics requires a radical improvement of the robustness and resolution of seismic observations of large earthquakes. Proposals for a new generation of earthquake observation systems include the deployment of “community seismic networks” of low-cost accelerometers in urban areas and the extraction of strong ground motions from high-rate optical images of the Earth's surface recorded by a large space telescope in geostationary orbit. Both systems could deliver strong motion data with a spatial density orders of magnitude higher than current seismic networks. In particular, a “space seismometer” could sample the seismic wave field at a spatio-temporal resolution of 100 m, 1 Hz over areas several 100 km wide with an amplitude resolution of few cm/s in ground velocity. The amount of data to process would be immensely larger than what current extended source inversion algorithms can handle, which hampers the quantitative assessment of the cost-benefit trade-offs that can guide the practical design of the proposed earthquake observation systems. We report here on the development of a scalable source imaging technique based on iterative adjoint inversion and its application to the proof-of-concept of a space seismometer. We generated synthetic ground motions for M7 earthquake rupture scenarios based on dynamic rupture simulations on a vertical strike-slip fault embedded in an elastic half-space. A range of scenarios include increasing levels of complexity and interesting features such as supershear rupture speed. The resulting ground shaking is then processed accordingly to what would be captured by an optical satellite. Based on the resulting data, we perform source inversion by an adjoint/time-reversal method. The gradient of a cost function quantifying the waveform misfit between data and synthetics is efficiently obtained by applying the time-reversed ground velocity residuals as surface force sources, back-propagating onto the locked fault plane through a seismic wave simulation and recording the fault shear stress, which is the adjoint field of the fault slip-rate. Restricting the procedure to a single iteration is known as imaging. The source reconstructed by imaging reproduces the original forward model quite well in the shallow part of the fault. However, the deeper part of the earthquake source is not well reproduced, due to the lack of data on the side and bottom boundaries of our computational domain. To resolve this issue, we are implementing the complete iterative procedure and we will report on the convergence aspects of the adjoint iterations. Our current work is also directed towards addressing the lack of data on other boundaries of our domain and improving the source reconstruction by including teleseismic data for those boundaries and non-negativity constraints on the dominant slip-rate component.
Towards the Next Generation of Space Environment Prediction Capabilities.
NASA Astrophysics Data System (ADS)
Kuznetsova, M. M.
2015-12-01
Since its establishment more than 15 years ago, the Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) is serving as an assess point to expanding collection of state-of-the-art space environment models and frameworks as well as a hub for collaborative development of next generation space weather forecasting systems. In partnership with model developers and international research and operational communities the CCMC integrates new data streams and models from diverse sources into end-to-end space weather impacts predictive systems, identifies week links in data-model & model-model coupling and leads community efforts to fill those gaps. The presentation will highlight latest developments, progress in CCMC-led community-wide projects on testing, prototyping, and validation of models, forecasting techniques and procedures and outline ideas on accelerating implementation of new capabilities in space weather operations.
Generation of high-yield insulin producing cells from human bone marrow mesenchymal stem cells.
Jafarian, Arefeh; Taghikhani, Mohammad; Abroun, Saeid; Pourpak, Zahra; Allahverdi, Amir; Soleimani, Masoud
2014-07-01
Allogenic islet transplantation is a most efficient approach for treatment of diabetes mellitus. However, the scarcity of islets and long term need for an immunosuppressant limits its application. Recently, cell replacement therapies that generate of unlimited sources of β cells have been developed to overcome these limitations. In this study we have described a stage specific differentiation protocol for the generation of insulin producing islet-like clusters from human bone marrow mesenchymal stem cells (hBM-MSCs). This specific stepwise protocol induced differentiation of hMSCs into definitive endoderm, pancreatic endoderm and pancreatic endocrine cells that expressed of sox17, foxa2, pdx1, ngn3, nkx2.2, insulin, glucagon, somatostatin, pancreatic polypeptide, and glut2 transcripts respectively. In addition, immunocytochemical analysis confirmed protein expression of the above mentioned genes. Western blot analysis discriminated insulin from proinsulin in the final differentiated cells. In derived insulin producing cells (IPCs), secreted insulin and C-peptide was in a glucose dependent manner. We have developed a protocol that generates effective high-yield human IPCs from hBM-MSCs in vitro. These finding suggest that functional IPCs generated by this procedure can be used as a cell-based approach for insulin dependent diabetes mellitus.
10 CFR 39.55 - Tritium neutron generator target sources.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Tritium neutron generator target sources. 39.55 Section 39... Equipment § 39.55 Tritium neutron generator target sources. (a) Use of a tritium neutron generator target....77. (b) Use of a tritium neutron generator target source, containing quantities exceeding 1,110 GBg...
10 CFR 39.55 - Tritium neutron generator target sources.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 1 2011-01-01 2011-01-01 false Tritium neutron generator target sources. 39.55 Section 39... Equipment § 39.55 Tritium neutron generator target sources. (a) Use of a tritium neutron generator target....77. (b) Use of a tritium neutron generator target source, containing quantities exceeding 1,110 GBg...
10 CFR 39.55 - Tritium neutron generator target sources.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 1 2014-01-01 2014-01-01 false Tritium neutron generator target sources. 39.55 Section 39... Equipment § 39.55 Tritium neutron generator target sources. (a) Use of a tritium neutron generator target....77. (b) Use of a tritium neutron generator target source, containing quantities exceeding 1,110 GBg...
10 CFR 39.55 - Tritium neutron generator target sources.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 1 2013-01-01 2013-01-01 false Tritium neutron generator target sources. 39.55 Section 39... Equipment § 39.55 Tritium neutron generator target sources. (a) Use of a tritium neutron generator target....77. (b) Use of a tritium neutron generator target source, containing quantities exceeding 1,110 GBg...
10 CFR 39.55 - Tritium neutron generator target sources.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 1 2012-01-01 2012-01-01 false Tritium neutron generator target sources. 39.55 Section 39... Equipment § 39.55 Tritium neutron generator target sources. (a) Use of a tritium neutron generator target....77. (b) Use of a tritium neutron generator target source, containing quantities exceeding 1,110 GBg...
TOWARD THE DEVELOPMENT OF A CONSENSUS MATERIALS DATABASE FOR PRESSURE TECHNOLGY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, Robert W; Ren, Weiju
The ASME construction code books specify materials and fabrication procedures that are acceptable for pressure technology applications. However, with few exceptions, the materials properties provided in the ASME code books provide no statistics or other information pertaining to material variability. Such information is central to the prediction and prevention of failure events. Many sources of materials data exist that provide variability information but such sources do not necessarily represent a consensus of experts with respect to the reported trends that are represented. Such a need has been identified by the ASME Standards Technology, LLC and initial steps have been takenmore » to address these needs: however, these steps are limited to project-specific applications only, such as the joint DOE-ASME project on materials for Generation IV nuclear reactors. In contrast to light-water reactor technology, the experience base for the Generation IV nuclear reactors is somewhat lacking and heavy reliance must be placed on model development and predictive capability. The database for model development is being assembled and includes existing code alloys such as alloy 800H and 9Cr-1Mo-V steel. Ownership and use rights are potential barriers that must be addressed.« less
Generating Quasigroups: A Group Theory Investigation
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2011-01-01
A procedure for generating quasigroups from groups is described, and the properties of these derived quasigroups are investigated. Some practical examples of the procedure and related results are presented.
Oliveira, Michael L; Brandao, Geovani C; de Andrade, Jailson B; Ferreira, Sergio L C
2018-03-01
This work proposes a method for the determination of free and total sulfur(IV) compounds in coconut water samples, using the high-resolution continuum source molecular absorption spectrometry. It is based on the measurement of the absorbance signal of the SO 2 gas generate, which is resultant of the addition of hydrochloric acid solution on the sample containing the sulfating agent. The sulfite bound to the organic compounds is released by the addition of sodium hydroxide solution, before the generation of the SO 2 gas. The optimization step was performed using multivariate methodology involving volume, concentration and flow rate of hydrochloric acid. This method was established by the sum of the absorbances obtained in the three lines of molecular absorption of the SO 2 gas. This strategy allowed a procedure for the determination of sulfite with limits of detection and quantification of 0.36 and 1.21mgL -1 (for a sample volume of 10mL) and precision expressed as relative standard deviation of 5.4% and 6.4% for a coconut water sample containing 38.13 and 54.58mgL -1 of free and total sulfite, respectively. The method was applied for analyzing five coconut water samples from Salvador city, Brazil. The average contents varied from 13.0 to 55.4mgL -1 for free sulfite and from 24.7 to 66.9mgL -1 for total sulfur(IV) compounds. The samples were also analyzed employing the Ripper´s procedure, which is a reference method for the quantification of this additive. A statistical test at 95% confidence level demonstrated that there is no significant difference between the results obtained by the two methods. Copyright © 2017 Elsevier B.V. All rights reserved.
DeWerd, Larry A; Huq, M Saiful; Das, Indra J; Ibbott, Geoffrey S; Hanson, William F; Slowey, Thomas W; Williamson, Jeffrey F; Coursey, Bert M
2004-03-01
Low dose rate brachytherapy is being used extensively for the treatment of prostate cancer. As of September 2003, there are a total of thirteen 125I and seven 103Pd sources that have calibrations from the National Institute of Standards and Technology (NIST) and the Accredited Dosimetry Calibration Laboratories (ADCLs) of the American Association of Physicists in Medicine (AAPM). The dosimetry standards for these sources are traceable to the NIST wide-angle free-air chamber. Procedures have been developed by the AAPM Calibration Laboratory Accreditation Subcommittee to standardize quality assurance and calibration, and to maintain the dosimetric traceability of these sources to ensure accurate clinical dosimetry. A description of these procedures is provided to the clinical users for traceability purposes as well as to provide guidance to the manufacturers of brachytherapy sources and ADCLs with regard to these procedures.
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized multiple view stereo publicly benchmark for commercial satellite imagery has been released by the John Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore the method that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data to digital surface models. As a pre-procedure, we filter all the possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After the epipolar image pairs' generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply the median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, the completeness and the robustness are evaluated. The results show, that the point cloud reconstructs the surface with small structures and the fused DSM generated by our pipeline is accurate and robust.
MAGI: many-component galaxy initializer
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2018-04-01
Providing initial conditions is an essential procedure for numerical simulations of galaxies. The initial conditions for idealized individual galaxies in N-body simulations should resemble observed galaxies and be dynamically stable for time-scales much longer than their characteristic dynamical times. However, generating a galaxy model ab initio as a system in dynamical equilibrium is a difficult task, since a galaxy contains several components, including a bulge, disc, and halo. Moreover, it is desirable that the initial-condition generator be fast and easy to use. We have now developed an initial-condition generator for galactic N-body simulations that satisfies these requirements. The developed generator adopts a distribution-function-based method, and it supports various kinds of density models, including custom-tabulated inputs and the presence of more than one disc. We tested the dynamical stability of systems generated by our code, representing early- and late-type galaxies, with N = 2097 152 and 8388 608 particles, respectively, and we found that the model galaxies maintain their initial distributions for at least 1 Gyr. The execution times required to generate the two models were 8.5 and 221.7 seconds, respectively, which is negligible compared to typical execution times for N-body simulations. The code is provided as open-source software and is publicly and freely available at https://bitbucket.org/ymiki/magi.
Wang, Mian-Ying; Hurn, Jenae; Peng, Lin; Nowicki, Diane; Anderson, Gary
2011-01-01
The impact of Morinda citrifolia (noni) juice on fertility and offspring health in three generations of ICR mice was evaluated. The authenticity of the source of noni juice in this study was determined by chemical analysis of known marker compounds. Mice were supplied with 5% noni juice at gestation (day 0) until weaning (21 days postpartum). This procedure was followed through three generations of offspring. Three generations of control mice were also evaluated. There were no intergroup differences in gestation and fertility indices or malformation rates. However, litter sizes of the noni group in the first (F1), second (F2), and third (F3) generations were, respectively, 29.3% (P < 0.01), 19.8% (P < 0.01) and 19.6% (P < 0.01) larger than corresponding controls. Despite larger litter sizes, there were no decreases in fetal weight in any generation of the noni group. Further, maternal health and offspring viability in the noni groups were equal to or greater than the controls. The results of this study suggest that authentic noni juice has no adverse effect on fertility and fetal development, consistent with previous two-generation studies of noni fruit from French Polynesia, Indonesia, and Hainan , China. On the contrary, noni juice appears to facilitate pregnancy and fetal development.
An algebra for spatio-temporal information generation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer; Scheider, Simon; Gräler, Benedikt; Stasch, Christoph; Hinz, Matthias
2016-04-01
When we accept the premises of James Frew's laws of metadata (Frew's first law: scientists don't write metadata; Frew's second law: any scientist can be forced to write bad metadata), but also assume that scientists try to maximise the impact of their research findings, can we develop our information infrastructures such that useful metadata is generated automatically? Currently, sharing of data and software to completely reproduce research findings is becoming standard, e.g. in the Journal of Statistical Software [1]. The reproduction (e.g. R) scripts however convey correct syntax, but still limited semantics. We propose [2] a new, platform-neutral way to algebraically describe how data is generated, e.g. by observation, and how data is derived, e.g. by processing observations. It starts with forming functions composed of four reference system types (space, time, quality, entity), which express for instance continuity of objects over time, and continuity of fields over space and time. Data, which is discrete by definition, is generated by evaluating such functions at discrete space and time instances, or by evaluating a convolution (aggregation) over them. Derived data is obtained by inputting data to data derivation functions, which for instance interpolate, estimate, aggregate, or convert fields into objects and vice versa. As opposed to the traditional when, where and what semantics of data sets, our algebra focuses on describing how a data set was generated. We argue that it can be used to discover data sets that were derived from a particular source x, or derived by a particular procedure y. It may also form the basis for inferring meaningfulness of derivation procedures [3]. Current research focuses on automatically generating provenance documentation from R scripts. [1] http://www.jstatsoft.org/ (open access) [2] http://www.meaningfulspatialstatistics.org has the full paper (in review) [3] Stasch, C., S. Scheider, E. Pebesma, W. Kuhn, 2014. Meaningful Spatial Prediction and Aggregation. Environmental Modelling & Software, 51, 149-165 (open access)
Concise review: stem cell-derived erythrocytes as upcoming players in blood transfusion.
Zeuner, Ann; Martelli, Fabrizio; Vaglio, Stefania; Federici, Giulia; Whitsett, Carolyn; Migliaccio, Anna Rita
2012-08-01
Blood transfusions have become indispensable to treat the anemia associated with a variety of medical conditions ranging from genetic disorders and cancer to extensive surgical procedures. In developed countries, the blood supply is generally adequate. However, the projected decline in blood donor availability due to population ageing and the difficulty in finding rare blood types for alloimmunized patients indicate a need for alternative red blood cell (RBC) transfusion products. Increasing knowledge of processes that govern erythropoiesis has been translated into efficient procedures to produce RBC ex vivo using primary hematopoietic stem cells, embryonic stem cells, or induced pluripotent stem cells. Although in vitro-generated RBCs have recently entered clinical evaluation, several issues related to ex vivo RBC production are still under intense scrutiny: among those are the identification of stem cell sources more suitable for ex vivo RBC generation, the translation of RBC culture methods into clinical grade production processes, and the development of protocols to achieve maximal RBC quality, quantity, and maturation. Data on size, hemoglobin, and blood group antigen expression and phosphoproteomic profiling obtained on erythroid cells expanded ex vivo from a limited number of donors are presented as examples of the type of measurements that should be performed as part of the quality control to assess the suitability of these cells for transfusion. New technologies for ex vivo erythroid cell generation will hopefully provide alternative transfusion products to meet present and future clinical requirements. Copyright © 2012 AlphaMed Press.
Milagro Observations of Potential TeV Emitters
NASA Astrophysics Data System (ADS)
Abeysekara, Anushka; Linnemann, James
2012-03-01
We searched for point sources in Milagro sky maps at the locations in four catalogs of potential TeV emitting sources. Our candidates are selected from the Fermi 2FGL pulsars, Fermi 2FGL extragalactic sources, TeVCat extragalactic sources, and from the BL Lac TeV Candidate list published by Costamante and Ghisellini in 2002. The False Discovery Rate (FDR) statistical procedure is used to select the sources. The FDR procedure controls the fraction of false detections. Our results are presented in this talk.
Mayer, R. E.; Vierheilig, J.; Egle, L.; Reischer, G. H.; Saracevic, E.; Mach, R. L.; Kirschner, A. K. T.; Zessner, M.; Farnleitner, A. H.
2015-01-01
Because of high diurnal water quality fluctuations in raw municipal wastewater, the use of proportional autosampling over a period of 24 h at municipal wastewater treatment plants (WWTPs) to evaluate carbon, nitrogen, and phosphorus removal has become a standard in many countries. Microbial removal or load estimation at municipal WWTPs, however, is still based on manually recovered grab samples. The goal of this study was to establish basic knowledge regarding the persistence of standard bacterial fecal indicators and Bacteroidetes genetic microbial source tracking markers in municipal wastewater in order to evaluate their suitability for automated sampling, as the potential lack of persistence is the main argument against such procedures. Raw and secondary treated wastewater of municipal origin from representative and well-characterized biological WWTPs without disinfection (organic carbon and nutrient removal) was investigated in microcosm experiments at 5 and 21°C with a total storage time of 32 h (including a 24-h autosampling component and an 8-h postsampling phase). Vegetative Escherichia coli and enterococci, as well as Clostridium perfringens spores, were selected as indicators for cultivation-based standard enumeration. Molecular analysis focused on total (AllBac) and human-associated genetic Bacteroidetes (BacHum-UCD, HF183 TaqMan) markers by using quantitative PCR, as well as 16S rRNA gene-based next-generation sequencing. The microbial parameters showed high persistence in both raw and treated wastewater at 5°C under the storage conditions used. Surprisingly, and in contrast to results obtained with treated wastewater, persistence of the microbial markers in raw wastewater was also high at 21°C. On the basis of our results, 24-h autosampling procedures with 5°C storage conditions can be recommended for the investigation of fecal indicators or Bacteroidetes genetic markers at municipal WWTPs. Such autosampling procedures will contribute to better understanding and monitoring of municipal WWTPs as sources of fecal pollution in water resources. PMID:26002900
77 FR 13121 - Solar Energy Industries Association: Notice of Petition for Rulemaking
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-05
... its small generator interconnection rules and procedures \\1\\ for solar electric generation. \\1\\ Standardization of Small Generator Interconnection Agreements and Procedures, Order No. 2006, FERC Stats. & Regs... First Street NE., Washington, DC 20426. This filing is accessible on-line at http://www.ferc.gov , using...
Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
The accuracy of multiple window spatial resolution characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to 67Ga and a single bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and the 67Ga Method can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
Petri, Nils; Gassenmaier, Tobias; Allmendinger, Thomas; Flohr, Thomas; Voelker, Wolfram; Bley, Thorsten A
2017-02-01
To detect an in-stent restenosis, an invasive coronary angiography is commonly performed. Owing to the risk associated with this procedure, a non-invasive method to detect or exclude an in-stent restenosis is desirable. The purpose of this study was to evaluate the influence of cardiac motion on stent lumen visibility in a third-generation dual-source CT scanner (SOMATOM Force; Siemens Healthcare, Forchheim, Germany), employing a pulsatile heart model (CoroSim ® ; Mecora, Aachen, Germany). 13 coronary stents with a diameter of 3.0 mm were implanted in plastic tubes filled with a contrast medium and then fixed onto the pulsatile phantom heart model. The scans were performed while the heart model mimicked the heartbeat. Coronary stents were scanned in an orientation parallel to the scanner z-axis. The evaluation of the stents was performed by employing a medium sharp convolution kernel optimized for vascular imaging. The mean visible stent lumen was reduced from 65.6 ± 5.7% for the stents at rest to 60.8 ± 4.4% for the stents in motion (p-value: <0.001). While the difference in lumen visibility between stents in motion and at rest was significant, the use of this third-generation dual-source CT scanner enabled a high stent lumen visibility under the influence of cardiac motion. Whether this translates into a clinical setting has to be evaluated in further patient studies. Advances in knowledge: The employed modern CT scanner enables a high stent lumen visibility even under the influence of cardiac motion, which is important to detect or exclude an in-stent restenosis.
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.
Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen
2010-12-21
There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
ERIC Educational Resources Information Center
Hooshyar, Danial; Yousefi, Moslem; Lim, Heuiseok
2018-01-01
Automated content generation for educational games has become an emerging research problem, as manual authoring is often time consuming and costly. In this article, we present a procedural content generation framework that intends to produce educational game content from the viewpoint of both designer and user. This framework generates content by…
Minimization of power consumption during charging of superconducting accelerating cavities
NASA Astrophysics Data System (ADS)
Bhattacharyya, Anirban Krishna; Ziemann, Volker; Ruber, Roger; Goryashko, Vitaliy
2015-11-01
The radio frequency cavities, used to accelerate charged particle beams, need to be charged to their nominal voltage after which the beam can be injected into them. The standard procedure for such cavity filling is to use a step charging profile. However, during initial stages of such a filling process a substantial amount of the total energy is wasted in reflection for superconducting cavities because of their extremely narrow bandwidth. The paper presents a novel strategy to charge cavities, which reduces total energy reflection. We use variational calculus to obtain analytical expression for the optimal charging profile. Energies, reflected and required, and generator peak power are also compared between the charging schemes and practical aspects (saturation, efficiency and gain characteristics) of power sources (tetrodes, IOTs and solid state power amplifiers) are also considered and analysed. The paper presents a methodology to successfully identify the optimal charging scheme for different power sources to minimize total energy requirement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β{sub 11} is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. Thesemore » experiments are conducted on aluminum 2024 and 7075 specimens and a β{sub 11}{sup 7075}/β{sub 11}{sup 2024} measure of 1.363 agrees well with previous literature and earlier work.« less
He, Yu-Ming; Liu, Jin; Maier, Sebastian; Emmerling, Monika; Gerhardt, Stefan; Davanço, Marcelo; Srinivasan, Kartik; Schneider, Christian; Höfling, Sven
2017-07-20
Deterministic techniques enabling the implementation and engineering of bright and coherent solid-state quantum light sources are key for the reliable realization of a next generation of quantum devices. Such a technology, at best, should allow one to significantly scale up the number of implemented devices within a given processing time. In this work, we discuss a possible technology platform for such a scaling procedure, relying on the application of nanoscale quantum dot imaging to the pillar microcavity architecture, which promises to combine very high photon extraction efficiency and indistinguishability. We discuss the alignment technology in detail, and present the optical characterization of a selected device which features a strongly Purcell-enhanced emission output. This device, which yields an extraction efficiency of η = (49 ± 4) %, facilitates the emission of photons with (94 ± 2.7) % indistinguishability.
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images is introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to achieve the fusion progress. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in the visual quality.
Contribution of LPG-Derived Emissions to Air Pollution in the Metropolitan Area of Guadalajara City.
Schifter, Isaac; Magdaleno, Moises; Díaz, Luis; Melgarejo, Luis A; Barrera, Adrian; Krüger, Burkhard; Arriaga, José L; Lopez-Salinas, Esteban
2000-10-01
Measurements of hydrocarbon (HC) emissions generated by the use of liquefied petroleum gas (LPG) in the metropolitan area of Guadalajara City (MAG) are presented in this work. Based on measurements in the course of distribution, handling, and consumption, an estimated 4407 tons/yr are released into the atmosphere. The three most important contributors to LPG emissions were refilling of LPG-fueled vehicles and commercial and domestic consumption. The MAG shows a different contribution pattern of LPG emission sources compared with that of the metropolitan area of Mexico City (MAMC). These results show that each megacity has different sources of emissions, which provides more accurate strategies in the handling procedures for LPG to decrease the impact in O 3 levels. This work represents the first evaluation performed in Guadalajara City, based on current measurements, of the LPG contribution to polluting emissions.
Contribution of LPG-derived emissions to air pollution in the metropolitan area of Guadalajara City.
Schifter, I; Magdaleno, M; Díaz, L; Melgarejo, L A; Barrera, A; Krüger, B; Arriaga, J L; López-Salinas, E
2000-10-01
Measurements of hydrocarbon (HC) emissions generated by the use of liquefied petroleum gas (LPG) in the metropolitan area of Guadalajara City (MAG) are presented in this work. Based on measurements in the course of distribution, handling, and consumption, an estimated 4407 tons/yr are released into the atmosphere. The three most important contributors to LPG emissions were refilling of LPG-fueled vehicles and commercial and domestic consumption. The MAG shows a different contribution pattern of LPG emission sources compared with that of the metropolitan area of Mexico City (MAMC). These results show that each megacity has different sources of emissions, which provides more accurate strategies in the handling procedures for LPG to decrease the impact in O3 levels. This work represents the first evaluation performed in Guadalajara City, based on current measurements, of the LPG contribution to polluting emissions.
Towards Verification of Operational Procedures Using Auto-Generated Diagnostic Trees
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Lutz, Robyn; Patterson-Hine, Ann
2009-01-01
The design, development, and operation of complex space, lunar and planetary exploration systems require the development of general procedures that describe a detailed set of instructions capturing how mission tasks are performed. For both crewed and uncrewed NASA systems, mission safety and the accomplishment of the scientific mission objectives are highly dependent on the correctness of procedures. In this paper, we describe how to use the auto-generated diagnostic trees from existing diagnostic models to improve the verification of standard operating procedures. Specifically, we introduce a systematic method, namely the Diagnostic Tree for Verification (DTV), developed with the goal of leveraging the information contained within auto-generated diagnostic trees in order to check the correctness of procedures, to streamline the procedures in terms of reducing the number of steps or use of resources in them, and to propose alternative procedural steps adaptive to changing operational conditions. The application of the DTV method to a spacecraft electrical power system shows the feasibility of the approach and its range of capabilities
NASA Astrophysics Data System (ADS)
Palazón, Leticia; Gaspar, Leticia; Latorre, Borja; Blake, Will; Navas, Ana
2014-05-01
Spanish Pyrenean reservoirs are under pressure from high sediment yields in contributing catchments. Sediment fingerprinting approaches offer potential to quantify the contribution of different sediment sources, evaluate catchment erosion dynamics and develop management plans to tackle the reservoir siltation problems. The drainage basin of the Barasona reservoir (1509 km2), located in the Central Spanish Pyrenees, is an alpine-prealpine agroforest basin supplying sediments to the reservoir at an annual rate of around 350 t km-2 with implications for reservoir longevity. The climate is mountain type, wet and cold, with both Atlantic and Mediterranean influences. Steep slopes and the presence of deep and narrow gorges favour rapid runoff and large floods. The ability of geochemical fingerprint properties to discriminate between the sediment sources was investigated by conducting the nonparametric Kruskal-Wallis H-test and a stepwise discriminant function analysis (minimization of Wilk's lambda). This standard procedure selects potential fingerprinting properties as optimum composite fingerprint to characterize and discriminate between sediment sources to the reservoir. Then the contribution of each potential sediment source was assessed by applying a Monte Carlo mixing model to obtain source proportions for the Barasona reservoir sediment samples. The Monte Carlo mixing model was written in C programming language and designed to deliver a user-defined number possible solutions. A Combinatorial Principals method was used to identify the most probable solution with associated uncertainty based on source variability. The unique solution for each sample was characterized by the mean value and the standard deviation of the generated solutions and the lower goodness of fit value applied. This method is argued to guarantee a similar set of representative solutions in all unmixing cases based on likelihood of occurrence. Soil samples for the different potential sediment sources of the drainage basin were compared with samples from the reservoir using a range of different fingerprinting properties (i.e. mass activities of environmental radionuclides, elemental composition and magnetic susceptibility) analyzed in the < 63 μm sediment fraction. In this case, the 100 best results from 106 generated iterations were selected obtaining a goodness of fit higher than 0.76. The preliminary results using this new data processing methodology for samples collected in the reservoir allowed us to identify cultivated fields and badlands as main potential sources of sediments to the reservoir. These findings support the appropriate use of the fingerprinting methodology in a Spanish Pyrenees basin, which will enable us to better understand the basin sediment production of the Barasona reservoir.
Acquisition, representation and rule generation for procedural knowledge
NASA Technical Reports Server (NTRS)
Ortiz, Chris; Saito, Tim; Mithal, Sachin; Loftin, R. Bowen
1991-01-01
Current research into the design and continuing development of a system for the acquisition of procedural knowledge, its representation in useful forms, and proposed methods for automated C Language Integrated Production System (CLIPS) rule generation is discussed. The Task Analysis and Rule Generation Tool (TARGET) is intended to permit experts, individually or collectively, to visually describe and refine procedural tasks. The system is designed to represent the acquired knowledge in the form of graphical objects with the capacity for generating production rules in CLIPS. The generated rules can then be integrated into applications such as NASA's Intelligent Computer Aided Training (ICAT) architecture. Also described are proposed methods for use in translating the graphical and intermediate knowledge representations into CLIPS rules.
Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine
NASA Astrophysics Data System (ADS)
Clark, Tristan
A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, taking the resultant hydrogen back to shore to be used as an energy source. The basin efficiency (Power/thrust*velocity) of the Hydrokinetic Turbine (HTK) plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. The structural analysis of the blade is important, as the blade will undergo high pressure loads from the water. A procedure for analysis of a preliminary Hydrokinetic Turbine blade design is developed. The blade was designed by a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure was developed that provides detailed explanation for the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the mesh resolution, flow region size, and turbulence models. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
Improved synthesis of carbon-clad silica stationary phases.
Haidar Ahmad, Imad A; Carr, Peter W
2013-12-17
Previously, we described a novel method for cladding elemental carbon onto the surface of catalytically activated silica by a chemical vapor deposition (CVD) method using hexane as the carbon source and its use as a substitute for carbon-clad zirconia.1,2 In that method, we showed that very close to exactly one uniform monolayer of Al (III) was deposited on the silica by a process analogous to precipitation from homogeneous solution in order to preclude pore blockage. The purpose of the Al(III) monolayer is to activate the surface for subsequent CVD of carbon. In this work, we present an improved procedure for preparing the carbon-clad silica (denoted CCSi) phases along with a new column packing process. The new method yields CCSi phases having better efficiency, peak symmetry, and higher retentivity compared to carbon-clad zirconia. The enhancements were achieved by modifying the original procedure in three ways: First, the kinetics of the deposition of Al(III) were more stringently controlled. Second, the CVD chamber was flushed with a mixture of hydrogen and nitrogen gas during the carbon cladding process to minimize generation of polar sites by oxygen incorporation. Third, the fine particles generated during the CVD process were exhaustively removed by flotation in an appropriate solvent.
Procedure to Generate the MPACT Multigroup Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog
The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation for the light water reactor. The objective of this document is focused on reviewing the current procedure to generate the MPACT multigroup library. Detailed methodologies and procedures are included in this document for further discussion to improve the MPACT multigroup library.
2011-11-21
CAPE CANAVERAL, Fla. -- Members of the media view the Radiological Control Center (RADCC) at NASA's Kennedy Space Center in Florida during a tour regarding safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- Randy Scott, director of Kennedy Space Center's Radiological Control Center (RADCC), speaks to media during a tour regarding safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. Behind him is Steve Homann, senior advisor for the Department of Energy. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- Members of the media take a tour of the Radiological Control Center (RADCC) at NASA's Kennedy Space Center in Florida. The tour focused on safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- Surrounded by monitors and consoles, Randy Scott, director of Kennedy Space Center's Radiological Control Center (RADCC), speaks to media during a tour regarding safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- Steve Homann, senior advisor for the Department of Energy, speaks to media during a tour of the Radiological Control Center (RADCC) at NASA's Kennedy Space Center in Florida. The tour focused on safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- Several instruments are displayed for the media during a tour of the Radiological Control Center (RADCC) at NASA's Kennedy Space Center in Florida. The tour focused on safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- During a tour of the Radiological Control Center (RADCC) at NASA's Kennedy Space Center in Florida, members of the media listen as Ryan Bechtel of the U.S. Department of Energy explains safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
2011-11-21
CAPE CANAVERAL, Fla. -- Steve Homann, senior advisor for the Department of Energy, speaks to media during a tour of the Radiological Control Center (RADCC) at NASA's Kennedy Space Center in Florida. The tour focused on safety equipment and procedures for the upcoming launch of the Mars Science Laboratory (MSL) mission. The MSL spacecraft includes a multi-mission radioisotope thermoelectric generator (MMRTG) that will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Frankie Martin
LACIE performance predictor FOC users manual
NASA Technical Reports Server (NTRS)
1976-01-01
The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.
Merlin: Computer-Aided Oligonucleotide Design for Large Scale Genome Engineering with MAGE.
Quintin, Michael; Ma, Natalie J; Ahmed, Samir; Bhatia, Swapnil; Lewis, Aaron; Isaacs, Farren J; Densmore, Douglas
2016-06-17
Genome engineering technologies now enable precise manipulation of organism genotype, but can be limited in scalability by their design requirements. Here we describe Merlin ( http://merlincad.org ), an open-source web-based tool to assist biologists in designing experiments using multiplex automated genome engineering (MAGE). Merlin provides methods to generate pools of single-stranded DNA oligonucleotides (oligos) for MAGE experiments by performing free energy calculation and BLAST scoring on a sliding window spanning the targeted site. These oligos are designed not only to improve recombination efficiency, but also to minimize off-target interactions. The application further assists experiment planning by reporting predicted allelic replacement rates after multiple MAGE cycles, and enables rapid result validation by generating primer sequences for multiplexed allele-specific colony PCR. Here we describe the Merlin oligo and primer design procedures and validate their functionality compared to OptMAGE by eliminating seven AvrII restriction sites from the Escherichia coli genome.
Laboratory procedures to generate viral metagenomes.
Thurber, Rebecca V; Haynes, Matthew; Breitbart, Mya; Wegley, Linda; Rohwer, Forest
2009-01-01
This collection of laboratory protocols describes the steps to collect viruses from various samples with the specific aim of generating viral metagenome sequence libraries (viromes). Viral metagenomics, the study of uncultured viral nucleic acid sequences from different biomes, relies on several concentration, purification, extraction, sequencing and heuristic bioinformatic methods. No single technique can provide an all-inclusive approach, and therefore the protocols presented here will be discussed in terms of hypothetical projects. However, care must be taken to individualize each step depending on the source and type of viral-particles. This protocol is a description of the processes we have successfully used to: (i) concentrate viral particles from various types of samples, (ii) eliminate contaminating cells and free nucleic acids and (iii) extract, amplify and purify viral nucleic acids. Overall, a sample can be processed to isolate viral nucleic acids suitable for high-throughput sequencing in approximately 1 week.
In-situ Tapering of Chalcogenide Fiber for Mid-infrared Supercontinuum Generation
Rudy, Charles W.; Marandi, Alireza; Vodopyanov, Konstantin L.; Byer, Robert L.
2013-01-01
Supercontinuum generation (SCG) in a tapered chalcogenide fiber is desirable for broadening mid-infrared (or mid-IR, roughly the 2-20 μm wavelength range) frequency combs1, 2 for applications such as molecular fingerprinting, 3 trace gas detection, 4 laser-driven particle acceleration, 5 and x-ray production via high harmonic generation. 6 Achieving efficient SCG in a tapered optical fiber requires precise control of the group velocity dispersion (GVD) and the temporal properties of the optical pulses at the beginning of the fiber, 7 which depend strongly on the geometry of the taper. 8 Due to variations in the tapering setup and procedure for successive SCG experiments-such as fiber length, tapering environment temperature, or power coupled into the fiber, in-situ spectral monitoring of the SCG is necessary to optimize the output spectrum for a single experiment. In-situ fiber tapering for SCG consists of coupling the pump source through the fiber to be tapered to a spectral measurement device. The fiber is then tapered while the spectral measurement signal is observed in real-time. When the signal reaches its peak, the tapering is stopped. The in-situ tapering procedure allows for generation of a stable, octave-spanning, mid-IR frequency comb from the sub harmonic of a commercially available near-IR frequency comb. 9 This method lowers cost due to the reduction in time and materials required to fabricate an optimal taper with a waist length of only 2 mm. The in-situ tapering technique can be extended to optimizing microstructured optical fiber (MOF) for SCG10 or tuning of the passband of MOFs, 11 optimizing tapered fiber pairs for fused fiber couplers12 and wavelength division multiplexers (WDMs), 13 or modifying dispersion compensation for compression or stretching of optical pulses.14-16 PMID:23748947
Petters, Oliver; Schmidt, Christian; Henkelmann, Ralf; Pieroh, Philipp; Hütter, Gero; Marquass, Bastian; Aust, Gabriela; Schulz, Ronny M
2018-04-15
Due to the limited self-healing capacity of articular cartilage, innovative, regenerative approaches are of particular interest. The use of two-stage procedures utilizing in vitro-expanded mesenchymal stromal cells (MSCs) from various cell sources requires good manufacturing practice-compliant production, a process with high demands on time, staffing, and financial resources. In contrast, one- stage procedures are directly available, but need a safe enrichment of potent MSCs. CD271 is a surface marker known to marking the majority of native MSCs in bone marrow (BM). In this study, the feasibility of generating a single-stage cartilage graft of enriched CD271 + BM-derived mononuclear cells (MNCs) without in vitro monolayer expansion from eight healthy donors was investigated. Cartilage grafts were generated by magnetic-activated cell sorting and separated cells were directly transferred into collagen type I hydrogels, followed by 3D proliferation and differentiation period of CD271 + , CD271 - , or unseparated MNCs. CD271 + MNCs showed the highest proliferation rate, cell viability, sulfated glycosaminoglycan deposition, and cartilage marker expression compared to the CD271 - or unseparated MNC fractions in 3D culture. Analysis according to the minimal criteria of the International Society for Cellular Therapy highlighted a 66.8-fold enrichment of fibroblast colony-forming units in CD271 + MNCs and the only fulfillment of the MSC marker profile compared to unseparated MNCs. In summary, CD271 + MNCs are capable of generating adequate articular cartilage grafts presenting high cell viability and notable chondrogenic matrix deposition in a CE-marked collagen type I hydrogel, which can obviate the need for an initial monolayer expansion.
Chasing the TIRS ghosts: calibrating the Landsat 8 thermal bands
NASA Astrophysics Data System (ADS)
Schott, John R.; Gerace, Aaron; Raqueno, Nina; Ientilucci, Emmett; Raqueno, Rolando; Lunsford, Allen W.
2014-10-01
The Thermal Infrared Sensor (TIRS) on board Landsat 8 has exhibited a number of anomalous characteristics that have made it difficult to calibrate. These anomalies include differences in the radiometric appearance across the blackbody pre- and post-launch, variations in the cross calibration ratios between detectors that overlap on adjacent arrays (resulting in banding) and bias errors in the absolute calibration that can change spatially/temporally. Several updates to the TIRS calibration procedures were made in the months after launch to attempt to mitigate the impact of these anomalies on flat fielding (cosmetic removal of banding and striping) and mean level bias correction. As a result, banding and striping variations have been reduced but not eliminated and residual bias errors in band 10 should be less than 2 degrees for most targets but can be significantly more in some cases and are often larger in band 11. These corrections have all been essentially ad hoc without understanding or properly accounting for the source of the anomalies, which were, at the time unknown. This paper addresses the procedures that have been undertaken to; better characterize the nature of these anomalies, attempt to identify the source(s) of the anomalies, quantify the phenomenon responsible for them, and develop correction procedures to more effectively remove the impacts on the radiometric products. Our current understanding points to all of the anomalies being the result of internal reflections of energy from outside the target detector's field-of-view, and often outside the telescope field-of-view, onto the target detector. This paper discusses how various members of the Landsat calibration team discovered the clues that led to how; these "ghosts" were identified, they are now being characterized, and their impact can hopefully eventually be corrected. This includes use of lunar scans to generate initial maps of influence regions, use of long path overlap ratios to explore sources of change and use of variations in bias calculated from truth sites to quantify influences from the surround on absolute bias errors.
Code of Federal Regulations, 2010 CFR
2010-04-01
... obtaining exempt wholesale generator and foreign utility company status. 366.7 Section 366.7 Conservation of... THE PUBLIC UTILITY HOLDING COMPANY ACT OF 2005, FEDERAL POWER ACT AND NATURAL GAS ACT BOOKS AND... Procedures for obtaining exempt wholesale generator and foreign utility company status. (a) Self...
Code of Federal Regulations, 2010 CFR
2010-10-01
... CONTRACTOR QUALIFICATIONS Organizational and Consultant Conflicts of Interest 9.506 Procedures. (a) If... conflicts of interest or to develop recommended actions, contracting officers should first seek the information from within the Government or from other readily available sources. Government sources include the...
Surgeon and type of anesthesia predict variability in surgical procedure times.
Strum, D P; Sampson, A R; May, J H; Vargas, L G
2000-05-01
Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated with variability in surgical procedure times, knowledge of which may ultimately be used to improve surgical scheduling and operating room utilization.
Designing Flightdeck Procedures: Literature Resources
NASA Technical Reports Server (NTRS)
Feldman, Jolene; Barshi, Immanuel; Degani, Asaf; Loukopoulou, Loukia; Mauro, Robert
2017-01-01
This technical publication contains the titles, abstracts, summaries, descriptions, and/or annotations of available literature sources on procedure design and development, requirements, and guidance. It is designed to provide users with an easy access to available resources on the topic of procedure design, and with a sense of the contents of these sources. This repository of information is organized into the following publication sources: Research (e.g., journal articles, conference proceedings), Manufacturers' (e.g., operation manuals, newsletters), and Regulatory and/or Government (e.g., advisory circulars, reports). An additional section contains synopses of Accident/Incident Reports involving procedures. This work directly supports a comprehensive memorandum by Barshi, Mauro, Degani, & Loukopoulou (2016) that summarizes the results of a multi-year project, partially funded by the FAA, to develop technical reference materials that support guidance on the process of developing cockpit procedures (see "Designing Flightdeck Procedures" https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160013263.pdf). An extensive treatment of this topic is presented in a forthcoming book by the same authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.
2010-01-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind and solar power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation), and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the loadmore » and wind/solar forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially, with the increasing penetration of renewable resources) whether the system would be actually able to meet the conventional generation requirements within the look-ahead horizon, what the additional balancing efforts would be needed as we get closer to the real time, and what additional costs would be incurred by those needs. To improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, in some extent, into the regulation processes. It is also important to address the uncertainty problem comprehensively by including all sources of uncertainty (load, intermittent generation, generators’ forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account. The latter unique features make this work a significant step forward toward the objective of incorporating of wind, solar, load, and other uncertainties into power system operations. Currently, uncertainties associated with wind and load forecasts, as well as uncertainties associated with random generator outages and unexpected disconnection of supply lines, are not taken into account in power grid operation. Thus, operators have little means to weigh the likelihood and magnitude of upcoming events of power imbalance. In this project, funded by the U.S. Department of Energy (DOE), a framework has been developed for incorporating uncertainties associated with wind and load forecast errors, unpredicted ramps, and forced generation disconnections into the energy management system (EMS) as well as generation dispatch and commitment applications. A new approach to evaluate the uncertainty ranges for the required generation performance envelope including balancing capacity, ramping capability, and ramp duration has been proposed. The approach includes three stages: forecast and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence levels. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis, incorporating all sources of uncertainties of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the “flying brick” technique has been developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation algorithm has been developed to validate the accuracy of the confidence intervals.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, M.; Doom, L.; Hseuh, H.
2009-09-13
National Synchrotron Light Source II, being constructed at Brookhaven, is a 3-GeV, 500 mA, 3rd generation synchrotron radiation facility with ultra low emittance electron beams. The storage ring vacuum system has a circumference of 792 m and consists of over 250 vacuum chambers with a simulated average operating pressure of less than 1 x 10{sup -9} mbar. A summary of the update design of the vacuum system including girder supports of the chambers, gauges, vacuum pumps, bellows, beam position monitors and simulation of the average pressure will be shown. A brief description of the techniques and procedures for cleaning andmore » mounting the chambers are given.« less
Infrasound associated with 2004-2005 large Sumatra earthquakes and tsunami
NASA Astrophysics Data System (ADS)
Le Pichon, A.; Herry, P.; Mialle, P.; Vergoz, J.; Brachet, N.; Garcés, M.; Drob, D.; Ceranna, L.
2005-10-01
Large earthquakes that occurred in the Sumatra region in 2004 and 2005 generated acoustic waves recorded by the Diego Garcia infrasound array. The Progressive Multi-Channel Correlation (PMCC) analysis is performed to detect the seismic and infrasound signals associated with these events. The study is completed by an inverse location procedure that permitted reconstruction of the source location of the infrasonic waves. The results show that ground motion near the epicenter and vibrations of nearby land masses efficiently produced infrasound. The analysis also reveals unique evidence of long period pressure waves from the tsunami earthquake (M9.0) of December 26, 2004.
NASA Astrophysics Data System (ADS)
Hayat, T.; Ahmad, Salman; Ijaz Khan, M.; Alsaedi, A.
2018-05-01
In this article we investigate the flow of Sutterby liquid due to rotating stretchable disk. Mass and heat transport are analyzed through Brownian diffusion and thermophoresis. Further the effects of magnetic field, chemical reaction and heat source are also accounted. We employ transformation procedure to obtain a system of nonlinear ODE’s. This system is numerically solved by Built-in-Shooting method. Impacts of different involved parameter on velocity, temperature and concentration are described. Velocity, concentration and temperature gradients are numerically computed. Obtained results show that velocity is reduced through material parameter. Temperature and concentration are enhanced with thermophoresis parameter.
Clinical uses of liver stem cells.
Dan, Yock Young
2012-01-01
Liver transplantation offers a definitive cure for many liver and metabolic diseases. However, the complex invasive procedure and paucity of donor liver graft organs limit its clinical applicability. Liver stem cells provide a potentially limitless source of cells that would be useful for a variety of clinical applications. These stem cells or hepatocytes generated from them can be used in cellular transplantation, bioartificial liver devices and drug testing in the development of new drugs. In this chapter, we review the technical aspects of clinical applications of liver stem cells and the progress made to date in the clinical setting. The difficulties and challenges of realizing the potential of these cells are discussed.
NASA Astrophysics Data System (ADS)
Reolon, David; Jacquot, Maxime; Verrier, Isabelle; Brun, Gérald; Veillas, Colette
2006-12-01
In this paper we propose group refractive index measurement with a spectral interferometric set-up using a broadband supercontinuum generated in an air-silica Microstructured Optical Fibre (MOF) pumped with a picosecond pulsed microchip laser. This source authorizes high fringes visibility for dispersion measurements by Spectroscopic Analysis of White Light Interferograms (SAWLI). Phase calculation is assumed by a wavelet transform procedure combined with a curve fit of the recorded channelled spectrum intensity. This approach provides high resolution and absolute group refractive index measurements along one line of the sample by recording a single 2D spectral interferogram without mechanical scanning.
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
ERIC Educational Resources Information Center
Bowler, Dermot M.; Gaigg, Sebastian B.; Gardiner, John M.
2015-01-01
Adults with autism spectrum disorder (ASD) show intact recognition (supported procedure) but impaired recall (unsupported procedure) of incidentally-encoded context. Because this has not been demonstrated for temporal source, we compared the temporal and spatial source memory of adults with ASD and verbally matched typical adults. Because of…
NASA Technical Reports Server (NTRS)
Moore, Dwight G; Mason, Mary A; Harrison, William N
1953-01-01
When porcelain enamels or vitreous-type ceramic coatings are applied to ferrous metals, there is believed to be an evolution of hydrogen gas both during and after the firing operation. At elevated temperatures rapid evolution may result in blistering while if hydrogen becomes trapped in the steel during the rapid cooling following the firing operation gas pressures may be generated at the coating-metal interface and flakes of the coating literally blown off the metal. To determine experimentally the relative importance of the principal sources of the hydrogen causing the defects, a procedure was devised in which heavy hydrogen (deuterium) was substituted in turn for regular hydrogen in each of five possible hydrogen-producing operations in the coating process. The findings of the study were as follows: (1) the principal source of the defect-producing hydrogen was the dissolved water present in the enamel frit that was incorporated into the coating. (2) the acid pickling, the milling water, the chemically combined water in the clay, and the quenching water were all minor sources of defect-producing hydrogen under the test conditions used. Confirming experiments showed that fishscaling could be eliminated by using a water-free coating.
NASA Astrophysics Data System (ADS)
Yang, Tao; Zhang, Qi; Hao, Yue; Zhou, Xin-hui; Yi, Ming-dong; Wei, Wei; Huang, Wei; Li, Xing-ao
2017-10-01
A multiple-input multiple-output visible light communication (VLC) system based on disorder dispersion components is presented. Instead of monochromatic sources and large size photodetectors used in the traditional VLC systems, broadband sources with different spectra act as the transmitters and a compact imaging chip sensor accompanied by a disorder dispersion component and a calculating component serve as the receivers in the proposed system. This system has the merits of small size, more channels, simple structure, easy integration, and low cost. Simultaneously, the broadband sources are suitable to act as illumination sources for their white color. A regularized procedure is designed to solve a matrix equation for decoding the signals at the receivers. A proof-of-concept experiment using on-off keying modulation has been done to prove the feasibility of the design. The experimental results show that the signals decoded by the receivers fit well with those generated from the transmitters, but the bit error ratio is increased with the number of the signal channels. The experimental results can be further improved using a high-speed charge-coupled device, decreasing noises, and increasing the distance between the transmitters and the receivers.
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may taken into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating precision of laboratory methods for various replication schemes and developing effective quality control-checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma by using high-performance liquid chromatography.
Neutrinos as a diagnostic of cosmic ray galactic-extragalactic transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlers, Markus; Ringwald, Andreas; Anchordoqui, Luis A.
2005-07-15
Motivated by a recent change in viewing the onset of the extragalactic component in the cosmic ray spectrum, we have fitted the observed data down to 10{sup 8.6} GeV and have obtained the corresponding power emissivity. This transition energy is well below the threshold for resonant p{gamma} absorption on the cosmic microwave background, and thus source evolution is an essential ingredient in the fitting procedure. Two-parameter fits in the spectral and redshift evolution indices show that a standard Fermi E{sub i}{sup -2} source spectrum is excluded at larger than 95% confidence level (CL). Armed with the primordial emissivity, we followmore » Waxman and Bahcall to derive the associated neutrino flux on the basis of optically thin sources. For pp interactions as the generating mechanism, the neutrino flux exceeds the AMANDA-B10 90% CL upper limits. In the case of p{gamma} dominance, the flux is consistent with AMANDA-B10 data. In the new scenario the source neutrino flux is considerably enhanced, especially below 10{sup 9} GeV. Should data from AMANDA-II prove consistent with the model, we show that IceCube can measure the characteristic power law of the neutrino spectrum, and thus provide a window on the source dynamics.« less
UrQt: an efficient software for the Unsupervised Quality trimming of NGS data.
Modolo, Laurent; Lerat, Emmanuelle
2015-04-29
Quality control is a necessary step of any Next Generation Sequencing analysis. Although customary, this step still requires manual interventions to empirically choose tuning parameters according to various quality statistics. Moreover, current quality control procedures that provide a "good quality" data set, are not optimal and discard many informative nucleotides. To address these drawbacks, we present a new quality control method, implemented in UrQt software, for Unsupervised Quality trimming of Next Generation Sequencing reads. Our trimming procedure relies on a well-defined probabilistic framework to detect the best segmentation between two segments of unreliable nucleotides, framing a segment of informative nucleotides. Our software only requires one user-friendly parameter to define the minimal quality threshold (phred score) to consider a nucleotide to be informative, which is independent of both the experiment and the quality of the data. This procedure is implemented in C++ in an efficient and parallelized software with a low memory footprint. We tested the performances of UrQt compared to the best-known trimming programs, on seven RNA and DNA sequencing experiments and demonstrated its optimality in the resulting tradeoff between the number of trimmed nucleotides and the quality objective. By finding the best segmentation to delimit a segment of good quality nucleotides, UrQt greatly increases the number of reads and of nucleotides that can be retained for a given quality objective. UrQt source files, binary executables for different operating systems and documentation are freely available (under the GPLv3) at the following address: https://lbbe.univ-lyon1.fr/-UrQt-.html .
Song, Kwangsun; Han, Jung Hyun; Yang, Hyung Chae; Nam, Kwang Il; Lee, Jongho
2017-06-15
Medical electronic implants can significantly improve people's health and quality of life. These implants are typically powered by batteries, which usually have a finite lifetime and therefore must be replaced periodically using surgical procedures. Recently, subdermal solar cells that can generate electricity by absorbing light transmitted through skin have been proposed as a sustainable electricity source to power medical electronic implants in bodies. However, the results to date have been obtained with animal models. To apply the technology to human beings, electrical performance should be characterized using human skin covering the subdermal solar cells. In this paper, we present electrical performance results (up to 9.05mW/cm 2 ) of the implantable solar cell array under 59 human skin samples isolated from 10 cadavers. The results indicate that the power densities depend on the thickness and tone of the human skin, e.g., higher power was generated under thinner and brighter skin. The generated power density is high enough to operate currently available medical electronic implants such as pacemakers that require tens of microwatt. Copyright © 2016 Elsevier B.V. All rights reserved.
Das, Ravi K.; Gale, Grace; Hennessy, Vanessa; Kamboj, Sunjeev K.
2018-01-01
Maladaptive reward memories (MRMs) can become unstable following retrieval under certain conditions, allowing their modification by subsequent new learning. However, robust (well-rehearsed) and chronologically old MRMs, such as those underlying substance use disorders, do not destabilize easily when retrieved. A key determinate of memory destabilization during retrieval is prediction error (PE). We describe a retrieval procedure for alcohol MRMs in hazardous drinkers that specifically aims to maximize the generation of PE and therefore the likelihood of MRM destabilization. The procedure requires explicitly generating the expectancy of alcohol consumption and then violating this expectancy (withholding alcohol) following the presentation of a brief set of prototypical alcohol cue images (retrieval + PE). Control procedures involve presenting the same cue images, but allow alcohol to be consumed, generating minimal PE (retrieval-no PE) or generate PE without retrieval of alcohol MRMs, by presenting orange juice cues (no retrieval + PE). Subsequently, we describe a multisensory disgust-based counterconditioning procedure to probe MRM destabilization by re-writing alcohol cue-reward associations prior to reconsolidation. This procedure pairs alcohol cues with images invoking pathogen disgust and an extremely bitter-tasting solution (denatonium benzoate), generating gustatory disgust. Following retrieval + PE, but not no retrieval + PE or retrieval-no PE, counterconditioning produces evidence of MRM rewriting as indexed by lasting reductions in alcohol cue valuation, attentional capture, and alcohol craving. PMID:29364255
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.
2010-09-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation) and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and windmore » forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially, with the increasing penetration of renewable resources) whether the system would be actually able to meet the conventional generation requirements within the look-ahead horizon, what the additional balancing efforts would be needed as we get closer to the real time, and what additional costs would be incurred by those needs. In order to improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, in some extent, into the regulation processes. It is also important to address the uncertainty problem comprehensively, by including all sources of uncertainty (load, intermittent generation, generators’ forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account. The latter unique features make this work a significant step forward toward the objective of incorporating of wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify system balancing reserve requirement based on a desired system performance levels, identify system “breaking points”, where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining to these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.« less
Harańczyk, Maciej; Gutowski, Maciej
2007-01-01
We describe a procedure of finding low-energy tautomers of a molecule. The procedure consists of (i) combinatorial generation of a library of tautomers, (ii) screening based on the results of geometry optimization of initial structures performed at the density functional level of theory, and (iii) final refinement of geometry for the top hits at the second-order Möller-Plesset level of theory followed by single-point energy calculations at the coupled cluster level of theory with single, double, and perturbative triple excitations. The library of initial structures of various tautomers is generated with TauTGen, a tautomer generator program. The procedure proved to be successful for these molecular systems for which common chemical knowledge had not been sufficient to predict the most stable structures.
Assessment of the MPACT Resonance Data Generation Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L.
Currently, heterogeneous models are being used to generate resonance self-shielded cross-section tables as a function of background cross sections for important nuclides such as 235U and 238U by performing the CENTRM (Continuous Energy Transport Model) slowing down calculation with the MOC (Method of Characteristics) spatial discretization and ESSM (Embedded Self-Shielding Method) calculations to obtain background cross sections. And then the resonance self-shielded cross section tables are converted into subgroup data which are to be used in estimating problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed and thus resonance data have beenmore » generated and validated by benchmark calculations, assessment has never been performed to review if the resonance data are properly generated by the procedure and utilized in MPACT. This study focuses on assessing the procedure and a proper use in MPACT.« less
Von Benda-Beckmann, Alexander M; Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; Miller, Patrick J O; Tyack, Peter L; Ainslie, Michael A
2014-02-01
Ramp-up or soft-start procedures (i.e., gradual increase in the source level) are used to mitigate the effect of sonar sound on marine mammals, although no one to date has tested whether ramp-up procedures are effective at reducing the effect of sound on marine mammals. We investigated the effectiveness of ramp-up procedures in reducing the area within which changes in hearing thresholds can occur. We modeled the level of sound killer whales (Orcinus orca) were exposed to from a generic sonar operation preceded by different ramp-up schemes. In our model, ramp-up procedures reduced the risk of killer whales receiving sounds of sufficient intensity to affect their hearing. The effectiveness of the ramp-up procedure depended strongly on the assumed response threshold and differed with ramp-up duration, although extending the duration of the ramp up beyond 5 min did not add much to its predicted mitigating effect. The main factors that limited effectiveness of ramp up in a typical antisubmarine warfare scenario were high source level, rapid moving sonar source, and long silences between consecutive sonar transmissions. Our exposure modeling approach can be used to evaluate and optimize mitigation procedures. © 2013 Society for Conservation Biology.
Meng, Xiangpeng; Chan, Wan
2017-02-15
Previous studies have established that 2-alkylcyclobutanones (2-ACBs) are unique radiolytic products in lipid-containing foods that could only be formed through exposure to ionizing radiation, but not by any other means of physical/heat treatment methods. Therefore, 2-ACBs are currently the marker molecules required by the European Committee for Standardization to be used to identify foods irradiated with ionizing irradiation. Using a spectrum of state-of-the-art analytical instruments, we present in this study for the first time that the generation of 2-ACBs was also possible when fatty acids and triglycerides are exposed to a non-ionizing, short-wavelength ultraviolet (UV-C) light source. An irradiation dosage-dependent formation of 2-ACBs was also observed in UV-C irradiated fatty acids, triglycerides, corn oil, and pork samples. With UV-C irradiation becoming an increasingly common food treatment procedure, it is anticipated that the results from this study will alert food scientists and regulatory officials to a potential new source for 2-ACBs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Measurement of Quantum Interference in a Silicon Ring Resonator Photon Source.
Steidle, Jeffrey A; Fanto, Michael L; Preble, Stefan F; Tison, Christopher C; Howland, Gregory A; Wang, Zihao; Alsing, Paul M
2017-04-04
Silicon photonic chips have the potential to realize complex integrated quantum information processing circuits, including photon sources, qubit manipulation, and integrated single-photon detectors. Here, we present the key aspects of preparing and testing a silicon photonic quantum chip with an integrated photon source and two-photon interferometer. The most important aspect of an integrated quantum circuit is minimizing loss so that all of the generated photons are detected with the highest possible fidelity. Here, we describe how to perform low-loss edge coupling by using an ultra-high numerical aperture fiber to closely match the mode of the silicon waveguides. By using an optimized fusion splicing recipe, the UHNA fiber is seamlessly interfaced with a standard single-mode fiber. This low-loss coupling allows the measurement of high-fidelity photon production in an integrated silicon ring resonator and the subsequent two-photon interference of the produced photons in a closely integrated Mach-Zehnder interferometer. This paper describes the essential procedures for the preparation and characterization of high-performance and scalable silicon quantum photonic circuits.
Exception handling for sensor fusion
NASA Astrophysics Data System (ADS)
Chavez, G. T.; Murphy, Robin R.
1993-08-01
This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, M., E-mail: mike.schmidt@dreebit.com; Zschornack, G.; Kentsch, U.
The magnetic system of a Dresden electron beam ion source (EBIS) generating the necessary magnetic field with a new type of permanent magnet made of high energy density NdFeB-type material operable at temperatures above 100 °C has been investigated and tested. The employment of such kind of magnets provides simplified operation without the time-consuming installation and de-installation procedures of the magnets for the necessary baking of the ion source after commissioning and maintenance work. Furthermore, with the use of a new magnetization technique the geometrical filling factor of the magnetic Dresden EBIS design could be increased to a filling factor ofmore » 100% leading to an axial magnetic field strength of approximately 0.5 T exceeding the old design by 20%. Simulations using the finite element method software Field Precision and their results compared with measurements are presented as well. It could be shown that several baking cycles at temperatures higher than 100 °C did not change the magnetic properties of the setup.« less
Biophoton research in blood reveals its holistic properties.
Voeikov, V L; Asfaramov, R; Bouravleva, E V; Novikov, C N; Vilenskaya, N D
2003-05-01
Monitoring of spontaneous and luminophore amplified photon emission (PE) from non-diluted human blood under resting conditions and artificially induced immune reaction revealed that blood is a continuous source of biophotons indicating that it persists in electronically excited state. This state is pumped through generation of electron excitation produced in reactive oxygen species (ROS) reactions. Excited state of blood and of neutrophil suspensions (primary sources of ROS in blood) is an oscillatory one suggesting of interaction between individual sources of electron excitation. Excited state of blood is extremely sensitive to the tiniest fluctuations of external photonic fields but resistant to temperature variations as reflected in hysteresis of PE in response to temperature variations. These data suggest that blood is a highly cooperative non-equilibrium and non-linear system, whose components unceasingly interact in time and space. At least in part this property is provided by the ability of blood to store energy of electron excitation that is produced in course of its own normal metabolism. From a practical point of view analysis of these qualities of blood may be a basement of new approach to diagnostic procedures.
Schmidt, M; Zschornack, G; Kentsch, U; Ritter, E
2014-02-01
The magnetic system of a Dresden electron beam ion source (EBIS) generating the necessary magnetic field with a new type of permanent magnet made of high energy density NdFeB-type material operable at temperatures above 100 °C has been investigated and tested. The employment of such kind of magnets provides simplified operation without the time-consuming installation and de-installation procedures of the magnets for the necessary baking of the ion source after commissioning and maintenance work. Furthermore, with the use of a new magnetization technique the geometrical filling factor of the magnetic Dresden EBIS design could be increased to a filling factor of 100% leading to an axial magnetic field strength of approximately 0.5 T exceeding the old design by 20%. Simulations using the finite element method software Field Precision and their results compared with measurements are presented as well. It could be shown that several baking cycles at temperatures higher than 100 °C did not change the magnetic properties of the setup.
Standards for the validation of remotely sensed albedo products
NASA Astrophysics Data System (ADS)
Adams, Jennifer
2015-04-01
Land surface albedo is important component of the Earth's energy balance, defined as the fraction of shortwave radiation absorbed by a surface, and is one many Essential Climate Variables (ECVS) that can be retrieved from space through remote sensing. To quantify the accuracy of these products, they must be validated with respect to in-situ measurements of albedo using an albedometer. Whilst accepted standards exist for the calibration of albedometers, standards for the use of in-situ measurement schemes, and their use in validation procedures have yet to be developed. It is essential that we can assess the quality of remotely sensed albedo data, and to identify traceable sources of uncertainty during process of providing these data. As a result of the current lack of accepted standards for in-situ albedo retrieval and validation procedures, we are not yet able to identify and quantify traceable sources of uncertainty. Establishing standard protocols for in-situ retrievals for the validation of global albedo products would allow inter-product use and comparison, in addition to product standardization. Accordingly, this study aims to assess the quality of in-situ albedo retrieval schemes and identify sources of uncertainty, specifically in vegetation environments. A 3D Monte Carlo Ray Tracing Model will be used to simulate albedometer instruments in complex 3D vegetation canopies. To determine sources of uncertainty, factors that influence albedo measurement uncertainty were identified and will subsequently be examined: 1. Time of day (Solar Zenith Angle) 2. Ecosytem type 3. Placement of albedometer within the ecosystem 4. Height of albedometer above the canopy 5. Clustering within the ecosystem A variety of 3D vegetation canopies have been generated to cover the main ecosystems found globally, different seasons, and different plant distributions. Canopies generated include birchstand and pinestand forests for summer and winter, savanna, shrubland, cropland and citrus orchard. All canopies were simulated for a 100x100m area to best represent in-situ measurement conditions. Preliminary tests have been conducted, firstly, identifying the spectral range required to estimate broadband albedo (BBA) and secondly, determining the hyper-spectral intervals required to calculate BBA from spectral albedo. Final results are expected to be able to identify for the factors aforementioned, given a specified confidence level and within 3% accuracy, when does uncertainty of in-situ measurement fall within these critera, and outside these criteria. As the uncertainty of in-situ measurements should be made on an individual basis accounting for relevant factors, this study aims to document for a specific scenario traceable uncertainty sources in in-situ albedo retrieval.
High-speed photorefractive keratectomy with femtosecond ultraviolet pulses
NASA Astrophysics Data System (ADS)
Danieliene, Egle; Gabryte, Egle; Vengris, Mikas; Ruksenas, Osvaldas; Gutauskas, Algimantas; Morkunas, Vaidotas; Danielius, Romualdas
2015-05-01
Femtosecond near-infrared lasers are widely used for a number of ophthalmic procedures, with flap cutting in the laser-assisted in situ keratomileusis (LASIK) surgery being the most frequent one. At the same time, lasers of this type, equipped with harmonic generators, have been shown to deliver enough ultraviolet (UV) power for the second stage of the LASIK procedure, the stromal ablation. However, the speed of the ablation reported so far was well below the currently accepted standards. Our purpose was to perform high-speed photorefractive keratectomy (PRK) with femtosecond UV pulses in rabbits and to evaluate its predictability, reproducibility and healing response. The laser source delivered femtosecond 206 nm pulses with a repetition rate of 50 kHz and an average power of 400 mW. Transepithelial PRK was performed using two different ablation protocols, to a total depth of 110 and 150 μm. The surface temperature was monitored during ablation; haze dynamics and histological samples were evaluated to assess outcomes of the PRK procedure. For comparison, analogous excimer ablation was performed. Increase of the ablation speed up to 1.6 s/diopter for a 6 mm optical zone using femtosecond UV pulses did not significantly impact the healing process.
Surface characterization protocol for precision aspheric optics
NASA Astrophysics Data System (ADS)
Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra
2017-10-01
In Advanced Optical Instrumentation, Aspherics provide an effective performance alternative. The aspheric fabrication and surface metrology, followed by aspheric design are complementary iterative processes for Precision Aspheric development. As in fabrication, a holistic approach of aspheric surface characterization is adopted to evaluate actual surface error and to aim at the deliverance of aspheric optics with desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans to analyze their Form, Figure and Finish errors. One must ensure that, the surface characterization procedure does not add to the resident profile errors (generated during the aspheric surface fabrication). This presentation examines the errors introduced post-surface generation and during profilometry of aspheric profiles. This effort is to identify sources of errors and is to optimize the metrology process. The sources of error during profilometry may be due to: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear aperture - diameter analysis, computational limitations of the profiler and the software issues etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision Optics of various profiles are studied, with due attention to possible sources of errors during characterization, with multi-directional scan approach for uniformity and repeatability of error estimation. This study provides an insight of aspheric surface characterization and helps in optimal aspheric surface production methodology.
Meyer, Matthew J; Dzik, Walter H; Levine, Wilton C
2017-02-01
Blood product transfusion is the most commonly performed hospital procedure. Intraoperative blood product utilization varies between institutions and anesthesiologists. In the United States in 2011, nearly 4 million plasma units were transfused. A retrospective analysis of intraoperative plasma ordering patterns and utilization (thawing and transfusing) was performed at a tertiary, academic hospital between January 2015 and March 2016. Over 15 months, 46,002 operative procedures were performed. In 1540 of them, plasma was thawed or transfused: 8297 plasma units were thawed and 3306 of those units were transfused. These 3306 plasma units were transfused in 749 cases with a median of 2 plasma units (interquartile range, 2-4) transfused. The percentage of average monthly procedures with plasma thawed and none transfused was 51.3% (confidence interval, 49.0%-53.6%). The cardiac surgery service requested the greatest number of plasma units to be thawed (2143) but only transfused 712 (33.2%) of them. Of all plasma units not transfused, 45% were generated by procedures with 1 to 4 units of plasma thawed; 95.7% of these units were thawed as even integers (ie, 2, 4). For operative procedures, far more plasma was thawed than was transfused and this practice occurred across surgical specialties and anesthesiologists. Considering the plasma that was not transfused, 45% occurred in procedures with 4 or fewer units of plasma requested suggesting these low-volume requests were a primary source of potential waste. Further studies are needed to examine associations between plasma utilization and clinical outcomes.
Combustion Characterization and Model Fuel Development for Micro-tubular Flame-assisted Fuel Cells.
Milcarek, Ryan J; Garrett, Michael J; Baskaran, Amrish; Ahn, Jeongmin
2016-10-02
Combustion based power generation has been accomplished for many years through a number of heat engine systems. Recently, a move towards small scale power generation and micro combustion as well as development in fuel cell research has created new means of power generation that combine solid oxide fuel cells with open flames and combustion exhaust. Instead of relying upon the heat of combustion, these solid oxide fuel cell systems rely on reforming of the fuel via combustion to generate syngas for electrochemical power generation. Procedures were developed to assess the combustion by-products under a wide range of conditions. While theoretical and computational procedures have been developed for assessing fuel-rich combustion exhaust in these applications, experimental techniques have also emerged. The experimental procedures often rely upon a gas chromatograph or mass spectrometer analysis of the flame and exhaust to assess the combustion process as a fuel reformer and means of heat generation. The experimental techniques developed in these areas have been applied anew for the development of the micro-tubular flame-assisted fuel cell. The protocol discussed in this work builds on past techniques to specify a procedure for characterizing fuel-rich combustion exhaust and developing a model fuel-rich combustion exhaust for use in flame-assisted fuel cell testing. The development of the procedure and its applications and limitations are discussed.
NASA Astrophysics Data System (ADS)
Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.
2015-02-01
The effectiveness of combining nuclear power plants equipped with water-cooled water-moderated power-generating reactors (VVER) with other sources of energy within unified power-generating complexes is analyzed. The use of such power-generating complexes makes it possible to achieve the necessary load pickup capability and flexibility in performing the mandatory selective primary and emergency control of load, as well as participation in passing the night minimums of electric load curves while retaining high values of the capacity utilization factor of the entire power-generating complex at higher levels of the steam-turbine part efficiency. Versions involving combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. In view of the fact that hydrogen is an unsafe energy carrier, the use of which introduces additional elements of risk, a procedure for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants is proposed. Risk accounting technique with the use of statistical data is considered, including the characteristics of hydrogen and gas pipelines, and the process pipelines equipment tightness loss occurrence rate. The expected intensities of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. In estimating the damage inflicted by events (fires and explosions) occurred in nuclear power plant turbine buildings, the US statistical data were used. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results from calculations of the introduced annual risk to the attained net annual profit ratio in commensurable versions are given. This ratio can be used in selecting projects characterized by the most technically attainable and socially acceptable safety.
Bedding disposal cabinet for containment of aerosols generated by animal cage cleaning procedures.
Baldwin, C L; Sabel, F L; Henke, C B
1976-01-01
Laboratory tests with aerosolized spores and animal room tests with uranine dye indicate the effectiveness of a prototype bedding disposal cabinet in reducing airborne contamination generated by cage cleaning procedures. Images PMID:826219
Hinojosa, J A; Fernández-Folgueiras, U; Albert, J; Santaniello, G; Pozo, M A; Capilla, A
2017-01-27
The present event-related potentials (ERPs) study investigated the effects of mood on phonological encoding processes involved in word generation. For this purpose, negative, positive and neutral affective states were induced in participants during three different recording sessions using short film clips. After the mood induction procedure, participants performed a covert picture naming task in which they searched letters. The negative compared to the neutral mood condition elicited more negative amplitudes in a component peaking around 290ms. Furthermore, results from source localization analyses suggested that this activity was potentially generated in the left prefrontal cortex. In contrast, no differences were found in the comparison between positive and neutral moods. Overall, current data suggest that processes involved in the retrieval of phonological information during speech generation are impaired when participants are in a negative mood. The mechanisms underlying these effects were discussed in relation to linguistic and attentional processes, as well as in terms of the use of heuristics. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zorpas, Antonis A., E-mail: antonis.zorpas@ouc.ac.cy; Lasaridi, Katia, E-mail: klasaridi@hua.gr; Voukkali, Irene
Highlights: • Waste framework directive has set clear waste prevention procedures. • Household Compositional analysis. • Waste management plans. • Zero waste approach. • Waste generation. - Abstract: Waste management planning requires reliable data regarding waste generation, affecting factors on waste generation and forecasts of waste quantities based on facts. In order to decrease the environmental impacts of waste management the choice of prevention plan as well as the treatment method must be based on the features of the waste that are produced in a specific area. Factors such as culture, economic development, climate, and energy sources have an impactmore » on waste composition; composition influences the need of collecting waste more or less frequently of waste collection and disposition. The research question was to discover the main barriers concerning the compositional analysis in Insular Communities under warm climate conditions and the findings from this study enabled the main contents of a waste management plan to be established. These included advice to residents on waste minimisation, liaison with stakeholders and the expansion of kerbside recycling schemes.« less
Algebraic grid generation with corner singularities
NASA Technical Reports Server (NTRS)
Vinokur, M.; Lombard, C. K.
1983-01-01
A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques stretching functions, cubic blending functions, and transfinite interpolation - to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDE's are discussed in an Appendix.
De Nicola, Antonio; Kawakatsu, Toshihiro; Milano, Giuseppe
2014-12-09
A procedure based on Molecular Dynamics (MD) simulations employing soft potentials derived from self-consistent field (SCF) theory (named MD-SCF) able to generate well-relaxed all-atom structures of polymer melts is proposed. All-atom structures having structural correlations indistinguishable from ones obtained by long MD relaxations have been obtained for poly(methyl methacrylate) (PMMA) and poly(ethylene oxide) (PEO) melts. The proposed procedure leads to computational costs mainly related on system size rather than to the chain length. Several advantages of the proposed procedure over current coarse-graining/reverse mapping strategies are apparent. No parametrization is needed to generate relaxed structures of different polymers at different scales or resolutions. There is no need for special algorithms or back-mapping schemes to change the resolution of the models. This characteristic makes the procedure general and its extension to other polymer architectures straightforward. A similar procedure can be easily extended to the generation of all-atom structures of block copolymer melts and polymer nanocomposites.
40 CFR 98.40 - Definition of the source category.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.40 Definition of the source category. (a) The electricity generation source category comprises electricity generating units that are subject to the requirements of the Acid Rain Program and any other electricity generating units that are...
40 CFR 98.40 - Definition of the source category.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.40 Definition of the source category. (a) The electricity generation source category comprises electricity generating units that are subject to the requirements of the Acid Rain Program and any other electricity generating units that are...
40 CFR 98.40 - Definition of the source category.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.40 Definition of the source category. (a) The electricity generation source category comprises electricity generating units that are subject to the requirements of the Acid Rain Program and any other electricity generating units that are...
40 CFR 98.40 - Definition of the source category.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.40 Definition of the source category. (a) The electricity generation source category comprises electricity generating units that are subject to the requirements of the Acid Rain Program and any other electricity generating units that are...
40 CFR 98.40 - Definition of the source category.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.40 Definition of the source category. (a) The electricity generation source category comprises electricity generating units that are subject to the requirements of the Acid Rain Program and any other electricity generating units that are...
Multi-particle inspection using associated particle sources
Bingham, Philip R.; Mihalczo, John T.; Mullens, James A.; McConchie, Seth M.; Hausladen, Paul A.
2016-02-16
Disclosed herein are representative embodiments of methods, apparatus, and systems for performing combined neutron and gamma ray radiography. For example, one exemplary system comprises: a neutron source; a set of alpha particle detectors configured to detect alpha particles associated with neutrons generated by the neutron source; neutron detectors positioned to detect at least some of the neutrons generated by the neutron source; a gamma ray source; a set of verification gamma ray detectors configured to detect verification gamma rays associated with gamma rays generated by the gamma ray source; a set of gamma ray detectors configured to detect gamma rays generated by the gamma ray source; and an interrogation region located between the neutron source, the gamma ray source, the neutron detectors, and the gamma ray detectors.
Promulgated quality assurance Procedure 5 Quality Assurance Requirements For Vapor Phase Mercury Continuous Emissions Monitoring Systems And Sorbent Trap Monitoring Systems Used For Compliance Determination At Stationary Sources
Yahtzee: An Anonymized Group Level Matching Procedure
Jones, Jason J.; Bond, Robert M.; Fariss, Christopher J.; Settle, Jaime E.; Kramer, Adam D. I.; Marlow, Cameron; Fowler, James H.
2013-01-01
Researchers often face the problem of needing to protect the privacy of subjects while also needing to integrate data that contains personal information from diverse data sources. The advent of computational social science and the enormous amount of data about people that is being collected makes protecting the privacy of research subjects ever more important. However, strict privacy procedures can hinder the process of joining diverse sources of data that contain information about specific individual behaviors. In this paper we present a procedure to keep information about specific individuals from being “leaked” or shared in either direction between two sources of data without need of a trusted third party. To achieve this goal, we randomly assign individuals to anonymous groups before combining the anonymized information between the two sources of data. We refer to this method as the Yahtzee procedure, and show that it performs as predicted by theoretical analysis when we apply it to data from Facebook and public voter records. PMID:23441156
Yahtzee: an anonymized group level matching procedure.
Jones, Jason J; Bond, Robert M; Fariss, Christopher J; Settle, Jaime E; Kramer, Adam D I; Marlow, Cameron; Fowler, James H
2013-01-01
Researchers often face the problem of needing to protect the privacy of subjects while also needing to integrate data that contains personal information from diverse data sources. The advent of computational social science and the enormous amount of data about people that is being collected makes protecting the privacy of research subjects ever more important. However, strict privacy procedures can hinder the process of joining diverse sources of data that contain information about specific individual behaviors. In this paper we present a procedure to keep information about specific individuals from being "leaked" or shared in either direction between two sources of data without need of a trusted third party. To achieve this goal, we randomly assign individuals to anonymous groups before combining the anonymized information between the two sources of data. We refer to this method as the Yahtzee procedure, and show that it performs as predicted by theoretical analysis when we apply it to data from Facebook and public voter records.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Noravian, H.; Or, C.
1997-12-31
This paper presents the background and introduction to the OSC AMTEC (Alkali Metal Thermal-to-Electrical Conversion) studies, which were conducted for the Department of energy (DOE) and NASA`s jet Propulsion Laboratory (JPL). After describing the basic principle of AMTEC, the paper describes and explains the operation of multi-tube vapor/vapor cells, which have been under development by AMPS (Advance Modular Power Systems, Inc.) for the Air Force Phillips Laboratory (AFPL) and JPL for possible application to the Europa Orbiter, Pluto Express, and other space missions. It then describes a novel OSC-generated methodology for analyzing the performance of such cells. This methodology consistsmore » of an iterative procedure for the coupled solution of the interdependent thermal, electrical, and fluid flow differential and integral equations governing the performance of AMTEC cells and generators, taking proper account of the non-linear axial variations of temperature, pressure, open-circuit voltage, inter-electrode voltages, current density, axial current, sodium mass flow rate, and power density. The paper illustrates that analytical procedure by applying it to OSC`s latest cell design and by presenting detailed analytical results for that design. The OSC-developed analytic methodology constitutes a unique and powerful tool for accurate parametric analyses and design optimizations of the multi-tube AMTEC cells and of radioisotope power systems. This is illustrated in two companion papers in these proceedings. The first of those papers applies the OSC-derived program to determine the effect of various design parameters on the performance of single AMTEC cells with adiabatic side walls, culminating in an OSC-recommended revised cell design. And the second describes a number of OSC-generated AMTEC generator designs consisting of 2 and 3 GPHS heat source modules, 16 multi-tube converter cells, and a hybrid insulation design, and presents the results of applying the above analysis program to determine the applicability of those generators to possible future missions under consideration by NASA.« less
Kaur, Ravneet Ruby; Glick, Jaimie B.; Siegel, Daniel
2013-01-01
As dermatological procedures continue to become increasingly complex, improved methods and tools to achieve appropriate hemostasis become necessary. The methods for achieving adequate hemostasis are variable and depend greatly on the type of procedure performed and the unique characteristics of the individual patient. In Part 1 of this review, we discuss the preoperative, intraoperative, and postoperative management of patients undergoing dermatologic surgery. We address oral medications and supplements that affect hemostasis, hemostatic anesthesia, and intraoperative interventions such as suture ligation and heat-generating cautery devices. In Part 2 of this review, we will discuss topical hemostats. The authors conducted an extensive literature review using the following keywords: “hemostasis,” “dermatology,” “dermatological surgery,” “dermatologic sutures,” “electrosurgery,” “hemostatic anesthesia,” and “laser surgery.” Sources for this article were identified by searching the English literature in the Pubmed database for the time period from 1940 to March 2012. A thorough bibliography search was also conducted and key references were examined. PMID:23741660
Liposuction devices: technology update
Shridharani, Sachin M; Broyles, Justin M; Matarasso, Alan
2014-01-01
Since its introduction by Illouz and others over 30 years ago, suction-assisted lipectomy/liposuction/lipoplasty has evolved tremendously and has developed into one of the most popular procedures in aesthetic plastic surgery. Liposuction is an effective procedure employed to treat localized adipose deposits in patients not suffering from generalized obesity. These accumulations of subcutaneous fat often occur in predictable distributions in both men and women. A cannula connected to a suction-generating source allows for small incisions to be strategically placed and large volumes of fat to be removed. This fat removal leads to improved harmonious balance of a patient’s physique and improved body contour. Various surgical techniques are available and have evolved as technology has improved. Current technology for liposuction includes suction-assisted lipectomy, ultrasound-assisted, power-assisted, laser-assisted, and radiofrequency-assisted. The choice of technology and technique often depends on patient characteristics and surgeon preference. The objective of this review is to provide a thorough assessment of current technologies available to plastic surgeons performing liposuction. PMID:25093000
48 CFR 6.102 - Use of competitive procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ACQUISITION PLANNING COMPETITION REQUIREMENTS Full and Open Competition 6.102 Use of competitive procedures. The competitive procedures available for use in fulfilling the requirement for full and open... procedures (e.g., two-step sealed bidding). (d) Other competitive procedures. (1) Selection of sources for...
Natural language generation of surgical procedures.
Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R
1998-01-01
The GALEN-IN-USE project has developed a compositional scheme for the conceptual representation of surgical operative procedure rubrics. The complex representations which result are translated back to surface language by a tool for multilingual natural language generation. This generator can be adapted to the specific characteristics of the scheme by introducing particular definitions of concepts and relationships. We discuss how the generator uses such definitions to bridge between the modelling 'style' of the GALEN scheme and natural language.
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of a great interest from the quantum information theory point of view. In this paper we present a package for Mathematica computing system harnessing a specific piece of hardware, namely Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summaryProgram title: TRQS Catalogue identifier: AEKA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7924 No. of bytes in distributed program, including test data, etc.: 88 651 Distribution format: tar.gz Programming language: Mathematica, C Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and supporting a recent version of Mathematica Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit) RAM: Case dependent Classification: 4.15 Nature of problem: Generation of random density matrices. Solution method: Use of a physical quantum random number generator. Running time: Generating 100 random numbers takes about 1 second, generating 1000 random density matrices takes more than a minute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almeida, G. L.; Silvani, M. I.; Lopes, R. T.
Two main parameters rule the performance of an Image Acquisition System, namely, spatial resolution and contrast. For radiographic systems using cone beam arrangements, the farther the source, the better the resolution, but the contrast would diminish due to the lower statistics. A closer source would yield a higher contrast but it would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object is comprised of a hull - or encased in it - possessingmore » a shape capable to be described in analytical geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source until its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam after its travel through the hull wall, as well as its reduction due to the square of distance to the source and the angle it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, are recovered, allowing one to see details otherwise concealed due to the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuation, have been generated, recorded for further comparison and afterwards processed to improve their contrast. The developed algorithm which, generates processes and plots the images has been written in Fortran 90 language. As the resulting final images exhibit the expected improvements, it therefore seemed worthwhile to carry out further tests with actual experimental radiographies.« less
Olivares, Ela I; Lage-Castellanos, Agustín; Bobes, María A; Iglesias, Jaime
2018-01-01
We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called "Fusiform Face Area", "FFA" and "Occipital Face Area", "OFA", respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.
40 CFR 63.2852 - What is a startup, shutdown, and malfunction plan?
Code of Federal Regulations, 2010 CFR
2010-07-01
... may come from plans you developed for other purposes such as a Standard Operating Procedure manual or... source is operational. The SSM plan provides detailed procedures for operating and maintaining your...) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...
40 CFR 63.2852 - What is a startup, shutdown, and malfunction plan?
Code of Federal Regulations, 2011 CFR
2011-07-01
... may come from plans you developed for other purposes such as a Standard Operating Procedure manual or... source is operational. The SSM plan provides detailed procedures for operating and maintaining your...) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...
48 CFR 570.304 - General source selection procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... procedures. 570.304 Section 570.304 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION... Leasehold Interests in Real Property Over the Simplified Lease Acquisition Threshold 570.304 General source... disadvantaged business concerns in performance of the contract, and other factors as required by FAR 15.304 as...
NASA Astrophysics Data System (ADS)
Delpueyo, D.; Balandraud, X.; Grédiac, M.
2013-09-01
The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key-issues which are closely related here because the temperature fields which are processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct at best the heat source fields. The influence of both the dimension and the level of a localised heat source is discussed. Obtained results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin plate in aluminium alloy. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. Obtained results illustrate the relevancy of the derivative Gaussian filter to reliably extract heat sources from noisy temperature fields for the experimental thermomechanics of materials.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-04
... Performance for Greenhouse Gas Emissions for New Stationary Sources: Electric Utility Generating Units AGENCY... Greenhouse Gas Emissions for New Stationary Sources: Electric Utility Generating Units.'' The EPA is making... for Greenhouse Gas Emissions for New Stationary Sources: Electric Utility Generating Units, and...
EGRET Diffuse Gamma Ray Maps Between 30 MeV and 10 GeV
NASA Technical Reports Server (NTRS)
Cillis, A, N.; Hartman, R. C.
2004-01-01
This paper presents all-sky maps of diffuse gamma radiation in various energy ranges between 30 MeV and 10 GeV, based on data collected by the EGRET instrument on the Compton Gamma Ray Observatory. Although the maps can be used for a variety of applications. the immediate goal is the generation of diffuse gamma-ray maps which can be used as a diffuse background/foreground for point source analysis of the data to be obtained from new high-energy gamma-ray missions like GLAST and AGILE. To generate the diffuse gamma maps from the raw EGRET maps, the point sources in the Third EGRET Catalog were subtracted out using the appropriate point spread function for each energy range. After that, smoothing was performed to minimize the effects of photon statistical noise. A smoothing length of 1 deg vas used for the Galactic plane maps. For the all-sky maps, a procedure was used which resulted in a smoothing length roughly equivalent to 4 deg. The result of this work is 16 maps of different energy intervals for absolute value of b < or equal to 20 deg, and 32 all-sky maps, 16 in equatorial coordinates (J2000) and 16 in Galactic coordinates.
EGRET Diffuse Gamma Ray Maps Between 30 MeV and 10 GeV
NASA Technical Reports Server (NTRS)
Cillis, A. N.; Hartman, R. C.
2004-01-01
This paper presents all-sky maps of diffuse gamma radiation in various energy ranges between 30 MeV and 10 GeV, based on data collected by the EGRET instrument on the Compton Gamma Ray Observatory. Although the maps can be used for a variety of applications, the immediate goal is the generation of diffuse gamma-ray maps which can be used as a diffuse background/foreground for point source analysis of the data to be obtained from new high-energy gamma-ray missions like GLAST and AGILE. To generate the diffuse gamma maps from the raw EGRET maps, the point sources in the Third EGRET Catalog were subtracted out using the appropriate point spread function for each energy range. After that, smoothing was performed to minimize the effects of photon statistical noise. A smoothing length of 1deg was used for the Galactic plane maps. For the all-sky maps, a procedure was used which resulted in a smoothing length roughly equivalent to 4deg. The result of this work is 16 maps of different energy intervals for [b]less than or equal to 20deg, and 32 all-sky maps, 16 in equatorial coordinates (J2000) and 16 in Galactic coordinates.
Micro-thermocouple on nano-membrane: thermometer for nanoscale measurements.
Balčytis, Armandas; Ryu, Meguya; Juodkazis, Saulius; Morikawa, Junko
2018-04-20
A thermocouple of Au-Ni with only 2.5-μm-wide electrodes on a 30-nm-thick Si 3 N 4 membrane was fabricated by a simple low-resolution electron beam lithography and lift off procedure. The thermocouple is shown to be sensitive to heat generated by laser as well as an electron beam. Nano-thin membrane was used to reach a high spatial resolution of energy deposition and to realise a heat source of sub-1 μm diameter. This was achieved due to a limited generation of secondary electrons, which increase a lateral energy deposition. A low thermal capacitance of the fabricated devices is useful for the real time monitoring of small and fast temperature changes, e.g., due to convection, and can be detected through an optical and mechanical barrier of the nano-thin membrane. Temperature changes up to ~2 × 10 5 K/s can be measured at 10 kHz rate. A simultaneous down-sizing of both, the heat detector and heat source strongly required for creation of thermal microscopy is demonstrated. Peculiarities of Seebeck constant (thermopower) dependence on electron injection into thermocouple are discussed. Modeling of thermal flows on a nano-membrane with presence of a micro-thermocouple was carried out to compare with experimentally measured temporal response.
Characterizing Sorghum Panicles using 3D Point Clouds
NASA Astrophysics Data System (ADS)
Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.
2017-12-01
To address demands of population growth and impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial systems (UAS) based structure from motion (SfM) are a promising data source to increase the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over experimental sorghum field at Texas A&M farm in Burleston County TX were collected using a FARO Focus X330 3D laser scanner. SfM point cloud was generated from UAS imagery captured using a Phantom 3 Professional UAS at 10m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.
Transcatheter Aortic Valve Replacement in Pure Native Aortic Valve Regurgitation.
Yoon, Sung-Han; Schmidt, Tobias; Bleiziffer, Sabine; Schofer, Niklas; Fiorina, Claudia; Munoz-Garcia, Antonio J; Yzeiraj, Ermela; Amat-Santos, Ignacio J; Tchetche, Didier; Jung, Christian; Fujita, Buntaro; Mangieri, Antonio; Deutsch, Marcus-Andre; Ubben, Timm; Deuschl, Florian; Kuwata, Shingo; De Biase, Chiara; Williams, Timothy; Dhoble, Abhijeet; Kim, Won-Keun; Ferrari, Enrico; Barbanti, Marco; Vollema, E Mara; Miceli, Antonio; Giannini, Cristina; Attizzani, Guiherme F; Kong, William K F; Gutierrez-Ibanes, Enrique; Jimenez Diaz, Victor Alfonso; Wijeysundera, Harindra C; Kaneko, Hidehiro; Chakravarty, Tarun; Makar, Moody; Sievert, Horst; Hengstenberg, Christian; Prendergast, Bernard D; Vincent, Flavien; Abdel-Wahab, Mohamed; Nombela-Franco, Luis; Silaschi, Miriam; Tarantini, Giuseppe; Butter, Christian; Ensminger, Stephan M; Hildick-Smith, David; Petronio, Anna Sonia; Yin, Wei-Hsian; De Marco, Federico; Testa, Luca; Van Mieghem, Nicolas M; Whisenant, Brian K; Kuck, Karl-Heinz; Colombo, Antonio; Kar, Saibal; Moris, Cesar; Delgado, Victoria; Maisano, Francesco; Nietlispach, Fabian; Mack, Michael J; Schofer, Joachim; Schaefer, Ulrich; Bax, Jeroen J; Frerker, Christian; Latib, Azeem; Makkar, Raj R
2017-12-05
Limited data exist about safety and efficacy of transcatheter aortic valve replacement (TAVR) in patients with pure native aortic regurgitation (AR). This study sought to compare the outcomes of TAVR with early- and new-generation devices in symptomatic patients with pure native AR. From the pure native AR TAVR multicenter registry, procedural and clinical outcomes were assessed according to VARC-2 criteria and compared between early- and new-generation devices. A total of 331 patients with a mean STS score of 6.7 ± 6.7 underwent TAVR. The early- and new-generation devices were used in 119 patients (36.0%) and 212 patients (64.0%), respectively. STS score tended to be lower in the new-generation device group (6.2 ± 6.7 vs. 7.6 ± 6.7; p = 0.08), but transfemoral access was more frequently used in the early-generation device group (87.4% vs. 60.8%; p < 0.001). Compared with the early-generation devices, the new-generation devices were associated with a significantly higher device success rate (81.1% vs. 61.3%; p < 0.001) due to lower rates of second valve implantation (12.7% vs. 24.4%; p = 0.007) and post-procedural AR ≥ moderate (4.2% vs. 18.8%; p < 0.001). There were no significant differences in major 30-day endpoints between the 2 groups. The cumulative rates of all-cause and cardiovascular death at 1-year follow-up were 24.1% and 15.6%, respectively. The 1-year all-cause mortality rate was significantly higher in the patients with post-procedural AR ≥ moderate compared with those with post-procedural AR ≤ mild (46.1% vs. 21.8%; log-rank p = 0.001). On multivariable analysis, post-procedural AR ≥ moderate was independently associated with 1-year all-cause mortality (hazard ratio: 2.85; 95% confidence interval: 1.52 to 5.35; p = 0.001). Compared with the early-generation devices, TAVR using the new-generation devices was associated with improved procedural outcomes in treating patients with pure native AR. In patients with pure native AR, significant post-procedural AR was independently associated with increased mortality. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
46 CFR 111.10-4 - Power requirements, generating sources.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-4 Power requirements, generating sources. (a... generators which supply both ship's service and propulsion power do not need additional ship's service... 46 Shipping 4 2010-10-01 2010-10-01 false Power requirements, generating sources. 111.10-4 Section...
46 CFR 111.10-4 - Power requirements, generating sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-4 Power requirements, generating sources. (a... generators which supply both ship's service and propulsion power do not need additional ship's service... 46 Shipping 4 2011-10-01 2011-10-01 false Power requirements, generating sources. 111.10-4 Section...
Conditioning and Repackaging of Spent Radioactive Cs-137 and Co-60 Sealed Sources in Egypt - 13490
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, M.A.; Selim, Y.T.; El-Zakla, T.
2013-07-01
Radioactive Sealed sources (RSSs) are widely use all over the world in medicine, agriculture, industry, research, etc. The accidental misuse and exposure to RSSs has caused significant environmental contamination, serious injuries and many deaths. The high specific activity of the materials in many RSSs means that the spread of as little as microgram quantities can generate significant risk to human health and inhibit the use of buildings and land. Conditioning of such sources is a must to protect humans and environment from the hazard of ionizing radiation and contamination. Conditioning is also increase the security of these sources by decreasingmore » the probability of stolen and/or use in terrorist attacks. According to the law No.7/2010, Egyptian atomic energy authority represented in the hot laboratories and waste management center (centralized waste facility, HLWMC) has the responsibility of collecting, conditioning, storing and management of all types of radioactive waste from all Egyptian territory including spent radioactive sealed sources (SRSSs). This paper explains the conditioning procedures for two of the most common SRSSs, Cs{sup 137} and Co{sup 60} sources which make up more than 90% of the total spent radioactive sealed sources stored in our centralized waste facility as one of the major activities of hot laboratories and waste management center. Conditioning has to meet three main objectives, be acceptable for storage, enable their safe transport, and comply with disposal requirements. (authors)« less
Hansel, Marc C; Gramignoli, Roberto; Blake, William; Davila, Julio; Skvorak, Kristen; Dorko, Kenneth; Tahan, Veysel; Lee, Brian R; Tafaleng, Edgar; Guzman-Lepe, Jorge; Soto-Gutierrez, Alejandro; Fox, Ira J; Strom, Stephen C
2014-01-01
Hepatocyte transplantation has been used to treat liver disease. The availability of cells for these procedures is quite limited. Human embryonic stem cells (hESCs) and induced pluripotent stem cells (hiPSCs) may be a useful source of hepatocytes for basic research and transplantation if efficient and effective differentiation protocols were developed and problems with tumorigenicity could be overcome. Recent evidence suggests that the cell of origin may affect hiPSC differentiation. Thus, hiPSCs generated from hepatocytes may differentiate back to hepatocytes more efficiently than hiPSCs from other cell types. We examined the efficiency of reprogramming adult and fetal human hepatocytes. The present studies report the generation of 40 hiPSC lines from primary human hepatocytes under feeder-free conditions. Of these, 37 hiPSC lines were generated from fetal hepatocytes, 2 hiPSC lines from normal hepatocytes, and 1 hiPSC line from hepatocytes of a patient with Crigler-Najjar syndrome, type 1. All lines were confirmed reprogrammed and expressed markers of pluripotency by gene expression, flow cytometry, immunocytochemistry, and teratoma formation. Fetal hepatocytes were reprogrammed at a frequency over 50-fold higher than adult hepatocytes. Adult hepatocytes were only reprogrammed with six factors, while fetal hepatocytes could be reprogrammed with three (OCT4, SOX2, NANOG) or four factors (OCT4, SOX2, NANOG, LIN28 or OCT4, SOX2, KLF4, C-MYC). The increased reprogramming efficiency of fetal cells was not due to increased transduction efficiency or vector toxicity. These studies confirm that hiPSCs can be generated from adult and fetal hepatocytes including those with genetic diseases. Fetal hepatocytes reprogram much more efficiently than adult hepatocytes, although both could serve as useful sources of hiPSC-derived hepatocytes for basic research or transplantation.
Pacemakers and implantable cardioverter defibrillators--general and anesthetic considerations.
Rapsang, Amy G; Bhattacharyya, Prithwis
2014-01-01
A pacemaking system consists of an impulse generator and lead or leads to carry the electrical impulse to the patient's heart. Pacemaker and implantable cardioverter defibrillator codes were made to describe the type of pacemaker or implantable cardioverter defibrillator implanted. Indications for pacing and implantable cardioverter defibrillator implantation were given by the American College of Cardiologists. Certain pacemakers have magnet-operated reed switches incorporated; however, magnet application can have serious adverse effects; hence, devices should be considered programmable unless known otherwise. When a device patient undergoes any procedure (with or without anesthesia), special precautions have to be observed including a focused history/physical examination, interrogation of pacemaker before and after the procedure, emergency drugs/temporary pacing and defibrillation, reprogramming of pacemaker and disabling certain pacemaker functions if required, monitoring of electrolyte and metabolic disturbance and avoiding certain drugs and equipments that can interfere with pacemaker function. If unanticipated device interactions are found, consider discontinuation of the procedure until the source of interference can be eliminated or managed and all corrective measures should be taken to ensure proper pacemaker function should be done. Post procedure, the cardiac rate and rhythm should be monitored continuously and emergency drugs and equipments should be kept ready and consultation with a cardiologist or a pacemaker-implantable cardioverter defibrillator service may be necessary. Copyright © 2013 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.
Computer-oriented emissions inventory procedure for urban and industrial sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Runca, E.; Zannetti, P.; Melli, P.
1978-06-01
A knowledge of the rate of emission of atmospheric pollutants is essential for the enforcement of air quality control policies. A computer-oriented emission inventory procedure has been developed and applied to Venice, Italy. By using optically readable forms this procedure avoids many of the errors inherent in the transcription and punching steps typical of approaches applied so far. Moreover, this procedure allows an easy updating of the inventory. Emission patterns of SO/sub 2/ in the area of Venice showed that the total urban emissions were about 6% of those emitted by industrial sources.
Attenuation of harmonic noise in vibroseis data using Simulated Annealing
NASA Astrophysics Data System (ADS)
Sharma, S. P.; Tildy, Peter; Iranpour, Kambiz; Scholtz, Peter
2009-04-01
Processing of high productivity vibroseis seismic data (such as slip-sweep acquisition records) suffers from the well known disadvantage of harmonic distortion. Harmonic distortions are observed after cross-correlation of the recorded seismic signal with the pilot sweep and affect the signals in negative time (before the actual strong reflection event). Weak reflection events of the earlier sweeps falling in the negative time window of the cross-correlation sequence are being masked by harmonic distortions. Though the amplitude of the harmonic distortion is small (up to 10-20 %) compared to the fundamental amplitude of the reflection events, but it is significant enough to mask weak reflected signals. Elimination of harmonic noise due to source signal distortion from the cross-correlated seismic trace is a challenging task since the application of vibratory sources started and it still needs improvement. An approach has been worked out that minimizes the level of harmonic distortion by designing the signal similar to the harmonic distortion. An arbitrary length filter is optimized using the Simulated Annealing global optimization approach to design a harmonic signal. The approach deals with the convolution of a ratio trace (ratio of the harmonics with respect to the fundamental sweep) with the correlated "positive time" recorded signal and an arbitrary filter. Synthetic data study has revealed that this procedure of designing a signal similar to the desired harmonics using convolution of a suitable filter with theoretical ratio of harmonics with fundamental sweep helps in reducing the problem of harmonic distortion. Once we generate a similar signal for a vibroseis source using an optimized filter, then, this filter could be used to generate harmonics, which can be subtracted from the main cross-correlated trace to get the better, undistorted image of the subsurface. Designing the predicted harmonics to reduce the energy in the trace by considering weak reflection and observed harmonics together yields the desired result (resolution of weak reflected signal from the harmonic distortion). As optimization steps proceeds forward it is possible to observe from the difference plots of desired and predicted harmonics how weak reflections evolved from the harmonic distortion gradually during later iterations of global optimization. The procedure is applied in resolving weak reflections from a number of traces considered together. For a more precise design of harmonics SA procedure needs longer computation time which is impractical to deal with voluminous seismic data. However, the objective of resolving weak reflection signal in the strong harmonic noise can be achieved with fast computation using faster cooling schedule and less number of iterations and number of moves in simulated annealing procedure. This process could help in reducing the harmonics distortion and achieving the objective of resolving the lost weak reflection events in the cross-correlated seismic traces. Acknowledgements: The research was supported under the European Marie Curie Host Fellowships for Transfer of Knowledge (TOK) Development Host Scheme (contract no. MTKD-CT-2006-042537).
40 CFR 63.2852 - What is a startup, shutdown, and malfunction plan?
Code of Federal Regulations, 2012 CFR
2012-07-01
... the procedures may come from plans you developed for other purposes such as a Standard Operating... as long as the source is operational. The SSM plan provides detailed procedures for operating and...) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...
40 CFR 63.2852 - What is a startup, shutdown, and malfunction plan?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the procedures may come from plans you developed for other purposes such as a Standard Operating... as long as the source is operational. The SSM plan provides detailed procedures for operating and...) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...
40 CFR 63.2852 - What is a startup, shutdown, and malfunction plan?
Code of Federal Regulations, 2014 CFR
2014-07-01
... the procedures may come from plans you developed for other purposes such as a Standard Operating... as long as the source is operational. The SSM plan provides detailed procedures for operating and...) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...
40 CFR 63.1544 - Standards for fugitive dust sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... according to, a standard operating procedures manual that describes in detail the measures that will be put... (c) of this section, the standard operating procedures manual shall be submitted to the Administrator... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Standards for fugitive dust sources. 63...
40 CFR 63.1544 - Standards for fugitive dust sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... according to, a standard operating procedures manual that describes in detail the measures that will be put... (c) of this section, the standard operating procedures manual shall be submitted to the Administrator... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Standards for fugitive dust sources. 63...
Air core detectors for Cerenkov-free scintillation dosimetry of brachytherapy β-sources.
Eichmann, Marion; Thomann, Benedikt
2017-09-01
Plastic scintillation detectors are used for dosimetry in small radiation fields with high dose gradients, e.g., provided by β-emitting sources like 106 Ru/ 106 Rh eye plaques. A drawback is a background signal caused by Cerenkov radiation generated by electrons passing the optical fibers (light guides) of this dosimetry system. Common approaches to correct for the Cerenkov signal are influenced by uncertainties resulting from detector positioning and calibration procedures. A different approach to avoid any correction procedure is to suppress the Cerenkov signal by replacing the solid core optical fiber with an air core light guide, previously shown for external beam therapy. In this study, the air core concept is modified and applied to the requirements of dosimetry in brachytherapy, proving its usability for measuring water energy doses in small radiation fields. Three air core detectors with different air core lengths are constructed and their performance in dosimetry for brachytherapy β-sources is compared with a standard two-fiber system, which uses a second fiber for Cerenkov correction. The detector systems are calibrated with a 90 Sr/ 90 Y secondary standard and tested for their angular dependence as well as their performance in depth dose measurements of 106 Ru/ 106 Rh sources. The signal loss relative to the standard detector increases with increasing air core length to a maximum value of 58.3%. At the same time, however, the percentage amount of Cerenkov light in the total signal is reduced from at least 12.1% to a value below 1.1%. There is a linear correlation between induced dose and measured signal current. The air core detectors determine the dose rates for 106 Ru/ 106 Rh sources without any form of correction for the Cerenkov signal. The air core detectors show advantages over the standard two-fiber system especially when measuring in radiation fields with high dose gradients. They can be used as simple one-fiber systems and allow for an almost Cerenkov-free scintillation dosimetry of brachytherapy β-sources. © 2017 American Association of Physicists in Medicine.
Australasian brachytherapy audit: results of the 'end-to-end' dosimetry pilot study.
Haworth, Annette; Wilfert, Lisa; Butler, Duncan; Ebert, Martin A; Todd, Stephen; Bucci, Joseph; Duchesne, Gillian M; Joseph, David; Kron, Tomas
2013-08-01
We present the results of a pilot study to test the feasibility of a brachytherapy dosimetry audit. The feasibility study was conducted at seven sites from four Australian states in both public and private centres. A purpose-built cylindrical water phantom was imaged using the local imaging protocol and a treatment plan was generated to deliver 1 Gy to the central (1 of 3) thermoluminescent dosimeter (TLD) from six dwell positions. All centres completed the audit, consisting of three consecutive irradiations, within a 2-h time period, with the exception of one centre that uses a pulsed dose rate brachytherapy unit. All TLD results were within 4.5% of the predicted value, with the exception of one subset where the dwell position step size was incorrectly applied. While the limited data collected in the study demonstrated considerable heterogeneity in clinical practice, the study proved a brachytherapy dosimetry audit to be feasible. Future studies should include verification of source strength using a Standard Dosimetry Laboratory calibrated chamber, a phantom that more closely mimics the clinical situation, a more comprehensive review of safety and quality assurance (QA) procedures including source dwell time and position accuracy, and a review of patient treatment QA procedures such as applicator position verification. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
Vibration response of buildings to rail transit groundborne vibration
NASA Astrophysics Data System (ADS)
Phillips, James
2005-09-01
The FTA guidelines for detailed analysis and prediction of groundborne noise and vibration generated by rail transit systems are based on established empirical methods. The procedures for the measurement of vehicle/track system source strength and the attenuation of vibration as it propagates with distance through the ground are generally accepted practice at this time. However, characterization of the building response is open to debate, due in part to the wide array of building construction encountered adjacent to transit systems. Numerous measurements that have been obtained in a variety of building construction types are presented and preliminary conclusions are drawn regarding the responses of several common building types to rail transit groundborne vibration.
Piezoelectric and electromagnetic respiratory effort energy harvesters.
Shahhaidar, Ehsaneh; Padasdao, Bryson; Romine, R; Stickley, C; Boric-Lubecke, Olga
2013-01-01
The movements of the torso due to normal breathing could be harvested as an alternative, and renewable power source for an ultra-low power electronic device. The same output signal could also be recorded as a physiological signal containing information about breathing, thus enabling self-powered wearable biosensors/harvesters. In this paper, the selection criteria for such a biosensor, optimization procedure, trade-offs, and challenges as a sensor and harvester are presented. The empirical data obtained from testing different modules on a mechanical torso and a human subject demonstrated that an electromagnetic generator could be used as an unobtrusive self-powered medical sensor by harvesting more power, offering reasonable amount of output voltage for rectification purposes, and detecting respiratory effort.
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated withmore » model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.« less
Stey, Anne M; Ko, Clifford Y; Hall, Bruce Lee; Louie, Rachel; Lawson, Elise H; Gibbons, Melinda M; Zingmond, David S; Russell, Marcia M
2014-08-01
Identifying iatrogenic injuries using existing data sources is important for improved transparency in the occurrence of intraoperative events. There is evidence that procedure codes are reliably recorded in claims data. The objective of this study was to assess whether concurrent splenic procedure codes in patients undergoing colectomy procedures are reliably coded in claims data as compared with clinical registry data. Patients who underwent colectomy procedures in the absence of neoplastic diagnosis codes were identified from American College of Surgeons (ACS) NSQIP data linked with Medicare inpatient claims data file (2005 to 2008). A κ statistic was used to assess coding concordance between ACS NSQIP and Medicare inpatient claims, with ACS NSQIP serving as the reference standard. A total of 11,367 colectomy patients were identified from 212 hospitals. There were 114 patients (1%) who had a concurrent splenic procedure code recorded in either ACS NSQIP or Medicare inpatient claims. There were 7 patients who had a splenic injury diagnosis code recorded in either data source. Agreement of splenic procedure codes between the data sources was substantial (κ statistic 0.72; 95% CI, 0.64-0.79). Medicare inpatient claims identified 81% of the splenic procedure codes recorded in ACS NSQIP, and 99% of the patients without a splenic procedure code. It is feasible to use Medicare claims data to identify splenic injuries occurring during colectomy procedures, as claims data have moderate sensitivity and excellent specificity for capturing concurrent splenic procedure codes compared with ACS NSQIP. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Zhang, Mingjing; Wen, Ming; Zhang, Zhi-Min; Lu, Hongmei; Liang, Yizeng; Zhan, Dejian
2015-03-01
Retention time shift is one of the most challenging problems during the preprocessing of massive chromatographic datasets. Here, an improved version of the moving window fast Fourier transform cross-correlation algorithm is presented to perform nonlinear and robust alignment of chromatograms by analyzing the shifts matrix generated by moving window procedure. The shifts matrix in retention time can be estimated by fast Fourier transform cross-correlation with a moving window procedure. The refined shift of each scan point can be obtained by calculating the mode of corresponding column of the shifts matrix. This version is simple, but more effective and robust than the previously published moving window fast Fourier transform cross-correlation method. It can handle nonlinear retention time shift robustly if proper window size has been selected. The window size is the only one parameter needed to adjust and optimize. The properties of the proposed method are investigated by comparison with the previous moving window fast Fourier transform cross-correlation and recursive alignment by fast Fourier transform using chromatographic datasets. The pattern recognition results of a gas chromatography mass spectrometry dataset of metabolic syndrome can be improved significantly after preprocessing by this method. Furthermore, the proposed method is available as an open source package at https://github.com/zmzhang/MWFFT2. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de; Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT; Brookes, Mike
In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In thismore » paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. - Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole sources. • Inverse solutions are based on longitudinal and transverse line integral measurements. • Transverse line integral measurements are used as a sparsity constraint. • Numerical procedure to approximate the line integrals is described in detail. • Patterns of the studied electric fields are correctly estimated.« less
Validation and extraction of molecular-geometry information from small-molecule databases.
Long, Fei; Nicholls, Robert A; Emsley, Paul; Graǽulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N
2017-02-01
A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.
Experimental investigation by laser ultrasonics for high speed train axle diagnostics.
Cavuto, A; Martarelli, M; Pandarese, G; Revel, G M; Tomasini, E P
2015-01-01
The present paper demonstrates the applicability of a laser-ultrasonic procedure to improve the performances of train axle ultrasonic inspection. The method exploits an air-coupled ultrasonic probe that detects the ultrasonic waves generated by a high-power pulsed laser. As a result, the measurement chain is completely non-contact, from generation to detection, this making it possible to considerably speed up inspection time and make the set-up more flexible. The main advantage of the technique developed is that it works in thermo-elastic regime and it therefore can be considered as a non-destructive method. The laser-ultrasonic procedure investigated has been applied for the inspection of a real high speed train axle provided by the Italian railway company (Trenitalia), on which typical fatigue defects have been expressly created according to standard specifications. A dedicated test bench has been developed so as to rotate the axle with the angle control and to speed up the inspection of the axle surface. The laser-ultrasonic procedure proposed can be automated and is potentially suitable for regular inspection of train axles. The main achievements of the activity described in this paper are: – the study of the effective applicability of laser-ultrasonics for the diagnostic of train hollow axles with variable sections by means of a numerical FE model, – the carrying out of an automated experiment on a real train axle, – the analysis of the sensitivity to experimental parameters, like laser source – receiving probe distance and receiving probe angular position, – the demonstration that the technique is suitable for the detection of surface defects purposely created on the train axle. Copyright © 2014 Elsevier B.V. All rights reserved.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and weighting factor correctly modelizing the data. A mis-selection of the calibration model will generate lower quality control (QC) accuracy, with an error up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QCs accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button
2010-01-01
Background There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. Methods The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS’ generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This ‘model-driven’ method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. Results In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist’s satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the ‘ExtractModel’ procedure. Conclusions The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org. PMID:21210979
Monitoring item and source information: evidence for a negative generation effect in source memory.
Jurica, P J; Shimamura, A P
1999-07-01
Item memory and source memory were assessed in a task that simulated a social conversation. Participants generated answers to questions or read statements presented by one of three sources (faces on a computer screen). Positive generation effects were observed for item memory. That is, participants remembered topics of conversation better if they were asked questions about the topics than if they simply read statements about topics. However, a negative generation effect occurred for source memory. That is, remembering the source of some information was disrupted if participants were required to answer questions pertaining to that information. These findings support the notion that item and source memory are mediated, as least in part, by different processes during encoding.
40 CFR 792.81 - Standard operating procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 33 2013-07-01 2013-07-01 false Standard operating procedures. 792.81... operating procedures. (a) A testing facility shall have standard operating procedures in writing, setting... data generated in the course of a study. All deviations in a study from standard operating procedures...
40 CFR 792.81 - Standard operating procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 33 2012-07-01 2012-07-01 false Standard operating procedures. 792.81... operating procedures. (a) A testing facility shall have standard operating procedures in writing, setting... data generated in the course of a study. All deviations in a study from standard operating procedures...
40 CFR 792.81 - Standard operating procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Standard operating procedures. 792.81... operating procedures. (a) A testing facility shall have standard operating procedures in writing, setting... data generated in the course of a study. All deviations in a study from standard operating procedures...
Financial implications of nonoperative fracture care at an academic trauma center.
Appleton, Paul; Chacko, Aron; Rodriguez, Edward K
2012-11-01
To determine if nonoperative fracture Current Procedural Technology codes generate a significant portion of annual revenues in an academic practice. Retrospective review of an orthopaedic trauma practice billings during fiscal year 2008. An urban level-1 trauma center. Outpatient clinic, and all consults, to the orthopaedic trauma service in the emergency room and hospital wards staffed by an attending traumatologist. An analysis was made of relative value units (RVUs) generated by operative and nonoperative care, separating the later into clinic, consults, and closed (nonoperative) fracture treatment. A total of 19,815 RVUs were generated by the trauma service during the 2008 fiscal year. Emergency department and ward consults generated 2176 (11%) of RVUs, whereas outpatient clinic generated an additional 1313 (7%) of RVUs. Nonoperative (closed) fracture care generated 2725 (14%) RVUs, whereas surgical procedures were responsible for the remaining 13,490 (68%) of RVUs. In terms of overall financial reimbursement, nonoperative management, consults, and office visits generated 31% of income for the trauma service. Although the largest financial contribution to a busy surgical practice is operative procedures, 1 must not overlook the important impact of nonoperative fracture care and consults. In our academic center, nearly one-third of all income was generated from nonsurgical procedures. In the current medical/financial climate, 1 must be diligent in optimizing the finances of trauma care to sustain an economically viable practice. Economic Level IV. See Instructions for Authors for a complete description of levels of evidence.
2008-02-13
procedures not ratified by the United States, commanders should evaluate and follow the multinational command’s doctrine and procedures, where applicable... human intelligence sources to be effective and implementation of appropriate force protection measures regardless of the operational...intelligence requirements needed to support the anticipated operation. Human intelligence often may provide the most useful source of information. Even
A novel system for commissioning brachytherapy applicators: example of a ring applicator
NASA Astrophysics Data System (ADS)
Fonseca, Gabriel P.; Van den Bosch, Michiel R.; Voncken, Robert; Podesta, Mark; Verhaegen, Frank
2017-11-01
A novel system was developed to improve commissioning and quality assurance of brachytherapy applicators used in high dose rate (HDR). It employs an imaging panel to create reference images and to measure dwell times and dwell positions. As an example: two ring applicators of the same model were evaluated. An applicator was placed on the surface of an imaging panel and a HDR 192Ir source was positioned in an imaging channel above the panel to generate an image of the applicator, using the gamma photons of the brachytherapy source. The applicator projection image was overlaid with the images acquired by capturing the gamma photons emitted by the source dwelling inside the applicator. We verified 0.1, 0.2, 0.5 and 1.0 cm interdwell distances for different offsets, applicator inclinations and transfer tube curvatures. The data analysis was performed using in-house developed software capable of processing the data in real time, defining catheters and creating movies recording the irradiation procedure. One applicator showed up to 0.3 cm difference from the expected position for a specific dwell position. The problem appeared intermittently. The standard deviations of the remaining dwell positions (40 measurements) were less than 0.05 cm. The second ring applicator had a similar reproducibility with absolute coordinate differences from expected values ranging from -0.10 up to 0.18 cm. The curvature of the transfer tube can lead to differences larger than 0.1 cm whilst the inclination of the applicator showed a negligible effect. The proposed method allows the verification of all steps of the irradiation, providing accurate information about dwell positions and dwell times. It allows the verification of small interdwell positions (⩽0.1 cm) and reduces measurement time. In addition, no additional radiation source is necessary since the HDR 192Ir source is used to generate an image of the applicator.
Automated system for generation of soil moisture products for agricultural drought assessment
NASA Astrophysics Data System (ADS)
Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Drought is a frequently occurring disaster affecting lives of millions of people across the world every year. Several parameters, indices and models are being used globally to forecast / early warning of drought and monitoring drought for its prevalence, persistence and severity. Since drought is a complex phenomenon, large number of parameter/index need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space based data, ground data and collateral data in short intervals of time, where there may be limitation in terms of processing power, availability of domain expertise, specialized models & tools. In this study, effort has been made to automate the derivation of one of the important parameter in the drought studies viz Soil Moisture. Soil water balance bucket model is in vogue to arrive at soil moisture products, which is widely popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into "Fish-Bone" architecture using COM technologies and Open Source libraries for best possible automation to fulfill the needs for a standard procedure of preparing input parameters and processing routines. The main aim of the system is to provide operational environment for generation of soil moisture products by facilitating users to concentrate on further enhancements and implementation of these parameters in related areas of research, without re-discovering the established models. Emphasis of the architecture is mainly based on available open source libraries for GIS and Raster IO operations for different file formats to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further the system is automated to the extent of user free operations if required with inbuilt chain processing for every day generation of products at specified intervals. Operational software has inbuilt capabilities to automatically download requisite input parameters like rainfall, Potential Evapotranspiration (PET) from respective servers. It can import file formats like .grd, .hdf, .img, generic binary etc, perform geometric correction and re-project the files to native projection system. The software takes into account the weather, crop and soil parameters to run the designed soil water balance model. The software also has additional features like time compositing of outputs to generate weekly, fortnightly profiles for further analysis. Other tools to generate "Area Favorable for Crop Sowing" using the daily soil moisture with highly customizable parameters interface has been provided. A whole India analysis would now take a mere 20 seconds for generation of soil moisture products which would normally take one hour per day using commercial software.
Self-Calibration of CMB Polarimeters
NASA Astrophysics Data System (ADS)
Keating, Brian
2013-01-01
Precision measurements of the polarization of the cosmic microwave background (CMB) radiation, especially experiments seeking to detect the odd-parity "B-modes", have far-reaching implications for cosmology. To detect the B-modes generated during inflation the flux response and polarization angle of these experiments must be calibrated to exquisite precision. While suitable flux calibration sources abound, polarization angle calibrators are deficient in many respects. Man-made polarized sources are often not located in the antenna's far-field, have spectral properties that are radically different from the CMB's, are cumbersome to implement and may be inherently unstable over the (long) duration these searches require to detect the faint signature of the inflationary epoch. Astrophysical sources suffer from time, frequency and spatial variability, are not visible from all CMB observatories, and none are understood with sufficient accuracy to calibrate future CMB polarimeters seeking to probe inflationary energy scales of ~1000 TeV. CMB TB and EB modes, expected to identically vanish in the standard cosmological model, can be used to calibrate CMB polarimeters. By enforcing the observed EB and TB power spectra to be consistent with zero, CMB polarimeters can be calibrated to levels not possible with man-made or astrophysical sources. All of this can be accomplished without any loss of observing time using a calibration source which is spectrally identical to the CMB B-modes. The calibration procedure outlined here can be used for any CMB polarimeter.
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014) is destined for the accurate estimation of the atmospheric event origin at local, regional and global scales by the seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate merged with a prior knowledge about celerity distribution. It requires at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is generally time-consuming procedure based on numerical integration. The computational scheme proposed simplifies the target function so that integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real-time without practical loss of accuracy. The procedure executed as PYTHON-FORTRAN code demonstrates high performance on a set of the model and real data.
40 CFR 600.111-93 - Test procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Test procedures. 600.111-93 Section... Emission Regulations for 1978 and Later Model Year Automobiles-Test Procedures § 600.111-93 Test procedures. (a) The test procedures to be followed for generation of the city fuel economy data are those...
40 CFR 600.111-80 - Test procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Test procedures. 600.111-80 Section... Emission Regulations for 1978 and Later Model Year Automobiles-Test Procedures § 600.111-80 Test procedures. (a) The test procedures to be followed for generation of the city fuel economy data are those...
Notions of "Generation" in Rhetorical Studies.
ERIC Educational Resources Information Center
Young, Richard
A study of the meanings of "generation," a popular term in current rhetorical jargon, reveals important developments in the art and theory of rhetoric. As now used, it refers without clear distinction to rule-governed, heuristic, and trial-and-error procedures. The rule-governed procedures of transformation grammar are being employed to…
Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.
Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B
2015-06-01
Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
Chou, Feng-Cheng; Huang, Shing-Hwa; Sytwu, Huey-Kang
2012-01-01
Islet transplantation is a promising therapy for patients with type 1 diabetes that can provide moment-to-moment metabolic control of glucose and allow them to achieve insulin independence. However, two major problems need to be overcome: (1) detrimental immune responses, including inflammation induced by the islet isolation/transplantation procedure, recurrence autoimmunity, and allorejection, can cause graft loss and (2) inadequate numbers of organ donors. Several gene therapy approaches and pharmaceutical treatments have been demonstrated to prolong the survival of pancreatic islet grafts in animal models; however, the clinical applications need to be investigated further. In addition, for an alternative source of pancreatic β-cell replacement therapy, the ex vivo generation of insulin-secreting cells from diverse origins of stem/progenitor cells has become an attractive option in regenerative medicine. This paper focuses on the genetic manipulation of islets during transplantation therapy and summarizes current strategies to obtain functional insulin-secreting cells from stem/progenitor cells. PMID:22690214
Mucke, M; Zhaunerchyk, V; Frasinski, L J; ...
2015-07-01
Few-photon ionization and relaxation processes in acetylene (C 2H 2) and ethane (C 2H 6) were investigated at the linac coherent light source x-ray free electron laser (FEL) at SLAC, Stanford using a highly efficient multi-particle correlation spectroscopy technique based on a magnetic bottle. The analysis method of covariance mapping has been applied and enhanced, allowing us to identify electron pairs associated with double core hole (DCH) production and competing multiple ionization processes including Auger decay sequences. The experimental technique and the analysis procedure are discussed in the light of earlier investigations of DCH studies carried out at the samemore » FEL and at third generation synchrotron radiation sources. In particular, we demonstrate the capability of the covariance mapping technique to disentangle the formation of molecular DCH states which is barely feasible with conventional electron spectroscopy methods.« less
Early post-tsunami disaster medical assistance to Banda Aceh: a personal account.
Garner, Alan A; Harrison, Ken
2006-02-01
The south Asian tsunami on 26 December, 2004, saw Australia deploy civilian teams to an international disaster in large numbers for the first time. The logistics of supporting such teams in both a self sustainability capacity and medical equipment had not previously been planned for or tested. For the first Australian team deployed to Banda Aceh, which arrived on the fourth day after the tsunami, equipment sourced from the New South Wales Fire Brigades Urban Search and Rescue (US&R) cache supplied all food, water, tents, generators and sleeping equipment. The medical equipment was largely sourced from the CareFlight US&R medical cache. There were significant deficits in surgical equipment as the medical cache had not been designed to provide a stand alone surgical capability. This resulted in the need for substantial improvisation by the surgical teams during the deployment. Despite this, the team performed nearly 140 major procedures in austere circumstances and significantly contributed to the early international response to this major humanitarian disaster.
Hybrid Energy: Combining Nuclear and Other Energy Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jong Suk; Garcia, Humberto E.
2015-02-01
The leading cause of global climate change is generally accepted to be growing emissions of greenhouse gas (GHG) as a result of increased use of fossil fuels [1]. Among various sources of GHG, the global electricity supply sector generates the largest share of GHG emissions (37.5% of total CO2 emissions) [2]. Since the current electricity production heavily relies on fossil fuels, it is envisioned that bolstering generation technologies based on non-emitting energy sources, i.e., nuclear and/or renewables could reduce future GHG emissions. Integrated nuclear-renewable hybrid energy systems HES) are very-low-emitting options, but they are capital-intensive technologies that should operate atmore » full capacities to maximize profits. Hence, electricity generators often pay the grid to take electricity when demand is low, resulting in negative profits for many hours per year. Instead of wasting an excess generation capacity at negative profit during off-peak hours when electricity prices are low, nuclear-renewable HES could result in positive profits by storing and/or utilizing surplus thermal and/or electrical energy to produce useful storable products to meet industrial and transportation demands. Consequently, it is necessary (1) to identify key integrated system options based on specific regions and (2) to propose optimal operating strategy to economically produce products on demand. In prioritizing region-specific HES options, available resources, markets, existing infrastructures, and etc. need to be researched to identify attractive system options. For example, the scarcity of water (market) and the availability of abundant solar radiation make solar energy (resource) a suitable option to mitigate the water deficit the Central-Southern region of the U.S. Thus, a solar energy-driven desalination process would be an attractive option to be integrated into a nuclear power plant to support the production of fresh water in this region. In this work, we introduce a particular HES option proposed for a specific U.S. region and briefly describe our modeling assumptions and procedure utilized for its analysis. Preliminary simulation results are also included addressing several technical characteristics of the proposed nuclear-renewable HES.« less
Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,
2016-09-15
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performedmore » at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.« less
NASA Technical Reports Server (NTRS)
Tan, Choon-Sooi; Suder, Kenneth (Technical Monitor)
2003-01-01
A framework for an effective computational methodology for characterizing the stability and the impact of distortion in high-speed multi-stage compressor is being developed. The methodology consists of using a few isolated-blade row Navier-Stokes solutions for each blade row to construct a body force database. The purpose of the body force database is to replace each blade row in a multi-stage compressor by a body force distribution to produce same pressure rise and flow turning. To do this, each body force database is generated in such a way that it can respond to the changes in local flow conditions. Once the database is generated, no hrther Navier-Stokes computations are necessary. The process is repeated for every blade row in the multi-stage compressor. The body forces are then embedded as source terms in an Euler solver. The method is developed to have the capability to compute the performance in a flow that has radial as well as circumferential non-uniformity with a length scale larger than a blade pitch; thus it can potentially be used to characterize the stability of a compressor under design. It is these two latter features as well as the accompanying procedure to obtain the body force representation that distinguish the present methodology from the streamline curvature method. The overall computational procedures have been developed. A dimensional analysis was carried out to determine the local flow conditions for parameterizing the magnitudes of the local body force representation of blade rows. An Euler solver was modified to embed the body forces as source terms. The results from the dimensional analysis show that the body forces can be parameterized in terms of the two relative flow angles, the relative Mach number, and the Reynolds number. For flow in a high-speed transonic blade row, they can be parameterized in terms of the local relative Mach number alone.
Code of Federal Regulations, 2013 CFR
2013-07-01
... special procedures for declassification of records pertaining to intelligence activities and intelligence... procedures for declassification of records pertaining to intelligence activities and intelligence sources or... Intelligence is responsible for issuing special procedures for declassification of classified records...
System for recovery of daughter isotopes from a source material
Tranter, Troy J [Idaho Falls, ID; Todd, Terry A [Aberdeen, ID; Lewis, Leroy C [Idaho Falls, ID; Henscheid, Joseph P [Idaho Falls, ID
2009-08-04
A method of separating isotopes from a mixture containing at least two isotopes in a solution is disclosed. A first isotope is precipitated and is collected from the solution. A daughter isotope is generated and collected from the first isotope. The invention includes a method of producing an actinium-225/bismuth-213 product from a material containing thorium-229 and thorium-232. A solution is formed containing nitric acid and the material containing thorium-229 and thorium-232, and iodate is added to form a thorium iodate precipitate. A supernatant is separated from the thorium iodate precipitate and a second volume of nitric acid is added to the thorium iodate precipitate. The thorium iodate precipitate is stored and a decay product comprising actinium-225 and bismuth-213 is generated in the second volume of nitric acid, which is then separated from the thorium iodate precipitate, filtered, and treated using at least one chromatographic procedure. A system for producing an actinium-225/bismuth-213 product is also disclosed.
González-Burguera, Imanol; Ricobaraza, Ana; Aretxabala, Xabier; Barrondo, Sergio; García del Caño, Gontzal; López de Jesús, Maider; Sallés, Joan
2016-03-01
The human NTERA2/D1 (NT2) cells generate postmitotic neurons (NT2N cells) upon retinoic acid (RA) treatment and are functionally integrated in the host tissue following grafting into the rodent and human brain, thus representing a promising source for neuronal replacement therapy. Yet the major limitations of this model are the lengthy differentiation procedure and its low efficiency, although recent studies suggest that the differentiation process can be shortened to less than 1 week using nucleoside analogues. To explore whether short-term exposure of NT2 cells to the nucleoside analogue cytosine β-d-arabinofuranoside (AraC) could be a suitable method to efficiently generate mature neurons, we conducted a neurochemical and morphometric characterization of AraC-differentiated NT2N (AraC/NT2N) neurons and improved the differentiation efficiency by modifying the cell culture schedule. Moreover, we analyzed the neurotransmitter phenotypes of AraC/NT2N neurons. Cultures obtained by treatment with AraC were highly enriched in postmitotic neurons and essentially composed of dual glutamatergic/cholinergic neurons, which contrasts with the preferential GABAergic phenotype that we found after RA differentiation. Taken together, our results further reinforce the notion NT2 cells are a versatile source of neuronal phenotypes and provide a new encouraging platform for studying mechanisms of neuronal differentiation and for exploring neuronal replacement strategies. Copyright © 2016 University of Texas at Austin Dell Medical School. Published by Elsevier B.V. All rights reserved.
A balloon system for profiling smoke plumes from forest fires
Paul W. Ryan; Charles D. Tangren; Charles K. McMahon
1979-01-01
This paper is directed to those interested in techniques for measuring emission rates and emission factors for forest fires and other open combustion sources. A source-sampling procedure that involved the use of a vertical array of lightweight, battery-operated instruments suspended from a helium-filled aerodynamic balloon is described. In this procedure, plume...
ERIC Educational Resources Information Center
Lalonde, Kaylah; Holt, Rachael Frush
2014-01-01
Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2- year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…
46 CFR 111.10-4 - Power requirements, generating sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 111.10-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-4 Power requirements, generating sources. (a) The aggregate capacity of the electric ship's service generating sources required in § 111.10-3 must...
46 CFR 111.10-4 - Power requirements, generating sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 111.10-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-4 Power requirements, generating sources. (a) The aggregate capacity of the electric ship's service generating sources required in § 111.10-3 must...
46 CFR 111.10-4 - Power requirements, generating sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 111.10-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-4 Power requirements, generating sources. (a) The aggregate capacity of the electric ship's service generating sources required in § 111.10-3 must...
Multispectral Remote Sensing of the Earth and Environment Using KHawk Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Gowravaram, Saket
This thesis focuses on the development and testing of the KHawk multispectral remote sensing system for environmental and agricultural applications. KHawk Unmanned Aircraft System (UAS), a small and low-cost remote sensing platform, is used as the test bed for aerial video acquisition. An efficient image geotagging and photogrammetric procedure for aerial map generation is described, followed by a comprehensive error analysis on the generated maps. The developed procedure is also used for generation of multispectral aerial maps including red, near infrared (NIR) and colored infrared (CIR) maps. A robust Normalized Difference Vegetation index (NDVI) calibration procedure is proposed and validated by ground tests and KHawk flight test. Finally, the generated aerial maps and their corresponding Digital Elevation Models (DEMs) are used for typical application scenarios including prescribed fire monitoring, initial fire line estimation, and tree health monitoring.
Airborne gamma-ray spectra processing: Extracting photopeaks.
Druker, Eugene
2018-07-01
The acquisition of information from the airborne gamma-ray spectra is based on the ability to evaluate photopeak areas in regular spectra from natural and other sources. In airborne gamma-ray spectrometry, extraction of photopeaks of radionuclides from regular one-second spectra is a complex problem. In the region of higher energies, difficulties are associated with low signal level, i.e. low count rates, whereas at lower energies difficulties are associated with high noises due to a high signal level. In this article, a new procedure is proposed for processing the measured spectra up to and including the extraction of evident photopeaks. The procedure consists of reducing the noise in the energy channels along the flight lines, transforming the spectra into the spectra of equal resolution, removing the background from each spectrum, sharpening the details, and transforming the spectra back to the original energy scale. The resulting spectra are better suited for examining and using the photopeaks. No assumptions are required regarding the number, locations, and magnitudes of photopeaks. The procedure does not generate negative photopeaks. The resolution of the spectrometer is used for the purpose. The proposed methodology, apparently, will contribute also to study environmental problems, soil characterization, and other near-surface geophysical methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Buda, M; Wicks, S; Charton, E
2016-01-01
For more than twenty years, the European Pharmacopoeia (Ph. Eur.) monographs for biotherapeutic proteins have been elaborated using the multisource approach (Procedure 1), which has led to robust quality standards for many of the first-generation biotherapeutics. In 2008, the Ph. Eur. opened up the way towards an alternative mechanism for the elaboration of monographs (Procedure 4-BIO pilot phase), which is applied to substances still under patent protection, based on a close collaboration with the Innovator company, to ensure a harmonised global standard and strengthen the quality of the upcoming products. This article describes the lessons learned during the P4-BIO pilot phase and addresses the current thinking on monograph elaboration in the field of biotherapeutics. Case studies are described to illustrate the standardisation challenges associated with the complexity of biotherapeutics and of analytical procedures, as well as the approaches that help ensure expectations are met when setting monograph specifications and allow for compatibility with the development of biosimilars. Emphasis is put on monograph flexibility, notably by including tests that measure process-dependent microheterogeneity (e.g. glycosylation) in the Production section of the monograph. The European Pharmacopoeia successfully concluded the pilot phase of the P4-BIO during its 156 th session on 22-23 November 2016.
Overview of Hydrometeorologic Forecasting Procedures at BC Hydro
NASA Astrophysics Data System (ADS)
McCollor, D.
2004-12-01
Energy utility companies must balance production from limited sources with increasing demand from industrial, business, and residential consumers. The utility planning process requires a balanced, efficient, and effective distribution of energy from source to consumer. Therefore utility planners must consider the impact of weather on energy production and consumption. Hydro-electric companies should be particularly tuned to weather because their source of energy is water, and water supply depends on precipitation. BC Hydro operates as the largest hydro-electric company in western Canada, managing over 30 reservoirs within the province of British Columbia, and generating electricity for 1.6 million people. BC Hydro relies on weather forecasts of watershed precipitation and temperature to drive hydrologic reservoir inflow models and of urban temperatures to meet energy demand requirements. Operations and planning specialists in the company rely on current, value-added weather forecasts for extreme high-inflow events, daily reservoir operations planning, and long-term water resource management. Weather plays a dominant role for BC Hydro financial planners in terms of sensitive economic responses. For example, a two percent change in hydropower generation, due in large part to annual precipitation patterns, results in an annual net change of \\50 million in earnings. A five percent change in temperature produces a \\5 million change in yearly earnings. On a daily basis, significant precipitation events or temperature extremes involve potential profit/loss decisions in the tens of thousands of dollars worth of power generation. These factors are in addition to environmental and societal costs that must be considered equally as part of a triple bottom line reporting structure. BC Hydro water resource managers require improved meteorological information from recent advancements in numerical weather prediction. At BC Hydro, methods of providing meteorological forecast data are changing as new downscaling and ensemble techniques evolve to improve environmental information supplied to water managers.
Atomic physics research with second and third generation synchrotron light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B.M.
1990-10-01
This contribution to these proceedings is intended to provide an introduction and overview for other contributions on atomic (and related) physics research at existing and planned synchrotron light sources. The emphasis will be on research accomplishments and future opportunities, but a comparison will be given of operating characteristics for first, second, and third generation machines. First generation light sources were built to do research with the primary electron and positron beams, rather than with the synchrotron radiation itself. Second generation machines were specifically designed to be dedicated synchrotron-radiation facilities, with an emphasis on the use of bending-magnet radiation. The newmore » third generation light sources are being designed to optimize radiation from insertion devices, such as undulators and wigglers. Each generation of synchrotron light source offers useful capabilities for forefront research in atomic physics and many other disciplines. 27 refs., 1 fig., 3 tabs.« less
NASA Astrophysics Data System (ADS)
Shinoj, V. K.; Murukeshan, V. M.; Hong, Jesmond; Baskaran, M.; Aung, Tin
2015-07-01
Noninvasive medical imaging techniques have generated great interest and high potential in the research and development of ocular imaging and follow up procedures. It is well known that angle closure glaucoma is one of the major ocular diseases/ conditions that causes blindness. The identification and treatment of this disease are related primarily to angle assessment techniques. In this paper, we illustrate a probe-based imaging approach to obtain the images of the angle region in eye. The proposed probe consists of a micro CCD camera and LED/NIR laser light sources and they are configured at the distal end to enable imaging of iridocorneal region inside eye. With this proposed dualmodal probe, imaging is performed in light (white visible LED ON) and dark (NIR laser light source alone) conditions and the angle region is noticeable in both cases. The imaging using NIR sources have major significance in anterior chamber imaging since it evades pupil constriction due to the bright light and thereby the artificial altering of anterior chamber angle. The proposed methodology and developed scheme are expected to find potential application in glaucoma disease detection and diagnosis.
Cascaded H-bridge multilevel inverter for renewable energy generation
NASA Astrophysics Data System (ADS)
Pandey, Ravikant; Nath Tripathi, Ravi; Hanamoto, Tsuyoshi
2016-04-01
In this paper cascaded H-bridge multilevel inverter (CHBMLI) has been investigated for the application of renewable energy generation. Energy sources like solar, wind, hydro, biomass or combination of these can be manipulated to obtain alternative sources for renewable energy generation. These renewable energy sources have different electrical characteristics like DC or AC level so it is challenging to use generated power by connecting to grid or load directly. The renewable energy source require specific power electronics converter as an interface for conditioning generated power .The multilevel inverter can be utilized for renewable energy sources in two different modes, the power generation mode (stand-alone mode), and compensator mode (statcom). The performance of the multilevel inverter has been compared with two level inverter. In power generation mode CHBMLI supplies the active and reactive power required by the different loads. For operation in compensator mode the indirect current control based on synchronous reference frame theory (SRFT) ensures the grid operating in unity power factor and compensate harmonics and reactive power.
Controlling flows in microchannels with patterned surface charge and topography.
Stroock, Abraham D; Whitesides, George M
2003-08-01
This Account reviews two procedures for controlling the flow of fluids in microchannels. The first procedure involves patterning the density of charge on the inner surfaces of a channel. These patterns generate recirculating electroosmotic flows in the presence of a steady electric field. The second procedure involves patterning topography on an inner surface of a channel. These patterns generate recirculation in the cross-section of steady, pressure-driven flows. This Account summarizes applications of these flow to mixing and to controlling dispersion (band broadening).
Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO
NASA Technical Reports Server (NTRS)
Kulkarni, Sameer; Beach, Timothy A.
2017-01-01
Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.
Acoustics of laminar boundary layers breakdown
NASA Technical Reports Server (NTRS)
Wang, Meng
1994-01-01
Boundary layer flow transition has long been suggested as a potential noise source in both marine (sonar-dome self noise) and aeronautical (aircraft cabin noise) applications, owing to the highly transient nature of process. The design of effective noise control strategies relies upon a clear understanding of the source mechanisms associated with the unsteady flow dynamics during transition. Due to formidable mathematical difficulties, theoretical predictions either are limited to early linear and weakly nonlinear stages of transition, or employ acoustic analogy theories based on approximate source field data, often in the form of empirical correlation. In the present work, an approach which combines direct numerical simulation of the source field with the Lighthill acoustic analogy is utilized. This approach takes advantage of the recent advancement in computational capabilities to obtain detailed information about the flow-induced acoustic sources. The transitional boundary layer flow is computed by solving the incompressible Navier-Stokes equations without model assumptions, thus allowing a direct evaluation of the pseudosound as well as source functions, including the Lighthill stress tensor and the wall shear stress. The latter are used for calculating the radiated pressure field based on the Curle-Powell solution of the Lighthill equation. This procedure allows a quantitative assessment of noise source mechanisms and the associated radiation characteristics during transition from primary instability up to the laminar breakdown stage. In particular, one is interested in comparing the roles played by the fluctuating volume Reynolds stress and the wall-shear-stresses, and in identifying specific flow processes and structures that are effective noise generators.
48 CFR 208.7403 - Acquisition procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SYSTEM, DEPARTMENT OF DEFENSE ACQUISITION PLANNING REQUIRED SOURCES OF SUPPLIES AND SERVICES Enterprise Software Agreements 208.7403 Acquisition procedures. Follow the procedures at PGI 208.7403 when acquiring commercial software and related services. [71 FR 39005, July 11, 2006] ...
NASA Astrophysics Data System (ADS)
Parisot, Charles R.; Channin, David S.; Avrin, David E.; Lindop, Christopher
2001-08-01
In a simple, typical radiology workflow process, an order generates a single procedure, which in turn generates a single data set, from which, one radiology report is generated. There are, however, occasions when a single order consists of more than one procedure each with a separate report, yet the procedures are accomplished by one physical acquisition of data. The prototypical example of this is the request for computed tomographic evaluation of the chest, abdomen and pelvis. The study is accomplished, with modern day scanners, by a single helical acquisition, yet there are typically three codable and billable procedures involved, and these may be reported independently either for administrative or academic reasons. This grouping of procedures remained up to now a challenge to automate across integrated modalities, PACS and RIS. This paper discusses a number of other practical cases where this situation occurs and reviews the capabilities of the Presentation of Grouped Procedures IHE Integration Profile in solving this problem. The DICOM services used are evaluated as well as the strengths and weaknesses of this IHE Integration Profile. The implementation experience gained on both a CT and an MR for the IHE Demonstration at RSNA 2000 and HIMSS 2001 is also reviewed. In conclusion, the resulting clinical and operational benefits are discussed.
46 CFR 111.10-3 - Two generating sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Two generating sources. 111.10-3 Section 111.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... drilling unit must have at least two electric generating sources. [CGD 94-108, 61 FR 28276, June 4, 1996] ...
46 CFR 111.10-3 - Two generating sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Two generating sources. 111.10-3 Section 111.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... drilling unit must have at least two electric generating sources. [CGD 94-108, 61 FR 28276, June 4, 1996] ...
46 CFR 111.10-3 - Two generating sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Two generating sources. 111.10-3 Section 111.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... drilling unit must have at least two electric generating sources. [CGD 94-108, 61 FR 28276, June 4, 1996] ...
46 CFR 111.10-3 - Two generating sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Two generating sources. 111.10-3 Section 111.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-3 Two generating sources. In addition to the emergency power...
46 CFR 111.10-3 - Two generating sources.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Two generating sources. 111.10-3 Section 111.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-3 Two generating sources. In addition to the emergency power...
46 CFR 112.05-5 - Emergency power source.
Code of Federal Regulations, 2011 CFR
2011-10-01
... generator must be either a diesel engine or a gas turbine. [CGD 74-125A, 47 FR 15267, Apr. 8, 1982, as... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...
46 CFR 112.05-5 - Emergency power source.
Code of Federal Regulations, 2010 CFR
2010-10-01
... generator must be either a diesel engine or a gas turbine. [CGD 74-125A, 47 FR 15267, Apr. 8, 1982, as... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...
A computerized procedure for teaching the relationship between graphic symbols and their referents.
Isaacson, Mick; Lloyd, Lyle L
2013-01-01
Many individuals with little or no functional speech communicate through graphic symbols. Communication is enhanced when the relationship between symbols and their referents are learned to such a degree that retrieval is effortless, resulting in fluent communication. Developing fluency is a time consuming endeavor for special educators and speech-language pathologists (SLPs). It would be beneficial for these professionals to have an automated procedure based on the most efficacious method for teaching the relationship between symbols and referent. Hence, this study investigated whether a procedure based on the generation effect would promote learning the association between symbols and their referents. Results show that referent generation produces the best long-term retention of this relationship. These findings provide evidence that software based on referent generation would provide special educators and SLPs with an efficacious automated procedure, requiring minimal direct supervision, to facilitate symbol/referent learning and the development of communicative fluency.
Trip generation characteristics of special generators
DOT National Transportation Integrated Search
2010-03-01
Special generators are introduced in the sequential four-step modeling procedure to represent certain types of facilities whose trip generation characteristics are not fully captured by the standard trip generation module. They are also used in the t...
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system, activating a method for performing a sequential probability ratio test if the data source includes a single data (sensor) source, activating a second method for performing a regression sequential possibility ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors and utilizing at least one of the first, second and third methods to accumulate sensor signals and determining the operating state of the system.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system, activating a method for performing a sequential probability ratio test if the data source includes a single data (sensor) source, activating a second method for performing a regression sequential possibility ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors and utilizing at least one of the first, second and third methods to accumulate sensor signals and determining the operating state of the system.
History of surgery for atrial fibrillation.
Edgerton, Zachary J; Edgerton, James R
2009-12-01
There is a rich history of surgery for atrial fibrillation. Initial procedures were aimed at controlling the ventricular response rate. Later procedures were directed at converting atrial fibrillation to normal sinus rhythm. These culminated in the Cox Maze III procedure. While highly effective, the complexity and morbidity of the cut and sew Maze III limited its adoption. Enabling technology has developed alternate energy sources designed to produce a transmural atrial scar without cutting and sewing. Termed the Maze IV, this lessened the morbidity of the procedure and widened the applicability. Further advances in minimal access techniques are now being developed to allow totally thorascopic placement of all the left atrial lesions on the full, beating heart, using alternate energy sources.
D-D neutron generator development at LBNL.
Reijonen, J; Gicquel, F; Hahto, S K; King, M; Lou, T-P; Leung, K-N
2005-01-01
The plasma and ion source technology group in Lawrence Berkeley National Laboratory is developing advanced, next generation D-D neutron generators. There are three distinctive developments, which are discussed in this presentation, namely, multi-stage, accelerator-based axial neutron generator, high-output co-axial neutron generator and point source neutron generator. These generators employ RF-induction discharge to produce deuterium ions. The distinctive feature of RF-discharge is its capability to generate high atomic hydrogen species, high current densities and stable and long-life operation. The axial neutron generator is designed for applications that require fast pulsing together with medium to high D-D neutron output. The co-axial neutron generator is aimed for high neutron output with cw or pulsed operation, using either the D-D or D-T fusion reaction. The point source neutron generator is a new concept, utilizing a toroidal-shaped plasma generator. The beam is extracted from multiple apertures and focus to the target tube, which is located at the middle of the generator. This will generate a point source of D-D, T-T or D-T neutrons with high output flux. The latest development together with measured data will be discussed in this article.
ASD Closure in Structural Heart Disease.
Wiktor, Dominik M; Carroll, John D
2018-04-17
While the safety and efficacy of percutaneous ASD closure has been established, new data have recently emerged regarding the negative impact of residual iatrogenic ASD (iASD) following left heart structural interventions. Additionally, new devices with potential advantages have recently been studied. We will review here the potential indications for closure of iASD along with new generation closure devices and potential late complications requiring long-term follow-up. With the expansion of left-heart structural interventions and large-bore transseptal access, there has been growing experience gained with management of residual iASD. Some recently published reports have implicated residual iASD after these procedures as a potential source of diminished clinical outcomes and mortality. Additionally, recent trials investigating new generation closure devices as well as expanding knowledge regarding late complications of percutaneous ASD closure have been published. While percutaneous ASD closure is no longer a novel approach to managing septal defects, there are several contemporary issues related to residual iASD following large-bore transseptal access and new generation devices which serve as an impetus for this review. Ongoing attention to potential late complications and decreasing their incidence with ongoing study is clearly needed.
Towards consistent generation of pancreatic lineage progenitors from human pluripotent stem cells.
Rostovskaya, Maria; Bredenkamp, Nicholas; Smith, Austin
2015-10-19
Human pluripotent stem cells can in principle be used as a source of any differentiated cell type for disease modelling, drug screening, toxicology testing or cell replacement therapy. Type I diabetes is considered a major target for stem cell applications due to the shortage of primary human beta cells. Several protocols have been reported for generating pancreatic progenitors by in vitro differentiation of human pluripotent stem cells. Here we first assessed one of these protocols on a panel of pluripotent stem cell lines for capacity to engender glucose sensitive insulin-producing cells after engraftment in immunocompromised mice. We observed variable outcomes with only one cell line showing a low level of glucose response. We, therefore, undertook a systematic comparison of different methods for inducing definitive endoderm and subsequently pancreatic differentiation. Of several protocols tested, we identified a combined approach that robustly generated pancreatic progenitors in vitro from both embryo-derived and induced pluripotent stem cells. These findings suggest that, although there are intrinsic differences in lineage specification propensity between pluripotent stem cell lines, optimal differentiation procedures may consistently direct a substantial fraction of cells into pancreatic specification. © 2015 The Authors.
Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
Operator procedure verification with a rapidly reconfigurable simulator
NASA Technical Reports Server (NTRS)
Iwasaki, Yumi; Engelmore, Robert; Fehr, Gary; Fikes, Richard
1994-01-01
Generating and testing procedures for controlling spacecraft subsystems composed of electro-mechanical and computationally realized elements has become a very difficult task. Before a spacecraft can be flown, mission controllers must envision a great variety of situations the flight crew may encounter during a mission and carefully construct procedures for operating the spacecraft in each possible situation. If, despite extensive pre-compilation of control procedures, an unforeseen situation arises during a mission, the mission controller must generate a new procedure for the flight crew in a limited amount of time. In such situations, the mission controller cannot systematically consider and test alternative procedures against models of the system being controlled, because the available simulator is too large and complex to reconfigure, run, and analyze quickly. A rapidly reconfigurable simulation environment that can execute a control procedure and show its effects on system behavior would greatly facilitate generation and testing of control procedures both before and during a mission. The How Things Work project at Stanford University has developed a system called DME (Device Modeling Environment) for modeling and simulating the behavior of electromechanical devices. DME was designed to facilitate model formulation and behavior simulation of device behavior including both continuous and discrete phenomena. We are currently extending DME for use in testing operator procedures, and we have built a knowledge base for modeling the Reaction Control System (RCS) of the space shuttle as a testbed. We believe that DME can facilitate design of operator procedures by providing mission controllers with a simulation environment that meets all these requirements.
Akhoun, Idrick; McKay, Colette; El-Deredy, Wael
2015-01-15
Independent-components-analysis (ICA) successfully separated electrically-evoked compound action potentials (ECAPs) from the stimulation artefact and noise (ECAP-ICA, Akhoun et al., 2013). This paper shows how to automate the ECAP-ICA artefact cancellation process. Raw-ECAPs without artefact rejection were consecutively recorded for each stimulation condition from at least 8 intra-cochlear electrodes. Firstly, amplifier-saturated recordings were discarded, and the data from different stimulus conditions (different current-levels) were concatenated temporally. The key aspect of the automation procedure was the sequential deductive source categorisation after ICA was applied with a restriction to 4 sources. The stereotypical aspect of the 4 sources enables their automatic classification as two artefact components, a noise and the sought ECAP based on theoretical and empirical considerations. The automatic procedure was tested using 8 cochlear implant (CI) users and one to four stimulus electrodes. The artefact and noise sources were successively identified and discarded, leaving the ECAP as the remaining source. The automated ECAP-ICA procedure successfully extracted the correct ECAPs compared to standard clinical forward masking paradigm in 22 out of 26 cases. ECAP-ICA does not require extracting the ECAP from a combination of distinct buffers as it is the case with regular methods. It is an alternative that does not have the possible bias of traditional artefact rejections such as alternate-polarity or forward-masking paradigms. The ECAP-ICA procedure bears clinical relevance, for example as the artefact rejection sub-module of automated ECAP-threshold detection techniques, which are common features of CI clinical fitting software. Copyright © 2014. Published by Elsevier B.V.
Simulation of load-sharing in standalone distributed generation system
NASA Astrophysics Data System (ADS)
Ajewole, Titus O.; Craven, Robert P. M.; Kayode, Olakunle; Babalola, Olufisayo S.
2018-05-01
This paper presents a study on load-sharing among the component generating units of a multi-source electric microgrid that is operated as an autonomous ac supply-mode system. Emerging trend in power system development permits deployment of microgrids for standalone or stand-by applications, thereby requiring active- and reactive power sharing among the discrete generating units contained in hybrid-source microgrids. In this study, therefore, a laboratory-scale model of a microgrid energized with three renewable energy-based sources is employed as a simulation platform to investigate power sharing among the power-generating units. Each source is represented by a source emulator that captures the real operational characteristics of the mimicked generating unit and, with implementation of real-life weather data and load profiles on the model; the sharing of the load among the generating units is investigated. There is a proportionate generation of power by the three source emulators, with their frequencies perfectly synchronized at the point of common coupling as a result of balance flow of power among them. This hybrid topology of renewable energy-based microgrid could therefore be seamlessly adapted into national energy mix by the indigenous electric utility providers in Nigeria.
Mixed Element Type Unstructured Grid Generation for Viscous Flow Applications
NASA Technical Reports Server (NTRS)
Marcum, David L.; Gaither, J. Adam
2000-01-01
A procedure is presented for efficient generation of high-quality unstructured grids suitable for CFD simulation of high Reynolds number viscous flow fields. Layers of anisotropic elements are generated by advancing along prescribed normals from solid boundaries. The points are generated such that either pentahedral or tetrahedral elements with an implied connectivity can be be directly recovered. As points are generated they are temporarily attached to a volume triangulation of the boundary points. This triangulation allows efficient local search algorithms to be used when checking merging layers, The existing advancing-front/local-reconnection procedure is used to generate isotropic elements outside of the anisotropic region. Results are presented for a variety of applications. The results demonstrate that high-quality anisotropic unstructured grids can be efficiently and consistently generated for complex configurations.
ERIC Educational Resources Information Center
Dauenhauer, Brian D.; Keating, Xiaofen D.; Lambdin, Dolly
2018-01-01
Purpose: This study aimed to conduct an in-depth investigation into physical education data sources and collection procedures in a district that was awarded a Physical Education Program (PEP) grant. Method: A qualitative, multi-site case study was conducted in which a single school district was the overarching case and eight schools served as…
Code of Federal Regulations, 2011 CFR
2011-04-01
... geothermal energy, the energy source must be an inexhaustible energy supply. Accordingly, wood and... approved list of energy-conserving components or renewable energy sources. 1.23-6 Section 1.23-6 Internal... During A Taxable Year § 1.23-6 Procedure and criteria for additions to the approved list of energy...
Code of Federal Regulations, 2010 CFR
2010-04-01
... geothermal energy, the energy source must be an inexhaustible energy supply. Accordingly, wood and... approved list of energy-conserving components or renewable energy sources. 1.23-6 Section 1.23-6 Internal... During A Taxable Year § 1.23-6 Procedure and criteria for additions to the approved list of energy...
Traceable measurements of the electrical parameters of solid-state lighting products
NASA Astrophysics Data System (ADS)
Zhao, D.; Rietveld, G.; Braun, J.-P.; Overney, F.; Lippert, T.; Christensen, A.
2016-12-01
In order to perform traceable measurements of the electrical parameters of solid-state lighting (SSL) products, it is necessary to technically adequately define the measurement procedures and to identify the relevant uncertainty sources. The present published written standard for SSL products specifies test conditions, but it lacks an explanation of how adequate these test conditions are. More specifically, both an identification of uncertainty sources and a quantitative uncertainty analysis are absent. This paper fills the related gap in the present written standard. New uncertainty sources with respect to conventional lighting sources are determined and their effects are quantified. It shows that for power measurements, the main uncertainty sources are temperature deviation, power supply voltage distortion, and instability of the SSL product. For current RMS measurements, the influence of bandwidth, shunt resistor, power supply source impedance and ac frequency flatness are significant as well. The measurement uncertainty depends not only on the test equipment but is also a function of the characteristics of the device under test (DUT), for example, current harmonics spectrum and input impedance. Therefore, an online calculation tool is provided to help non-electrical experts. Following our procedures, unrealistic uncertainty estimations, unnecessary procedures and expensive equipment can be prevented.
Graphic representations: keys to disclose the codex of nature
NASA Astrophysics Data System (ADS)
Caramelo, Liliana; Gonçalves, Norberto; Pereira, Mário; Soares, Armando; Naia, Marco
2010-05-01
Undergraduate and university level students present some difficulties to understand and interpret many of the geosciences concepts, in particular those represented by vector and scalar fields. Our experience reveals that these difficulties are associated with a lack in the development of their abstraction and mental picturing abilities. On the other hand, these students have easy access to communication and information technology software which can be used to built graphic representations of experimental data, time series and vector and scalar fields. This transformation allows an easiest extraction, interpretation and summary of the most important characteristics in the data. There is already commercial and open source software with graphical tools that can be used for this purpose but commercial software packs with user friendly interfaces but their price is not negligible. Open source software can circumvent this difficulty even if, in general, their graphical user interface hasn't reached the desirable level of the commercial ones. We will show a simple procedure to generate an image from the data that characterizes the generation of the suitable images illustrating the key concepts in study, using a freeware code, exactly as it is presented to the students in our open teaching sessions to the general student community. Our experience demonstrated that the students are very enthusiastic using this approach. Furthermore, the use of this software can easily be adopted by teachers and students of secondary schools as part of curricular activities.
Femtosecond noncollinear SFG dynamics in autocorrelator setup at low level of photons
NASA Astrophysics Data System (ADS)
Tenishev, Vladimir P.; Persson, A.; Larsson, J.
2004-06-01
We report here the characteristics of noncollinear sum frequency generation in nonlinear KDP crystals by ultrashort (80 fsec) IR pulses irradiated by the intense Ti:Sapphire laser and their behavior in single shot auto-crosscorrelator (ACC) configuration. In particular we study the case where one of the beams is very weak. Our aim is to develop a procedure to provide delay time signal between light pulses for time resolved pump probe experiments based on the extraction of the phase-matched SHG spatial distribution by means of pulse shape analysis technique. We intend to apply these results to synchronize a weak short-pulse source and an intense Ti:Sapphire laser and to measure the pulse time jitter between them.
Ion mobility analysis of lipoproteins
Benner, W Henry [Danville, CA; Krauss, Ronald M [Berkeley, CA; Blanche, Patricia J [Berkeley, CA
2007-08-21
A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.
Aerosol preparation of intact lipoproteins
Benner, W Henry [Danville, CA; Krauss, Ronald M [Berkeley, CA; Blanche, Patricia J [Berkeley, CA
2012-01-17
A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.
Enhanced orbit determination filter: Inclusion of ground system errors as filter parameters
NASA Technical Reports Server (NTRS)
Masters, W. C.; Scheeres, D. J.; Thurman, S. W.
1994-01-01
The theoretical aspects of an orbit determination filter that incorporates ground-system error sources as model parameters for use in interplanetary navigation are presented in this article. This filter, which is derived from sequential filtering theory, allows a systematic treatment of errors in calibrations of transmission media, station locations, and earth orientation models associated with ground-based radio metric data, in addition to the modeling of the spacecraft dynamics. The discussion includes a mathematical description of the filter and an analytical comparison of its characteristics with more traditional filtering techniques used in this application. The analysis in this article shows that this filter has the potential to generate navigation products of substantially greater accuracy than more traditional filtering procedures.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-01-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
Doubleday, Alison F; Wille, Sarah J
2014-01-01
Video and photography are often used for delivering content within the anatomical sciences. However, instructors typically produce these resources to provide instructional or procedural information. Although the benefits of learner-generated content have been explored within educational research, virtually no studies have investigated the use of learner-generated video and photograph content within anatomy dissection laboratories. This study outlines an activity involving learner-generated video diaries and learner-generated photograph assignments produced during anatomy laboratory sessions. The learner-generated photographs and videos provided instructors with a means of formative assessment and allowed instructors to identify evidence of collaborative behavior in the laboratory. Student questionnaires (n = 21) and interviews (n = 5), as well as in-class observations, were conducted to examine student perspectives on the laboratory activities. The quantitative and qualitative data were examined using the framework of activity theory to identify contradictions between student expectations of, and engagement with, the activity and the actual experiences of the students. Results indicate that learner-generated photograph and video content can act as a rich source of data on student learning processes and can be used for formative assessment, for observing collaborative behavior, and as a starting point for class discussions. This study stresses the idea that technology choice for activities must align with instructional goals. This research also highlights the utility of activity theory as a framework for assessing classroom and laboratory activities, demonstrating that this approach can guide the development of laboratory activities. © 2014 American Association of Anatomists.
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing application. In general, the images have large overlapped region, which provide a lot of redundant geometry and radiation information for matching. This paper presents a POS supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constraint cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used in order to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: feature point extraction and matching procedure, grid point matching procedure and relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on different aerial scale images with different landcover types. The accuracy evaluation is based on the comparison between the automatic extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
Additionally sulfated xylomannan sulfates from Scinaia hatei and their antiviral activities.
Ray, Sayani; Pujol, Carlos A; Damonte, Elsa B; Ray, Bimalendu
2015-10-20
Herpes simplex viruses (HSVs) display affinity for cell-surface heparan sulfate proteoglycans with biological relevance in virus entry. This study demonstrates the potential of chemically engineered sulfated xylomannans from Scinaia hatei as antiHSV drug candidate. Particularly, a dimethylformamide -SO3/pyridine based procedure has been employed for the generation of anionic polysaccharides. This one-step procedure has the power of providing a spectrum of xylomannans with varying molecular masses (<12-74kDa), sulfate content (1-50%) and glycosyl composition. Especially, the sulfated xylomannans S1F1 and S2F1 possessed altered activity against HSV-1 and HSV-2 compared to the parental compound (F1) and that too in the absence of drug-induced cytotoxicity. Regarding methodological facet, the directive decoration of hydroxyl functionality with sulfate group plus changes in the molecular mass and sugar composition during isolation by the used reagent opens a door for the production of new molecular entity with altered biological activity from other natural sources. Copyright © 2015 Elsevier Ltd. All rights reserved.
Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.
Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng
2018-03-04
With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei
2014-07-20
We develop an analytical and numerical model for performing simulation of light extraction through the planar output interface of the light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of injected current is a peculiar feature of the LEDs in which top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are the bi-plane computation domain, related to other areas of numerical grid (NG) cells in these two planes, representation of light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by acceptance circle and adjustment of NG-cell areas in the computation procedure by the angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power as well as the total output power at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluation of the output optical performance of LEDs with periodical or symmetrical configuration of the electrodes.
Optimization of Ocean Color Algorithms: Application to Satellite Data Merging
NASA Technical Reports Server (NTRS)
Ritorena, Stephane; Siegel, David A.; Morel, Andre
2004-01-01
The objective of the program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project. As part of the SIMBIOS Program, we have developed a merging method for ocean color data. Conversely to other methods our approach does not combine end-products like the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure uses the normalized water-leaving radiances L((sub wN)(lambda)) from single or multiple sensors and uses them in the inversion of a semi-analytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Beside ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) It works with single or multiple data sources regardless of their specific bands; 2) It exploits band redundancies and band differences; 3) It accounts for uncertainties in the L((sub wN)(lambda)) data; 4) It provides uncertainty estimates for the retrieved variables.
Visual accommodation trainer-tester
NASA Technical Reports Server (NTRS)
Randle, Robert J. (Inventor)
1988-01-01
An apparatus for training the human visual accommodation system is described. Specifically, the apparatus is useful for training personnel to volitionally control focus to the far point (normally infinity) from a position of myopia due to functional causes. The functional causes could be due, for example, to a behavioral accommodative spasm or the effects of an empty field. The device may also be used to measure accommodation, the accommodation resting position and the near and far points of vision. The device comprises a number of optical elements arranged on a single optical axis. Several of the elements are arranged in order on a movable stage in fixed relationship to each other: a light source, a lens, a target, an aperture and/or a second lens. On a base and in fixed relationship to each other are eyepiece and third lens. A stage generates an image of the target and the stage is movable with respect to the base by means of a knob. The device is utilized for the various training and test functions by following a series of procedural steps, and interchanging the apertures as necessary for the selected procedure.
Theoretical modeling of PEB procedure on EUV resist using FDM formulation
NASA Astrophysics Data System (ADS)
Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo
2018-03-01
Semiconductor manufacturing industry has reduced the size of wafer for enhanced productivity and performance, and Extreme Ultraviolet (EUV) light source is considered as a promising solution for downsizing. A series of EUV lithography procedures contain complex photo-chemical reaction on photoresist, and it causes technical difficulties on constructing theoretical framework which facilitates rigorous investigation of underlying mechanism. Thus, we formulated finite difference method (FDM) model of post exposure bake (PEB) process on positive chemically amplified resist (CAR), and it involved acid diffusion coupled-deprotection reaction. The model is based on Fick's second law and first-order chemical reaction rate law for diffusion and deprotection, respectively. Two kinetic parameters, diffusion coefficient of acid and rate constant of deprotection, which were obtained by experiment and atomic scale simulation were applied to the model. As a result, we obtained time evolutional protecting ratio of each functional group in resist monomer which can be used to predict resulting polymer morphology after overall chemical reactions. This achievement will be the cornerstone of multiscale modeling which provides fundamental understanding on important factors for EUV performance and rational design of the next-generation photoresist.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Dual echelon femtosecond single-shot spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Taeho; Wolfson, Johanna W.; Teitelbaum, Samuel W.
We have developed a femtosecond single-shot spectroscopic technique to measure irreversible changes in condensed phase materials in real time. Crossed echelons generate a two-dimensional array of time-delayed pulses with one femtosecond probe pulse. This yields 9 ps of time-resolved data from a single laser shot, filling a gap in currently employed measurement methods. We can now monitor ultrafast irreversible dynamics in solid-state materials or other samples that cannot be flowed or replenished between laser shots, circumventing limitations of conventional pump-probe methods due to sample damage or product buildup. Despite the absence of signal-averaging in the single-shot measurement, an acceptable signal-to-noisemore » level has been achieved via background and reference calibration procedures. Pump-induced changes in relative reflectivity as small as 0.2%−0.5% are demonstrated in semimetals, with both electronic and coherent phonon dynamics revealed by the data. The optical arrangement and the space-to-time conversion and calibration procedures necessary to achieve this level of operation are described. Sources of noise and approaches for dealing with them are discussed.« less
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of uncertainties of the temperature measurement by standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing the multivariate Gaussian distribution for input quantities. This allows taking into account the correlations among resistances at the defining fixed points. Assumption of Gaussian probability density function is acceptable, with respect to the several sources of uncertainties of resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate suitability of the method by validation of its results.
Ontology-based data integration between clinical and research systems.
Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas
2015-01-01
Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and trans-formation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for un-locking it.
Correlation of Thermally Induced Pores with Microstructural Features Using High Energy X-rays
NASA Astrophysics Data System (ADS)
Menasche, David B.; Shade, Paul A.; Lind, Jonathan; Li, Shiu Fai; Bernier, Joel V.; Kenesei, Peter; Schuren, Jay C.; Suter, Robert M.
2016-11-01
Combined application of a near-field High Energy Diffraction Microscopy measurement of crystal lattice orientation fields and a tomographic measurement of pore distributions in a sintered nickel-based superalloy sample allows pore locations to be correlated with microstructural features. Measurements were carried out at the Advanced Photon Source beamline 1-ID using an X-ray energy of 65 keV for each of the measurement modes. The nickel superalloy sample was prepared in such a way as to generate significant thermally induced porosity. A three-dimensionally resolved orientation map is directly overlaid with the tomographically determined pore map through a careful registration procedure. The data are shown to reliably reproduce the expected correlations between specific microstructural features (triple lines and quadruple nodes) and pore positions. With the statistics afforded by the 3D data set, we conclude that within statistical limits, pore formation does not depend on the relative orientations of the grains. The experimental procedures and analysis tools illustrated are being applied to a variety of materials problems in which local heterogeneities can affect materials properties.
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the internal line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and for the first time, give dose equivalent responses which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
Spectral characteristics of light sources for S-cone stimulation.
Schlegelmilch, F; Nolte, R; Schellhorn, K; Husar, P; Henning, G; Tornow, R P
2002-11-01
Electrophysiological investigations of the short-wavelength sensitive pathway of the human eye require the use of a suitable light source as a S-cone stimulator. Different light sources with their spectral distribution properties were investigated and compared with the ideal S-cone stimulator. First, the theoretical background of the calculation of relative cone energy absorption from the spectral distribution function of the light source is summarized. From the results of the calculation, the photometric properties of the ideal S-cone stimulator will be derived. The calculation procedure was applied to virtual light sources (computer generated spectral distribution functions with different medium wavelengths and spectrum widths) and to real light sources (blue and green light emitting diodes, blue phosphor of CRT-monitor, multimedia projector, LCD monitor and notebook display). The calculated relative cone absorbencies are compared to the conditions of an ideal S-cone stimulator. Monochromatic light sources with wavelengths of less than 456 nm are close to the conditions of an ideal S-cone stimulator. Spectrum widths up to 21 nm do not affect the S-cone activation significantly (S-cone activation change < 0.2%). Blue light emitting diodes with peak wavelength at 448 nm and spectrum bandwidth of 25 nm are very useful for S-cone stimulation (S-cone activation approximately 95%). A suitable display for S-cone stimulation is the Trinitron computer monitor (S-cone activation approximately 87%). The multimedia projector has a S-cone activation up to 91%, but their spectral distribution properties depends on the selected intensity. LCD monitor and notebook displays have a lower S-cone activation (< or = 74%). Carefully selecting the blue light source for S-cone stimulation can reduce the unwanted L-and M-cone activation down to 4% for M-cones and 1.5% for L-cones.
Zhang, Y.-S.; Collins, A.L.; Horowitz, A.J.
2012-01-01
Reliable information on catchment scale suspended sediment sources is required to inform the design of management strategies for helping abate the numerous environmental issues associated with enhanced sediment mobilization and off-site loadings. Since sediment fingerprinting techniques avoid many of the logistical constraints associated with using more traditional indirect measurement methods at catchment scale, such approaches have been increasingly reported in the international literature and typically use data sets collected specifically for sediment source apportionment purposes. There remains scope for investigating the potential for using geochemical data sets assembled by routine monitoring programmes to fingerprint sediment provenance. In the United States, routine water quality samples are collected as part of the US Geological Survey's revised National Stream Quality Accounting Network programme. Accordingly, the geochemistry data generated from these samples over a 10-year period (1996-2006) were used as the basis for a fingerprinting exercise to assess the key tributary sub-catchment spatial sources of contemporary suspended sediment transported by the Ohio River. Uncertainty associated with the spatial source estimates was quantified using a Monte Carlo approach in conjunction with mass balance modelling. Relative frequency weighted means were used as an alternative way of summarizing the spatial source contributions, thereby avoiding the need to use confidence limits. The results should be interpreted in the context of the routine, but infrequent nature, of the suspended sediment samples used to assemble geochemistry as a basis for the sourcing exercise. Nonetheless, the study demonstrates how routine monitoring samples can be used to provide some preliminary information on sediment provenance in large drainage basins. ?? 2011 John Wiley & Sons, Ltd.
[Peritonitis in diverticulitis: the Bern concept].
Seiler, C A; Brügger, L; Maurer, C A; Renzulli, P; Büchler, M W
1998-01-01
The colon is the most frequent origine for a diffuse peritonitis and diverticular perforation is again the most common source of a spontaneous secondary peritonitis. This paper first focuses on the treatment of peritonitis and secondly on the strategies of source control in peritonitis with special emphasis on the tactics (primary anastomosis vs. Hartmann procedure with colostomy) for surgical source control. Prospective analysis of 404 patients suffering from peritonitis (11/93-2/98), treated with an uniform treatment concept including early operation, source control and extensive intraoperative lavage (20 to 30 liters) as a standard procedure. Other treatment measures were added in special indications "on demand" only. Peritonitis was graded with the Mannheim Peritonitis Index (MPI). Tactics of source control in peritonitis due to diverticulitis were performed according to "general condition" respectively the MPI of the patient. The 404 patients averaged a MPI of 19 (0-35) in "local" peritonitis and a MPI of 26 (11-43) in "diffuse" peritonitis. The colon as a source of peritonitis resulted in MPI of 16 (0-33) in the case of "local" respectively 27 (11-43) in "diffuse" peritonitis. From 181 patients suffering from diverticulitis 144 needed an operation and in 78 (54%) peritonitis was present. Fourty-six percent (36) of the patients suffered from "local", 54% (42) from "diffuse" peritonitis. Resection with primary anastomosis was performed in 26% (20/78) whereas in 74% (58/78) of the patients a Hartmann procedure with colostomy was performed. The correlating MPI was 16 (0-28) vs. 23 (16-27) respectively. The analysis of complications and mortality based on the MPI showed a decent discrimination potential for primary anastomosis vs Hartmann procedure: morbidity 35% vs. 41%; reoperation 5% vs. 5%; mortality 0% vs. 14%. In case of peritonitis due to diverticulitis the treatment of peritonitis comes first. Thanks to advances in intensive care and improved anti-inflammatory care, a more conservative surgical concept nowadays is accepted. In the case of diverticulitis the MPI is helpful to choose between primary anastomosis vs. Hartmann procedure with colostomy as source control. The MPI includes the "general condition" of the patient into the tactical decision how to attain source control.
Synchronization of the DOE/NASA 100-kilowatt wind turbine generator with a large utility network
NASA Technical Reports Server (NTRS)
Gilbert, L. J.
1977-01-01
The DOE/NASA 100 kilowatt wind turbine generator system was synchronized with a large utility network. The system equipments and procedures associated with the synchronization process were described. Time history traces of typical synchronizations were presented indicating that power and current transients resulting from the synchronizing procedure are limited to acceptable magnitudes.
Oklahoma | Solar Research | NREL
customer-generators who install net-metered distributed generation. Utilities and cooperatives are not required to purchase monthly net excess generation from customers. A customer-generator's net excess has not adopted standardized interconnection procedures. Potential customer-generators should contact
Diversity of fuel sources for electricity generation in an evolving U.S. power sector
NASA Astrophysics Data System (ADS)
DiLuccia, Janelle G.
Policymakers increasingly have shown interest in options to boost the relative share of renewable or clean electricity generating sources in order to reduce negative environmental externalities from fossil fuels, guard against possible resource constraints, and capture economic advantages from developing new technologies and industries. Electric utilities and non-utility generators make decisions regarding their generation mix based on a number of different factors that may or may not align with societal goals. This paper examines the makeup of the electric power sector to determine how the type of generator and the presence (or lack) of competition in electricity markets at the state level may relate to the types of fuel sources used for generation. Using state-level electricity generation data from the U.S. Energy Information Administration from 1990 through 2010, this paper employs state and time fixed-effects regression modeling to attempt to isolate the impacts of state-level restructuring policies and the emergence of non-utility generators on states' generation from coal, from fossil fuel and from renewable sources. While the analysis has significant limitations, I do find that state-level electricity restructuring has a small but significant association with lowering electricity generation from coal specifically and fossil fuels more generally. Further research into the relationship between competition and fuel sources would aid policymakers considering legislative options to influence the generation mix.
Collins, A.L; Pulley, S.; Foster, I.D.L; Gellis, Allen; Porto, P.; Horowitz, A.J.
2017-01-01
The growing awareness of the environmental significance of fine-grained sediment fluxes through catchment systems continues to underscore the need for reliable information on the principal sources of this material. Source estimates are difficult to obtain using traditional monitoring techniques, but sediment source fingerprinting or tracing procedures, have emerged as a potentially valuable alternative. Despite the rapidly increasing numbers of studies reporting the use of sediment source fingerprinting, several key challenges and uncertainties continue to hamper consensus among the international scientific community on key components of the existing methodological procedures. Accordingly, this contribution reviews and presents recent developments for several key aspects of fingerprinting, namely: sediment source classification, catchment source and target sediment sampling, tracer selection, grain size issues, tracer conservatism, source apportionment modelling, and assessment of source predictions using artificial mixtures. Finally, a decision-tree representing the current state of knowledge is presented, to guide end-users in applying the fingerprinting approach.
Idaho National Engineering Laboratory code assessment of the Rocky Flats transuranic waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
This report is an assessment of the content codes associated with transuranic waste shipped from the Rocky Flats Plant in Golden, Colorado, to INEL. The primary objective of this document is to characterize and describe the transuranic wastes shipped to INEL from Rocky Flats by item description code (IDC). This information will aid INEL in determining if the waste meets the waste acceptance criteria (WAC) of the Waste Isolation Pilot Plant (WIPP). The waste covered by this content code assessment was shipped from Rocky Flats between 1985 and 1989. These years coincide with the dates for information available in themore » Rocky Flats Solid Waste Information Management System (SWIMS). The majority of waste shipped during this time was certified to the existing WIPP WAC. This waste is referred to as precertified waste. Reassessment of these precertified waste containers is necessary because of changes in the WIPP WAC. To accomplish this assessment, the analytical and process knowledge available on the various IDCs used at Rocky Flats were evaluated. Rocky Flats sources for this information include employee interviews, SWIMS, Transuranic Waste Certification Program, Transuranic Waste Inspection Procedure, Backlog Waste Baseline Books, WIPP Experimental Waste Characterization Program (headspace analysis), and other related documents, procedures, and programs. Summaries are provided of: (a) certification information, (b) waste description, (c) generation source, (d) recovery method, (e) waste packaging and handling information, (f) container preparation information, (g) assay information, (h) inspection information, (i) analytical data, and (j) RCRA characterization.« less
Thunder-induced ground motions: 1. Observations
NASA Astrophysics Data System (ADS)
Lin, Ting-L.; Langston, Charles A.
2009-04-01
Acoustic pressure from thunder and its induced ground motions were investigated using a small array consisting of five three-component short-period surface seismometers, a three-component borehole seismometer, and five infrasound microphones. We used the array to constrain wave parameters of the incident acoustic and seismic waves. The incident slowness differences between acoustic pressure and ground motions suggest that ground reverberations were first initiated somewhat away from the array. Using slowness inferred from ground motions is preferable to obtain the seismic source parameters. We propose a source equalization procedure for acoustic/seismic deconvolution to generate the time domain transfer function, a procedure similar to that of obtaining teleseismic earthquake receiver functions. The time domain transfer function removes the incident pressure time history from the seismogram. An additional vertical-to-radial ground motion transfer function was used to identify the Rayleigh wave propagation mode of induced seismic waves complementing that found using the particle motions and amplitude variations in the borehole. The initial motions obtained by the time domain transfer functions suggest a low Poisson's ratio for the near-surface layer. The acoustic-to-seismic transfer functions show a consistent reverberation series at frequencies near 5 Hz. This gives an empirical measure of site resonance that depends on the ratio of the layer velocity to layer thickness for earthquake P and S waves. The time domain transfer function approach by transferring a spectral division into the time domain provides an alternative method for studying acoustic-to-seismic coupling.
Modular approach to achieving the next-generation X-ray light source
NASA Astrophysics Data System (ADS)
Biedron, S. G.; Milton, S. V.; Freund, H. P.
2001-12-01
A modular approach to the next-generation light source is described. The "modules" include photocathode, radio-frequency, electron guns and their associated drive-laser systems, linear accelerators, bunch-compression systems, seed laser systems, planar undulators, two-undulator harmonic generation schemes, high-gain harmonic generation systems, nonlinear higher harmonics, and wavelength shifting. These modules will be helpful in distributing the next-generation light source to many more laboratories than the current single-pass, high-gain free-electron laser designs permit, due to both monetary and/or physical space constraints.
Sources of Information as Determinants of Product and Process Innovation.
Gómez, Jaime; Salazar, Idana; Vargas, Pilar
2016-01-01
In this paper we use a panel of manufacturing firms in Spain to examine the extent to which they use internal and external sources of information (customers, suppliers, competitors, consultants and universities) to generate product and process innovation. Our results show that, although internal sources are influential, external sources of information are key to achieve innovation performance. These results are in line with the open innovation literature because they show that firms that are opening up their innovation process and that use different information sources have a greater capacity to generate innovations. We also find that the importance of external sources of information varies depending on the type of innovation (product or process) considered. To generate process innovation, firms mainly rely on suppliers while, to generate product innovation, the main contribution is from customers. The potential simultaneity between product and process innovation is also taken into consideration. We find that the generation of both types of innovation is not independent.
Sources of Information as Determinants of Product and Process Innovation
2016-01-01
In this paper we use a panel of manufacturing firms in Spain to examine the extent to which they use internal and external sources of information (customers, suppliers, competitors, consultants and universities) to generate product and process innovation. Our results show that, although internal sources are influential, external sources of information are key to achieve innovation performance. These results are in line with the open innovation literature because they show that firms that are opening up their innovation process and that use different information sources have a greater capacity to generate innovations. We also find that the importance of external sources of information varies depending on the type of innovation (product or process) considered. To generate process innovation, firms mainly rely on suppliers while, to generate product innovation, the main contribution is from customers. The potential simultaneity between product and process innovation is also taken into consideration. We find that the generation of both types of innovation is not independent. PMID:27035456
High-Fidelity Roadway Modeling and Simulation
NASA Technical Reports Server (NTRS)
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated by professional artists manually using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, reducing the tedious manual editing needed for road modeling dramatically. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not the geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals, and as a result, the generated roads can support not only general applications such as games and simulations in which roads are used as 3D assets, but also demanding civil engineering applications, which requires accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and surrounding environment. Then the proposed method generates in real time 3D roads that have both high visual and geometrical fidelities. This paper discusses in details the procedures that convert 2D roads specified in shape files into 3D roads and civil engineering road design principles. The proposed method can be used in many applications that have stringent requirements on high precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
Laser-Generated Ultrasonic Source for a Real-Time Dry-Contact Imaging System
NASA Astrophysics Data System (ADS)
Petculescu, G.; Zhou, Y.; Komsky, I.; Krishnaswamy, S.
2006-03-01
A laser-generated ultrasonic source, to be used with a real-time imaging device, was developed. The ultrasound is generated in the thermoelastic regime, in a composite layer composed of absorbing particles (carbon) and silicone rubber. The composite layer plays three roles: of absorption, constriction and dry-coupling. The central frequency of the generated pulse was controlled by varying the absorption depth of the generation layer. The maximum peak frequency obtained was 4MHz. When additional constriction was provided to the composite layer, the amplitude of the generated signal increased further, due to the large thermal expansion coefficient of the silicone. Images using the laser-generated ultrasonic source were taken.
Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.
2003-01-01
A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include: cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady-state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevalution-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of procedure's applications to real TRMM data are also provided.
Practice patterns of academic general thoracic and adult cardiac surgeons.
Ingram, Michael T; Wisner, David H; Cooke, David T
2014-10-01
We hypothesized that academic adult cardiac surgeons (CSs) and general thoracic surgeons (GTSs) would have distinct practice patterns of, not just case-mix, but also time devoted to outpatient care, involvement in critical care, and work relative value unit (wRVU) generation for the procedures they perform. We queried the University Health System Consortium-Association of American Medical Colleges Faculty Practice Solution Center database for fiscal years 2007-2008, 2008-2009, and 2009-2010 for the frequency of inpatient and outpatient current procedural terminology coding and wRVU data of academic GTSs and CSs. The Faculty Practice Solution Center database is a compilation of productivity and payer data from 86 academic institutions. The greatest wRVU generating current procedural terminology codes for CSs were, in order, coronary artery bypass grafting, aortic valve replacement, and mitral valve replacement. In contrast, open lobectomy, video-assisted thoracic surgery wedge, and video-assisted thoracic surgery lobectomy were greatest for GTSs. The 10 greatest wRVU-generating procedures for CSs generated more wRVUs than those for GTSs (P<.001). Although CSs generated significantly more hospital inpatient evaluation and management (E & M) wRVUs than did GTSs (P<.001), only 2.5% of the total wRVUs generated by CSs were from E & M codes versus 18.8% for GTSs. Critical care codes were 1.5% of total evaluation and management billing for both CSs and GTSs. Academic CSs and GTSs have distinct practice patterns. CSs receive greater reimbursement for services because of the greater wRVUs of the procedures performed compared with GTSs, and evaluation and management coding is a more important wRVU generator for GTSs. The results of our study could guide academic CS and GTS practice structure and time prioritization. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Putting humans in the loop: Using crowdsourced snow information to inform water management
NASA Astrophysics Data System (ADS)
Fedorov, Roman; Giuliani, Matteo; Castelletti, Andrea; Fraternali, Piero
2016-04-01
The unprecedented availability of user generated data on the Web due to the advent of online services, social networks, and crowdsourcing, is opening new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatio-temporally dense, possibly contributing to our ability of making better decisions. In this work, we contribute a novel crowdsourcing procedure for computing virtual snow indexes from public web images, either produced by users or generated by touristic webcams, which is based on a complex architecture designed for automatically crawling content from multiple web data sources. The procedure retains only geo-tagged images containing a mountain skyline, identifies the visible peaks in each image using a public online digital terrain model, and classifies the mountain image pixels as snow or no-snow. This operation yields a snow mask per image, from which it is possible to extract time series of virtual snow indexes representing a proxy of the snow covered area. The value of the obtained virtual snow indexes is estimated in a real world water management problem. We consider the snow-dominated catchment of Lake Como, a regulated lake in Northern Italy, where snowmelt represents the most important contribution to seasonal lake storage, and we used the virtual snow indexes for informing the daily operation of the lake's dam. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance.
A direct biocombinatorial strategy toward next generation, mussel-glue inspired saltwater adhesives.
Wilke, Patrick; Helfricht, Nicolas; Mark, Andreas; Papastavrou, Georg; Faivre, Damien; Börner, Hans G
2014-09-10
Biological materials exhibit remarkable, purpose-adapted properties that provide a source of inspiration for designing new materials to meet the requirements of future applications. For instance, marine mussels are able to attach to a broad spectrum of hard surfaces under hostile conditions. Controlling wet-adhesion of synthetic macromolecules by analogue processes promises to strongly impact materials sciences by offering advanced coatings, adhesives, and glues. The de novo design of macromolecules to mimic complex aspects of mussel adhesion still constitutes a challenge. Phage display allows material scientists to design specifically interacting molecules with tailored affinity to material surfaces. Here, we report on the integration of enzymatic processing steps into phage display biopanning to expand the biocombinatorial procedure and enable the direct selection of enzymatically activable peptide adhesion domains. Adsorption isotherms and single molecule force spectroscopy show that those de novo peptides mimic complex aspects of bioadhesion, such as enzymatic activation (by tyrosinase), the switchability from weak to strong binders, and adsorption under hostile saltwater conditions. Furthermore, peptide-poly(ethylene oxide) conjugates are synthesized to generate protective coatings, which possess anti-fouling properties and suppress irreversible interactions with blood-plasma protein cocktails. The extended phage display procedure provides a generic way to non-natural peptide adhesion domains, which not only mimic nature but also improve biological sequence sections extractable from mussel-glue proteins. The de novo peptides manage to combine several tasks in a minimal 12-mer sequence and thus pave the way to overcome major challenges of technical wet glues.
NASA Astrophysics Data System (ADS)
Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David
2016-04-01
One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual catchment was constructed, composed by three sub-catchments each of 500 x 500 m size. We assumed that there was no selectivity in sediment detachment or transport. A numerical excercise was performed considering these variables: 1) variability of element concentration: three levels with CVs of 20 %, 50 % and 80 %; 2) sampling density: 10, 25 and 50 "samples" per sub-catchment and element; and 3) number of elements included in the mixing model: two (determined), and five (overdetermined). This resulted in a total of 18 (3 x 3 x 2) possible combinations. The five fingerprinting elements considered in the study were: C, N, 40K, Al and Pavail, and their average values, taken from the literature, were: sub-catchment 1: 4.0 %, 0.35 %, 0.50 ppm, 5.0 ppm, 1.42 ppm, respectively; sub-catchment 2: 2.0 %, 0.18 %, 0.20 ppm, 10.0 ppm, 0.20 ppm, respectively; and sub-catchment 3: 1.0 %, 0.06 %, 1.0 ppm, 16.0 ppm, 7.8 ppm, respectively. For each sub-catchment, three maps of the spatial distribution of each element was generated using the random generator of Mejia and Rodriguez-Iturbe (1974) as described in Freeze (1980), using the average value and the three different CVs defined above. Each map for each source area and property was generated for a 100 x 100 square grid, each grid cell being 5 m x 5 m. Maps were randomly generated for each property and source area. In doing so, we did not consider the possibility of cross correlation among properties. Spatial autocorrelation was assumed to be weak. The reason for generating the maps was to create a "virtual" situation where all the element concentration values at each point are known. Simultaneously, we arbitrarily determined the percentage of sediment coming from sub-catchments. These values were 30 %, 10 % and 60 %, for sub-catchments 1, 2 and 3, respectively. Using these values, we determined the element concentrations in the sediment. The exercise consisted of creating different sampling strategies in a virtual environment to determine an average value for each of the different maps of element concentration and sub-catchment, under different sampling densities: 200 different average values for the "high" sampling density (average of 50 samples); 400 different average values for the "medium" sampling density (average of 25 samples); and 1,000 different average values for the "low" sampling density (average of 10 samples). All these combinations of possible values of element concentrations in the source areas were solved for the concentration in the sediment already determined for the "true" solution using limSolve (Soetaert et al., 2014) in R language. The sediment source solutions found for the different situations and values were analyzed in order to: 1) evaluate the uncertainty in the sediment source attribution; and 2) explore strategies to detect the most probable solutions that might lead to improved methods for constructing the most robust mixing models. Preliminary results on these will be presented and discussed in this communication. Key words: sediment, fingerprinting, uncertainty, variability, mixing model. References Collins, A.L., Zhang, Y., McChesney, D., Walling, D.E., Haley, S.M., Smith, P. 2012. Sediment source tracing in a lowland agricultural catchment in southern England using a modified procedure combining statistical analysis and numerical modelling. Science of the Total Environment 414: 301-317. Freeze, R.A. 1980. A stochastic-conceptual analysis of rainfall-runoff processes on a hillslope. Water Resources Research 16: 391-408.
Photoacoustic effect generated by moving optical sources: Motion in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Wenyu; Diebold, Gerald J.
2016-03-28
Although the photoacoustic effect is typically generated by pulsed or amplitude modulated optical beams, it is clear from examination of the wave equation for pressure that motion of an optical source in space will result in the production of sound as well. Here, the properties of the photoacoustic effect generated by moving sources in one dimension are investigated. The cases of a moving Gaussian beam, an oscillating delta function source, and an accelerating Gaussian optical sources are reported. The salient feature of one-dimensional sources in the linear acoustic limit is that the amplitude of the beam increases in time withoutmore » bound.« less
NASA Technical Reports Server (NTRS)
kaul, Upender K.
2008-01-01
A procedure for generating smooth uniformly clustered single-zone grids using enhanced elliptic grid generation has been demonstrated here for the Mars Science Laboratory (MSL) geometries such as aeroshell and canopy. The procedure obviates the need for generating multizone grids for such geometries, as reported in the literature. This has been possible because the enhanced elliptic grid generator automatically generates clustered grids without manual prescription of decay parameters needed with the conventional approach. In fact, these decay parameters are calculated as decay functions as part of the solution, and they are not constant over a given boundary. Since these decay functions vary over a given boundary, orthogonal grids near any arbitrary boundary can be clustered automatically without having to break up the boundaries and the corresponding interior domains into various zones for grid generation.
Pappas, D.S.
1987-07-31
The apparatus of this invention may comprise a system for generating laser radiation from a high-energy neutron source. The neutron source is a tokamak fusion reactor generating a long pulse of high-energy neutrons and having a temperature and magnetic field effective to generate a neutron flux of at least 10/sup 15/ neutrons/cm/sup 2//center dot/s. Conversion means are provided adjacent the fusion reactor at a location operable for converting the high-energy neutrons to an energy source with an intensity and energy effective to excite a preselected lasing medium. A lasing medium is spaced about and responsive to the energy source to generate a population inversion effective to support laser oscillations for generating output radiation. 2 figs., 2 tabs.
Studing Regional Wave Source Time Functions Using A Massive Automated EGF Deconvolution Procedure
NASA Astrophysics Data System (ADS)
Xie, J. "; Schaff, D. P.
2010-12-01
Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating STF, but it requires a strict recording condition. Waveforms from pairs of events that are similar in focal mechanism, but different in magnitude must be on-scale recorded on the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if having a sufficiently broad frequency band, can be used to estimate the STF of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real time event-screening process.
Reynolds-averaged Navier-Stokes based ice accretion for aircraft wings
NASA Astrophysics Data System (ADS)
Lashkajani, Kazem Hasanzadeh
This thesis addresses one of the current issues in flight safety towards increasing icing simulation capabilities for prediction of complex 2D and 3D glaze ice shapes over aircraft surfaces. During the 1980's and 1990's, the field of aero-icing was established to support design and certification of aircraft flying in icing conditions. The multidisciplinary technologies used in such codes were: aerodynamics (panel method), droplet trajectory calculations (Lagrangian framework), thermodynamic module (Messinger model) and geometry module (ice accretion). These are embedded in a quasi-steady module to simulate the time-dependent ice accretion process (multi-step procedure). The objectives of the present research are to upgrade the aerodynamic module from Laplace to Reynolds-Average Navier-Stokes equations solver. The advantages are many. First, the physical model allows accounting for viscous effects in the aerodynamic module. Second, the solution of the aero-icing module directly provides the means for characterizing the aerodynamic effects of icing, such as loss of lift and increased drag. Third, the use of a finite volume approach to solving the Partial Differential Equations allows rigorous mesh and time convergence analysis. Finally, the approaches developed in 2D can be easily transposed to 3D problems. The research was performed in three major steps, each providing insights into the overall numerical approaches. The most important realization comes from the need to develop specific mesh generation algorithms to ensure feasible solutions in very complex multi-step aero-icing calculations. The contributions are presented in chronological order of their realization. First, a new framework for RANS based two-dimensional ice accretion code, CANICE2D-NS, is developed. A multi-block RANS code from U. of Liverpool (named PMB) is providing the aerodynamic field using the Spalart-Allmaras turbulence model. The ICEM-CFD commercial tool is used for the iced airfoil remeshing and field smoothing. The new coupling is fully automated and capable of multi-step ice accretion simulations via a quasi-steady approach. In addition, the framework allows for flow analysis and aerodynamic performance prediction of the iced airfoils. The convergence of the quasi-steady algorithm is verified and identifies the need for an order of magnitude increase in the number of multi-time steps in icing simulations to achieve solver independent solutions. Second, a Multi-Block Navier-Stokes code, NSMB, is coupled with the CANICE2D icing framework. Attention is paid to the roughness implementation of the ONERA roughness model within the Spalart-Allmaras turbulence model, and to the convergence of the steady and quasi-steady iterative procedure. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases. The results of CANICE2D-NS show good agreement with experimental data both in terms of predicted ice shapes as well as aerodynamic analysis of predicted and experimental ice shapes. Third, an efficient single-block structured Navier-Stokes CFD code, NSCODE, is coupled with the CANICE2D-NS icing framework. Attention is paid to the roughness implementation of the Boeing model within the Spalart-Allmaras turbulence model, and to acceleration of the convergence of the steady and quasi-steady iterative procedures. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases, including code to code comparisons with the same framework coupled with the NSMB Navier-Stokes solver. The efficiency of the J-multigrid approach to solve the flow equations on complex iced geometries is demonstrated. Since it was noted in all these calculations that the ICEM-CFD grid generation package produced a number of issues such as inefficient mesh quality and smoothing deficiencies (notably grid shocks), a fourth study proposes a new mesh generation algorithm. A PDE based multi-block structured grid generation code, NSGRID, is developed for this purpose. The study includes the developments of novel mesh generation algorithms over complex glaze ice shapes containing multi-curvature ice accretion geometries, such as single/double ice horns. The twofold approaches tackle surface geometry discretization as well as field mesh generation. An adaptive curvilinear curvature control algorithm is constructed solving a 1D elliptic PDE equation with periodic source terms. This method controls the arclength grid spacing so that high convex and concave curvature regions around ice horns are appropriately captured and is shown to effectively treat the grid shock problem. Then, a novel blended method is developed by defining combinations of source terms with 2D elliptic equations. The source terms include two common control functions, Sorenson and Spekreijse, and an additional third source term to improve orthogonality. This blended method is shown to be very effective for improving grid quality metrics for complex glaze ice meshes with RANS resolution. The performance in terms of residual reduction per non-linear iteration of several solution algorithms (Point-Jacobi, Gauss-Seidel, ADI, Point and Line SOR) are discussed within the context of a full Multi-grid operator. Details are given on the various formulations used in the linearization process. It is shown that the performance of the solution algorithm depends on the type of control function used. Finally, the algorithms are validated on standard complex experimental ice shapes, demonstrating the applicability of the methods. Finally, the automated framework of RANS based two-dimensional multi-step ice accretion, CANICE2D-NS is developed, coupled with a Multi-Block Navier-Stokes CFD code, NSCODE2D, a Multi-Block elliptic grid generation code, NSGRID2D, and a Multi-Block Eulerian droplet solver, NSDROP2D (developed at Polytechnique Montreal). The framework allows Lagrangian and Eulerian droplet computations within a chimera approach treating multi-elements geometries. The code was tested on public and confidential validation test cases including standard NATO cases. In addition, up to 10 times speedup is observed in the mesh generation procedure by using the implicit line SOR and ADI smoothers within a multigrid procedure. The results demonstrate the benefits and robustness of the new framework in predicting ice shapes and aerodynamic performance parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvie, D.M.; Elsinger, R.J.; Inden, R.F.
1996-06-01
Recent successes in the Lodgepole Waulsortian Mound play have resulted in the reevaluation of the Williston Basin petroleum systems. It has been postulated that hydrocarbons were generated from organic-rich Bakken Formation source rocks in the Williston Basin. However, Canadian geoscientists have indicated that the Lodgepole Formation is responsible for oil entrapped in Lodgepole Formation and other Madison traps in portions of the Canadian Williston Basin. Furthermore, geoscientists in the U.S. have recently shown oils from mid-Madison conventional reservoirs in the U.S. Williston Basin were not derived from Bakken Formation source rocks. Kinetic data showing the rate of hydrocarbon formation frommore » petroleum source rocks were measured on source rocks from the Lodgepole, False Bakken, and Bakken Formations. These results show a wide range of values in the rate of hydrocarbon generation. Oil prone facies within the Lodgepole Formation tend to generate hydrocarbons earlier than the oil prone facies in the Bakken Formation and mixed oil/gas prone and gas prone facies in the Lodgepole Formation. A comparison of these source rocks using a geological model of hydrocarbon generation reveals differences in the timing of generation and the required level of maturity to generate significant amounts of hydrocarbons.« less
20 CFR 416.919m - Diagnostic tests or procedures.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Diagnostic tests or procedures. 416.919m... for Report Content § 416.919m Diagnostic tests or procedures. We will request the results of any diagnostic tests or procedures that have been performed as part of a workup by your treating source or other...
ERIC Educational Resources Information Center
Koontz, F. R.
The purpose of this study was to obtain current data on practices and procedures in the administration of distance learning programs in the areas of: (1) needs assessment; (2) student demographics; (3) telecourse acquisition procedures and sources; (4) criteria used to evaluate credit telecourses; (5) institutional approval procedures; (6)…
Hybrid Grid Techniques for Propulsion Applications
NASA Technical Reports Server (NTRS)
Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.
1996-01-01
During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block, and unstructured based procedures. Structured grids tend to be computationally efficient, and have high aspect ratio cells necessary for efficently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient and it is difficult to resolve viscous layers. Recently hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
40 CFR 246.202-3 - Recommended procedures: Market study.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... techniques. (b) Directly contacting buyers and determining the buyers' quality specifications, potential...
40 CFR 246.201-4 - Recommended procedures: Market study.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... research techniques. (b) Directly contacting buyers and determining the buyers' quality specifications...
40 CFR 246.202-3 - Recommended procedures: Market study.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... techniques. (b) Directly contacting buyers and determining the buyers' quality specifications, potential...
40 CFR 246.202-3 - Recommended procedures: Market study.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... techniques. (b) Directly contacting buyers and determining the buyers' quality specifications, potential...
40 CFR 246.201-4 - Recommended procedures: Market study.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... research techniques. (b) Directly contacting buyers and determining the buyers' quality specifications...
40 CFR 246.201-4 - Recommended procedures: Market study.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... research techniques. (b) Directly contacting buyers and determining the buyers' quality specifications...
Self-motion perception: assessment by computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.
1998-01-01
The goal of this research is more precise description of adaptation to sensory rearrangements, including microgravity, by development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, or (b) selected written descriptions from a booklet, or (c) generated self-motion verbal descriptions that corresponded most closely with their motion experience. One "complexity" and 2 reliability "scores" were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.
ERIC Educational Resources Information Center
Lee, Eunjung
2013-01-01
The purpose of this research was to compare the equating performance of various equating procedures for the multidimensional tests. To examine the various equating procedures, simulated data sets were used that were generated based on a multidimensional item response theory (MIRT) framework. Various equating procedures were examined, including…
Software Selection: A Primer on Source and Evaluation.
ERIC Educational Resources Information Center
Burston, Jack
2003-01-01
Provides guidance on making decisions regarding the selection of foreign language instructional software. Identifies sources of foreign language software, indicates sources of foreign language software reviews, and outlines essential procedures of software evaluation. (Author/VWL)
Nuclear Power as a Basis for Future Electricity Generation
NASA Astrophysics Data System (ADS)
Pioro, Igor; Buruchenko, Sergey
2017-12-01
It is well known that electrical-power generation is the key factor for advances in industry, agriculture, technology and the level of living. Also, strong power industry with diverse energy sources is very important for country independence. In general, electrical energy can be generated from: 1) burning mined and refined energy sources such as coal, natural gas, oil, and nuclear; and 2) harnessing energy sources such as hydro, biomass, wind, geothermal, solar, and wave power. Today, the main sources for electrical-energy generation are: 1) thermal power - primarily using coal and secondarily - natural gas; 2) “large” hydro power from dams and rivers and 3) nuclear power from various reactor designs. The balance of the energy sources is from using oil, biomass, wind, geothermal and solar, and have visible impact just in some countries. In spite of significant emphasis in the world on using renewables sources of energy, in particular, wind and solar, they have quite significant disadvantages compared to “traditional” sources for electricity generation such as thermal, hydro, and nuclear. These disadvantages include low density of energy, which requires large areas to be covered with wind turbines or photovoltaic panels or heliostats, and dependence of these sources on Mother Nature, i.e., to be unreliable ones and to have low (20 - 40%) or very low (5 - 15%) capacity factors. Fossil-fueled power plants represent concentrated and reliable source of energy. Also, they operate usually as “fast-response” plants to follow rapidly changing electrical-energy consumption during a day. However, due to combustion process they emit a lot of carbon dioxide, which contribute to the climate change in the world. Moreover, coal-fired power plants, as the most popular ones, create huge amount of slag and ash, and, eventually, emit other dangerous and harmful gases. Therefore, Nuclear Power Plants (NPPs), which are also concentrated and reliable source of energy, moreover, the energy source, which does not emit carbon dioxide into atmosphere, are considered as the energy source for basic loads in an electrical grid. Currently, the vast majority of NPPs are used only for electricity generation. However, there are possibilities to use NPPs also for district heating or for desalination of water. In spite of all current advances in nuclear power, NPPs have the following deficiencies: 1) Generate radioactive wastes; 2) Have relatively low thermal efficiencies, especially, watercooled NPPs; 3) Risk of radiation release during severe accidents; and 4) Production of nuclear fuel is not an environment-friendly process. Therefore, all these deficiencies should be addressed in the next generation or Generation-IV reactors. Generation-IV reactors will be hightemperature reactors and multipurpose ones, which include electricity generation, hydrogen cogeneration, process heat, district heating, desalination, etc.
46 CFR 112.05-5 - Emergency power source.
Code of Federal Regulations, 2013 CFR
2013-10-01
... with § 112.05-1(c). Table 112.05-5(a) Size of vessel and service Type of emergency power source or... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...
46 CFR 112.05-5 - Emergency power source.
Code of Federal Regulations, 2012 CFR
2012-10-01
... with § 112.05-1(c). Table 112.05-5(a) Size of vessel and service Type of emergency power source or... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...
46 CFR 112.05-5 - Emergency power source.
Code of Federal Regulations, 2014 CFR
2014-10-01
... with § 112.05-1(c). Table 112.05-5(a) Size of vessel and service Type of emergency power source or... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
NASA Technical Reports Server (NTRS)
Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward
2007-01-01
A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ( State charts signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which the source code is required to interact.
Code of Federal Regulations, 2011 CFR
2011-07-01
... issuing special procedures for declassification of information pertaining to intelligence activities... procedures for declassification of information pertaining to intelligence activities, sources and methods, or of classified cryptologic information in NARA's holdings? (a) The Director of National Intelligence...
Code of Federal Regulations, 2010 CFR
2010-07-01
... issuing special procedures for declassification of information pertaining to intelligence activities... procedures for declassification of information pertaining to intelligence activities, sources and methods, or of classified cryptologic information in NARA's holdings? (a) The Director of National Intelligence...
Production of knock-in mice in a single generation from embryonic stem cells.
Ukai, Hideki; Kiyonari, Hiroshi; Ueda, Hiroki R
2017-12-01
The system-level identification and analysis of molecular networks in mammals can be accelerated by 'next-generation' genetics, defined as genetics that does not require crossing of multiple generations of animals in order to achieve the desired genetic makeup. We have established a highly efficient procedure for producing knock-in (KI) mice within a single generation, by optimizing the genome-editing protocol for KI embryonic stem (ES) cells and the protocol for the generation of fully ES-cell-derived mice (ES mice). Using this protocol, the production of chimeric mice is eliminated, and, therefore, there is no requirement for the crossing of chimeric mice to produce mice that carry the KI gene in all cells of the body. Our procedure thus shortens the time required to produce KI ES mice from about a year to ∼3 months. Various kinds of KI ES mice can be produced with a minimized amount of work, facilitating the elucidation of organism-level phenomena using a systems biology approach. In this report, we describe the basic technologies and protocols for this procedure, and discuss the current challenges for next-generation mammalian genetics in organism-level systems biology studies.
Urine-derived induced pluripotent stem cells as a modeling tool to study rare human diseases
Shi, Liang; Cui, Yazhou; Luan, Jing; Zhou, Xiaoyan; Han, Jinxiang
2016-01-01
Summary Rare diseases with a low prevalence are a key public health issue because the causes of those diseases are difficult to determine and those diseases lack a clearly established or curative treatment. Thus, investigating the molecular mechanisms that underlie the pathology of rare diseases and facilitating the development of novel therapies using disease models is crucial. Human induced pluripotent stem cells (iPSCs) are well suited to modeling rare diseases since they have the capacity for self-renewal and pluripotency. In addition, iPSC technology provides a valuable tool to generate patient-specific iPSCs. These cells can be differentiated into cell types that have been affected by a disease. These cells would circumvent ethical concerns and avoid immunological rejection, so they could be used in cell replacement therapy or regenerative medicine. To date, human iPSCs could have been generated from multiple donor sources, such as skin, adipose tissue, and peripheral blood. However, these cells are obtained via invasive procedures. In contrast, several groups of researchers have found that urine may be a better source for producing iPSCs from normal individuals or patients. This review discusses urinary iPSC (UiPSC) as a candidate for modeling rare diseases. Cells obtained from urine have overwhelming advantages compared to other donor sources since they are safely, affordably, and frequently obtained and they are readily obtained from patients. The use of iPSC-based models is also discussed. UiPSCs may prove to be a key means of modeling rare diseases and they may facilitate the treatment of those diseases in the future. PMID:27672542
Infrasound Generation from the HH Seismic Hammer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Kyle Richard
2014-10-01
The HH Seismic hammer is a large, "weight-drop" source for active source seismic experiments. This system provides a repetitive source that can be stacked for subsurface imaging and exploration studies. Although the seismic hammer was designed for seismological studies it was surmised that it might produce energy in the infrasonic frequency range due to the ground motion generated by the 13 metric ton drop mass. This study demonstrates that the seismic hammer generates a consistent acoustic source that could be used for in-situ sensor characterization, array evaluation and surface-air coupling studies for source characterization.
NASA Astrophysics Data System (ADS)
Meirova, T.; Shapira, A.; Eppelbaum, L.
2018-05-01
In this study, we updated and modified the SvE approach of Shapira and van Eck (Nat Hazards 8:201-215, 1993) which may be applied as an alternative to the conventional probabilistic seismic hazard assessment (PSHA) in Israel and other regions of low and moderate seismicity where measurements of strong ground motions are scarce. The new computational code SvE overcomes difficulties associated with the description of the earthquake source model and regional ground-motion scaling. In the modified SvE procedure, generating suites of regional ground motion is based on the extended two-dimensional source model of Motazedian and Atkinson (Bull Seism Soc Amer 95:995-1010, 2005a) and updated regional ground-motion scaling (Meirova and Hofstteter, Bull Earth Eng 15:3417-3436, 2017). The analytical approach of Mavroeidis and Papageorgiou (Bull Seism Soc Amer 93:1099-1131, 2003) is used to simulate the near-fault acceleration with the near-fault effects. The comparison of hazard estimates obtained by using the conventional method implemented in the National Building Code for Design provisions for earthquake resistance of structures and the modified SvE procedure for rock-site conditions indicates a general agreement with some perceptible differences at the periods of 0.2 and 0.5 s. For the periods above 0.5 s, the SvE estimates are systematically greater and can increase by a factor of 1.6. For the soft-soil sites, the SvE hazard estimates at the period of 0.2 s are greater than those based on the CB2008 ground-motion prediction equation (GMPE) by a factor of 1.3-1.6. We suggest that the hazard estimates for the sites with soft-soil conditions calculated by the modified SvE procedure are more reliable than those which can be found by means of the conventional PSHA. This result agrees with the opinion that the use of a standard GMPE applying the NEHRP soil classification based on the V s, 30 parameter may be inappropriate for PSHA at many sites in Israel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haseler, Luke J., E-mail: l.haseler@griffith.edu.au; Sibbitt, Randy R., E-mail: THESIBB2@aol.com; Sibbitt, Wilmer L., E-mail: wsibbitt@salud.unm.edu
Purpose: Syringes are used for diagnostic fluid aspiration and fine-needle aspiration biopsy in interventional procedures. We determined the benefits, disadvantages, and patient safety implications of syringe and needle size on vacuum generation, hand force requirements, biopsy/fluid yield, and needle control during aspiration procedures. Materials and Methods: Different sizes (1, 3, 5, 10, and 20 ml) of the conventional syringe and aspirating mechanical safety syringe, the reciprocating procedure device, were studied. Twenty operators performed aspiration procedures with the following outcomes measured: (1) vacuum (torr), (2) time to vacuum (s), (3) hand force to generate vacuum (torr-cm{sup 2}), (4) operator difficulty duringmore » aspiration, (5) biopsy yield (mg), and (6) operator control of the needle tip position (mm). Results: Vacuum increased tissue biopsy yield at all needle diameters (P < 0.002). Twenty-milliliter syringes achieved a vacuum of -517 torr but required far more strength to aspirate, and resulted in significant loss of needle control (P < 0.002). The 10-ml syringe generated only 15% less vacuum (-435 torr) than the 20-ml device and required much less hand strength. The mechanical syringe generated identical vacuum at all syringe sizes with less hand force (P < 0.002) and provided significantly enhanced needle control (P < 0.002). Conclusions: To optimize patient safety and control of the needle, and to maximize fluid and tissue yield during aspiration procedures, a two-handed technique and the smallest syringe size adequate for the procedure should be used. If precise needle control or one-handed operation is required, a mechanical safety syringe should be considered.« less
2016-11-29
AFRL-AFOSR-VA-TR-2016-0365 Long Wavelength Electromagnetic Light Bullets Generated by a 10.6 micron CO2 Ultrashort Pulsed Source Jerome Moloney...SUBTITLE "Long Wavelength Electromagnetic Light Bullets Generated by a 10.6 micron CO2 Ultrashort Pulsed Source 5a. CONTRACT NUMBER FA9550-15-1-0272 5b...afosr.reports.sgizmo.com/s3/> Subject: Final Report to Dr. Arje Nachman Contract/Grant Title: Long Wavelength Electromagnetic Light Bullets Generated by a 10.6
40 CFR 246.200-3 - Recommended procedures: Market study.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... techniques; (b) Directly contacting buyers, and determining the buyers' quality specifications, the exact...
40 CFR 246.200-3 - Recommended procedures: Market study.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... techniques; (b) Directly contacting buyers, and determining the buyers' quality specifications, the exact...
40 CFR 246.200-3 - Recommended procedures: Market study.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) SOLID WASTES SOURCE SEPARATION FOR MATERIALS RECOVERY GUIDELINES Requirements and Recommended Procedures... techniques; (b) Directly contacting buyers, and determining the buyers' quality specifications, the exact...
Higley, D.K.; Lewan, M.D.; Roberts, L.N.R.; Henry, M.
2009-01-01
The Lower Cretaceous Mannville Group oil sands of northern Alberta have an estimated 270.3 billion m3 (BCM) (1700 billion bbl) of in-place heavy oil and tar. Our study area includes oil sand accumulations and downdip areas that partially extend into the deformation zone in western Alberta. The oil sands are composed of highly biodegraded oil and tar, collectively referred to as bitumen, whose source remains controversial. This is addressed in our study with a four-dimensional (4-D) petroleum system model. The modeled primary trap for generated and migrated oil is subtle structures. A probable seal for the oil sands was a gradual updip removal of the lighter hydrocarbon fractions as migrated oil was progressively biodegraded. This is hypothetical because the modeling software did not include seals resulting from the biodegradation of oil. Although the 4-D model shows that source rocks ranging from the Devonian-Mississippian Exshaw Formation to the Lower Cretaceous Mannville Group coals and Ostracode-zone-contributed oil to Mannville Group reservoirs, source rocks in the Jurassic Fernie Group (Gordondale Member and Poker Chip A shale) were the initial and major contributors. Kinetics associated with the type IIS kerogen in Fernie Group source rocks resulted in the early generation and expulsion of oil, as early as 85 Ma and prior to the generation from the type II kerogen of deeper and older source rocks. The modeled 50% peak transformation to oil was reached about 75 Ma for the Gordondale Member and Poker Chip A shale near the west margin of the study area, and prior to onset about 65 Ma from other source rocks. This early petroleum generation from the Fernie Group source rocks resulted in large volumes of generated oil, and prior to the Laramide uplift and onset of erosion (???58 Ma), which curtailed oil generation from all source rocks. Oil generation from all source rocks ended by 40 Ma. Although the modeled study area did not include possible western contributions of generated oil to the oil sands, the amount generated by the Jurassic source rocks within the study area was 475 BCM (2990 billion bbl). Copyright ?? 2009. The American Association of Petroleum Geologists. All rights reserved.
Olivares, Ela I.; Lage-Castellanos, Agustín; Bobes, María A.; Iglesias, Jaime
2018-01-01
We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called “Fusiform Face Area”, “FFA” and “Occipital Face Area”, “OFA”, respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands. PMID:29628877
Janssen, Sander J C; Porter, Cheryl H; Moore, Andrew D; Athanasiadis, Ioannis N; Foster, Ian; Jones, James W; Antle, John M
2017-07-01
Agricultural modeling has long suffered from fragmentation in model implementation. Many models are developed, there is much redundancy, models are often poorly coupled, model component re-use is rare, and it is frequently difficult to apply models to generate real solutions for the agricultural sector. To improve this situation, we argue that an open, self-sustained, and committed community is required to co-develop agricultural models and associated data and tools as a common resource. Such a community can benefit from recent developments in information and communications technology (ICT). We examine how such developments can be leveraged to design and implement the next generation of data, models, and decision support tools for agricultural production systems. Our objective is to assess relevant technologies for their maturity, expected development, and potential to benefit the agricultural modeling community. The technologies considered encompass methods for collaborative development and for involving stakeholders and users in development in a transdisciplinary manner. Our qualitative evaluation suggests that as an overall research challenge, the interoperability of data sources, modular granular open models, reference data sets for applications and specific user requirements analysis methodologies need to be addressed to allow agricultural modeling to enter in the big data era. This will enable much higher analytical capacities and the integrated use of new data sources. Overall agricultural systems modeling needs to rapidly adopt and absorb state-of-the-art data and ICT technologies with a focus on the needs of beneficiaries and on facilitating those who develop applications of their models. This adoption requires the widespread uptake of a set of best practices as standard operating procedures.
Autonomous perception and decision making in cyber-physical systems
NASA Astrophysics Data System (ADS)
Sarkar, Soumik
2011-07-01
The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision & control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involve distributed sensing, computation, control as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions, made in this dissertation, will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures and future ground and air transportation systems.
NASA Technical Reports Server (NTRS)
Pla, Frederic G.; Hu, Ziqiang; Sutliff, Daniel L.
1996-01-01
This report describes the Active Noise Cancellation (ANC) System designed by General Electric and tested in the NASA Lewis Research Center's (LERC) 48 inch Active Noise Control Fan (ANCF). The goal of this study is to assess the feasibility of using wall mounted secondary acoustic sources and sensors within the duct of a high bypass turbofan aircraft engine for global active noise cancellation of fan tones. The GE ANC system is based on a modal control approach. A known acoustic mode propagating in the fan duct is canceled using an array of flush-mounted compact sound sources. The canceling modal signal is generated by a modal controller. Inputs to the controller are signals from a shaft encoder and from a microphone array which senses the residual acoustic mode in the duct. The key results are that the (6,0) was completely eliminated at the 920 Hz design frequency and substantially reduced elsewhere. The total tone power was reduced 6.8 dB (out of a possible 9.8 dB). Farfield reductions of 15 dB (SPL) were obtained. The (4,0) and (4,1) modes were reduced simultaneously yielding a 15 dB PWL decrease. The results indicate that global attenuation of PWL at the target frequency was obtained in the aft quadrant using an ANC actuator and sensor system totally contained within the duct. The quality of the results depended on precise mode generation. High spillover into spurious modes generated by the ANC actuator array caused less than optimum levels of PWL reduction. The variation in spillover is believed to be due to calibration procedure, but must be confirmed in subsequent tests.
Methods for detection, identification and specification of listerias
Bochner, Barry
1992-01-01
The present invention relates generally to differential carbon source metabolism in the genus Listeria, metabolic, biochemical, immunological and genetic procedures to measure said differential carbon source metabolism and the use of these produces to detect, isolate and/or distinguish species of the genus Listeria as well as detect, isolate and/or distinguish strains of species of Listeria. The present invention also contemplates test kits and enrichment media to facilitate these procedures.
Ciftci, Harun; Er, Cigdem
2013-03-01
In the present study, a separation/preconcentration procedure for determination of aluminum in water samples has been developed by using a new atomic absorption spectrometer concept with a high-intensity xenon short-arc lamp as continuum radiation source, a high-resolution double-echelle monochromator, and a charge-coupled device array detector. Sample solution pH, sample volume, flow rate of sample solution, volume, and concentration of eluent for solid-phase extraction of Al chelates with 4-[(dicyanomethyl)diazenyl] benzoic acid on polymeric resin (Duolite XAD-761) have been investigated. The adsorbed aluminum on resin was eluted with 5 mL of 2 mol L(-1) HNO(3) and its concentration was determined by high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). Under the optimal conditions, limit of detection obtained with HR-CS FAAS and Line Source FAAS (LS-FAAS) were 0.49 μg L(-1) and 3.91 μg L(-1), respectively. The accuracy of the procedure was confirmed by analyzing certified materials (NIST SRM 1643e, Trace elements in water) and spiked real samples. The developed procedure was successfully applied to water samples.
Miniature, low-power X-ray tube using a microchannel electron generator electron source
NASA Technical Reports Server (NTRS)
Elam, Wm. Timothy (Inventor); Kelliher, Warren C. (Inventor); Hershyn, William (Inventor); DeLong, David P. (Inventor)
2011-01-01
Embodiments of the invention provide a novel, low-power X-ray tube and X-ray generating system. Embodiments of the invention use a multichannel electron generator as the electron source, thereby increasing reliability and decreasing power consumption of the X-ray tube. Unlike tubes using a conventional filament that must be heated by a current power source, embodiments of the invention require only a voltage power source, use very little current, and have no cooling requirements. The microchannel electron generator comprises one or more microchannel plates (MCPs), Each MCP comprises a honeycomb assembly of a plurality of annular components, which may be stacked to increase electron intensity. The multichannel electron generator used enables directional control of electron flow. In addition, the multichannel electron generator used is more robust than conventional filaments, making the resulting X-ray tube very shock and vibration resistant.
Analysis of a Temperature-Controlled Exhaust Thermoelectric Generator During a Driving Cycle
NASA Astrophysics Data System (ADS)
Brito, F. P.; Alves, A.; Pires, J. M.; Martins, L. B.; Martins, J.; Oliveira, J.; Teixeira, J.; Goncalves, L. M.; Hall, M. J.
2016-03-01
Thermoelectric generators can be used in automotive exhaust energy recovery. As car engines operate under wide variable loads, it is a challenge to design a system for operating efficiently under these variable conditions. This means being able to avoid excessive thermal dilution under low engine loads and being able to operate under high load, high temperature events without the need to deflect the exhaust gases with bypass systems. The authors have previously proposed a thermoelectric generator (TEG) concept with temperature control based on the operating principle of the variable conductance heat pipe/thermosiphon. This strategy allows the TEG modules’ hot face to work under constant, optimized temperature. The variable engine load will only affect the number of modules exposed to the heat source, not the heat transfer temperature. This prevents module overheating under high engine loads and avoids thermal dilution under low engine loads. The present work assesses the merit of the aforementioned approach by analysing the generator output during driving cycles simulated with an energy model of a light vehicle. For the baseline evaporator and condenser configuration, the driving cycle averaged electrical power outputs were approximately 320 W and 550 W for the type-approval Worldwide harmonized light vehicles test procedure Class 3 driving cycle and for a real-world highway driving cycle, respectively.
4th Generation ECR Ion Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyneis, Claude M.; Leitner, D.; Todd, D.S.
2008-12-01
The concepts and technical challenges related to developing a 4th generation ECR ion source with an RF frequency greater than 40 GHz and magnetic confinement fields greater than twice Becr will be explored in this paper. Based on the semi-empirical frequency scaling of ECR plasma density with the square of operating frequency, there should be significant gains in performance over current 3rd generation ECR ion sources, which operate at RF frequencies between 20 and 30 GHz. While the 3rd generation ECR ion sources use NbTi superconducting solenoid and sextupole coils, the new sources will need to use different superconducting materialsmore » such as Nb3Sn to reach the required magnetic confinement, which scales linearly with RF frequency. Additional technical challenges include increased bremsstrahlung production, which may increase faster than the plasma density, bremsstrahlung heating of the cold mass and the availability of high power continuous wave microwave sources at these frequencies. With each generation of ECR ion sources, there are new challenges to be mastered, but the potential for higher performance and reduced cost of the associated accelerator continue to make this a promising avenue for development.« less
Source-Independent Quantum Random Number Generation
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 ×103 bit /s .
PM SOURCE APPORTIONMENT/RECEPTOR MODELING
Source apportionment (receptor) models are mathematical procedures for identifying and quantifying the sources of ambient air pollutants and their effects at a site (the receptor), primarily on the basis of species concentration measurements at the receptor, and generally without...
Near Real-Time Imaging of the Galactic Plane with BATSE
NASA Technical Reports Server (NTRS)
Harmon, B. A.; Zhang, S. N.; Robinson, C. R.; Paciesas, W. S.; Barret, D.; Grindlay, J.; Bloser, P.; Monnelly, C.
1997-01-01
The discovery of new transient or persistent sources in the hard X-ray regime with the BATSE Earth occultation Technique has been limited previously to bright sources of about 200 mCrab or more. While monitoring known source locations is not a problem to a daily limiting sensitivity of about 75 mCrab, the lack of a reliable background model forces us to use more intensive computer techniques to find weak, previously unknown emission from hard X-ray/gamma sources. The combination of Radon transform imaging of the galactic plane in 10 by 10 degree fields and the Harvard/CFA-developed Image Search (CBIS) allows us to straightforwardly search the sky for candidate sources in a +/- 20 degree latitude band along the plane. This procedure has been operating routinely on a weekly basis since spring 1997. We briefly describe the procedure, then concentrate on the performance aspects of the technique and candidate source results from the search.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... specifications and operating instructions, if available, or standard operating procedures must be developed by... manufacturer's specifications and operating instructions, if available, or standard operating procedures must... operating specifications or standard operating procedures developed by the prepared feeds manufacturer be...
Beeres, Martin; Bucher, Andreas M; Wichmann, Julian L; Frellesen, Claudia; Scholtz, Jan E; Albrecht, Moritz; Bodelle, Boris; Nour-Eldin, Nour-Eldin A; Lee, Clara; Kaup, Moritz; Vogl, Thomas J; Gruber-Rouh, Tatjana
2016-07-01
Evaluation of the intimal flap visibility comparing 2nd and 3rd generation dual-source high-pitch CT. Twenty-five consecutive patients with aortic dissection underwent CT angiography on a second and third generation dual-source CT scanner using prospective ECG-gated high-pitch dual-source CT acquisition mode. Contrast material, saline flush and flow rate were kept equal for optimum comparability. The visibility of the intimal flap as well as the delineation of the different vascular structures was evaluated. In 3rd generation dual-source high-pitch CT we could show a significant improvement of intimal flap visibility in aortic dissection. Especially, the far end of the dissection membrane could be better evaluated in 3rd generation high-pitch CT, reaching statistical significance (P < 0.01). 3rd Generation high-pitch CT angiography shows a better delineation of the aortic intimal flap in a small patient cohort, especially in the far ends of the dissection membrane. This might be due to higher tube power in this CT generation. However, to generalise these findings larger trials are needed.
Synthesis of labeled compounds using recovered tritium from expired beta light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matei, L.; Postolache, C.; Bubueanu, G.
2008-07-15
In this paper, the technological procedures for extracting tritium from beta light source are highlighted. The recovered tritium was used in the synthesis of organically labeled compounds and in the preparation of tritiated water (HTO) with high specific activity. Technological procedures for treatment of beta light sources consist of: envelope breaking into evacuated enclosure, the radioactive gaseous mixture pumping and its storage on metallic sodium. The mixtures of T{sub 2} and {sup 3}He were used in the synthesis of tritium labeled steroid hormones, nucleosides analogues and for the preparation of HTO with high radioactivity concentrations. (authors)