Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-05-01
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, a hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, ensuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2, it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
Pulsed x-ray sources for characterization of gated framing cameras
NASA Astrophysics Data System (ADS)
Filip, Catalin V.; Koch, Jeffrey A.; Freeman, Richard R.; King, James A.
2017-08-01
Gated X-ray framing cameras are used to measure important characteristics of inertial confinement fusion (ICF) implosions, such as size and symmetry, with 50 ps time resolution in two dimensions. A pulsed source of hard (>8 keV) X-rays would be a valuable calibration device, for example for gain-droop measurements of the variation in sensitivity of the gated strips. We have explored the requirements for such a source and a variety of options that could meet these requirements. We find that a small-size dense plasma focus machine could be a practical single-shot X-ray source for this application if timing uncertainties can be overcome.
NASA Astrophysics Data System (ADS)
Geddes, Cameron G. R.; Rykovanov, Sergey; Matlis, Nicholas H.; Steinke, Sven; Vay, Jean-Luc; Esarey, Eric H.; Ludewigt, Bernhard; Nakamura, Kei; Quiter, Brian J.; Schroeder, Carl B.; Toth, Csaba; Leemans, Wim P.
2015-05-01
Near-monoenergetic photon sources at MeV energies offer improved sensitivity at greatly reduced dose for active interrogation, and new capabilities in treaty verification, nondestructive assay of spent nuclear fuel and emergency response. Thomson (also referred to as Compton) scattering sources are an established method to produce appropriate photon beams. Applications are however restricted by the size of the required high-energy electron linac, scattering (photon production) system, and shielding for disposal of the high energy electron beam. Laser-plasma accelerators (LPAs) produce GeV electron beams in centimeters, using the plasma wave driven by the radiation pressure of an intense laser. Recent LPA experiments are presented which have greatly improved beam quality and efficiency, rendering them appropriate for compact high-quality photon sources based on Thomson scattering. Designs for MeV photon sources utilizing the unique properties of LPAs are presented. It is shown that control of the scattering laser, including plasma guiding, can increase photon production efficiency. This reduces scattering laser size and/or electron beam current requirements to a scale compatible with the LPA. Lastly, the plasma structure can decelerate the electron beam after photon production, reducing the size of shielding required for beam disposal. Together, these techniques provide a path to a compact photon source system.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
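The kind of quick estimate the article advocates can be sketched for one common case, a two-sample comparison of means under the normal approximation. The formula and the numbers below are standard textbook values, not taken from the article itself:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of
    means: n = 2 * ((z_a + z_b) / d)^2, where d is the standardized
    effect size (mean difference divided by the common SD)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# medium standardized effect (d = 0.5), alpha = 0.05, 80% power
print(n_per_group(0.5))  # → 63
```

For a medium standardized effect (d = 0.5) at α = 0.05 and 80% power this gives the familiar figure of 63 subjects per group.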
40 CFR 52.233 - Review of new sources and modifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... requiring the source to be provided with: (i) Sampling ports of a size, number, and location as the Administrator may require, (ii) Safe access to each port, (iii) Instrumentation to monitor and record emission... more than 1 MBtu/h (250 Mg-cal/h) and burns only distillate oil; or has a heat input of not more than...
40 CFR 52.233 - Review of new sources and modifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... requiring the source to be provided with: (i) Sampling ports of a size, number, and location as the Administrator may require, (ii) Safe access to each port, (iii) Instrumentation to monitor and record emission... more than 1 MBtu/h (250 Mg-cal/h) and burns only distillate oil; or has a heat input of not more than...
EQ-10 electrodeless Z-pinch EUV source for metrology applications
NASA Astrophysics Data System (ADS)
Gustafson, Deborah; Horne, Stephen F.; Partlow, Matthew J.; Besen, Matthew M.; Smith, Donald K.; Blackborow, Paul A.
2011-11-01
With EUV lithography systems shipping, the requirements for highly reliable EUV sources for mask inspection and resist outgassing are becoming better defined, and more urgent. The sources needed for metrology applications are very different from those needed for lithography; brightness (not power) is the key requirement. Suppliers of HVM EUV sources have all their resources working on high power and have not entered the smaller market for metrology. Energetiq Technology has been shipping the EQ-10 Electrodeless Z-pinch™ light source since 1995 [1]. The source is currently being used for metrology, mask inspection, and resist development [2-4]. These applications require especially stable performance in both output power and plasma size and position. Over the last 6 years, Energetiq has made many source modifications, including better thermal management, to increase the brightness and power of the source. We have now introduced a new source that will meet the requirements of some of the first-generation mask metrology tools; this source will be reviewed.
40 CFR 52.780 - Review of new sources and modifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
...,000 Btu per hour (88.2 Mg-cal/h) and 1,500,000 Btu per hour (378.0 MG cal/h), the construction of... requiring the source to be provided with: (i) Sampling ports of a size, number, and location as the Administrator may require, (ii) Safe access to each port, (iii) Instrumentation to monitor and record emission...
40 CFR 52.780 - Review of new sources and modifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
...,000 Btu per hour (88.2 Mg-cal/h) and 1,500,000 Btu per hour (378.0 MG cal/h), the construction of... requiring the source to be provided with: (i) Sampling ports of a size, number, and location as the Administrator may require, (ii) Safe access to each port, (iii) Instrumentation to monitor and record emission...
Murphy, M R; Whetstone, H D; Davis, C L
1983-12-01
We examined effects of source and particle size of supplemental defluorinated rock phosphate, fed to meet phosphorus requirements, on rumen function of 195-kg Holstein steers fed high concentrate. Two sources and two particle sizes of each source were evaluated in a 5 × 5 Latin square with 14-day periods. There was no effect of source on ruminal mH [-log(mean (H+))]; however, ruminal mH was higher in animals fed supplements of larger particle size. This effect was also evident when rumen pH versus time curves were integrated below pH 6: animals fed supplements of larger particle size had less area below pH 6 than those fed supplements of smaller size. Ruminal buffering capacity at pH 7 was affected by diet; however, orthogonal comparisons between treatment means were not significant. Neither source nor particle size of the supplement affected ruminal fluid osmolality, total volatile fatty acid concentration, or fecal starch. Water intake and ruminal dry matter increased on HyCal-supplemented diets; however, there was also a trend toward increasing rumen fluid volume. The net effect was little change in the dilution rate of ruminal fluid, which may explain why rumen fermentation was not greatly affected. Conventional phosphate supplements may have potential as rumen buffering agents, but higher levels of feeding should be studied.
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
NASA Astrophysics Data System (ADS)
Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.
2018-06-01
We present very long baseline interferometry observations of a faint and low-luminosity (L_1.4 GHz < 10^27 W Hz^-1) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10^25 W Hz^-1, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do require one. We conclude that our low-luminosity sample is similar to brighter samples in terms of spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.
Intensity distribution of the x ray source for the AXAF VETA-I mirror test
NASA Technical Reports Server (NTRS)
Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann
1992-01-01
The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometer diameter pinhole for imaging the source and a microchannel plate imaging detector with 25 micrometer FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures; they agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size 528 meters away, the angular size seen by VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required, 0.22 arcsec measured). Even so, the results show that the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Function.
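The angular-size figure quoted above is a small-angle calculation; as a minimal check (the arcseconds-per-radian constant is standard, the inputs are the abstract's own numbers):

```python
RAD_TO_ARCSEC = 206264.8  # arcseconds per radian

def angular_size_arcsec(size_m, distance_m):
    """Small-angle approximation: theta [arcsec] ≈ (size / distance) * 206265."""
    return (size_m / distance_m) * RAD_TO_ARCSEC

# 0.45 mm FWHM source bound viewed from 528 m, as in the VETA-I setup
theta = angular_size_arcsec(0.45e-3, 528.0)
```

The 0.45 mm bound at 528 m gives just under 0.18 arcsec, consistent with the ~0.17 arcsec quoted for sources slightly below that bound, and well under the 0.5 arcsec on-ground requirement.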
Giant impactors - Plausible sizes and populations
NASA Technical Reports Server (NTRS)
Hartmann, William K.; Vail, S. M.
1986-01-01
The largest sizes of planetesimals required to explain spin properties of planets are investigated in the context of the impact-trigger hypothesis of lunar origin. Solar system models with different large impactor sources are constructed and stochastic variations in obliquities and rotation periods resulting from each source are studied. The present study finds it highly plausible that earth was struck by a body of about 0.03-0.12 earth masses with enough energy and angular momentum to dislodge mantle material and form the present earth-moon system.
Hydrocyclonic separation of invasive New Zealand mudsnails from an aquaculture water source
Nielson, R. Jordan; Moffitt, Christine M.; Watten, Barnaby J.
2012-01-01
Invasive New Zealand mudsnails (Potamopyrgus antipodarum, NZMS) have infested freshwater aquaculture facilities in the western United States and disrupted stocking or fish transportation activities because of the risk of transporting NZMS to naive locations. We tested the efficacy of a gravity-fed, hydrocyclonic separation system to remove NZMS from an aquaculture water source at two design flows: 367 L/min and 257 L/min. The hydrocyclone effectively filtered all sizes of snails (including newly emerged neonates) from inflows. We modeled cumulative recovery of three sizes of snails, and determined that both juvenile- and adult-sized snails were transported similarly through the filtration system, but the transit of neonates was faster and similar to the transport of water particles. We found that transit times through the filtration system were different between the two flows regardless of snail size, and the hydrocyclone filter operated more as a plug flow system with dispersion, especially when transporting and removing the larger adult- and juvenile-sized snails. Our study supports hydrocyclonic filtration as an important tool to provide snail-free water for aquaculture operations that require uninfested water sources.
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
Image recording requirements for earth observation applications in the next decade
NASA Technical Reports Server (NTRS)
Peavey, B.; Sos, J. Y.
1975-01-01
Future requirements for satellite-borne image recording systems are examined from the standpoints of system performance, system operation, product type, and product quality. Emphasis is on total system design while keeping in mind that the image recorder or scanner is the most crucial element, which will affect the end product quality more than any other element within the system. Consideration of total system design and implementation for sustained operational usage must encompass the requirements for flexibility of input data and recording speed, pixel density, aspect ratio, and format size. Producing this type of system requires solving challenging problems in interfacing the data source with the recorder, maintaining synchronization between the data source and the recorder, and maintaining a consistent level of quality. Film products of better quality than are currently achieved in a routine manner are needed. A 0.1 pixel geometric accuracy and 0.0001 d.u. radiometric accuracy on standard (240 mm) size format should be accepted as a goal to be reached in the near future.
SIproc: an open-source biomedical data processing platform for large hyperspectral images.
Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David
2017-04-10
There has recently been significant interest within the vibrational spectroscopy community in applying quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
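The out-of-core idea behind the toolkit can be illustrated with a minimal sketch (the file layout, sizes, and function below are hypothetical stand-ins, not SIproc's API): the data stays on disk and is streamed through a fixed-size buffer, so memory use is bounded by the buffer, not by the image.

```python
import array
import os
import tempfile

# Build a tiny stand-in "hyperspectral cube" on disk: `bands` rows of
# `pixels` float64 values, with band b filled with the constant b.
bands, pixels = 4, 1000
path = os.path.join(tempfile.mkdtemp(), "cube.bin")
with open(path, "wb") as f:
    for b in range(bands):
        array.array("d", [float(b)] * pixels).tofile(f)

def band_means(path, bands, pixels, chunk=256):
    """Per-band mean computed out of core: stream fixed-size chunks from
    disk so memory use is bounded by `chunk`, not by the cube size."""
    means = []
    with open(path, "rb") as f:
        for _ in range(bands):
            total, seen = 0.0, 0
            while seen < pixels:
                buf = array.array("d")
                buf.fromfile(f, min(chunk, pixels - seen))
                total += sum(buf)
                seen += len(buf)
            means.append(total / pixels)
    return means

means = band_means(path, bands, pixels)
os.remove(path)
```

The same pattern scales to cubes far larger than RAM, since only one 256-value chunk is resident at a time.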
40 CFR 63.1591 - What are my notification requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Requirements § 63.1591 What are my notification requirements? (a) If you have an industrial POTW treatment plant or a new or reconstructed non-industrial POTW which is a major source of HAP, and your State has... date; and (4) A brief description of the nature, size, design, and method of operation of your POTW...
Extinction cross-section suppression and active acoustic invisibility cloaking
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2017-10-01
Invisibility in its canonical form requires rendering a zero extinction cross-section (or energy efficiency) from an active or a passive object. This work demonstrates the successful theoretical realization of this physical effect for an active cylindrically radiating acoustic body, undergoing periodic axisymmetric harmonic vibrations near a flat rigid boundary. Radiating, amplification and extinction cross-sections of the active source are defined. Assuming monopole and dipole modal oscillations of the circular source, conditions are found where the extinction energy efficiency factor of the active source vanishes, achieving total invisibility with minimal influence of the source size. It also takes positive or negative values, depending on its size and distance from the boundary. Moreover, the amplification energy efficiency factor is negative for the acoustically-active source. These effects also occur for higher-order modal oscillations of the active source. The results find potential applications in the development of acoustic cloaking devices and invisibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courtney, Daniel G., E-mail: dcourtney@alum.mit.edu; Shea, Herbert
2015-09-07
Passively fed ionic liquid electrospray sources are capable of efficiently emitting a variety of ion beams with promising applications to spacecraft propulsion and as focused ion beams. Practical devices will require integrated or coupled ionic liquid reservoirs, the effects of which have not been explored in detail. Porous reservoirs are a simple, scalable solution. However, we have shown that their pore size can dramatically alter the beam composition. Emitting the ionic liquid 1-ethyl-3-methylimidazolium bis(triflouromethylsulfonyl)amide, the same device was shown to yield either an ion or droplet dominated beam when using reservoirs of small or large pore size, respectively; with the latter having a mass flow in excess of 15 times larger than the former at negative polarity. Another source, emitting nearly purely ionic beams of 1-ethyl-3-methylimidazolium tetrafluoroborate, was similarly shown to emit a significant droplet population when coupled to reservoirs of large (>100 μm) pores; constituting a reduction in propulsive efficiency from greater than 70% to less than 30%. Furthermore, we show that reservoir selection can alter the voltage required to obtain and sustain emission, increasing with smaller pore size.
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to the unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to the evaluation of the imaging capability of an optical instrument. Among a variety of measurement methods of the PSF, the point source method has been widely used because it is easy to operate and the measurement results are approximate to the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of the point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin, doped with different sizes of polystyrene microspheres, is designed. The PSF of the CRM is measured with different sizes of microspheres and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
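The effect the authors analyze, a finite point source broadening the measured PSF, can be reproduced with a minimal 1-D numerical sketch (the Gaussian PSF, top-hat source model, and all widths below are illustrative assumptions, not the paper's model):

```python
import math

DX = 0.005         # sampling step (arbitrary length units)
FWHM_TRUE = 0.5    # assumed "true" lateral PSF width

def gaussian(x, fwhm):
    s = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-x * x / (2.0 * s * s))

xs = [i * DX for i in range(-600, 601)]
psf = [gaussian(x, FWHM_TRUE) for x in xs]

def blur_with_source(profile, diameter):
    """Convolve the PSF with a top-hat of the given width -- a crude 1-D
    stand-in for imaging a point source of finite size."""
    half = int(round(diameter / 2.0 / DX))
    n = len(profile)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(-half, half + 1):
            if 0 <= i + j < n:
                acc += profile[i + j]
        out.append(acc)
    return out

def fwhm(profile):
    """Count samples above half the peak (a simple discrete FWHM)."""
    peak = max(profile)
    return sum(1 for v in profile if v >= peak / 2.0) * DX

f0 = fwhm(psf)                               # bare PSF, ~0.50
f_small = fwhm(blur_with_source(psf, 0.2))   # small microsphere
f_large = fwhm(blur_with_source(psf, 0.5))   # sphere comparable to the PSF
```

The counted FWHM of the bare Gaussian stays near 0.5, while blurring with a source comparable to the PSF width inflates it noticeably; this is why smaller microspheres give a measurement closer to the true PSF.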
Characterizing sources of emissions from wildland fires
Roger D. Ottmar; Ana Isabel Miranda; David V. Sandberg
2009-01-01
Smoke emissions from wildland fire can be harmful to human health and welfare, impair visibility, and contribute to greenhouse gas emissions. The generation of emissions and heat release need to be characterized to estimate the potential impacts of wildland fire smoke. This requires explicit knowledge of the source, including size of the area burned, burn period,...
Comparison of RF BPM Receivers for NSLS-II Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinayev,I.; Singh, O.
2009-05-04
The NSLS-II Light Source being built at Brookhaven National Laboratory requires submicron stability of the electron orbit in the storage ring in order to fully utilize the very small emittances and electron beam sizes. This sets high stability requirements for beam position monitors, and a program has been initiated to characterize RF beam position monitor (BPM) receivers in use at other light sources. Present state-of-the-art performance will be contrasted with more recently available technologies.
High-frequency monopole sound source for anechoic chamber qualification
NASA Astrophysics Data System (ADS)
Saussus, Patrick; Cunefare, Kenneth A.
2003-04-01
Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.
NASA Astrophysics Data System (ADS)
Liu, Cenwei; Lobb, David; Li, Sheng; Owens, Philip; Kuzyk, ZouZou
2014-05-01
Lake Winnipeg has recently drawn attention because of its deteriorating water quality, due in part to nutrient and sediment input from agricultural land. Improving water quality in Lake Winnipeg requires knowledge of the sediment sources within this ecosystem. A variety of environmental fingerprinting techniques have been successfully used in the assessment of sediment sources. In this study, we used particle size distribution to evaluate spatial and temporal variations of suspended sediment and potential sediment sources collected in the Tobacco Creek Watershed in Manitoba, Canada. The particle size distribution of suspended sediment can reflect the origin of the sediment and the processes of sediment transport, deposition, and remobilization within the watershed. The objectives of this study were to quantify visually observed spatial and temporal changes in sediment particles, and to assess the sediment sources using a rapid and cost-effective fingerprinting technique based on particle size distribution. The suspended sediment was collected by sediment traps twice a year, during rainfall and snowmelt periods, from 2009 to 2012. The potential sediment sources included the top soil of cultivated fields, riparian areas, and entire profiles from stream banks. Suspended sediment and soil samples were pre-wet with RO water and passed through a 600 μm sieve before analysis. Particle size distribution of all samples was determined using a Malvern Mastersizer 2000S laser diffraction analyzer with a measurement range up to 600 μm. Comparison of the results for different fractions of sediment showed a significant difference in particle size distribution of suspended sediment between snowmelt and rainfall events. An important difference in particle size distribution was also found between the cultivated soil and forest soil. This difference can be explained by the different land uses, which provided a distinct fingerprint of the sediment.
An overall improvement in water quality can be achieved by managing sediment according to the identified sediment sources in the watershed.
Size scaling of negative hydrogen ion sources for fusion
NASA Astrophysics Data System (ADS)
Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.
2015-04-01
The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛-scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½-scale ITER source went into operation at the IPP test facility ELISE, with first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER-relevant size at ELISE, in which operational issues, physical aspects, and source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low-pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in both short and long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that in the prototype source, despite the large size.
Optimal reactive planning with security constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.
1995-12-31
The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of the new reactive sources required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
Energy storage requirements of dc microgrids with high penetration renewables under droop control
Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...
2015-01-09
Energy storage is an important design component in microgrids with high-penetration renewable sources because of the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we found droop settings for the sources that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
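The droop law underlying this decentralized sharing can be sketched in a few lines. In this illustrative sketch (names and numbers are hypothetical, not from the paper), each DC source follows V = V0 − m·P, so sources on a common bus share a load in inverse proportion to their droop gains:

```python
# Hypothetical sketch of proportional (droop) load sharing among
# decentralized DC sources: each source i follows V = V0_i - m_i * P_i,
# so for a common bus voltage V the injected powers must satisfy
# sum_i (V0_i - V) / m_i = P_load.

def droop_share(v0, m, p_load):
    """Solve for the common bus voltage and per-source powers.

    v0: list of no-load voltage set-points (V)
    m:  list of droop gains (V per unit power)
    p_load: total load power to be shared
    """
    # sum_i (v0_i - V)/m_i = p_load  ->  V = (sum v0_i/m_i - p_load) / sum 1/m_i
    inv = [1.0 / mi for mi in m]
    v = (sum(v0_i * g for v0_i, g in zip(v0, inv)) - p_load) / sum(inv)
    powers = [(v0_i - v) / mi for v0_i, mi in zip(v0, m)]
    return v, powers

v, p = droop_share(v0=[400.0, 400.0], m=[0.5, 1.0], p_load=30.0)
# Equal set-points: powers split inversely to the droop gains (2:1),
# and the bus voltage sags below the no-load set-point.
```

Lowering a source's droop gain makes it absorb more of the load variation, which is the lever the paper tunes against storage capacity.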
NASA Technical Reports Server (NTRS)
Singh, J. J.
1979-01-01
Computational methods were developed to study the trajectories of beta particles (positrons) through a magnetic analysis system as a function of the spatial distribution of the radionuclides in the beta source, the size and shape of the source collimator, and the strength of the analyzer magnetic field. On the basis of these methods, the particle flux, energy spectrum, and source-to-target transit times have been calculated for Na-22 positrons as a function of the analyzer magnetic field and the size and location of the target. These data are useful in studies requiring parallel beams of positrons of uniform energy, such as measurement of the moisture distribution in composite materials. Computer programs for obtaining the various trajectories are included.
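As an illustration of this kind of trajectory computation (a minimal non-relativistic sketch, not the authors' programs), a charged particle in a uniform axial magnetic field can be advanced with the Boris rotation, which conserves speed exactly for a pure magnetic force; units and parameters below are illustrative:

```python
# Minimal non-relativistic charged-particle trajectory in a uniform
# B field along +z, advanced with the Boris rotation (no E field).

def boris_trajectory(x, v, q_over_m, bz, dt, steps):
    """Advance 2-D position x and velocity v in place; return positions."""
    t = 0.5 * q_over_m * bz * dt      # ~tan(half rotation angle per step)
    s = 2.0 * t / (1.0 + t * t)       # makes the update an exact rotation
    traj = [tuple(x)]
    for _ in range(steps):
        # half rotation: v' = v + v x t_vec, with v x z_hat = (vy, -vx)
        vpx = v[0] + v[1] * t
        vpy = v[1] - v[0] * t
        # full rotation: v+ = v + v' x s_vec
        v[0] += vpy * s
        v[1] -= vpx * s
        x[0] += v[0] * dt
        x[1] += v[1] * dt
        traj.append(tuple(x))
    return traj

# A positive particle moving along +x in B = +z curves toward -y.
x0, v0 = [0.0, 0.0], [1.0, 0.0]
traj = boris_trajectory(x=x0, v=v0, q_over_m=1.0, bz=1.0, dt=0.01, steps=700)
```

Because the velocity update is a pure rotation, the particle speed (and hence kinetic energy) is preserved to round-off over arbitrarily many gyration periods, which matters for long transit-time calculations.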
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-20
... percentage of the average total daily margin requirement for the preceding month that resulted in a fund....\\8\\ This includes the potential use of the clearing fund as a source of liquidity should it ever be... secured by the clearing fund, OCC is amending the current minimum clearing fund size requirement of $1...
Development and experimental study of large size composite plasma immersion ion implantation device
NASA Astrophysics Data System (ADS)
Falun, SONG; Fei, LI; Mingdong, ZHU; Langping, WANG; Beizhen, ZHANG; Haitao, GONG; Yanqing, GAN; Xiao, JIN
2018-01-01
Plasma immersion ion implantation (PIII) overcomes the direct-exposure limit of traditional beam-line ion implantation and is suitable for the treatment of complex work-pieces of large size. PIII technology is often used for surface modification of metals, plastics and ceramics. Based on the requirements of surface modification of large-size insulating materials, a composite full-directional PIII device based on an RF plasma source and a metal plasma source is developed in this paper. The device can realize not only gas ion implantation but also metal ion implantation, as well as combined gas and metal ion implantation. It has two metal plasma sources, each containing three cathodes; the cathodes can be switched freely without breaking vacuum. The volume of the vacuum chamber is about 0.94 m3, and the ultimate vacuum is about 5 × 10-4 Pa. The density of the RF plasma in the homogeneous region is about 109 cm-3, and the plasma density in the ion implantation region is about 1010 cm-3. The device can be used for PIII treatment of large-size samples, with sample diameters up to 400 mm. The experimental results show that the plasma discharge in the device is stable and can run for a long time, making it suitable for surface treatment of insulating materials.
Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine
2017-01-13
Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
Comparison of parameters affecting GNP-loaded choroidal melanoma dosimetry; Monte Carlo study
NASA Astrophysics Data System (ADS)
Sharabiani, Marjan; Asadi, Somayeh; Barghi, Amir Rahnamai; Vaezzadeh, Mehdi
2018-04-01
The current study reports the results of tumor dosimetry in the presence of gold nanoparticles (GNPs) of different sizes and concentrations. Because of the limited number of works on brachytherapy of choroidal melanoma in combination with GNPs, this study was performed to determine the optimum GNP size and concentration contributing the highest dose deposition in the tumor region, using two phantom test cases, namely a water phantom and a full Monte Carlo model of the human eye. Both phantoms were simulated with the MCNP5 code. Tumor dosimetry was performed for a typical point photon source with an energy of 0.38 MeV as a high-energy source and a 103Pd brachytherapy source with an average energy of 0.021 MeV as a low-energy source, in the water phantom and eye phantom respectively, for different sizes and concentrations of GNPs. For all of the diameters, an increase in GNP concentration resulted in an increase in the dose deposited in the region of interest. At a given concentration, GNPs with larger diameters contributed more dose to the tumor region, an effect more pronounced in the eye phantom. A size of 100 nm was found to be optimal for achieving the highest energy deposition within the target. This work investigated the optimum parameters affecting macroscopic dose enhancement in GNP-aided brachytherapy of choroidal melanoma; it also has implications for using low-energy photon sources in the presence of GNPs to achieve the highest dose enhancement. The study covered four different sizes and concentrations of GNPs. Considering the sensitivity of human eye tissue, a comprehensive study over a wide range of sizes and concentrations is required to report precise optimum parameters affecting radiosensitivity.
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
NASA Astrophysics Data System (ADS)
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
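The eddy covariance estimate itself is simple: the vertical turbulent flux of a scalar is the covariance of the fluctuating vertical wind w′ and scalar concentration c′. A minimal sketch on synthetic data (in practice w comes from the sonic anemometer and c from a fast counter such as CLASP; the numbers below are invented):

```python
# Eddy-covariance flux sketch: the kinematic flux of a scalar is
# cov(w, c) = mean(w' c'), the covariance of vertical-wind and
# concentration fluctuations over an averaging period.
import random

def eddy_covariance_flux(w, c):
    """Return cov(w, c) = mean(w'c'), the kinematic vertical flux."""
    n = len(w)
    wbar = sum(w) / n
    cbar = sum(c) / n
    return sum((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c)) / n

# Synthetic series where upward gusts carry higher concentrations,
# giving a positive (upward) flux of about 5 * var(w) ~ 0.45.
rng = random.Random(0)
w = [rng.gauss(0.0, 0.3) for _ in range(20000)]           # m s^-1
c = [100.0 + 5.0 * wi + rng.gauss(0.0, 1.0) for wi in w]  # particles cm^-3

flux = eddy_covariance_flux(w, c)
```

The humidity bias mentioned in the abstract enters through c: if particles are sized at fluctuating ambient humidity, apparent concentration fluctuations correlate with w and contaminate exactly this covariance.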
NASA Astrophysics Data System (ADS)
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ωγ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
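The ωγ source model referred to here has the form S(f) = Ω0 / (1 + (f/fc)^γ). As a hedged illustration of fitting it, the sketch below uses a plain grid search standing in for the paper's sensitivity-driven genetic-algorithm scheme; the spectrum and parameter grids are synthetic:

```python
# Sketch of fitting a generalized omega-gamma source model,
# S(f) = Omega0 / (1 + (f/fc)^gamma), to a source spectrum.
# A grid search over (fc, gamma) stands in for the genetic algorithm;
# Omega0 has a closed-form least-squares solution in log space.
import math

def omega_gamma(f, omega0, fc, gamma):
    return omega0 / (1.0 + (f / fc) ** gamma)

def fit_source_spectrum(freqs, spec, fc_grid, gamma_grid):
    best = None
    for fc in fc_grid:
        for g in gamma_grid:
            shape = [1.0 / (1.0 + (f / fc) ** g) for f in freqs]
            # log-domain least squares: Omega0 = exp(mean log residual)
            logo = sum(math.log(s) - math.log(sh)
                       for s, sh in zip(spec, shape)) / len(spec)
            omega0 = math.exp(logo)
            err = sum((math.log(s) - math.log(omega0 * sh)) ** 2
                      for s, sh in zip(spec, shape))
            if best is None or err < best[0]:
                best = (err, omega0, fc, g)
    return best[1:]

# Synthetic spectrum with gamma > 2, as found for the Geysers events.
freqs = [0.5 * i for i in range(1, 60)]
spec = [omega_gamma(f, omega0=2.0, fc=4.0, gamma=2.5) for f in freqs]
om, fc, g = fit_source_spectrum(freqs, spec,
                                fc_grid=[3.0, 3.5, 4.0, 4.5],
                                gamma_grid=[2.0, 2.25, 2.5, 2.75])
# Noise-free data recovers fc = 4.0 and gamma = 2.5 exactly.
```

With γ = 2 this reduces to the classical Brune spectrum; allowing γ > 2 steepens the high-frequency fall-off, which is the departure from self-similarity the study reports.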
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-20
Today's action proposes changes to the existing EPA emission inventory reporting requirements on state, local, and tribal agencies in the current Air Emissions Reporting Requirements rule published on December 17, 2008. The proposed amendments would lower the current threshold for reporting Pb sources as point sources; eliminate the requirement for reporting emissions from wildfires and prescribed fires; and replace a requirement for reporting mobile source emissions with a requirement for reporting the input parameters that can be used to run the EPA models that generate the emissions estimates. In addition, the proposed amendments would reduce the reporting burden on state, local, and tribal agencies by removing the requirements to report daily and seasonal emissions associated with carbon monoxide (CO), ozone (O3), and particulate matter up to 10 micrometers in size (PM10) nonattainment areas and nitrogen oxides (NOX) State Implementation Plan (SIP) call areas, although reporting requirements for those emissions would remain in other regulations. Lastly, the proposed amendments would clarify, remove, or simplify some current emissions reporting requirements which we believe are not necessary or are not clearly aligned with current inventory terminology and practices.
Predicting Attack-Prone Components with Source Code Static Analyzers
2009-05-01
models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications
Investigating a compact phantom and setup for testing body sound transducers
Mansy, Hansen A; Grahe, Joshua; Royston, Thomas J; Sandler, Richard H
2011-01-01
Contact transducers are a key element in experiments involving body sounds, yet the characteristics of these devices are often not known with accuracy, and there are no standardized calibration setups or procedures for testing these sensors. This study investigated the characteristics of a new computer-controlled sound-source phantom for testing sensors. Results suggested that sensors of different sizes impose different phantom requirements. The effectiveness of certain approaches for increasing the spatial and spectral uniformity of the phantom surface signal was studied. Non-uniformities >20 dB were removable, which can be particularly helpful in comparing the characteristics of different-size sensors more accurately. PMID:21496795
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... of the clearing fund as a source of liquidity should it ever be the case that OCC is unable to obtain... margin requirement for the preceding month that resulted in a fund level of at least $1 billion would be... the clearing fund, the proposed rule change would amend the requirement that the minimum size of the...
NASA Astrophysics Data System (ADS)
Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia
2017-04-01
The accurate determination of stress drop, seismic efficiency and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). We find for most of the events a non-self-similar behavior, empirical source spectra that require an ωγ source model with γ > 2 to be well fitted, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces changes with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages.
Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end-member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
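A mixing model of the kind used here seeks source proportions p (ps ≥ 0, Σ ps = 1) that best reproduce the mixture's tracer concentrations. The sketch below uses three hypothetical sources, made-up tracer values, and a coarse simplex grid search to stay dependency-free; it is not the authors' model:

```python
# Un-mixing sketch: recover source proportions p (non-negative,
# summing to one) that best reproduce a mixture's tracer concentrations,
# by exhaustive search over a discretized 3-source simplex.

def mix(sources, p):
    """Predicted mixture concentration for each tracer."""
    return [sum(ps * src[t] for ps, src in zip(p, sources))
            for t in range(len(sources[0]))]

def unmix(sources, mixture, step=0.01):
    """Grid search over the simplex for the least-squares proportions."""
    best = None
    n1 = int(round(1.0 / step))
    for i in range(n1 + 1):
        for j in range(n1 + 1 - i):
            p = (i * step, j * step, 1.0 - (i + j) * step)
            pred = mix(sources, p)
            err = sum((m - q) ** 2 for m, q in zip(mixture, pred))
            if best is None or err < best[0]:
                best = (err, p)
    return best[1]

# Three sources with distinct signatures for three tracers
# (e.g. Sr, Rb, Fe; the concentrations are invented).
sources = [[120.0, 80.0, 3.1], [60.0, 150.0, 4.0], [200.0, 40.0, 2.2]]
true_p = (0.5, 0.3, 0.2)
mixture = mix(sources, true_p)
p_hat = unmix(sources, mixture)
# Recovers (0.5, 0.3, 0.2) on this noise-free synthetic mixture.
```

Real applications replace the grid search with constrained optimization and weight tracers by measurement uncertainty, but the constrained least-squares structure is the same.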
40 CFR 63.346 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards for Chromium Emissions From Hard and Decorative Chromium Electroplating and Chromium Anodizing...) Records of the actual cumulative rectifier capacity of hard chromium electroplating tanks at a facility... size in accordance with § 63.342(c)(2); (13) For sources using fume suppressants to comply with the...
40 CFR 63.346 - Recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Standards for Chromium Emissions From Hard and Decorative Chromium Electroplating and Chromium Anodizing...) Records of the actual cumulative rectifier capacity of hard chromium electroplating tanks at a facility... size in accordance with § 63.342(c)(2); (13) For sources using fume suppressants to comply with the...
40 CFR 63.346 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Standards for Chromium Emissions From Hard and Decorative Chromium Electroplating and Chromium Anodizing...) Records of the actual cumulative rectifier capacity of hard chromium electroplating tanks at a facility... size in accordance with § 63.342(c)(2); (13) For sources using fume suppressants to comply with the...
PREFACE TO SPECIAL SECTION ON PARTICULATE MATTER SUPERSITES
An improved understanding of the key sources, development of the most cost-effective control strategies, and assessment of the health risks associated with PM2.5 require high-quality measurements of PM2.5 composition, size, and concentration over a variety of spatial and temporal scales. However...
Coherence Length and Vibrations of the Coherence Beamline I13 at the Diamond Light Source
NASA Astrophysics Data System (ADS)
Wagner, U. H.; Parson, A.; Rau, C.
2017-06-01
I13 is a 250 m long hard x-ray beamline for imaging and coherent diffraction at the Diamond Light Source. The beamline (6 keV to 35 keV) comprises two independent experimental endstations: one for imaging in direct space using x-ray microscopy and one for imaging in reciprocal space using coherent diffraction based imaging techniques [1]. The coherence experiments in particular pose very high demands on the performance of the beamline instrumentation, requiring extensive testing and optimisation of each component, even during the assembly phase. Various aspects such as the quality of optical components, the mechanical design concept, vibrations, drifts, thermal influences and the performance of motion systems are of particular importance. In this paper we study the impact of the front-end slit size (FE slit size), which determines the horizontal source size, on the coherence length, as well as the detrimental impact of monochromator vibrations, using in-situ x-ray metrology in conjunction with fringe visibility measurements and vibration measurements based on centroid tracking of an x-ray pencil beam with a photon-counting detector.
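The connection between slit size and coherence follows the van Cittert-Zernike theorem: for an effectively incoherent source of width s at distance L, the transverse coherence length scales as ξ ≈ λL/s. A back-of-the-envelope sketch (the energy, distance and slit width below are illustrative, not the measured I13 values):

```python
# Rule-of-thumb transverse coherence length at a synchrotron beamline:
# xi ~ lambda * L / s for an incoherent source (slit) of width s
# at distance L (van Cittert-Zernike estimate).

def wavelength_m(energy_kev):
    """Photon wavelength from energy: lambda = hc/E ~ 1.2398 nm*keV / E."""
    return 1.2398e-9 / energy_kev

def coherence_length(energy_kev, distance_m, slit_m):
    return wavelength_m(energy_kev) * distance_m / slit_m

# 10 keV photons, sample 220 m from a 200-micron front-end slit.
xi = coherence_length(energy_kev=10.0, distance_m=220.0, slit_m=200e-6)
# xi ~ 0.14 mm; halving the slit doubles the coherence length.
```

This is why closing the FE slit trades flux for coherence, the trade-off the fringe-visibility measurements quantify.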
Removal of Tin from Extreme Ultraviolet Collector Optics by an In-Situ Hydrogen Plasma
NASA Astrophysics Data System (ADS)
Elg, Daniel Tyler
Throughout the 1980s and 1990s, as the semiconductor industry upheld Moore's Law and continuously shrank device feature sizes, the wavelength of the lithography source remained at or below the minimum feature size. Since 2001, however, the light source has been the 193 nm ArF excimer laser. While the industry has managed to keep up with Moore's Law, shrinking feature sizes without shrinking the lithographic wavelength has required extra innovations and steps that increase fabrication time, cost, and error. These innovations include immersion lithography and double patterning. Currently, the industry is at the 14 nm technology node; thus, the minimum feature size is an order of magnitude below the exposure wavelength. For the 10 nm node, triple and quadruple patterning have been proposed, at the cost of potentially even more fabrication time and error. Such a trend cannot continue indefinitely in an economic fashion, and it is desirable to decrease the wavelength of the lithography sources. Thus, much research has been invested in extreme ultraviolet lithography (EUVL), which uses 13.5 nm light. While much progress has been made in recent years, some challenges must still be solved in order to yield a throughput high enough for EUVL to be commercially viable for high-volume manufacturing (HVM). One of these problems is collector contamination. Due to the 92 eV energy of a 13.5 nm photon, EUV light must be made by a plasma, rather than by a laser. Specifically, the industrially favored EUV source topology is to irradiate a droplet of molten Sn with a laser, creating a dense, hot laser-produced plasma (LPP) and ionizing the Sn to (on average) the +10 state. Additionally, no materials are known to easily transmit EUV; all EUV light must be collected by a collector optic mirror, which cannot be guarded by a window. The plasmas used in EUV lithography sources expel Sn ions and neutrals, which degrade the quality of collector optics.
The mitigation of this debris is one of the main problems facing potential manufacturers of EUV sources. The debris can damage the collector optic in three ways: sputtering, implantation, and deposition. The first two damage processes are irreversible and are caused by the high energies (1-10 keV) of the ion debris. Debris mitigation methods have largely managed to reduce this problem by using collisions with H2 buffer gas to slow down the energetic ions. However, deposition can take place at all ion and neutral energies, and no mitigation method can deterministically deflect all neutrals away from the collector. Thus, deposition still takes place, lowering the collector reflectivity and increasing the time needed to deliver enough EUV power to pattern a wafer. Additionally, even once EUV reaches HVM insertion, source power will need to be continually increased as feature sizes continue to shrink; this increase in source power may come at the cost of increased debris. Thus, debris mitigation solutions that work for the initial generation of commercial EUVL systems may not be adequate for future generations, and an in-situ technology to clean collector optics without source downtime is required. The novel cleaning solution described in this work is to create hydrogen radicals directly on the collector surface by using the collector itself to drive a capacitively-coupled hydrogen plasma. This allows for radical creation at the desired location without requiring any delivery system or any source downtime. Additionally, the plasma provides energetic radicals that aid in the etching process. This work focuses on two areas: first, experimental collector cleaning and EUV reflectivity restoration; second, developing an understanding of the fundamental processes governing Sn removal.
It will be shown that this plasma technique can clean an entire collector optic and restore EUV reflectivity to multilayer mirrors (MLMs) without damaging them. Additionally, it will be shown that, within the parameter space explored, the limiting factor in Sn etching is not hydrogen radical flux or SnH4 decomposition but ion energy flux. This conclusion is supported by experimental measurements, as well as by a plasma chemistry model of the radical density and a 3D model of SnH4 transport and redeposition.
Focus characterization at an X-ray free-electron laser by coherent scattering and speckle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikorski, Marcin; Song, Sanghoon; Schropp, Andreas
2015-04-14
X-ray focus optimization and characterization based on coherent scattering and quantitative speckle size measurements was demonstrated at the Linac Coherent Light Source. Its performance as a single-pulse free-electron laser beam diagnostic was tested for two typical focusing configurations. The results derived from the speckle size/shape analysis show the effectiveness of this technique in finding the focus' location, size and shape. In addition, its single-pulse compatibility enables users to capture pulse-to-pulse fluctuations in focus properties compared with other techniques that require scanning and averaging.
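The principle behind this diagnostic can be summarized by the far-field speckle relation: coherent illumination of a spot of size D produces speckles of characteristic size ≈ λz/D at detector distance z, so a measured speckle size yields the focus size. A sketch with hypothetical values (not the LCLS measurement parameters):

```python
# Far-field speckle sizing sketch: speckle size ~ lambda * z / D
# for coherent illumination of a spot of size D observed at distance z,
# so the focus size follows from the measured speckle size.

def focus_size_from_speckle(wavelength_m, z_m, speckle_m):
    return wavelength_m * z_m / speckle_m

# 8 keV photons (lambda ~ 1.55 A), detector 4 m downstream,
# measured speckle size 55 microns.
d = focus_size_from_speckle(1.55e-10, 4.0, 55e-6)
# d ~ 11 microns: a smaller focus produces larger speckles.
```

Because a single shot records the whole speckle pattern, the inversion needs no scanning, which is what makes the diagnostic pulse-to-pulse capable.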
Low-Energy Microfocus X-Ray Source for Enhanced Testing Capability in the Stray Light Facility
NASA Technical Reports Server (NTRS)
Gaskin, Jessica; O'Dell, Stephen; Kolodziejczak, Jeff
2015-01-01
Research toward the high-resolution, soft x-ray optics (mirrors and gratings) necessary for the next generation of large x-ray observatories requires x-ray testing using a low-energy x-ray source with fine angular size (<1 arcsecond). To accommodate this demanding requirement, NASA Marshall Space Flight Center (MSFC) has procured a custom, windowless, low-energy microfocus (approximately 0.1 mm spot) x-ray source from TruFocus Corporation that mates directly to the Stray Light Facility (SLF). MSFC X-ray Astronomy team members are internationally recognized for their expertise in the development, fabrication, and testing of grazing-incidence optics for x-ray telescopes. One of the key MSFC facilities for testing novel x-ray instrumentation is the SLF, an approximately 100-m-long beam line equipped with multiple x-ray sources and detectors. The new source adds to the already robust complement of instrumentation, allowing MSFC to support additional internal and community x-ray testing needs.
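The angular-size requirement is easy to check: a 0.1 mm spot viewed over a beam line of order 100 m subtends about 0.2 arcsec, within the <1 arcsecond specification (the viewing distance here is illustrative):

```python
# Small-angle check of the source angular-size requirement:
# theta [arcsec] = (spot / distance) converted from radians.
import math

def angular_size_arcsec(spot_m, distance_m):
    return (spot_m / distance_m) * (180.0 / math.pi) * 3600.0

theta = angular_size_arcsec(1e-4, 100.0)   # ~0.2 arcsec for a 0.1 mm spot
```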
Improvements in the EQ-10 electrodeless Z-pinch EUV source for metrology applications
NASA Astrophysics Data System (ADS)
Horne, Stephen F.; Gustafson, Deborah; Partlow, Matthew J.; Besen, Matthew M.; Smith, Donald K.; Blackborow, Paul A.
2011-04-01
Now that EUV lithography systems are beginning to ship to fabs for next-generation chips, it is increasingly critical that EUV infrastructure development keeps pace. Energetiq Technology has been shipping the EQ-10 Electrodeless Z-pinch™ light source since 2005. The source is currently being used for metrology, mask inspection, and resist development. These applications require especially stable performance in both power and source size. Over the last five years, Energetiq has made many source modifications, including better thermal management as well as high-pulse-rate operation. Recently we have further increased the system power handling and electrical pulse reproducibility. The impact of these modifications on source performance will be reported.
40 CFR 63.53 - Application content for case-by-case MACT determinations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... identified emission point or group of affected emission points, an identification of control technology in... on the design, operation, size, estimated control efficiency and any other information deemed... CATEGORIES Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air...
40 CFR 63.53 - Application content for case-by-case MACT determinations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... identified emission point or group of affected emission points, an identification of control technology in... on the design, operation, size, estimated control efficiency and any other information deemed... CATEGORIES Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air...
40 CFR 63.53 - Application content for case-by-case MACT determinations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... identified emission point or group of affected emission points, an identification of control technology in... on the design, operation, size, estimated control efficiency and any other information deemed... CATEGORIES Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air...
40 CFR 63.53 - Application content for case-by-case MACT determinations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... identified emission point or group of affected emission points, an identification of control technology in... on the design, operation, size, estimated control efficiency and any other information deemed... CATEGORIES Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air...
Deuterium results at the negative ion source test facility ELISE
NASA Astrophysics Data System (ADS)
Kraus, W.; Wünderlich, D.; Fantz, U.; Heinemann, B.; Bonomo, F.; Riedl, R.
2018-05-01
The ITER neutral beam system will be equipped with large radio frequency (RF) driven negative ion sources, with a cross section of 0.9 m × 1.9 m, which have to deliver extracted D- ion beams of 57 A at 1 MeV for 1 h. At the ELISE (Extraction from a Large Ion Source Experiment) test facility, a source of half this size has been operational since 2013. The goal of this experiment is to demonstrate high operational reliability and to achieve the extracted current densities and beam properties required for ITER. Technical improvements of the source design and the RF system were necessary to provide reliable operation in steady state with an RF power of up to 300 kW. While in short pulses the required D- current density has almost been reached, the performance in long pulses is limited, particularly in deuterium, by inhomogeneous and unstable currents of co-extracted electrons. By applying refined caesium evaporation and distribution procedures and by reducing and symmetrizing the electron currents, considerable progress has been made: up to 190 A/m2 of D-, corresponding to 66% of the value required for ITER, has been extracted for 45 min.
Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier
2016-12-01
The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter indicating they present a long term potential source of radiocesium contaminated material in fallout impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). 
The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of particulate contaminated matter that will require diligent management for the foreseeable future. Copyright © 2016 Elsevier Ltd. All rights reserved.
Underwater seismic source. [for petroleum exploration
NASA Technical Reports Server (NTRS)
Yang, L. C. (Inventor)
1979-01-01
Apparatus for generating a substantially oscillation-free seismic signal for use in underwater petroleum exploration, including a bag with walls that are flexible but substantially inelastic, and a pressured gas supply for rapidly expanding the bag to its fully expanded condition is described. The inelasticity of the bag permits the application of high pressure gas to rapidly expand it to full size, without requiring a venting mechanism to decrease the pressure as the bag approaches a predetermined size to avoid breaking of the bag.
1989-07-01
... are established for particular missions. DESCRIPTION OF THE SCOPING CODE: A fast-running FORTRAN code, TCTFOR, was written to perform the parameter ... requirements; i.e., missions which require multi-stage, chemically propelled vehicles. Vehicle Sizing Algorithms: The basic problem is the delivery of a ...
A lower bound on the number of cosmic ray events required to measure source catalogue correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolci, Marco; Romero-Wolf, Andrew; Wissel, Stephanie, E-mail: marco.dolci@polito.it, E-mail: Andrew.Romero-Wolf@jpl.nasa.gov, E-mail: swissel@calpoly.edu
2016-10-01
Recent analyses of cosmic ray arrival directions have resulted in evidence for a positive correlation with active galactic nuclei positions that has weak significance against an isotropic source distribution. In this paper, we explore the sample size needed to measure a highly statistically significant correlation to a parent source catalogue. We compare several scenarios for the directional scattering of ultra-high energy cosmic rays given our current knowledge of the galactic and intergalactic magnetic fields. We find significant correlations are possible for a sample of >1000 cosmic ray protons with energies above 60 EeV.
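The scaling in this abstract can be illustrated with a standard two-proportion sample-size estimate: how many events are needed before a correlated fraction of arrival directions is distinguishable from the isotropic expectation at high significance. A minimal Python sketch; the fractions and significance levels below are illustrative assumptions, not values from the paper.

```python
from math import sqrt

def required_events(p0, p1, z_alpha=5.0, z_beta=1.28):
    """Binomial sample-size estimate: events needed so that a true
    correlated fraction p1 is distinguished from the isotropic
    expectation p0 at z_alpha sigma, with power given by z_beta."""
    num = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return (num / (p1 - p0)) ** 2

# Hypothetical numbers: 10% chance alignment with catalogue sources
# under isotropy vs 25% if a quarter of events point back to sources.
n = required_events(0.10, 0.25)
```

The quadratic dependence on 1/(p1 - p0) is why weak magnetic smearing (which drags p1 toward p0) pushes the required sample into the thousands, consistent with the paper's >1000-proton estimate.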
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Using refractive optics to broaden the focus of an X-ray mirror.
Laundy, David; Sawhney, Kawal; Dhamgaye, Vishal
2017-07-01
X-ray mirrors are widely used at synchrotron radiation sources for focusing X-rays into focal spots of size less than 1 µm. The ability of the beamline optics to change the size of this spot over a range up to tens of micrometres can be an advantage for many experiments such as X-ray microprobe and X-ray diffraction from micrometre-scale crystals. It is a requirement that the beam size change should be reproducible and it is often essential that the change should be rapid, for example taking less than 1 s, in order to allow high data collection rates at modern X-ray sources. In order to provide a controlled broadening of the focused spot of an X-ray mirror, a series of refractive optical elements have been fabricated and installed immediately before the mirror. By translation, a new refractive element is moved into the X-ray beam allowing a variation in the size of the focal spot in the focusing direction. Measurements using a set of prefabricated refractive structures with a test mirror showed that the focused beam size could be varied from less than 1 µm to over 10 µm for X-rays in the energy range 10-20 keV. As the optics is in-line with the X-ray beam, there is no effect on the centroid position of the focus. Accurate positioning of the refractive optics ensures reproducibility in the focused beam profile and no additional re-alignment of the optics is required.
A reference aerosol for a radon reference chamber
NASA Astrophysics Data System (ADS)
Paul, Annette; Keyser, Uwe
1996-02-01
The measurement of radon and radon progenies and the calibration of their detection systems require the production and measurement of aerosols well-defined in size and concentration. In the German radon reference chamber, because of its unique chemical and physical properties, carnauba wax is used to produce standard aerosols. The aerosol size spectra are measured on-line by an aerosol measurement system in the range of 10 nm to 1 μm aerodynamic diameter. The experimental set-ups for the study of adsorption of radioactive ions on aerosols as function of their size and concentration will be described, the results presented and further adaptations for an aerosol jet introduced (for example, for the measurement of short-lived neutron-rich isotopes). Data on the dependence of aerosol radius, ion concentration and element selectivity is collected by using a 252Cf-sf source. The fission products of this source range widely in elements, isotopes and charges. Adsorption and the transport of radioactive ions on aerosols have therefore been studied for various ions for the first time, simultaneously with the aerosol size on-line spectrometry.
Larsson, Daniel H; Lundström, Ulf; Westermark, Ulrica K; Arsenian Henriksson, Marie; Burvall, Anna; Hertz, Hans M
2013-02-01
Small-animal studies require images with high spatial resolution and high contrast due to the small scale of the structures. X-ray imaging systems for small animals are often limited by the microfocus source. Here, the authors investigate the applicability of liquid-metal-jet x-ray sources for such high-resolution small-animal imaging, both in tomography based on absorption and in soft-tissue tumor imaging based on in-line phase contrast. The experimental arrangement consists of a liquid-metal-jet x-ray source, the small-animal object on a rotating stage, and an imaging detector. The source-to-object and object-to-detector distances are adjusted for the preferred contrast mechanism. Two different liquid-metal-jet sources are used, one circulating a Ga∕In∕Sn alloy and the other an In∕Ga alloy for higher penetration through thick tissue. Both sources are operated at 40-50 W electron-beam power with ∼7 μm x-ray spots, providing high spatial resolution in absorption imaging and high spatial coherence for the phase-contrast imaging. High-resolution absorption imaging is demonstrated on mice with CT, showing 50 μm bone details in the reconstructed slices. High-resolution phase-contrast soft-tissue imaging shows clear demarcation of mm-sized tumors at much lower dose than is required in absorption. This is the first application of liquid-metal-jet x-ray sources for whole-body small-animal x-ray imaging. In absorption, the method allows high-resolution tomographic skeletal imaging with potential for significantly shorter exposure times due to the power scalability of liquid-metal-jet sources. In phase contrast, the authors use a simple in-line arrangement to show distinct tumor demarcation of few-mm-sized tumors. This is, to their knowledge, the first small-animal tumor visualization with a laboratory phase-contrast system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, P. H., E-mail: p.charles@qut.edu.au; Crowe, S. B.; Langton, C. M.
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes ≤15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes ≤12 mm. Source occlusion also caused a large change in OPF for field sizes ≤8 mm.
Based on the results of this study, field sizes ≤12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting, and also very precise detector alignment, is required at field sizes at least ≤12 mm and more conservatively ≤15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Crowe, S B; Kairn, T; Knight, R T; Kenny, J; Langton, C M; Trapp, J V
2014-04-01
This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. According to the practical definition established in this project, field sizes ≤ 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤ 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes ≤ 12 mm. Source occlusion also caused a large change in OPF for field sizes ≤ 8 mm. Based on the results of this study, field sizes ≤ 12 mm were considered to be theoretically very small for 6 MV beams.
Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting, and also very precise detector alignment, is required at field sizes at least ≤ 12 mm and more conservatively ≤ 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection. © 2014 American Association of Physicists in Medicine.
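The practical definition above (a 1 mm field-size error changing the output factor by more than 1%) can be sketched numerically against a toy model. The erf roll-off below is a hypothetical stand-in for measured OPF data, chosen only so that the qualitative behaviour, a steep loss of output at small fields, resembles the loss of lateral electronic equilibrium; the scale parameter is an assumption, not a fitted value.

```python
import math

def opf(f_mm, s=9.5):
    """Toy output-factor model (hypothetical, not the paper's data):
    an error-function roll-off mimicking the loss of lateral
    electronic equilibrium as the field shrinks."""
    return math.erf(f_mm / s)

def very_small_threshold(tol=0.01, err_mm=1.0):
    """Largest field size (mm) at which an err_mm field-size error
    shifts the OPF by more than tol (relative), scanned over the
    4-100 mm range used in the study."""
    threshold = None
    for f in range(4, 101):
        rel_change = abs(opf(f + err_mm) - opf(f - err_mm)) / (2 * opf(f))
        if rel_change > tol:
            threshold = f  # monotone decreasing, so the last hit is the threshold
    return threshold
```

Relaxing the tolerance (or halving the positioning error) lowers the threshold, mirroring the paper's 15 mm versus 12 mm distinction.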
Helicon Wave Physics Impacts on Electrodeless Thruster Design
NASA Technical Reports Server (NTRS)
Gilland, James H.
2007-01-01
Effective generation of helicon waves for high density plasma sources is determined by the dispersion relation and plasma power balance. Helicon wave plasma sources inherently require an applied magnetic field of 0.01-0.1 T, an antenna properly designed to couple to the helicon wave in the plasma, and an RF power source at tens to hundreds of MHz, depending on propellant choice. For a plasma thruster, particularly one with a high specific impulse (>2000 s), the physics of the discharge would also have to address the use of electron cyclotron resonance (ECR) heating and magnetic expansion. In all cases the system design includes an optimized magnetic field coil, plasma source chamber, and antenna. A preliminary analysis of such a system, calling on experimental data where applicable and calculations where required, has been initiated at Glenn Research Center. Analysis results showing the mass scaling of various components as well as thruster performance projections and their impact on thruster size are discussed.
Helicon Wave Physics Impacts on Electrodeless Thruster Design
NASA Technical Reports Server (NTRS)
Gilland, James
2003-01-01
Effective generation of helicon waves for high density plasma sources is determined by the dispersion relation and plasma power balance. Helicon wave plasma sources inherently require an applied magnetic field of 0.01-0.1 T, an antenna properly designed to couple to the helicon wave in the plasma, and an RF power source at tens to hundreds of MHz, depending on propellant choice. For a plasma thruster, particularly one with a high specific impulse (>2000 s), the physics of the discharge would also have to address the use of electron cyclotron resonance (ECR) heating and magnetic expansion. In all cases the system design includes an optimized magnetic field coil, plasma source chamber, and antenna. A preliminary analysis of such a system, calling on experimental data where applicable and calculations where required, has been initiated at Glenn Research Center. Analysis results showing the mass scaling of various components as well as thruster performance projections and their impact on thruster size are discussed.
The KATRIN experiment is designed to make a direct measurement of the neutrino mass, scaled up by an order of magnitude in size, precision, and tritium source intensity from previous experiments.
Reduction of noise radiated from open pipe terminations
NASA Astrophysics Data System (ADS)
Davis, M. R.
1989-07-01
A modified Quincke tube has been tested to determine the extent to which sound radiation from an open tube end can be reduced by conversion of the monopole source into a dipole form. It has been found that directivity patterns of the dipole with approximately 20 dB variation can be achieved provided that the out-of-phase tube ends are not too closely spaced. Very large spacings also reduce the effectiveness of the arrangement in reducing radiated power since the source system does not then approximate a simple dipole. Consideration has been given to compact designs which achieve path length differentials by the use of four concentric tubes. The relative size of the two acoustic paths has to be adjusted to allow for the size effect on radiation, requiring a somewhat larger area for the smaller tube. Through flow would require an opposite adjustment of the smaller tube area in this case if the smaller tube presented a smaller resistance to flow, as is likely since it involves straight-through flow. Flow through the system would increase the tuned operating frequency.
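The antiphase two-source model underlying the modified Quincke tube can be sketched numerically. Two monopoles driven 180° out of phase at spacing a radiate a far field proportional to sin(ka·cosθ/2); the function below is an illustrative textbook dipole model, not a model of the paper's apparatus, and gives the level relative to the on-axis lobe.

```python
import math

def dipole_level_db(ka, theta_deg):
    """Far-field level (dB re the on-axis lobe) of two antiphase point
    sources separated by a, where ka = 2*pi*a/wavelength."""
    theta = math.radians(theta_deg)
    p = abs(math.sin(0.5 * ka * math.cos(theta)))      # antiphase pair
    p_ref = abs(math.sin(0.5 * ka))                    # on-axis value
    return 20 * math.log10(p / p_ref) if p > 0 else float("-inf")
```

For a compact spacing (ka ≲ 1) the pattern is a clean cosθ dipole with a deep null broadside, giving variations of the ~20 dB order reported; once ka approaches π, extra lobes appear and the arrangement no longer approximates a simple dipole, matching the abstract's caveat about very large spacings.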
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
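The transfer-function (angular spectrum) method described above can be sketched in a few lines of NumPy. The propagator and the critical-sampling distance (the z at which dx² = λz/N, separating the oversampled and undersampled regimes for the chirp) follow the standard paraxial formulation; the grid size and wavelength are illustrative choices.

```python
import numpy as np

def fresnel_tf(u_in, dx, wavelength, z):
    """Fresnel propagation by the transfer-function (angular spectrum)
    method: multiply the field's spectrum by the paraxial chirp H."""
    n = u_in.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u_in) * H)

def critical_distance(n, dx, wavelength):
    """Distance of ideal sampling, dx**2 = wavelength * z / n. Below
    this z the frequency-domain chirp is oversampled; above it,
    undersampled, reducing the usable observation-plane support."""
    return n * dx**2 / wavelength
```

Because |H| = 1 everywhere, the method conserves total power exactly, which makes a convenient numerical sanity check on any implementation.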
Senftle, F.E.; Macy, R.J.; Mikesell, J.L.
1979-01-01
The fast- and thermal-neutron fluence rates from a 3.7 μg 252Cf neutron source in a simulated borehole have been measured as a function of the source-to-detector distance using air, water, coal, iron ore-concrete mix, and dry sand as borehole media. Gamma-ray intensity measurements were made for specific spectral lines at low and high energies for the same range of source-to-detector distances in the iron ore-concrete mix and in coal. Integral gamma-ray counts across the entire spectrum were also made at each source-to-detector distance. From these data, the specific neutron-damage rate, and the critical count-rate criteria, we show that in an iron ore-concrete mix (low hydrogen concentration), 252Cf neutron sources of 2-40 μg are suitable. The source size required for optimum gamma-ray sensitivity depends on the energy of the gamma ray being measured. In a hydrogenous medium such as coal, similar measurements were made. The results show that sources from 2 to 20 μg are suitable to obtain the highest gamma-ray sensitivity, again depending on the energy of the gamma ray being measured. In a hydrogenous medium, significant improvement in sensitivity can be achieved by using faster electronics; in iron ore, it cannot. © 1979 North-Holland Publishing Co.
Small Stirling dynamic isotope power system for multihundred-watt robotic missions
NASA Technical Reports Server (NTRS)
Bents, David J.
1991-01-01
Free Piston Stirling Engine (FPSE) and linear alternator (LA) technology is combined with radioisotope heat sources to produce a compact dynamic isotope power system (DIPS) suitable for multihundred watt space application which appears competitive with advanced radioisotope thermoelectric generators (RTGs). The small Stirling DIPS is scalable to multihundred watt power levels or lower. The FPSE/LA is a high efficiency convertor in sizes ranging from tens of kilowatts down to only a few watts. At multihundred watt unit size, the FPSE can be directly integrated with the General Purpose Heat Source (GPHS) via radiative coupling; the resulting dynamic isotope power system has a size and weight that compares favorably with the advanced modular (Mod) RTG, but requires less than a third the amount of isotope fuel. Thus the FPSE extends the high efficiency advantage of dynamic systems into a power range never previously considered competitive for DIPS. This results in lower fuel cost and reduced radiological hazard per delivered electrical watt.
Small Stirling dynamic isotope power system for multihundred-watt robotic missions
NASA Technical Reports Server (NTRS)
Bents, David J.
1991-01-01
Free piston Stirling Engine (FPSE) and linear alternator (LA) technology is combined with radioisotope heat sources to produce a compact dynamic isotope power system (DIPS) suitable for multihundred watt space application which appears competitive with advanced radioisotope thermoelectric generators (RTGs). The small Stirling DIPS is scalable to multihundred watt power levels or lower. The FPSE/LA is a high efficiency convertor in sizes ranging from tens of kilowatts down to only a few watts. At multihundred watt unit size, the FPSE can be directly integrated with the General Purpose Heat Source (GPHS) via radiative coupling; the resulting dynamic isotope power system has a size and weight that compares favorably with the advanced modular (Mod) RTG, but requires less than a third the amount of isotope fuel. Thus the FPSE extends the high efficiency advantage of dynamic systems into a power range never previously considered competitive for DIPS. This results in lower fuel cost and reduced radiological hazard per delivered electrical watt.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsson, Daniel H.; Lundstroem, Ulf; Burvall, Anna
Purpose: Small-animal studies require images with high spatial resolution and high contrast due to the small scale of the structures. X-ray imaging systems for small animals are often limited by the microfocus source. Here, the authors investigate the applicability of liquid-metal-jet x-ray sources for such high-resolution small-animal imaging, both in tomography based on absorption and in soft-tissue tumor imaging based on in-line phase contrast. Methods: The experimental arrangement consists of a liquid-metal-jet x-ray source, the small-animal object on a rotating stage, and an imaging detector. The source-to-object and object-to-detector distances are adjusted for the preferred contrast mechanism. Two different liquid-metal-jet sources are used, one circulating a Ga/In/Sn alloy and the other an In/Ga alloy for higher penetration through thick tissue. Both sources are operated at 40-50 W electron-beam power with ≈7 μm x-ray spots, providing high spatial resolution in absorption imaging and high spatial coherence for the phase-contrast imaging. Results: High-resolution absorption imaging is demonstrated on mice with CT, showing 50 μm bone details in the reconstructed slices. High-resolution phase-contrast soft-tissue imaging shows clear demarcation of mm-sized tumors at much lower dose than is required in absorption. Conclusions: This is the first application of liquid-metal-jet x-ray sources for whole-body small-animal x-ray imaging. In absorption, the method allows high-resolution tomographic skeletal imaging with potential for significantly shorter exposure times due to the power scalability of liquid-metal-jet sources. In phase contrast, the authors use a simple in-line arrangement to show distinct tumor demarcation of few-mm-sized tumors. This is, to their knowledge, the first small-animal tumor visualization with a laboratory phase-contrast system.
MacDowell, Alastair A; Celestre, Rich S; Howells, Malcolm; McKinney, Wayne; Krupnick, James; Cambie, Daniella; Domning, Edward E; Duarte, Robert M; Kelez, Nicholas; Plate, David W; Cork, Carl W; Earnest, Thomas N; Dickert, Jeffery; Meigs, George; Ralston, Corie; Holton, James M; Alber, Tom; Berger, James M; Agard, David A; Padmore, Howard A
2004-11-01
At the Advanced Light Source, three protein crystallography beamlines have been built that use as a source one of the three 6 T single-pole superconducting bending magnets (superbends) that were recently installed in the ring. The use of such single-pole superconducting bend magnets enables the development of a hard X-ray program on a relatively low-energy 1.9 GeV ring without taking up insertion-device straight sections. The source is of relatively low power but, owing to the small electron beam emittance, it has high brightness. X-ray optics are required to preserve the brightness and to match the illumination requirements for protein crystallography. This was achieved by means of a collimating premirror bent to a plane parabola, a double-crystal monochromator followed by a toroidal mirror that focuses in the horizontal direction with a 2:1 demagnification. This optical arrangement partially balances aberrations from the collimating and toroidal mirrors such that a tight focused spot size is achieved. The optical properties of the beamline are an excellent match to those required by the small protein crystals that are typically measured. The design and performance of these new beamlines are described.
Optimisation of a propagation-based x-ray phase-contrast micro-CT system
NASA Astrophysics Data System (ADS)
Nesterets, Yakov I.; Gureyev, Timur E.; Dimmock, Matthew R.
2018-03-01
Micro-CT scanners find applications in many areas ranging from biomedical research to material sciences. In order to provide spatial resolution on a micron scale, these scanners are usually equipped with micro-focus, low-power x-ray sources and hence require long scanning times to produce high-resolution 3D images of the object with acceptable contrast-to-noise. Propagation-based phase-contrast tomography (PB-PCT) has the potential to significantly improve the contrast-to-noise ratio (CNR) or, alternatively, reduce the image acquisition time while preserving the CNR and the spatial resolution. We propose a general approach for the optimisation of the PB-PCT imaging system. When applied to an imaging system with fixed parameters of the source and detector, this approach requires optimisation of only two independent geometrical parameters of the imaging system, i.e. the source-to-object distance R1 and geometrical magnification M, in order to produce the best spatial resolution and CNR. If, in addition to R1 and M, the system parameter space also includes the source size and the anode potential, this approach allows one to find a unique configuration of the imaging system that produces the required spatial resolution and the best CNR.
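The two-parameter geometry described above can be illustrated with the textbook quadrature model of system blur, in which the finite source size produces a penumbra that grows with magnification while the detector point-spread function demagnifies. This is a minimal sketch under that Gaussian-addition assumption, not the authors' full optimisation (which also treats CNR); the numeric source and detector sizes are illustrative.

```python
import math

def object_plane_blur(s, d, M):
    """Quadrature blur referred to the object plane: the source penumbra
    s*(M-1)/M combined with the demagnified detector PSF d/M."""
    return math.hypot(s * (M - 1.0) / M, d / M)

def optimal_magnification(s, d):
    """Magnification minimizing the quadrature blur: M = 1 + (d/s)**2,
    found by setting the derivative of the blur with respect to M to zero."""
    return 1.0 + (d / s) ** 2

s, d = 5.0, 50.0           # assumed: 5 um source spot, 50 um detector PSF
M = optimal_magnification(s, d)   # 101.0 for these values
print(M, object_plane_blur(s, d, M))
```

For a source much smaller than the detector PSF, the optimum pushes toward high magnification, which is why micro-focus (or small-spot) sources permit large propagation distances without resolution loss.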
Schriever, G; Mager, S; Naweed, A; Engel, A; Bergmann, K; Lebert, R
1998-03-01
Extreme ultraviolet (EUV) emission characteristics of a laser-produced lithium plasma are determined with regard to the requirements of x-ray photoelectron spectroscopy. The main features of interest are the spectral distribution, photon flux, bandwidth, source size, and emission duration. Laser-produced lithium plasmas are characterized as emitters of intense narrow-band EUV radiation. It is estimated that the lithium Lyman-alpha line emission, in combination with an ellipsoidal silicon/molybdenum multilayer mirror, is a suitable EUV source for an x-ray photoelectron spectroscopy microscope with a 50-meV energy resolution and a 10-µm lateral resolution.
Magnetic plasma confinement for laser ion source.
Okamura, M; Adeyemi, A; Kanesue, T; Tamura, J; Kondo, K; Dabrowski, R
2010-02-01
A laser ion source (LIS) can easily provide a high current beam. However, it has been difficult to obtain a longer beam pulse while keeping a high current. On occasion, longer beam pulses are required by certain applications. For example, a beam pulse of more than 10 µs is required for injecting highly charged beams into a large synchrotron. To extend the beam pulse width, a solenoid field was applied in the drift space of the LIS at Brookhaven National Laboratory. The solenoid field suppressed the diverging angle of the expanding plasma and the beam pulse was widened. It was also observed that the plasma state was conserved after passing through the few-hundred-gauss field of the 480 mm-long solenoid.
Matilda: A mass filtered nanocluster source
NASA Astrophysics Data System (ADS)
Kwon, Gihan
Cluster science provides a good model system for the study of the size dependence of electronic properties, chemical reactivity, and magnetic properties of materials. One of the main interests in cluster science is the nanoscale understanding of chemical reactions and selectivity in catalysis. Therefore, a new cluster system was constructed to study catalysts for applications in renewable energy. Matilda, a nanocluster source, consists of a cluster source and a Retarding Field Analyzer (RFA). A moveable AJA A310 Series 1"-diameter magnetron sputtering gun enclosed in a water-cooled aggregation tube served as the cluster source. A silver coin was used as the sputtering target. The sputtering pressure in the aggregation tube was controlled, ranging from 0.07 to 1 Torr, using a mass flow controller. The mean cluster size was found to be a function of the relative partial pressure (He/Ar), sputtering power, and aggregation length. The kinetic energy distribution of ionized clusters was measured with the RFA. The maximum ion energy was 2.9 eV/atom at a zero pressure ratio. At high Ar flow rates, the mean cluster size was 20-80 nm, and at a 9.5 partial pressure ratio, the mean cluster size was reduced to 1.6 nm. Our results showed that the He gas pressure can be optimized to reduce cluster size variations. Results from SIMION, an electron optics simulation package, supported the basic function of the RFA, a three-element lens, and the magnetic sector mass filter; these simulated results agreed with experimental data. For the size-selection experiment, a channeltron electron multiplier collected the ionized-cluster signal at different positions during Ag deposition on a TEM grid for four and a half hours. The cluster signal was high at the position for neutral clusters, which are not bent by the magnetic field, and the signal decreased rapidly away from the neutral cluster region.
For cluster separation according to mass-to-charge ratio in a magnetic sector mass filter, the ion energy of the clusters and its distribution must be precisely controlled by acceleration or deceleration. Verifying the size separation required a high-resolution microscope. Matilda provided narrow particle size distributions, from the atomic scale up to 4 nm, at different pressure ratios without an additional mass filter. It is a very economical way to produce a relatively narrow particle size distribution.
The Army’s Use of Containerization for Unit Deployments
1991-12-07
Because of their lack of MHE, they may require a smaller container size, like the old CONEX, that can be either man-handled or moved with a 10-ton...equipment. As a general rule, a unit should not take additional containers that will serve only as storage facilities or workplaces in the wartime area of...seamen required to man the existing reserve vessels; provides the government access to a 'healthy' source of shipping vessels versus relying totally on
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and objectives Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms on clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which developers can optimize algorithm performance. Results Intra-algorithm repeatability ranged from 13% (best performing) to 100% (least performing), with most algorithms demonstrating improved repeatability as the tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the least performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefitted from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions Nine of the twelve participating algorithms pass precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm.
No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition comprising the best-performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
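The repeatability percentages quoted above are the kind of figure obtained from test-retest measurement pairs via a repeatability coefficient. A minimal sketch under the standard within-subject model (the 2.77 factor is 1.96·√2) follows; this is not the study's full linear mixed effects analysis, and the sample volumes are hypothetical, not data from the challenge.

```python
import math

def rc_percent(pairs):
    """Repeatability coefficient (%) from test-retest volume pairs.

    Estimates the within-subject coefficient of variation (wCV) from
    paired relative differences, then scales by 2.77 (= 1.96 * sqrt(2)),
    the half-width of the 95% test-retest agreement interval.
    """
    wcv2 = sum((v1 - v2) ** 2 / (2.0 * ((v1 + v2) / 2.0) ** 2)
               for v1, v2 in pairs) / len(pairs)
    return 2.77 * math.sqrt(wcv2) * 100.0

# hypothetical test-retest tumor volumes (mm^3), for illustration only
print(rc_percent([(1000.0, 1100.0), (500.0, 520.0), (800.0, 790.0)]))
```

Two repeated measurements of the same tumor are then expected to differ by less than this percentage 95% of the time, which is the sense in which "±14%" confidence statements are made.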
Power Sources for Micro-Autonomous Vehicles- Challenges and Prospects
NASA Technical Reports Server (NTRS)
Narayan, S. R.; Kisor, A.; Valdez, T. I.; Manohara, H.
2009-01-01
Micro-autonomous vehicle systems are expected to have an expanded role in military missions by providing full-spectrum intelligence, surveillance and reconnaissance support on the battlefield, suppression of enemy defenses, and enabling co-operative (swarm-like) configurations. Of the numerous demanding requirements of autonomy, sensing, navigation, mobility, etc., meeting the requirement of mission duration or endurance is a very challenging one. This requirement is demanding because of the constraints of mass and volume that limit the quantity of energy that can be stored on-board. Energy is required for mobility, payload operation, information processing, and communication. Mobility requirements typically place an extraordinary demand on the specific energy (Wh/kg) and specific power (W/kg) of the power source; the actual distribution of the energy between mobility and other system functions could vary substantially with the mission type. The power requirements for continuous mobility can vary from 100-1000 W/kg depending on the terrain, ground speed and flight speed. Even with the power source accounting for 30% of the mass of the vehicle, the best of rechargeable batteries can provide only up to 1-2 hours of run-time for a continuous power demand of 100 W/kg. In the case of micro-aerial vehicles with flight speed requirements in the range of 5-15 m/s, the mission times rarely exceed 20 minutes [2]. Further, the power required during take-off and hover can be two or three times that needed for steady level flight, and thus the number and sequence of such events is also limited by the mass and size of the power source. For operations such as "perch and stare" or "silent watch" the power demand is often only a tenth of that required during continuous flight. Thus, variation in power demand during the various phases of the mission strongly affects the power source selection.
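The endurance arithmetic above has a convenient property: when both the stored energy and the power demand scale with vehicle mass, the mass cancels and run-time depends only on the specific energy, the specific power demand, and the power-source mass fraction. The sketch below uses an assumed ~400 Wh/kg for a state-of-the-art rechargeable cell; that figure is an illustration, not a value from the source.

```python
def run_time_hours(specific_energy_wh_per_kg, power_demand_w_per_kg,
                   source_mass_fraction=0.3):
    """Endurance when the power source is a fixed fraction of vehicle mass.

    On-board energy = fraction * m * E_s and continuous demand = P * m,
    so the vehicle mass m cancels: t = fraction * E_s / P (hours).
    """
    return source_mass_fraction * specific_energy_wh_per_kg / power_demand_w_per_kg

# assumed ~400 Wh/kg cell, 100 W/kg continuous demand, 30% mass fraction
print(run_time_hours(400, 100))  # -> 1.2, consistent with the 1-2 h estimate
```

The same relation shows why hover (two to three times the level-flight demand) cuts endurance so sharply: run-time is inversely proportional to the specific power demand.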
Space station trace contaminant control
NASA Technical Reports Server (NTRS)
Olcutt, T.
1985-01-01
Different systems for the control of space station trace contaminants are outlined. The issues discussed include: spacecabin contaminant sources, technology base, contaminant control system elements and configuration, approach to contaminant control, contaminant load model definition, spacecraft maximum allowable concentrations, charcoal bed sizing and performance characteristics, catalytic oxidizer sizing and performance characteristics, special sorbent bed sizing, animal and plant research payload problems, and emergency upset contaminant removal. It is concluded that the trace contaminant control technology base is firm, the necessary hardware tools are available, and the previous design philosophy is still applicable. Remaining concerns include the need for, versus the hazard posed by, the catalytic oxidizer; contaminants with very low allowable concentrations; and the impact of relaxing materials requirements.
Acoustophoretic separation of airborne millimeter-size particles by a Fresnel lens.
Cicek, Ahmet; Korozlu, Nurettin; Adem Kaya, Olgun; Ulug, Bulent
2017-03-02
We numerically demonstrate acoustophoretic separation of spherical solid particles in air by means of an acoustic Fresnel lens. Besides gravitational and drag forces, freely-falling millimeter-size particles experience large acoustic radiation forces around the focus of the lens, where the interplay of forces leads to differentiation of particle trajectories with respect to either size or material properties. Due to the strong acoustic field at the focus, the radiation force can divert particles at source intensities significantly smaller than those required for acoustic levitation in a standing field. When the lens is designed to have a focal length of 100 mm at 25 kHz, finite-element method simulations reveal a sharp focus with a full-width at half-maximum of 0.5 wavelengths and a field enhancement of 18 dB. Through numerical calculation of forces and simulation of particle trajectories, we demonstrate size-based separation of acrylic particles at a source sound pressure level of 153 dB, such that particles with diameters larger than 0.5 mm are admitted into the central hole, whereas smaller particles are rejected. In addition, efficient separation of particles with similar acoustic properties, such as polyethylene, polystyrene and acrylic particles of the same size, is also demonstrated.
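The trajectory differentiation above comes from the balance of gravity, drag, and the acoustic radiation force. The radiation force itself requires the computed focal field (a Gor'kov-type calculation) and is omitted here; this sketch covers only the first two terms, with an illustrative acrylic density and a Stokes-drag approximation that is only a rough guide at millimeter sizes, where the Reynolds number can exceed unity.

```python
import math

def forces_on_sphere(diameter_m, density_kg_m3, velocity_m_s,
                     air_viscosity=1.81e-5, g=9.81):
    """Gravitational force and Stokes drag on a falling sphere in air.

    Returns (F_gravity, F_drag) in newtons. Gravity scales with d**3 while
    Stokes drag scales with d, which is why smaller particles are diverted
    more easily for the same acoustic force.
    """
    r = diameter_m / 2.0
    volume = 4.0 / 3.0 * math.pi * r ** 3
    f_gravity = density_kg_m3 * volume * g
    f_drag = 6.0 * math.pi * air_viscosity * r * velocity_m_s
    return f_gravity, f_drag

# illustrative: 0.5 mm acrylic sphere (~1180 kg/m^3) falling at 1 m/s
print(forces_on_sphere(0.5e-3, 1180.0, 1.0))
```

Because gravity drops as d³ while drag drops only as d, sub-threshold particles are dominated by drag and acoustic deflection, producing the size-selective sorting at the focus.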
NASA Astrophysics Data System (ADS)
Flandes, Alberto
2004-08-01
The Dust ballerina skirt is a set of well-defined streams composed of nanometric-sized dust particles that escape from the Jovian system and may be accelerated up to >=200 km/s. The source of this dust is Jupiter's moon Io, the most volcanically active body in the Solar system. The escape of dust grains from Jupiter requires first the escape of these grains from Io. This work is devoted to explaining this escape: the driving of dust particles to great heights and their later injection into the ionosphere of Io may give the particles an equilibrium potential that allows the magnetic field to accelerate them away from Io. The grain sizes obtained through this study match very well the values required for the particles to escape from the Jovian system.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth: the FDTD time step must be fine enough to resolve the highest frequency, while the total number of time steps must span the lowest. This leads to a linear system that is computationally burdensome to solve. We have implemented our code to address this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach, due in part to the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
Simulating the x-ray image contrast to setup techniques with desired flaw detectability
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2015-04-01
The paper provides simulation data for previous work by the author on developing a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to aid implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters, such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution, are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. The simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
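One geometric ingredient of any such technique setup is the classical penumbral unsharpness from a finite focal spot, which is why part-to-source and part-to-detector distances appear among the input parameters above. This sketch shows only that standard radiographic relation, not the author's flaw size parameter model; the distances used are illustrative.

```python
def geometric_unsharpness(source_size, source_to_object, object_to_detector):
    """Penumbral (geometric) unsharpness Ug = f * b / a, for focal-spot size f,
    source-to-object distance a, and object-to-detector distance b (same units)."""
    return source_size * object_to_detector / source_to_object

# e.g. a 0.4 mm focal spot, 600 mm source-to-object, 60 mm object-to-detector
print(geometric_unsharpness(0.4, 600.0, 60.0))  # -> 0.04 (mm of penumbra)
```

Halving the object-to-detector distance or doubling the source-to-object distance halves the penumbra, which sets the trade-off between geometry and exposure time when a required flaw detectability is specified.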
NASA Astrophysics Data System (ADS)
Loisel, G.; Lake, P.; Gard, P.; Dunham, G.; Nielsen-Weber, L.; Wu, M.; Norris, E.
2016-11-01
At Sandia National Laboratories, the Manson model 5 x-ray generator was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and x-ray intensity as the source voltage is varied, for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy-dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and the line intensity is increased for all three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.
Effect of oxygen supply on the size of implantable islet-containing encapsulation devices.
Papas, Klearchos K; Avgoustiniatos, Efstathios S; Suszynski, Thomas M
2016-03-01
Beta-cell replacement therapy is a promising approach for the treatment of diabetes but is currently limited by the human islet availability and by the need for systemic immunosuppression. Tissue engineering approaches that will enable the utilization of islets or β-cells from alternative sources (such as porcine islets or human stem cell derived beta cells) and minimize or eliminate the need for immunosuppression have the potential to address these critical limitations. However, tissue engineering approaches are critically hindered by the device size (similar to the size of a large flat screen television) required for efficacy in humans. The primary factor dictating the device size is the oxygen availability to islets to support their viability and function (glucose-stimulated insulin secretion [GSIS]). GSIS is affected (inhibited) at a much higher oxygen partial pressure [pO2] than that of viability (e.g. 10 mmHg as opposed to 0.1 mmHg). Enhanced oxygen supply (higher pO2) than what is available in vivo at transplant sites can have a profound effect on the required device size (potentially reduce it to the size of a postage stamp). This paper summarizes key information on the effect of oxygen on islet viability and function within immunoisolation devices and describes the potential impact of enhanced oxygen supply to devices in vivo on device size reduction.
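The dependence of device size on oxygen supply sketched above can be made concrete with a 1-D reaction-diffusion balance for a planar tissue slab oxygenated from both faces: the viable half-thickness scales as the square root of the available pO2 drop, so raising the surface pO2 permits a thicker slab and hence a much smaller device footprint for the same islet dose. Parameter values (the Krogh diffusion coefficient and islet oxygen consumption rate) are not given in the abstract and are left symbolic here.

```python
import math

def max_slab_half_thickness(k_krogh, ocr, p_surface, p_crit):
    """Maximum half-thickness (m) of a planar slab oxygenated from both faces.

    From the steady 1-D balance K * d2p/dx2 = OCR with zero flux at the
    midplane, the center pO2 is p_surface - OCR * L**2 / (2 * K), so
    requiring p_center >= p_crit gives L = sqrt(2 * K * (p0 - pc) / OCR).

    k_krogh: Krogh diffusion coefficient (mol m^-1 s^-1 mmHg^-1)
    ocr:     oxygen consumption rate (mol m^-3 s^-1)
    p_surface, p_crit: surface and minimum acceptable pO2 (mmHg)
    """
    return math.sqrt(2.0 * k_krogh * (p_surface - p_crit) / ocr)
```

Note that using the functional threshold (pO2 ~ 10 mmHg for GSIS) rather than the viability threshold (~0.1 mmHg) as p_crit shrinks the admissible thickness, which is the abstract's central point about device sizing.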
1997-01-01
A special lighting technology was developed for space-based commercial plant growth research on NASA's Space Shuttle. Surgeons have used this technology to treat brain cancer on Earth, in two successful operations. The treatment technique, called photodynamic therapy, requires the surgeon to use tiny pinhead-size Light Emitting Diodes (LEDs) (a source releasing long wavelengths of light) to activate light-sensitive, tumor-treating drugs. Laser light has been used for this type of surgery in the past, but the LED light illuminates through all nearby tissues, reaching parts of a tumor that shorter wavelengths of laser light cannot. The new probe is safer because the longer wavelengths of light are cooler than the shorter wavelengths of laser light, making the LED less likely to injure normal brain tissue near the tumor. It can also be used for hours at a time while still remaining cool to the touch. The LED probe consists of 144 tiny pinhead-size diodes, is 9 inches long, and about one-half inch in diameter. The small balloon aids in even distribution of the light source. The LED light source is compact, about the size of a briefcase, and can be purchased for a fraction of the cost of a laser. The probe was developed for photodynamic cancer therapy by the Marshall Space Flight Center under a NASA Small Business Innovative Research program grant.
Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin
2014-01-01
In the design phase of sensor arrays for array signal processing, the estimation performance and system cost are largely determined by the array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear map between the signals observed by these two arrays. The signal ensembles, including the common/innovation sources for the different SLAs, are abstracted as a joint spatial sparsity model, and we use minimization of the concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint calculation over the signals observed by all the SLAs exploits the redundancy caused by the common sources and relaxes the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150
From Extended Nanofluidics to an Autonomous Solar-Light-Driven Micro Fuel-Cell Device.
Pihosh, Yuriy; Uemura, Jin; Turkevych, Ivan; Mawatari, Kazuma; Kazoe, Yutaka; Smirnova, Adelina; Kitamori, Takehiko
2017-07-03
Autonomous micro/nano mechanical, chemical, and biomedical sensors require persistent power sources scaled to their size. Realization of autonomous micro-power sources is a challenging task, as it requires combination of wireless energy supply, conversion, storage, and delivery to the sensor. Herein, we realized a solar-light-driven power source that consists of a micro fuel cell (μFC) and a photocatalytic micro fuel generator (μFG) integrated on a single microfluidic chip. The μFG produces hydrogen by photocatalytic water splitting under solar light. The hydrogen fuel is then consumed by the μFC to generate electricity. Importantly, the by-product water returns back to the photocatalytic μFG via a recirculation loop without losses. Both devices rely on novel phenomena in extended-nano-fluidic channels that ensure ultra-fast proton transport. As a proof of concept, we demonstrate that the μFG/μFC source achieves a remarkable energy density of ca. 17.2 mWh cm-2 at room temperature. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Gaston, Darilyn M.
1991-01-01
Electrical designers of Orbiter payloads face the challenge of determining proper circuit protection/wire size parameters to satisfy Orbiter engineering and safety requirements. This document is the result of a program undertaken to review test data from all available aerospace sources and perform additional testing to eliminate extrapolation errors. The resulting compilation of data was used to develop guidelines for the selection of wire sizes and circuit protection ratings. The purpose is to provide guidance to the engineer to ensure a design that meets Orbiter standards and should be applicable to any aerospace design.
Girdling eastern black walnut to increase heartwood width
Larry D. Godsey; W.D. " Dusty" Walter; H.E. " Gene" Garrett
2004-01-01
Eastern black walnut (Juglans nigra L.) has often been planted at spacings that require pre-commercial thinning. These thinnings are deemed pre-commercial due to the small diameter of the trees and the low ratio of dark wood to light wood. As a consequence of size and wood quality, these thinnings are often an expense rather than a source of revenue...
Harwell Subroutine Library. A Catalogue of Subroutines (1973),
1973-07-01
up the equations A^T A x = A^T b. There may be more than one right-hand side. Remark: If the solution is required see MAO9A. Versions: MAOA ; MAOBAD. Calls...Hermitian MEO8A source decks-modification of OEOIA real general to Hessenberg sparsity pattern TDO2A MC08A, MCI4A spherical co-ordinates GAOIA real
NASA Astrophysics Data System (ADS)
Booske, John H.
2008-05-01
Homeland security and military defense technology considerations have stimulated intense interest in mobile, high power sources of millimeter-wave (mmw) to terahertz (THz) regime electromagnetic radiation, from 0.1 to 10 THz. While vacuum electronic sources are a natural choice for high power, the challenges have yet to be completely met for applications including noninvasive sensing of concealed weapons and dangerous agents, high-data-rate communications, high resolution radar, next generation acceleration drivers, and analysis of fluids and condensed matter. The compact size requirements for many of these high frequency sources require minuscule, microfabricated slow wave circuits. This necessitates electron beams with tiny transverse dimensions and potentially very high current densities for adequate gain. Thus, an emerging family of microfabricated vacuum electronic devices shares many of the same plasma physics challenges that currently confront "classic" high power microwave (HPM) generators, including long-life bright electron beam sources, intense beam transport, parasitic mode excitation, energetic electron interaction with surfaces, and rf air breakdown at output windows. The contemporary plasma physics and other related issues of compact, high power mmw-to-THz sources are compared and contrasted to those of HPM generation, and future research challenges and opportunities are discussed.
High current plasma electron emitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiksel, G.; Almagri, A.F.; Craig, D.
1995-07-01
A high current plasma electron emitter based on a miniature plasma source has been developed. The emitting plasma is created by a pulsed high current gas discharge. The electron emission current is 1 kA at 300 V for a pulse duration of 10 ms. The prototype injector described in this paper will be used for a 20 kA electrostatic current injection experiment in the Madison Symmetric Torus (MST) reversed-field pinch. The source will be replicated in order to attain this total current requirement. The source has a simple design and has proven very reliable in operation. A high emission current, small size (3.7 cm in diameter), and low impurity generation make the source suitable for a variety of fusion and technological applications.
RACT (Reasonably Available Control Technology) determination for five industry categories in Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawks, R.L.; Schlesser, S.P.; Loudin, D.L.
Section 172(b)(2) of the Clean Air Act as amended August 1977, requires that SIP revisions 'provide for the implementation of all reasonably available control measures as expeditiously as practicable.' The use of RACT for stationary sources is defined as the lowest emission limit that a particular source is capable of meeting by the application of control technology that is reasonably available considering technological and economic feasibility. The purpose of this report has been to identify control techniques that best represent RACT for particular emission sources in TSP nonattainment areas in the State of Florida. These sources include phosphate process operations; portland cement plants; electric arc furnaces; sweat or pot furnaces; materials handling, sizing, screening, crushing, and grinding operations.
Development, Integration and Utilization of Surface Nuclear Energy Sources for Exploration Missions
NASA Technical Reports Server (NTRS)
Houts, Michael G.; Schmidt, George R.; Bragg-Sitton, Shannon; Hickman, Robert; Hissam, Andy; Houston, Vance; Martin, Jim; Mireles, Omar; Reid, Bob; Schneider, Todd
2005-01-01
Throughout the past five decades numerous studies have identified nuclear energy as an enhancing or enabling technology for human surface exploration missions. Nuclear energy sources were used to provide electricity on Apollo missions 12, 14, 15, 16, and 17, and on the Mars Viking landers. Nuclear energy sources were used to provide heat on the Pathfinder, Spirit, and Discovery rovers. Scenarios have been proposed that utilize ~1 kWe radioisotope systems for early missions, followed by fission systems in the 10 - 30 kWe range when energy requirements increase. A fission energy source unit size of approximately 150 kWt has been proposed based on previous lunar and Mars base architecture studies. Such a unit could support both early and advanced bases through a building block approach.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximum information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimal information scenario.
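The "random chance" scenario can be sketched in a short simulation. This is an illustrative sketch under simplifying assumptions (one shared observation probability for every code; the function name and parameter values are invented for the example, not taken from the paper):

```python
import random

def samples_to_saturation(n_codes=20, p_observe=0.3, max_steps=10_000, seed=1):
    """Simulate "random chance" sampling: each sampled information source
    reveals each code independently with probability p_observe. Return the
    number of sources sampled until every code has been observed once."""
    rng = random.Random(seed)
    seen = set()
    for step in range(1, max_steps + 1):
        for code in range(n_codes):
            if rng.random() < p_observe:
                seen.add(code)
        if len(seen) == n_codes:
            return step  # theoretical saturation reached
    return max_steps
```

Varying p_observe while holding n_codes fixed (and vice versa) reproduces the qualitative finding that the mean probability of observing codes drives saturation more strongly than the number of codes in the population.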
Specific Features in Measuring Particle Size Distributions in Highly Disperse Aerosol Systems
NASA Astrophysics Data System (ADS)
Zagaynov, V. A.; Vasyanovich, M. E.; Maksimenko, V. V.; Lushnikov, A. A.; Biryukov, Yu. G.; Agranovskii, I. E.
2018-06-01
The distribution of highly dispersed aerosols is studied. Particular attention is given to the diffusion dynamic approach, as it is the best way to determine particle size distribution. It is shown that the problem can be divided into two steps: directly measuring particle penetration through diffusion batteries and solving the inverse problem (obtaining a size distribution from the measured penetrations). No reliable way of solving the so-called inverse problem has been found, but it can be done by introducing a parametrized size distribution (e.g., a gamma distribution). The integral equation is therefore reduced to a system of nonlinear equations that can be solved by elementary mathematical means. Further development of the method requires an increase in sensitivity (i.e., measuring the dimensions of molecular clusters with radioactive sources, along with the activity of diffusion battery screens).
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters, like window size, often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes. The method also needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center of gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
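A minimal center-of-gravity centroid computed in streaming fashion (accumulating sums as pixels arrive, with no full-frame buffer) illustrates the idea. The simple thresholding and the fixed region here are assumptions for the sketch, not the floating-window scheme of the proposed estimator:

```python
def stream_centroid(pixels, width, threshold=0.0):
    """Center-of-gravity centroid accumulated as pixels stream in
    row-major order from the detector; no full-frame buffer is needed."""
    s = sx = sy = 0.0
    for i, v in enumerate(pixels):
        v = max(v - threshold, 0.0)  # crude background subtraction
        x, y = i % width, i // width
        s += v
        sx += v * x
        sy += v * y
    if s == 0.0:
        raise ValueError("no signal above threshold")
    return sx / s, sy / s

# A symmetric 3x3 spot centered on the middle pixel (1, 1):
frame = [0, 1, 0,
         1, 4, 1,
         0, 1, 0]
```

Here stream_centroid(frame, 3) returns (1.0, 1.0), the geometric center of the symmetric spot.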
Methods for the behavioral, educational, and social sciences: an R package.
Kelley, Ken
2007-11-01
Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
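The accuracy-in-parameter-estimation perspective can be illustrated outside R with a generic z-based calculation. This is a sketch only: MBESS itself works with noncentral distributions, and the function name and numbers below are invented for the example:

```python
import math

def n_for_ci_halfwidth(sd, half_width, z=1.96):
    """Smallest n such that a z-based confidence interval for a mean
    has the desired half-width: half_width = z * sd / sqrt(n)."""
    return math.ceil((z * sd / half_width) ** 2)

# Planning for a mean with sd = 15 and a desired 95% CI half-width of 3:
n = n_for_ci_halfwidth(sd=15.0, half_width=3.0)
```

The same planning logic, with the normal quantile replaced by the appropriate noncentral quantity, underlies sample size planning from the accuracy-in-parameter-estimation perspective.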
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loisel, G., E-mail: gploise@sandia.gov; Lake, P.; Gard, P.
2016-11-15
At Sandia National Laboratories, the Manson model 5 x-ray generator was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and line intensity is increased for the three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-03-17
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
Constraints on the extremely high-energy cosmic ray accelerators from classical electrodynamics
NASA Astrophysics Data System (ADS)
Aharonian, F. A.; Belyanin, A. A.; Derishev, E. V.; Kocharovsky, V. V.; Kocharovsky, Vl. V.
2002-07-01
We formulate the general requirements, set by classical electrodynamics, on the sources of extremely high-energy cosmic rays (EHECRs). It is shown that the parameters of EHECR accelerators are strongly limited not only by the particle confinement in large-scale magnetic fields or by the difference in electric potentials (generalized Hillas criterion) but also by the synchrotron radiation, the electro-bremsstrahlung, or the curvature radiation of accelerated particles. Optimization of these requirements in terms of an accelerator's size and magnetic field strength results in the ultimate lower limit to the overall source energy budget, which scales as the fifth power of attainable particle energy. Hard γ rays accompanying generation of EHECRs can be used to probe potential acceleration sites. We apply the results to several populations of astrophysical objects (potential EHECR sources) and discuss their ability to accelerate protons to 1020 eV and beyond. The possibility of gain from ultrarelativistic bulk flows is addressed, with active galactic nuclei and gamma-ray bursts being the examples.
Joining of polymer composite materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magness, F.H.
1990-11-01
Under ideal conditions load bearing structures would be designed without joints, thus eliminating a source of added weight, complexity and weakness. In reality the need for accessibility, repair, and inspectability, added to the size limitations imposed by the manufacturing process and transportation/assembly requirements mean that some minimum number of joints will be required in most structures. The designer generally has two methods for joining fiber composite materials, adhesive bonding and mechanical fastening. As the use of thermoplastic materials increases, a third joining technique -- welding -- will become more common. It is the purpose of this document to provide a review of the available sources pertinent to the design of joints in fiber composites. The primary emphasis is given to adhesive bonding and mechanical fastening with information coming from documentary sources as old as 1961 and as recent as 1989. A third, shorter section on composite welding is included in order to provide a relatively comprehensive treatment of the subject.
Microelectromechanical Systems (MEMS) Broadband Light Source Developed
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.
2003-01-01
A miniature, low-power broadband light source has been developed for aerospace applications, including calibrating spectrometers and powering miniature optical sensors. The initial motivation for this research was based on flight tests of a Fabry-Perot fiberoptic temperature sensor system used to detect aircraft engine exhaust gas temperature. Although the feasibility of the sensor system was proven, the commercial light source optically powering the device was identified as a critical component requiring improvement. Problems with the light source included a long stabilization time (approximately 1 hr), a large amount of heat generation, and a large input electrical power (6.5 W). Thus, we developed a new light source to enable the use of broadband optical sensors in aerospace applications. Semiconductor chip-based light sources, such as lasers and light-emitting diodes, have a relatively narrow range of emission wavelengths in comparison to incandescent sources. Incandescent light sources emit broadband radiation from visible to infrared wavelengths; the intensity at each wavelength is determined by the filament temperature and the materials chosen for the filament and the lamp window. However, present commercial incandescent light sources are large in size and inefficient, requiring several watts of electrical power to obtain the desired optical power, and they emit a large percentage of the input power as heat that must be dissipated. The miniature light source, developed jointly by the NASA Glenn Research Center, the Jet Propulsion Laboratory, and the Lighting Innovations Institute, requires one-fifth the electrical input power of some commercial light sources, while providing similar output light power that is easily coupled to an optical fiber. Furthermore, it is small, rugged, and lightweight. Microfabrication technology was used to reduce the size, weight, power consumption, and potential cost, parameters critical to future aerospace applications.
This chip-based light source has the potential for monolithic fabrication with on-chip drive electronics. Other uses for these light sources are in systems for vehicle navigation, remote sensing applications such as monitoring bridges for stress, calibration sources for spectrometers, light sources for space sensors, display lighting, addressable arrays, and industrial plant monitoring. Two methods for filament fabrication are being developed: wet-chemical etching and laser ablation. Both yield a 25-μm-thick tungsten spiral filament. The proof-of-concept filament shown was fabricated with the wet etch method. Then it was tested by heating it in a vacuum chamber using about 1.25 W of electrical power; it generated bright, blackbody radiation at approximately 2650 K. The filament was packaged in Glenn's clean-room facilities. This design uses three chips vacuum-sealed with glass tape. The bottom chip consists of a reflective film deposited on silicon, the middle chip contains a tungsten filament bonded to silicon, and the top layer is a transparent window. Lifetime testing on the package will begin shortly. The emitted optical power is expected to be approximately 1.0 W with the spectral peak at 1.1 μm.
Identifying sources of aeolian mineral dust: Present and past
Muhs, Daniel R; Prospero, Joseph M; Baddock, Matthew C; Gill, Thomas E
2014-01-01
Aeolian mineral dust is an important component of the Earth’s environmental systems, playing roles in the planetary radiation balance, as a source of fertilizer for biota in both terrestrial and marine realms and as an archive for understanding atmospheric circulation and paleoclimate in the geologic past. Crucial to understanding all of these roles of dust is the identification of dust sources. Here we review the methods used to identify dust sources active at present and in the past. Contemporary dust sources, produced by both glaciogenic and non-glaciogenic processes, can be readily identified by the use of Earth-orbiting satellites. These data show that present dust sources are concentrated in a global dust belt that encompasses large topographic basins in low-latitude arid and semiarid regions. Geomorphic studies indicate that specific point sources for dust in this zone include dry or ephemeral lakes, intermittent stream courses, dune fields, and some bedrock surfaces. Back-trajectory analyses are also used to identify dust sources, through modeling of wind fields and the movement of air parcels over periods of several days. Identification of dust sources from the past requires novel approaches that are part of the geologic toolbox of provenance studies. Identification of most dust sources of the past requires the use of physical, mineralogical, geochemical, and isotopic analyses of dust deposits. Physical properties include systematic spatial changes in dust deposit thickness and particle size away from a source. Mineralogy and geochemistry can pinpoint dust sources by clay mineral ratios and Sc-Th-La abundances, respectively. The most commonly used isotopic methods utilize isotopes of Nd, Sr, and Pb and have been applied extensively in dust archives of deep-sea cores, ice cores, and loess. All these methods have shown that dust sources have changed over time, with far more abundant dust supplies existing during glacial periods. 
Greater dust supplies in glacial periods are likely due to greater production of glaciogenic dust particles from expanded ice sheets and mountain glaciers, but could also include dust inputs from exposed continental and insular shelves now submerged. Future dust sources are difficult to assess, but will likely differ from those of the present because of global warming. Global warming could bring about shifts in dust sources by changes in degree or type of vegetation cover, changes in wind strength, and increases or decreases in the size of water bodies. A major uncertainty in assessing dust sources of the future is related to changes in human land use, which could affect land surface cover, particularly due to increased agricultural endeavors and water usage.
Experience with Round Beam Operation at The Advanced Photon Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Emery, L.; Sajaev, V.
2015-01-01
A very short Touschek lifetime is becoming a common issue for next-generation ultra-low-emittance storage ring light sources. In order to reach a longer beam lifetime, such a machine often requires operating with a vertical-to-horizontal emittance ratio close to unity, i.e., a "round beam". In tests at the APS storage ring, we determined how a round beam can be reached experimentally. Some general issues, such as beam injection, optics measurement and correction, and orbit correction, were also tested. To demonstrate that a round beam was achieved, the beam size ratio was calibrated using beam lifetime measurements.
Particle and Smoke Detection on ISS for Next Generation Smoke Detectors
NASA Technical Reports Server (NTRS)
Urban, David L.; Ruff, Gary; Yuan, Zeng-guang; Sheredy, William; Funk, Greg
2007-01-01
Rapid fire detection requires the ability to differentiate fire signatures from background conditions and nuisance sources. Proper design of a fire detector requires detailed knowledge of all of these signal sources so that a discriminating detector can be designed. Owing to the absence of microgravity smoke data, all current spacecraft smoke detectors were designed based upon normal-g conditions. The removal of buoyancy reduces the velocities in the high temperature zones in flames, increasing the residence time of smoke particles and consequently allowing longer growth time for the particles. Recent space shuttle experiments confirmed that, in some cases, increased particle sizes are seen in low-gravity and that the relative performance of the ISS (International Space Station) and space-shuttle smoke-detectors changes in low-gravity; however, sufficient particle size information to design new detectors was not obtained. To address this issue, the SAME (Smoke Aerosol Measurement Experiment) experiment is manifested to fly on the ISS in 2007. The SAME experiment will make measurements of the particle size distribution of the smoke particulate from several typical spacecraft materials providing quantitative design data for spacecraft smoke detectors. A precursor experiment (DAFT: Dust Aerosol measurement Feasibility Test) flew recently on the ISS and provided the first measurement of the background smoke particulate levels on the ISS. These background levels are critical to the design of future smoke detectors. The ISS cabin was found to be a very clean environment with particulate levels substantially below the space shuttle and typical ground-based environments.
Swords to plowshares: Shock wave applications to advanced lithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trucano, T.G.; Grady, D.E.; Kubiak, G.D.
1995-03-01
Extreme UltraViolet Lithography (EUVL) seeks to apply radiation in a wavelength region centered near 13 nm to produce microcircuits having feature sizes of 0.1 micron or less. A critical requirement for the commercial application of this technology is the development of an economical, compact source of this radiation which is suitable for lithographic applications. A good candidate is a laser-plasma source, which is generated by the interaction of an intermediate intensity laser pulse (up to 10{sup 12} W/cm{sup 2}) with a metallic target. While such a source has radiative characteristics which satisfy the needs of an EUVL source, the debris generated during the laser-target interaction strikes at the economy of the source. Here, the authors review the use of concepts and computer modeling, originally developed for hypervelocity impact analysis, to study this problem.
Deconvolution Methods and Systems for the Mapping of Acoustic Sources from Phased Microphone Arrays
NASA Technical Reports Server (NTRS)
Humphreys, Jr., William M. (Inventor); Brooks, Thomas F. (Inventor)
2012-01-01
Mapping coherent/incoherent acoustic sources as determined from a phased microphone array. A linear configuration of equations and unknowns is formed by accounting for a reciprocal influence of one or more cross-beamforming characteristics thereof at varying grid locations among the plurality of grid locations. An equation derived from the linear configuration of equations and unknowns can then be solved iteratively. The equation is obtained by imposing a constraint equivalent to the physical assumption that the coherent sources have only in-phase coherence. The size of the problem may then be reduced using zoning methods. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with a phased microphone array (microphones arranged in an optimized grid pattern including a plurality of grid locations) in order to compile an output presentation thereof, thereby removing beamforming characteristics from the resulting output presentation.
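The iterative solution of such a beamforming linear system with a physical positivity constraint can be sketched as a nonnegative Gauss-Seidel sweep, broadly in the spirit of the DAMAS family of deconvolution algorithms. The matrix, vector, and function names below are illustrative assumptions, not the patented method:

```python
def solve_nonnegative(A, b, sweeps=200):
    """Gauss-Seidel sweeps for A x = b with each component clamped to be
    nonnegative after its update, enforcing a physical positivity
    constraint on the recovered source strengths."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            r = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = max(r / A[i][i], 0.0)
    return x

# Toy 2-point source map: a diagonally dominant propagation matrix and a
# beamform output vector whose unconstrained least-squares solution has a
# small negative component that the constraint clamps to zero.
A = [[1.0, 0.2],
     [0.2, 1.0]]
b = [1.2, 0.2]
```

With these toy inputs the sweep converges to [1.2, 0.0]: the negative least-squares component is suppressed rather than smeared into the source map.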
Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone
Cole, J. M.; Wood, J. C.; Lopes, N. C.; Poder, K.; Abel, R. L.; Alatabi, S.; Bryant, J. S. J.; Jin, A.; Kneip, S.; Mecseki, K.; Symes, D. R.; Mangles, S. P. D.; Najmudin, Z.
2015-01-01
A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications. PMID:26283308
Sources of variability in collection and preparation of paint and lead-coating samples.
Harper, S L; Gutknecht, W F
2001-06-01
Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that data generated will be adequate for its intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar and pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate similar homogenous particle size distributions required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar and pestle grinding, 4% had 20% or greater loss with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Analyses of samples from different locations on apparently identical surfaces were found to vary by more than a factor of two, both in Pb concentration (mg/cm^2 or %) and in areal coating density (g/cm^2). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg/cm^2 Pb. In conclusion, these sources of variability must be considered in development and/or application of any sampling and analysis methodologies.
NASA Astrophysics Data System (ADS)
Sidorov, Vladimir P.; Melzitdinova, Anna V.
2017-10-01
This paper presents methods for determining thermal constants from measurements of weld width under a normal-circular heat source. The method is based on contouring isolines of "effective power - thermal diffusivity coefficient". Determining these coefficients makes it possible to set requirements on the precision with which welding parameters must be maintained, with accuracy sufficient for engineering practice.
Wildlife habitats in managed rangelands—the Great Basin of southeastern Oregon: riparian zones.
Jack Ward Thomas; Chris Maser; Jon E. Rodiek
1979-01-01
Riparian zones can be identified by the presence of vegetation that requires free or unbound water or conditions that are more moist than normal (fig. 1) (Franklin and Dyrness 1973, Minore and Smith 1971). Riparian zones can vary considerably in size and vegetative complex because of the many combinations that can be created between water sources (fig. 2) and physical...
Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2015-01-01
The paper provides simulation data of previous work by the author in developing a model for estimating detectability of crack-like flaws in radiography. The methodology is being developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
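The geometric inputs varied in these simulations (source size, part-to-source distance, part-to-detector distance) enter radiographic image quality through the standard geometric-unsharpness and magnification relations. A minimal sketch of those relations follows; the helper functions and numbers are illustrative, not the author's calculator application:

```python
def geometric_unsharpness(source_size, source_to_part, part_to_detector):
    """Penumbra Ug = f * b / a, where f is the focal-spot (source) size,
    a the source-to-part distance, and b the part-to-detector distance.
    All lengths must be in the same unit."""
    return source_size * part_to_detector / source_to_part

def magnification(source_to_part, part_to_detector):
    """Projective magnification M = (a + b) / a."""
    return (source_to_part + part_to_detector) / source_to_part

# A 1 mm source with the part 1000 mm from the source and the detector
# 50 mm behind the part:
ug = geometric_unsharpness(1.0, 1000.0, 50.0)  # 0.05 mm penumbra
m = magnification(1000.0, 50.0)                # 1.05x
```

Moving the detector closer to the part or the part farther from the source shrinks the penumbra, which is why these distances appear among the varied input parameters.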
Galileo probe battery systems design
NASA Technical Reports Server (NTRS)
Dagarin, B. P.; Van Ess, J. S.; Marcoux, L. S.
1986-01-01
NASA's Galileo mission to Jupiter will consist of a Jovian orbiter and an atmospheric entry probe. The power for the probe will be derived from two primary power sources. The main source is composed of three Li-SO2 battery modules containing 13 D-size cell strings per module. These are required to retain capacity for 7.5 years, support a 150-day clock, and power a 7-hour mission sequence with loads increasing from 0.15 to 9.5 amperes in the last 30 minutes. This main power source is supplemented by two thermal batteries (CaCrO4-Ca) for use in firing the pyrotechnic initiators during the atmospheric staging events. This paper describes design development and testing of these batteries at the system level.
Compression of transmission bandwidth requirements for a certain class of band-limited functions.
NASA Technical Reports Server (NTRS)
Smith, I. R.; Schilling, D. L.
1972-01-01
A study of source-encoding techniques that afford a reduction of data-transmission rates is made with particular emphasis on the compression of transmission bandwidth requirements of band-limited functions. The feasibility of bandwidth compression through analog signal rooting is investigated. It is found that the N-th roots of elements of a certain class of entire functions of exponential type possess contour integrals resembling Fourier transforms, the Cauchy principal values of which are compactly supported on an interval one N-th the size of that of the original function. Exploiting this theoretical result, it is found that synthetic roots can be generated, which closely approximate the N-th roots of a certain class of band-limited signals and possess spectra that are essentially confined to a bandwidth one N-th that of the signal subjected to the rooting operation. A source-encoding algorithm based on this principle is developed that allows the compression of data-transmission requirements for a certain class of band-limited signals.
Stegemann, Sven; Riedl, Regina; Sourij, Harald
2017-01-30
The clear identification of drug products by patients is essential for safe and effective medication management. In order to understand the impact of shape, size and color on medication identification, a study was performed in subjects with type 2 diabetes mellitus (T2D). Ten model drugs differentiated by shape, size and color were evaluated using a mixed method of medication schedule preparation by the participants followed by a semi-structured interview. Detection times were fastest for the large round tablet shape and the bi-chromatic forms. Larger sizes were easier to identify than smaller sizes, except for the bi-chromatic forms. Shape was the major source of errors, followed by size and then color. The results of this study suggest that color as a single dimension is perceived more effectively by subjects with T2D than shape and size, which require more demanding processing of three dimensions and depend on the viewing perspective. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
1997-01-01
A special lighting technology was developed for space-based commercial plant growth research on NASA's Space Shuttle. Surgeons have used this technology to treat brain cancer on Earth, in two successful operations. The treatment technique, called photodynamic therapy, requires the surgeon to use tiny pinhead-size Light Emitting Diodes (LEDs) (a source releasing long wavelengths of light) to activate light-sensitive, tumor-treating drugs. Laser light has been used for this type of surgery in the past, but the LED light illuminates through all nearby tissues, reaching parts of a tumor that shorter wavelengths of laser light cannot. The new probe is safer because the longer wavelengths of light are cooler than the shorter wavelengths of laser light, making the LED less likely to injure normal brain tissue near the tumor. It can also be used for hours at a time while still remaining cool to the touch. The LED probe consists of 144 tiny pinhead-size diodes, is 9 inches long, and about one-half inch in diameter. A small balloon aids in even distribution of the light source. The LED light source is compact, about the size of a briefcase, and can be purchased for a fraction of the cost of a laser. The probe was developed for photodynamic cancer therapy by the Marshall Space Flight Center under a NASA Small Business Innovative Research program grant.
NASA Astrophysics Data System (ADS)
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2014-03-01
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small-field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
NASA Technical Reports Server (NTRS)
1980-01-01
Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage and generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The large size span of generators, from 10 kW to 30 MW, means that a variety of remote monitoring and control equipment may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.
A suite of diagnostics to validate and optimize the prototype ITER neutral beam injector
NASA Astrophysics Data System (ADS)
Pasqualotto, R.; Agostini, M.; Barbisan, M.; Brombin, M.; Cavazzana, R.; Croci, G.; Dalla Palma, M.; Delogu, R. S.; De Muri, M.; Muraro, A.; Peruzzo, S.; Pimazzoni, A.; Pomaro, N.; Rebai, M.; Rizzolo, A.; Sartori, E.; Serianni, G.; Spagnolo, S.; Spolaore, M.; Tardocchi, M.; Zaniol, B.; Zaupa, M.
2017-10-01
The ITER project requires additional heating provided by two neutral beam injectors using 40 A negative deuterium ions accelerated at 1 MV. As the beam requirements have never been experimentally met, a test facility is under construction at Consorzio RFX, which hosts two experiments: SPIDER, full-size 100 kV ion source prototype, and MITICA, 1 MeV full-size ITER injector prototype. Since diagnostics in ITER injectors will be mainly limited to thermocouples, due to neutron and gamma radiation and to limited access, it is crucial to thoroughly investigate and characterize in more accessible experiments the key parameters of source plasma and beam, using several complementary diagnostics assisted by modelling. In SPIDER and MITICA the ion source parameters will be measured by optical emission spectroscopy, electrostatic probes, cavity ring down spectroscopy for H^- density and laser absorption spectroscopy for cesium density. Measurements over multiple lines-of-sight will provide the spatial distribution of the parameters over the source extension. The beam profile uniformity and its divergence are studied with beam emission spectroscopy, complemented by visible tomography and neutron imaging, which are novel techniques, while an instrumented calorimeter based on custom unidirectional carbon fiber composite tiles observed by infrared cameras will measure the beam footprint on short pulses with the highest spatial resolution. All heated components will be monitored with thermocouples: as these will likely be the only measurements available in ITER injectors, their capabilities will be investigated by comparison with other techniques. SPIDER and MITICA diagnostics are described in the present paper with a focus on their rationale, key solutions and most original and effective implementations.
The Relationship of Body Size and Adiposity to Source of Self-Esteem in College Women
ERIC Educational Resources Information Center
Moncur, Breckann; Bailey, Bruce W.; Lockhart, Barbara D.; LeCheminant, James D.; Perkins, Annette E.
2013-01-01
Background: Studies looking at self-esteem and body size or adiposity generally demonstrate a negative relationship. However, the relationship between the source of self-esteem and body size has not been examined in college women. Purpose: The purpose of this study was to evaluate the relationship of body size and adiposity to source of…
Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S
2017-10-04
Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. The objective of this study was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from each RCT. The source of funding was identified for each trial and the groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in all of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.
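For reference, the kind of a priori sample size calculation whose absence the authors highlight typically looks like the following for a two-group comparison of proportions (a standard normal-approximation formula; the success rates, significance level and power below are hypothetical examples, not values from the reviewed trials):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate animals required per arm to detect a difference between
    two proportions p1 and p2 (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. hoping to detect an improvement in treatment success from 60% to 85%
print(n_per_group(0.60, 0.85))   # 47 animals per arm
```

Against a figure like this, the reported median of 30 animals per whole trial suggests many of the reviewed studies may be underpowered for modest effect sizes.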
The requirements for low-temperature plasma ionization support miniaturization of the ion source.
Kiontke, Andreas; Holzer, Frank; Belder, Detlev; Birkemeyer, Claudia
2018-06-01
Ambient ionization mass spectrometry (AI-MS), the ionization of samples under ambient conditions, enables fast and simple analysis of samples without or with little sample preparation. Due to their simple construction and low resource consumption, plasma-based ionization methods in particular are considered ideal for use in mobile analytical devices. However, systematic investigations that have attempted to identify the optimal configuration of a plasma source to achieve the sensitive detection of target molecules are still rare. We therefore used a low-temperature plasma ionization (LTPI) source based on dielectric barrier discharge with helium employed as the process gas to identify the factors that most strongly influence the signal intensity in the mass spectrometry of species formed by plasma ionization. In this study, we investigated several construction-related parameters of the plasma source and found that a low wall thickness of the dielectric, a small outlet spacing, and a short distance between the plasma source and the MS inlet are needed to achieve optimal signal intensity with a process-gas flow rate of as little as 10 mL/min. In conclusion, this type of ion source is especially well suited for downscaling, which is usually required in mobile devices. Our results provide valuable insights into the LTPI mechanism; they reveal the potential to further improve its implementation and standardization for mobile mass spectrometry as well as our understanding of the requirements and selectivity of this technique. Graphical abstract Optimized parameters of a dielectric barrier discharge plasma for ionization in mass spectrometry. The electrode size, shape, and arrangement, the thickness of the dielectric, and distances between the plasma source, sample, and MS inlet are marked in red. The process gas (helium) flow is shown in black.
Growing Larger Crystals for Neutron Diffraction
NASA Technical Reports Server (NTRS)
Pusey, Marc
2003-01-01
Obtaining crystals of suitable size and high quality has been a major bottleneck in macromolecular crystallography. With the advent of advanced X-ray sources and methods, the question of size has rapidly dwindled, almost to the point where if one can see the crystal then it is big enough. Quality is another issue, and major national and commercial efforts were established to take advantage of the microgravity environment in an effort to obtain higher quality crystals. Studies of the macromolecule crystallization process were carried out in many labs in an effort to understand what affected the resultant crystal quality on Earth, and how microgravity improved the process. While technological improvements are reducing the minimum crystal size required, neutron diffraction structural studies still require considerably larger crystals, by several orders of magnitude, than X-ray studies. From a crystal growth physics perspective there is no reason why these 'large' crystals cannot be obtained: the question is generally more one of supply than of mechanistic limitation. This talk will discuss our laboratory's current model for macromolecule crystal growth, with highlights pertaining to the growth of crystals suitable for neutron diffraction studies.
A low-cost and portable realization on fringe projection three-dimensional measurement
NASA Astrophysics Data System (ADS)
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2015-12-01
Fringe projection three-dimensional measurement is widely applied across a broad range of industrial applications. Traditional fringe projection systems have the disadvantages of high expense, large size, and complicated calibration requirements. In this paper we introduce a low-cost and portable realization of three-dimensional measurement with a Pico projector. It has the advantages of low cost, compact physical size, and flexible configuration. In the proposed fringe projection system, there is no restriction on the camera's and projector's relative alignment with respect to parallelism and perpendicularity during installation. Moreover, a plane-based calibration method is adopted, which avoids critical requirements on the calibration system such as an additional gauge block or a precise linear z stage. The error sources present in the proposed system are also discussed. The experimental results demonstrate the feasibility of the proposed low-cost and portable fringe projection system.
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-01-01
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371
Efficacy of adaptation measures to future water scarcity on a global scale
NASA Astrophysics Data System (ADS)
Yoshikawa, S.; Kanae, S.
2015-12-01
Water supply sources for all sectors are critically important for agricultural and industrial productivity. The current rapid increase in water use is considered unsustainable and threatens human life. In our previous study (Yoshikawa et al., 2014 in HESS), we estimated the time-varying dependence of water requirements on water supply sources during past and future periods using the global water resources model H08. The sources of water requirements were specified using four categories: rivers, large reservoirs, medium-size reservoirs, and non-local non-renewable blue water (NNBW). We also estimated ΔNNBW, defined as the increase in NNBW from the past to the future. These results indicate that further development of water supply sources is required in order to sustain future water use. Coping with the water scarcity represented by ΔNNBW requires adaptation measures. To address adaptation measures, we need to set adaptation options, which can be divided between 'supply enhancement' and 'demand management'. Supply enhancement includes increased storage, groundwater development, inter-basin transfer, desalination and re-use of urban waste water. Demand management is defined as a set of actions controlling water demand by reducing water loss, increasing water productivity, and re-allocating water. In this study, we focus on estimating future water demand while taking into account several adaptation measures using the H08 model.
Imam, Neena; Barhen, Jacob
2009-01-01
For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. These sensors rely heavily on battery-operated system components to achieve highly functional automation in signal and information processing. In order to keep communication requirements minimal, it is desirable to perform as much processing on the receiver platforms as possible. However, the complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be readily met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building block algorithms used in underwater source localization on the optical-core digital processing platform recently introduced by Lenslet Inc. This demonstration of considerably faster signal processing capability should be of substantial significance to the design and innovation of future generations of distributed sensor networks.
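One such building-block computation is time-delay estimation between sensor pairs, conventionally done by locating the peak of a cross-correlation. The sketch below is a generic illustration with a hypothetical signal, not the Lenslet optical-core implementation:

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the integer-sample delay of y relative to x via the peak
    of the full cross-correlation (the core of TDOA localization)."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)

# Illustrative setup: the same short pulse arrives 7 samples later at the
# second sensor, with a little independent noise on each channel.
rng = np.random.default_rng(0)
pulse = np.zeros(64)
pulse[10:14] = [1.0, 2.0, 1.5, 0.5]
x = pulse + 0.01 * rng.standard_normal(64)
y = np.roll(pulse, 7) + 0.01 * rng.standard_normal(64)
print(estimate_delay(x, y))  # recovers the 7-sample delay
```

Pairwise delays like this, combined with the known sensor geometry and sound speed, are what a localization solver turns into a source position.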
Gary W. Miller; Patrick H. Brose; Jeffrey D. Kochenderfer; James N. Kochenderfer; Kurt W. Gottschalk; John R. Denning
2016-01-01
Successful oak (Quercus spp.) regeneration requires the presence of competitive sources of oak reproduction before parent oaks are harvested. Mountain laurel (Kalmia latifolia) in the understory of many Appalachian forests prevents new oak seedlings from receiving adequate sunlight to survive and grow into competitive size classes. This study examined the efficacy of...
Influence and Modeling of Residual Stresses in Thick Walled Pressure Vessels with Through Holes
2012-02-28
[List-of-figures residue: Figure 4, 'Environmental cracking observed in evacuator hole'; Figure 5, 'Stresses present in straight evacuator...'] Assessment of initial damage: a thorough investigation was undertaken on vessels similar in size and strength level to pressure vessels 85A and 85B, ...suggesting that the source of the residual stresses required to initiate and propagate these environmental cracks is not a resultant of the typical...
Chalcogenide Glass Lasers on Silicon Substrate Integrated Photonics
2016-07-08
AFRL-AFOSR-UK-TR-2016-0013; Clara Dimas, Masdar Institute of Science & Technology. [Report documentation (SF-298) form residue removed.] ...communication by reducing coupling losses, chip size, energy requirements and manufacturing cost. Chalcogenide glass (ChG) light sources doped with rare earth...
NASA Astrophysics Data System (ADS)
Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.
2014-06-01
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and silhouette width validation values and the K-means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data.
These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
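The workflow sketched in this abstract (cluster the spectra, then choose the number of clusters with a validation index such as the silhouette width) can be illustrated as follows. This is a minimal, hand-rolled stand-in on synthetic data, not the authors' code, and it omits the GAM parameterisation step:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal K-means with deterministic, spread-out initialisation."""
    centers = X[:: len(X) // k][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def mean_silhouette(X, labels):
    """Average silhouette width (b - a) / max(a, b), the validation index
    used here to choose the number of clusters."""
    d = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    n, s = len(X), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = d[i, own].mean() if own.any() else 0.0
        b = min(d[i, labels == j].mean() for j in set(labels) if j != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Three synthetic, well-separated "spectrum" clusters of 30 samples each.
rng = np.random.default_rng(1)
X = np.vstack([c + 0.1 * rng.standard_normal((30, 4))
               for c in ([0, 0, 0, 0], [3, 3, 0, 0], [0, 3, 3, 3])])
scores = {k: mean_silhouette(X, kmeans(X, k)) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
print(best_k)  # the silhouette index peaks at the true cluster count, 3
```

In the study itself the feature vectors would be the GAM-parameterised size spectra and several clustering algorithms (K-means, PAM, CLARA, SOM) would be compared on the same index.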
NASA Astrophysics Data System (ADS)
Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.
2014-11-01
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and Silhouette width validation values and the K means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data. 
These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
NASA Astrophysics Data System (ADS)
Fourmaux, Sylvain; Kieffer, Jean-Claude; Krol, Andrzej
2017-03-01
We are developing an ultrahigh-spatial-resolution (FWHM < 2 μm), high-brilliance x-ray source for rapid in vivo tomographic microvasculature imaging, micro-CT angiography (μCTA), in small animal models using an optimized contrast agent. It exploits the Laser Wakefield Accelerator (LWFA) betatron x-ray emission phenomenon. An ultrashort high-intensity laser pulse interacting with a supersonic gas jet produces an ion cavity ("bubble") in the plasma in the wake of the laser pulse. Electrons that are injected into this bubble gain energy, perform wiggler-like oscillations and generate bursts of incoherent x-rays with a characteristic duration comparable to the laser pulse duration, a continuous synchrotron-like spectral distribution that can extend to hundreds of keV, very high brilliance, a very small focal spot and highly directional emission in cone-beam geometry. The LWFA betatron x-ray source created in our lab produced 10^21-10^23 photons·shot^-1·mrad^-2·mm^-2 per 0.1% bandwidth, with mean critical energy in the 12-30 keV range. The x-ray source size for a single laser shot was FWHM = 1.7 μm, the x-ray beam divergence was 20-30 mrad, and the effective focal spot size for multiple shots was FWHM = 2 μm. Projection images of simple phantoms and complex biological objects, including insects and mice, were obtained in single laser shots. We conclude that ultrahigh-spatial-resolution μCTA (FWHM 2 μm), requiring thousands of projection images, could be accomplished using LWFA betatron x-ray radiation in approximately 40 s with our existing 220 TW laser, and in sub-second times with the next generation of ultrafast lasers and x-ray detectors, as opposed to the several hours required using conventional microfocal x-ray tubes. Thus, sub-second ultrahigh-resolution in vivo microtomographic microvasculature imaging (in both absorption and phase contrast modes) in small animal models of cancer and vascular diseases will be feasible with an LWFA betatron x-ray source.
Mulenga, Philippe Cilundika; Kazadi, Alex Bukasa
2016-01-01
Penis size is a huge source of anxiety for many men. Some are unhappy with their penis size, as shown in the study conducted by Tiggemann in 2008. There are relatively few studies on erect penis size; this may reflect cultural taboos of researchers or doctors interacting with men who are in a state of sexual arousal. On the other hand, it is important for those who announce details on penis size to give the average penis size first and then the sizes suggested by researchers. We performed a cross-sectional survey in the two major urban centres of the Democratic Republic of Congo, namely Kinshasa and Lubumbashi, over a period of two years from May 2014 to May 2016. A total of 21 information sources constituted our sample, 8 in Kinshasa and 13 in Lubumbashi. We found this sufficient because, in our culture, discussing sexual matters is rare. The parameters studied were: the nature of the source, the accuracy of the measurement method, the presence of bibliographical references, and the announced penis size. The majority of information sources used were radio or television broadcasts (23.8%); this can be explained by the increasing number of radio and television stations in our country, especially in large cities. With regard to accuracy of information about the penis measurement method, our study showed that the majority of information sources did not indicate it when they announced penis size to the public (85.7%). Several sources did not report bibliographical references (57.1%). Analysis of the announced data on penis size showed that the average penis size given was: 14 cm (28.6%), 15 cm (23.8%) and 15-20 cm (19%). All these results are intended as a warning to all players responsible for diffusing information on sexual health (penis size): scientific rigor consists in seeking information from reliable sources.
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
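The "random chance" scenario can be simulated in a few lines; the model below is an illustrative reconstruction with hypothetical parameters (20 codes, a uniform observation probability), not the author's exact simulation code:

```python
import random

def draws_to_saturation(num_codes=20, p_observe=0.3, max_draws=10_000, seed=7):
    """Simulate 'random chance' sampling: each sampled source reveals each
    code independently with probability p_observe; count the sources needed
    until every code has been observed at least once."""
    rng = random.Random(seed)
    seen = set()
    for n in range(1, max_draws + 1):
        seen.update(c for c in range(num_codes) if rng.random() < p_observe)
        if len(seen) == num_codes:
            return n
    return max_draws

# Saturation comes far sooner when codes are more likely to be observed.
print(draws_to_saturation(p_observe=0.5), draws_to_saturation(p_observe=0.1))
```

Varying `p_observe` changes the required sample size far more sharply than varying `num_codes`, consistent with the finding that saturation depends more on the mean probability of observing codes than on the number of codes.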
Maintenance of neuronal size gradient in MNTB requires sound-evoked activity.
Weatherstone, Jessica H; Kopp-Scheinpflug, Conny; Pilati, Nadia; Wang, Yuan; Forsythe, Ian D; Rubel, Edwin W; Tempel, Bruce L
2017-02-01
The medial nucleus of the trapezoid body (MNTB) is an important source of inhibition during the computation of sound location. It transmits fast and precisely timed action potentials at high frequencies; this requires an efficient calcium clearance mechanism, in which plasma membrane calcium ATPase 2 (PMCA2) is a key component. Deafwaddler (dfw2J) mutant mice have a null mutation in PMCA2 causing deafness in homozygotes (dfw2J/dfw2J) and high-frequency hearing loss in heterozygotes (+/dfw2J). Despite the deafness phenotype, no significant differences in MNTB volume or cell number were observed in dfw2J homozygous mutants, suggesting that PMCA2 is not required for MNTB neuron survival. The MNTB tonotopic axis encodes high to low sound frequencies across the medial to lateral dimension. We discovered a cell size gradient along this axis: lateral neuronal somata are significantly larger than medially located somata. This size gradient is decreased in +/dfw2J and absent in dfw2J/dfw2J. The lack of acoustically driven input suggests that sound-evoked activity is required for maintenance of the cell size gradient. This hypothesis was corroborated by selective elimination of auditory hair cell activity with either hair cell elimination in Pou4f3 DTR mice or inner ear tetrodotoxin (TTX) treatment. The change in soma size was reversible and recovered within 7 days of TTX treatment, suggesting that regulation of the gradient is dependent on synaptic activity and that these changes are plastic rather than permanent. NEW & NOTEWORTHY Neurons of the medial nucleus of the trapezoid body (MNTB) act as fast-spiking inhibitory interneurons within the auditory brain stem. The MNTB is topographically organized, with low sound frequencies encoded laterally and high frequencies medially. We discovered a cell size gradient along this axis: lateral neurons are larger than medial neurons.
The absence of this gradient in deaf mice lacking plasma membrane calcium ATPase 2 suggests an activity-dependent, calcium-mediated mechanism that controls neuronal soma size. Copyright © 2017 the American Physiological Society.
Kuiper Belt Object Orbiter Using Advanced Radioisotope Power Sources and Electric Propulsion
NASA Technical Reports Server (NTRS)
Oleson, Steven R.; McGuire, Melissa L.; Dankanich, John; Colozza, Anthony; Schmitz, Paul; Khan, Omair; Drexler, Jon; Fittje, James
2011-01-01
A joint NASA GRC/JPL design study was performed for the NASA Radioisotope Power Systems Office to explore the use of radioisotope electric propulsion (REP) for flagship class missions. The Kuiper Belt Object Orbiter is a flagship class mission concept projected for launch in the 2030 timeframe. Due to the large size of a flagship class science mission, larger radioisotope power system building blocks were conceptualized to provide the roughly 4 kW of power needed by the NEXT ion propulsion system and the spacecraft. Using REP, the spacecraft is able to rendezvous with and orbit a Kuiper Belt object in 16 years using either eleven (no spare) 420 W advanced RTGs or nine (with a spare) 550 W advanced Stirling radioisotope systems. The design study evaluated integrating either system and estimated the impacts on cost as well as the General Purpose Heat Source requirements.
Rhenium ion beam for implantation into semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulevoy, T. V.; Seleznev, D. N.; Alyoshin, M. E.
2012-02-15
At the ion source test bench of the Institute for Theoretical and Experimental Physics, a program of ion source development for the semiconductor industry is in progress. In the framework of this program, a Metal Vapor Vacuum Arc ion source for germanium and rhenium ion beam generation was developed and investigated. It was shown that, under special ion beam implantation conditions, it is possible to fabricate not only homogeneous layers of rhenium silicide solid solutions but also clusters of this compound with the properties of quantum dots. The compound is currently of great interest for the semiconductor industry, especially for nanoelectronics and nanophotonics, but there is as yet no well-developed technology for producing nanostructures (for example, quantum-sized structures) with the required parameters. The results of materials synthesis and exploration are presented.
ProFound: Source Extraction and Application to Modern Survey Data
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.
2018-04-01
ProFound detects sources in noisy images, generates segmentation maps identifying the pixels belonging to each source, and measures statistics like flux, size, and ellipticity. These inputs are key requirements of ProFit (ascl:1612.004), our galaxy profiling package; used in unison, the two packages can semi-automatically profile large samples of galaxies. The key novel feature introduced in ProFound is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. ProFound offers good initial parameter estimation for ProFit, as well as segmentation maps that follow the sometimes complex geometry of resolved sources whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the ProFound and ProFit pipeline.
Is plagioclase removal responsible for the negative Eu anomaly in the source regions of mare basalts?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shearer, C.K.; Papike, J.J.
1989-12-01
The nearly ubiquitous presence of a negative Eu anomaly in the mare basalts has been suggested to indicate prior separation and flotation of plagioclase from the basalt source region during its crystallization from a lunar magma ocean (LMO). Are there any mare basalts derived from a mantle source which did not experience prior plagioclase separation? Crystal chemical rationale for REE substitution in pyroxene suggests that the combination of REE size and charge, M2 site characteristics of pyroxene, fO2, magma chemistry, and temperature may account for the negative Eu anomaly in the source region of some types of primitive, low-TiO2 mare basalts. This origin for the negative Eu anomaly does not preclude the possibility of the LMO, as many mare basalts still require prior plagioclase crystallization and separation and/or hybridization involving a KREEP component.
Water gun vs air gun: A comparison
Hutchinson, D.R.; Detrick, R. S.
1984-01-01
The water gun is a relatively new marine seismic sound source that produces an acoustic signal by an implosive rather than explosive mechanism. A comparison of the source characteristics of two different-sized water guns with those of conventional air guns shows that the water gun signature is cleaner and much shorter than that of a comparable-sized air gun: about 60-100 milliseconds (ms) for an 80-in³ (1.31-liter (l)) water gun compared with several hundred ms for an 80-in³ (1.31-l) air gun. The source spectra of water guns are richer in high frequencies (>200 Hz) than are those of air guns, but they also have less energy than those of air guns at low frequencies. A comparison between water gun and air gun reflection profiles in both shallow-water (Long Island Sound) and deep-water (western Bermuda Rise) settings suggests that the water gun offers a good compromise between very high resolution, limited penetration systems (e.g., 3.5-kHz profilers and sparkers) and the large volume air guns and tuned air gun arrays generally used where significant penetration is required. © 1984 D. Reidel Publishing Company.
Zheng, Shuanghao; Tang, Xingyan; Wu, Zhong-Shuai; Tan, Yuan-Zhi; Wang, Sen; Sun, Chenglin; Cheng, Hui-Ming; Bao, Xinhe
2017-02-28
The emerging smart electronics with unitized power sources represent a highly innovative paradigm requiring dramatic alteration from materials to device assembly and integration. However, traditional power sources, with major bottlenecks in design and performance, cannot keep pace with the revolutionary progress of shape-conformable integrated circuits. Here, we demonstrate a versatile printable technology to fabricate arbitrary-shaped, printable graphene-based planar sandwich supercapacitors based on the layer-structured film of electrochemically exfoliated graphene as two electrodes and nanosized graphene oxide (lateral size of 100 nm) as a separator on one substrate. These monolithic planar supercapacitors not only possess arbitrary shapes, e.g., rectangle, hollow-square, "A" letter, "1" and "2" numbers, circle, and junction-wire shape, but also exhibit outstanding performance (∼280 F cm⁻³), excellent flexibility (no capacitance degradation under different bending states), and applicable scalability, which are far beyond those achieved by conventional technologies. More notably, such planar supercapacitors with superior integration can be readily interconnected in parallel and series, without the use of metal interconnects and contacts, to modulate the output current and voltage of modular power sources for designable integrated circuits in various shapes and sizes.
Directional Emission from Dielectric Leaky-Wave Nanoantennas
NASA Astrophysics Data System (ADS)
Peter, Manuel; Hildebrandt, Andre; Schlickriede, Christian; Gharib, Kimia; Zentgraf, Thomas; Förstner, Jens; Linden, Stefan
2017-07-01
An important source of innovation in nanophotonics is the idea to scale down known radio wave technologies to the optical regime. One thoroughly investigated example of this approach is metallic nanoantennas, which employ plasmonic resonances to couple localized emitters to selected far-field modes. While metals can be treated as perfect conductors in the microwave regime, their response becomes Drude-like at optical frequencies. Thus, plasmonic nanoantennas are inherently lossy. Moreover, their resonant nature requires precise control of the antenna geometry. A promising way to circumvent these problems is the use of broadband nanoantennas made from low-loss dielectric materials. Here, we report on highly directional emission from active dielectric leaky-wave nanoantennas made of hafnium dioxide. Colloidal semiconductor quantum dots deposited in the nanoantenna feed gap serve as a local light source. The emission patterns of active nanoantennas with different sizes are measured by Fourier imaging. We find highly directional emission for all antenna sizes, underlining the broadband operation of our design.
An assessment of the effects of cell size on AGNPS modeling of watershed runoff
Wu, S.-S.; Usery, E.L.; Finn, M.P.; Bosch, D.D.
2008-01-01
This study investigates the changes in simulated watershed runoff from the Agricultural NonPoint Source (AGNPS) pollution model as a function of model input cell size resolution for eight different cell sizes (30 m, 60 m, 120 m, 210 m, 240 m, 480 m, 960 m, and 1920 m) for the Little River Watershed (Georgia, USA). Overland cell runoff (area-weighted cell runoff), total runoff volume, clustering statistics, and hot spot patterns were examined for the different cell sizes and trends identified. Total runoff volumes decreased with increasing cell size. Using data sets of 210-m cell size or smaller in conjunction with a representative watershed boundary allows one to model the runoff volumes within 0.2 percent accuracy. The runoff clustering statistics decrease with increasing cell size; a cell size of 960 m or smaller is necessary to indicate significant high-runoff clustering. Runoff hot spot areas have a decreasing trend with increasing cell size; a cell size of 240 m or smaller is required to detect important hot spots. Conclusions regarding cell size effects on runoff estimation cannot be applied to local watershed areas due to the inconsistent changes of runoff volume with cell size; but optimal cell sizes for clustering and hot spot analyses are applicable to local watershed areas due to the consistent trends.
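One plausible source of the cell-size dependence described above is the nonlinearity of the runoff relation itself: averaging inputs over a coarse cell before applying a nonlinear runoff function does not give the same answer as applying it at fine resolution and then averaging. The sketch below illustrates this with the standard SCS curve-number relation (AGNPS uses curve numbers, but the storm depth and grid values here are illustrative placeholders, not taken from the study).

```python
def runoff_scs(P, CN):
    """SCS curve-number runoff depth (inches) for a storm of depth P inches.
    This is the standard nonlinear relation, used here only as an example."""
    S = 1000.0 / CN - 10.0          # potential maximum retention
    Ia = 0.2 * S                    # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + 0.8 * S)

P = 3.0                              # storm depth, inches (illustrative)
fine_cells = [55, 90, 60, 95]        # curve numbers of four fine-resolution cells
# Fine resolution: runoff per cell, then average over the cells.
fine_mean = sum(runoff_scs(P, cn) for cn in fine_cells) / len(fine_cells)
# Coarse resolution: one big cell carrying the averaged curve number.
coarse = runoff_scs(P, sum(fine_cells) / len(fine_cells))
# Because runoff_scs is convex in CN, coarse < fine_mean here, so
# aggregate runoff drifts as the cell size grows.
```

This is only a toy mechanism consistent with the reported trend of decreasing runoff volume at larger cell sizes; the actual AGNPS routing and input aggregation are more involved.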
Position and morphology of the compact non-thermal radio source at the Galactic Center
NASA Technical Reports Server (NTRS)
Marcaide, J. M.; Alberdi, A.; Bartel, N.; Clark, T. A.; Corey, B. E.; Elosegui, P.; Gorenstein, M. V.; Guirado, J. C.; Kardashev, N.; Popov, M.
1992-01-01
We have determined with VLBI the position of the compact nonthermal radio source at the Galactic Center, commonly referred to as SgrA*, in the J2000.0 reference frame of extragalactic radio sources. We have also determined the size of SgrA* at 1.3, 3.6, and 13 cm wavelengths and found that the apparent size of the source increases proportionally to the observing wavelength squared, as expected from source size broadening by interstellar scattering and as reported previously by other authors. We have also established an upper limit of about 8 mJy at 3.6 cm wavelength for any ultracompact component. The actual size of the source is less than 15 AU. Fourier analysis of our very sensitive 3.6 cm observations of this source shows no significant variations of correlated flux density on time scales from 12 to 700 s.
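The wavelength-squared broadening reported above can be expressed as a one-line scaling law. In this sketch the reference angular size is a made-up placeholder; only the λ² dependence and the three observing wavelengths (1.3, 3.6, and 13 cm) come from the abstract.

```python
def scatter_broadened_size(theta_ref, lam_ref_cm, lam_cm):
    """Scale an apparent angular size with observing wavelength squared,
    as expected for interstellar scatter broadening of SgrA*."""
    return theta_ref * (lam_cm / lam_ref_cm) ** 2

# Relative apparent sizes at the three observing wavelengths, normalized to
# a placeholder size of 1.0 at 1.3 cm.
sizes = [scatter_broadened_size(1.0, 1.3, lam) for lam in (1.3, 3.6, 13.0)]
```

Going from 1.3 cm to 13 cm (a factor of 10 in wavelength) thus predicts a factor of 100 in apparent size under pure scatter broadening.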
Functional morphology of the sound-generating labia in the syrinx of two songbird species.
Riede, Tobias; Goller, Franz
2010-01-01
In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function.
Brog, Jean-Pierre; Crochet, Aurélien; Seydoux, Joël; Clift, Martin J D; Baichette, Benoît; Maharajan, Sivarajakumar; Barosova, Hana; Brodard, Pierre; Spodaryk, Mariana; Züttel, Andreas; Rothen-Rutishauser, Barbara; Kwon, Nam Hee; Fromm, Katharina M
2017-08-22
LiCoO2 is one of the most used cathode materials in Li-ion batteries. Its conventional synthesis requires high temperature (>800 °C) and long heating time (>24 h) to obtain the micron-scale rhombohedral layered high-temperature phase of LiCoO2 (HT-LCO). Nanoscale HT-LCO is of interest to improve battery performance, as the lithium (Li+) ion pathway is expected to be shorter in nanoparticles compared to micron-sized ones. Since batteries typically get recycled, the exposure to nanoparticles during this process needs to be evaluated. Several new single source precursors containing lithium (Li+) and cobalt (Co2+) ions, based on alkoxides and aryloxides, have been structurally characterized and were thermally transformed into nanoscale HT-LCO at 450 °C within a few hours. The size of the nanoparticles depends on the precursor, determining the electrochemical performance. The Li-ion diffusion coefficients of our LiCoO2 nanoparticles improved by at least a factor of 10 compared to a commercial one, while showing good reversibility upon charging and discharging. The hazard of occupational exposure to nanoparticles during battery recycling was investigated with an in vitro multicellular lung model. Our heterobimetallic single source precursors make it possible to dramatically reduce the production temperature and time for HT-LCO. The obtained LiCoO2 nanoparticles have faster kinetics for Li+ insertion/extraction compared to microparticles. Overall, nano-sized LiCoO2 particles indicate a lower cytotoxic and (pro-)inflammogenic potential in vitro compared to their micron-sized counterparts. However, nanoparticles aggregate in air and behave partially like microparticles.
New class of optoelectronic oscillators (OEO) for microwave signal generation and processing
NASA Astrophysics Data System (ADS)
Maleki, Lute; Yao, X. S.
1996-11-01
A new class of oscillators based on photonic devices is presented. These opto-electronic oscillators (OEOs) generate microwave oscillation by converting continuous energy from a light source using a feedback circuit which includes a delay element, an electro-optic switch, and a photodetector. Different configurations of OEOs are presented, each of which may be applied to a particular application requiring ultra-high performance, or low cost and small size.
A planar near-field scanning technique for bistatic radar cross section measurements
NASA Technical Reports Server (NTRS)
Tuhela-Reuning, S.; Walton, E. K.
1990-01-01
A progress report on the development of a bistatic radar cross section (RCS) measurement range is presented. A technique using one parabolic reflector and a planar scanning probe antenna is analyzed. The field pattern in the test zone is computed using a spatial array of signal sources. The technique achieved an illumination pattern with 1-dB amplitude ripple and 15-degree phase ripple over the target zone. The required scan plane size is found to be proportional to the size of the desired test target. Scan plane probe sample spacing can be increased beyond the Nyquist lambda/2 limit, permitting constant probe sample spacing over a range of frequencies.
NASA Technical Reports Server (NTRS)
Snyder, Christopher
2017-01-01
Assessing the potential to bring 100 years of aeronautics knowledge to the entrepreneur's desktop to enable a design environment for emerging vertical lift vehicles is one goal for NASA's Design Environment for Novel Vertical Lift Vehicles (DELIVER). As part of this effort, a system study was performed using a notional, urban aerial taxi system to better understand vehicle requirements along with the capability of the tools and methods to assess these vehicles and their subsystems using cryogenically cooled components. The baseline was a vertical take-off and landing (VTOL) aircraft with an all-electric propulsion system assuming 15-year technology performance levels, its capability limited to a pilot with one or two people and cargo. Hydrocarbon-fueled hybrid concepts were developed to improve mission capabilities. The hybrid systems resulted in significant improvements in maximum range and in the number of on demand mobility (ODM) missions that could be completed before refuel or recharge. An important consideration was thermal management, including the choice between air cooling and cryogenic cooling using liquid natural gas (LNG) fuel. Cryogenic cooling of critical components can have important implications for component performance and size. Thermal loads were also estimated; subsequent effort will be required to verify feasibility for cooling airflow and packaging. LNG cryogenic cooling of selected components further improved vehicle range and reduced thermal loads, but the same concerns for airflow and packaging still need to be addressed. The NASA Design and Analysis of Rotorcraft (NDARC) tool for vehicle sizing and mission analysis appears to be capable of supporting analyses for present and future types of vehicles, missions, propulsion, and energy sources. Further efforts are required to develop verified models for these new types of propulsion and energy sources in the size and use envisioned for these emerging vehicle and mission classes.
Enceladus as a hydrothermal water world
NASA Astrophysics Data System (ADS)
Postberg, Frank; Hsu, Hsiang-Wen; Sekine, Yasuhito
2014-05-01
The composition of both salty ice grains and nanometer-sized stream particles emitted from Enceladus and measured by Cassini-CDA requires liquid water as a source. Moreover, they provide strong geochemical constraints on their origin inside the active moon. Most stream particles are composed of silica, a unique indicator, as nano-silica would only form under quite specific conditions. With high probability, on-going or geologically recent hydrothermal activity at Enceladus is required to generate these particles. Inferred reaction temperatures at the Enceladus ocean floor lie between 100 and 350 °C in a slightly alkaline environment (pH 7.5 - 10.5). The inferred high temperatures at great depth might require heat sources other than tides alone, such as remaining primordial heat and/or serpentinization of a probably porous rocky core. Long-term laboratory experiments were carried out to simulate the conditions at the Enceladus rock/water interface using the constraints derived from CDA measurements. These experiments allow insights into a rock/water chemistry which severely constrains the formation history of the moon and substantially enhances its astrobiological potential. Together with recent results from other Cassini instruments, a conclusive picture of Enceladus as an active water world seems to be within reach.
Ruthenium Oxide Electrochemical Super Capacitor Optimization for Pulse Power Applications
NASA Technical Reports Server (NTRS)
Merryman, Stephen A.; Chen, Zheng
2000-01-01
Electrical actuator systems are being pursued as alternatives to hydraulic systems to reduce maintenance time, weight, and costs while increasing reliability. Additionally, safety and environmental hazards associated with the hydraulic fluids can be eliminated. For most actuation systems, the actuation process is typically pulsed, with high peak power requirements but relatively modest average power levels. The power-time requirements for electrical actuators are characteristic of pulsed power technologies, where the source can be sized for the average power levels while providing the capability to achieve the peak requirements. Among the options for the power source are battery systems, capacitor systems, or battery-capacitor hybrid systems. Battery technologies are energy dense but deficient in power density; capacitor technologies are power dense but limited by energy density. The battery-capacitor hybrid system uses the battery to supply the average power and the capacitor to meet the peak demands. It has been demonstrated in previous work that the hybrid electrical power source can potentially provide a weight savings of approximately 59% over a battery-only source. Electrochemical capacitors have many properties that make them well-suited for electrical actuator applications. They have the highest demonstrated energy density for capacitive storage (up to 100 J/g), have power densities much greater than most battery technologies (greater than 30 kW/kg), are capable of greater than one million charge-discharge cycles, can be charged at extremely high rates, and have non-explosive failure modes. Thus, electrochemical capacitors exhibit a combination of desirable battery and capacitor characteristics.
Temporal Characterization of Aircraft Noise Sources
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Sullivan, Brenda M.; Rizzi, Stephen A.
2004-01-01
Current aircraft source noise prediction tools yield time-independent frequency spectra as functions of directivity angle. Realistic evaluation and human assessment of aircraft fly-over noise require the temporal characteristics of the noise signature. The purpose of the current study is to analyze empirical data from broadband jet and tonal fan noise sources and to provide the temporal information required for prediction-based synthesis. Noise sources included a one-tenth-scale engine exhaust nozzle and a one-fifth-scale turbofan engine. A methodology was developed to characterize the low frequency fluctuations employing the Short Time Fourier Transform in a MATLAB computing environment. It was shown that a trade-off is necessary between frequency and time resolution in the acoustic spectrogram. The procedure requires careful evaluation and selection of the data analysis parameters, including the data sampling frequency, Fourier Transform window size, associated time period and frequency resolution, and time period window overlap. Low frequency fluctuations were applied to the synthesis of broadband noise, with the resulting records sounding virtually indistinguishable from the measured data in initial subjective evaluations. Amplitude fluctuations of blade passage frequency (BPF) harmonics were successfully characterized for conditions equivalent to take-off and approach. Data demonstrated that the fifth harmonic of the BPF varied more in frequency than the BPF itself and exhibited larger amplitude fluctuations over the duration of the time record. Frequency fluctuations were found to be not perceptible in the current characterization of tonal components.
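The window-size trade-off described above comes down to simple arithmetic: an N-sample window at sampling rate fs gives a frequency bin width of fs/N and a time span of N/fs, so sharpening one resolution necessarily blurs the other. The original analysis used MATLAB; this is a minimal Python sketch with illustrative parameter values, not those of the study.

```python
def stft_resolutions(fs_hz, window_size, overlap):
    """Frequency/time resolution of an STFT spectrogram.
    Returns (Hz per bin, seconds per window, seconds between columns)."""
    freq_res = fs_hz / window_size            # width of one FFT frequency bin
    time_res = window_size / fs_hz            # time spanned by one analysis window
    hop = window_size * (1.0 - overlap)       # samples between successive windows
    frame_step = hop / fs_hz                  # time between spectrogram columns
    return freq_res, time_res, frame_step

# Doubling the window halves the frequency bin width but doubles the time
# span of each spectrogram column: the trade-off the abstract refers to.
f1, t1, s1 = stft_resolutions(44100, 1024, 0.5)
f2, t2, s2 = stft_resolutions(44100, 2048, 0.5)
```

Window overlap (the last parameter) refines the spacing of spectrogram columns but does not change the fundamental resolution of each column, which is set by the window length alone.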
Energy as a Constraint on Habitability in the Subsurface
NASA Astrophysics Data System (ADS)
Hoehler, T.
2008-12-01
All living things must obtain energy from the environment to grow, to maintain a metabolic steady state, or simply to preserve viability. The availability of energy sources in the environment thus represents a key factor in determining the size, distribution, and activity of biological populations, and ultimately constrains the possibility for life itself. Lacking the abundant energy provided by solar radiation or the products of oxygenic photosynthesis, life in subsurface environments may be limited by energy availability as much as any other factor. The biological requirement for energy is expressed in two dimensions - analogous to the power and voltage requirements of electrical devices - and consideration and quantification of these requirements establishes quantitative boundary conditions on subsurface habitability. The magnitude of these requirements depends significantly on physicochemical environment, as does the provision of biologically-accessible energy from subsurface sources. With this conceptual basis, we are developing an 'energy balance' model that is designed to ultimately predict the habitability of a given environment, with respect to a given metabolism, in quantitative terms (as 'biomass density potential'). The model will develop from conceptual to quantitative as experimental and observational work constrains and quantifies, in natural populations adapted to low energy conditions, the magnitude of the biological energy requirements and the impacts of physicochemical environmental conditions on energy demand and supply.
Characteristics of a dynamic holographic sensor for shape control of a large reflector
NASA Technical Reports Server (NTRS)
Welch, Sharon S.; Cox, David E.
1991-01-01
The design of a distributed holographic interferometric sensor for measuring the surface displacement of a large segmented reflector is proposed. The reflector's surface is illuminated by laser light of two wavelengths, and volume holographic gratings are formed in photorefractive crystals from the wavefront returned by the surface. The sensor is based on holographic contouring with a multiple-frequency source. It is shown that the most stringent requirement is temporal stability, which affects both the temporal resolution and the dynamic range. Principal factors which limit the sensor performance include the response time of the photorefractive crystal, the laser power required to write a hologram, and the size of the photorefractive crystal.
Review of magnetostrictive vibration energy harvesters
NASA Astrophysics Data System (ADS)
Deng, Zhangxian; Dapino, Marcelo J.
2017-10-01
The field of energy harvesting has grown concurrently with the rapid development of portable and wireless electronics in which reliable and long-lasting power sources are required. Electrochemical batteries have a limited lifespan and require periodic recharging. In contrast, vibration energy harvesters can supply uninterrupted power by scavenging useful electrical energy from ambient structural vibrations. This article reviews the current state of vibration energy harvesters based on magnetostrictive materials, especially Terfenol-D and Galfenol. Existing magnetostrictive harvester designs are compared in terms of various performance metrics. Advanced techniques that can reduce device size and improve performance are presented. Models for magnetostrictive devices are summarized to guide future harvester designs.
NASA Astrophysics Data System (ADS)
Araya, Miguel
2017-07-01
HESS J1534-571 is a very high-energy gamma-ray source that was discovered by the H.E.S.S. observatory and reported as one of several new sources with a shell-like morphology at TeV energies, matching in size and location the supernova remnant (SNR) G323.7-1.0 discovered in radio observations by the Molonglo Galactic Plane Survey. Many known TeV shells also show X-ray emission; however, no X-ray counterpart has been seen for HESS J1534-571. The detection of a new GeV source using data from the Fermi satellite, compatible in extension with the radio SNR and showing a very hard power-law spectrum (dN/dE ∝ E^-1.35), is presented here, together with the first broadband modeling of the nonthermal emission from this source. It is shown that leptonic emission is compatible with the known multiwavelength data, and a corresponding set of physical source parameters is given. The required total energy budget in leptons is reasonable, ~1.5 × 10^48 erg for a distance to the object of 5 kpc. The new GeV observations imply that a hadronic scenario, on the other hand, requires a cosmic-ray spectrum that deviates considerably from theoretical expectations of particle acceleration.
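As a worked example of the quoted energy budget, one can normalize a dN/dE ∝ E^-1.35 lepton spectrum so that its total energy matches ~1.5 × 10^48 erg. The integration bounds (1 GeV to 100 TeV) are our assumptions for illustration; only the spectral index and the total budget come from the abstract.

```python
ERG_PER_EV = 1.602e-12  # erg per electronvolt

def powerlaw_total_energy(K, index, e_min, e_max):
    """Total energy W = ∫ E · K·E**index dE between e_min and e_max (erg),
    valid for index != -2 (here the index is -1.35)."""
    p = index + 2.0
    return K * (e_max ** p - e_min ** p) / p

e_min = 1e9 * ERG_PER_EV    # 1 GeV expressed in erg (assumed lower bound)
e_max = 1e14 * ERG_PER_EV   # 100 TeV expressed in erg (assumed upper bound)
# Solve for the normalization K that yields the quoted ~1.5e48 erg budget.
p = -1.35 + 2.0
K = 1.5e48 * p / (e_max ** p - e_min ** p)
```

Because the integrand E^(1 + index) = E^0.65 rises with energy, the total is dominated by the choice of upper bound, which is why such budget estimates depend on the assumed cutoff.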
Method for silicon carbide production by reacting silica with hydrocarbon gas
Glatzmaier, G.C.
1994-06-28
A method is described for producing silicon carbide particles using a silicon source material and a hydrocarbon. The method is efficient and is characterized by high yield. Finely divided silicon source material is contacted with hydrocarbon at a temperature of 400 C to 1000 C where the hydrocarbon pyrolyzes and coats the particles with carbon. The particles are then heated to 1100 C to 1600 C to cause a reaction between the ingredients to form silicon carbide of very small particle size. No grinding of silicon carbide is required to obtain small particles. The method may be carried out as a batch process or as a continuous process. 5 figures.
Method for silicon carbide production by reacting silica with hydrocarbon gas
Glatzmaier, Gregory C.
1994-01-01
A method is described for producing silicon carbide particles using a silicon source material and a hydrocarbon. The method is efficient and is characterized by high yield. Finely divided silicon source material is contacted with hydrocarbon at a temperature of 400 °C to 1000 °C, where the hydrocarbon pyrolyzes and coats the particles with carbon. The particles are then heated to 1100 °C to 1600 °C to cause a reaction between the ingredients to form silicon carbide of very small particle size. No grinding of silicon carbide is required to obtain small particles. The method may be carried out as a batch process or as a continuous process.
Stochastic recruitment leads to symmetry breaking in foraging populations
NASA Astrophysics Data System (ADS)
Biancalani, Tommaso; Dyson, Louise; McKane, Alan
2014-03-01
When an ant colony is faced with two identical equidistant food sources, the foraging ants are found to concentrate more on one source than the other. Analogous symmetry-breaking behaviours have been reported in various population systems (such as queueing or stock-market trading), suggesting the existence of a simple universal mechanism. Past studies have neglected the effect of demographic noise and required rather complicated models to qualitatively reproduce this behaviour. I will show how including the effects of demographic noise leads to a radically different conclusion. The symmetry breaking arises solely due to the process of recruitment and ceases to occur for large population sizes. The latter fact provides a testable prediction for a real system.
Evidence of scattering effects on the sizes of interplanetary Type III radio bursts
NASA Technical Reports Server (NTRS)
Steinberg, J. L.; Hoang, S.; Dulk, G. A.
1985-01-01
An analysis is conducted of 162 interplanetary Type III radio bursts; some of these bursts have been observed in association with fast electrons and Langmuir wave events at 1 AU and, in addition, have been subjected to in situ plasma parameter measurements. It is noted that the sizes of burst sources are anomalously large, compared to what one would anticipate on the basis of the interplanetary plasma density distribution, and that the variation of source size with frequency, when compared with the plasma frequency variation measured in situ, implies that the source sizes expand with decreasing frequency to fill a cone whose apex is at the sun. It is also found that some local phenomenon near the earth controls the apparent size of low frequency Type III sources.
Padyšáková, Eliška; Okrouhlík, Jan; Brown, Mark; Bartoš, Michael; Janeček, Štěpán
2017-04-01
There are two alternative hypotheses relating body size to competition for restricted food sources. The first supposes that larger animals are superior competitors because of their increased feeding abilities, whereas the second assumes superiority of smaller animals because of their lower food requirements. We examined the relationship between two unrelated species differing in size, drinking technique, energy requirements and role in the plant pollination system, to reveal the features of their competitive interaction and the mechanisms enabling their co-existence while utilising the same nectar source. We observed the diurnal feeding behaviour of the main pollinator, the carpenter bee Xylocopa caffra, and a nectar thief, the northern double-collared sunbird Cinnyris reichenowi, on 19 clumps of Hypoestes aristata (Acanthaceae) in the Bamenda Highlands, Cameroon. For comparative purposes, we established a simple model of daily energy expenditure and daily energy intake by both visitor species, assuming that they spend all available daytime feeding on H. aristata. We revealed the energetic gain-expenditure balance of the studied visitor species in relation to diurnal changes in nectar quality and quantity. In general, smaller energy requirements and the related ability to utilise smaller resources made the main pollinator X. caffra competitively superior to the larger nectar thief C. reichenowi. Nevertheless, sunbirds are endowed with several mechanisms that reduce the asymmetry in exploitative competition, such as the use of nectar resources at times of day when rivals are inactive, aggressive attacks on carpenter bees while defending the nectar plants, and a higher speed of nectar consumption.
Data storage and retrieval system
NASA Technical Reports Server (NTRS)
Nakamoto, Glen
1991-01-01
The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to shrink the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needed to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from an archivable medium on a file server to a workstation in less than 5 minutes. Equivalent-size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.
Data storage and retrieval system
NASA Technical Reports Server (NTRS)
Nakamoto, Glen
1992-01-01
The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to 'shrink' the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needed to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from an archivable medium on a file server to a workstation in less than 5 minutes. Equivalent-size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.
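The stated time budgets translate directly into minimum sustained transfer rates; a quick back-of-the-envelope check (assuming the decimal GB/MB convention, which the abstract does not state explicitly):

```python
GB = 1_000_000_000   # bytes, decimal convention
MB = 1_000_000

def required_rate_mb_s(size_bytes, seconds):
    """Minimum sustained rate (MB/s) to move size_bytes in the allotted time."""
    return size_bytes / seconds / MB

archive_rate = required_rate_mb_s(GB, 5 * 60)   # archive-media budget: < 5 min
online_rate = required_rate_mb_s(GB, 3 * 60)    # on-line data budget: < 3 min
```

The floors work out to roughly 3.3 MB/s and 5.6 MB/s respectively; the 4-8 MB/s range quoted above presumably includes headroom for protocol and file-system overhead.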
Load management as a smart grid concept for sizing and designing of hybrid renewable energy systems
NASA Astrophysics Data System (ADS)
Eltamaly, Ali M.; Mohamed, Mohamed A.; Al-Saud, M. S.; Alolah, Abdulrahman I.
2017-10-01
Optimal sizing of hybrid renewable energy systems (HRES) to satisfy load requirements with the highest reliability and lowest cost is a crucial step in building HRESs to supply electricity to remote areas. Applying smart grid concepts such as load management can reduce the size of HRES components and reduce the cost of generated energy considerably. In this article, sizing of HRES is carried out by dividing the load into high- and low-priority parts. The proposed system is formed by a photovoltaic array, wind turbines, batteries, fuel cells and a diesel generator as a back-up energy source. A smart particle swarm optimization (PSO) algorithm using MATLAB is introduced to determine the optimal size of the HRES. The simulation was carried out with and without division of the load to compare these concepts. HOMER software was also used to simulate the proposed system without dividing the loads to verify the results obtained from the proposed PSO algorithm. The results show that the percentage of division of the load is inversely proportional to the cost of the generated energy.
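As a rough illustration of the optimization step, here is a minimal particle swarm optimizer in Python (the study itself used MATLAB with a detailed HRES cost model; the quadratic toy cost surface, bounds, and PSO coefficients below are placeholders, not values from the paper):

```python
import random

random.seed(0)  # reproducible toy run

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each velocity update blends inertia,
    attraction to the particle's own best, and attraction to the swarm best."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for an HRES cost surface (e.g. PV area vs. battery capacity)
# with a known optimum at (3, 7).
best, best_cost = pso(lambda x: (x[0] - 3.0) ** 2 + (x[1] - 7.0) ** 2,
                      bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the actual sizing problem the cost function would evaluate lifetime system cost subject to reliability constraints for a candidate component mix, which is far more expensive per evaluation but leaves the PSO loop unchanged.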
Recent advances in the front-end sources of the LMJ fusion laser
NASA Astrophysics Data System (ADS)
Gleyze, Jean-François; Hares, Jonathan; Vidal, Sebastien; Beck, Nicolas; Dubertrand, Jerome; Perrin, Arnaud
2011-03-01
LMJ is typical of lasers used for inertial confinement fusion and requires a laser of programmable parameters for injection into the main amplifier. For several years, the CEA has developed front-end fiber sources based on telecommunications fiber-optics technologies. These sources meet the needs, but as the technology evolves we can expect improved efficiency and reductions in size and cost. We give an up-to-date description of some present development issues, particularly in the field of temporal shaping with the use of digital systems. The synchronization of such electronics has been challenging; however, we now obtain system jitter of less than 7 ps rms. Secondly, we will present recent advances in the use of a fiber-based pre-comp system to avoid parasitic amplitude modulation arising from the phase modulation used for spectral broadening.
Kritcher, A. L.; Neumayer, P.; Lee, H. J.; ...
2008-10-31
Here, we present K-α x-ray Thomson scattering from shock compressed matter for use as a diagnostic in determining the temperature, density, and ionization state with picosecond resolution. The development of this source as a diagnostic, as well as the stringent requirements for successful K-α x-ray Thomson scattering, are addressed. The first elastic and inelastic scattering measurements on a medium-size laser facility have been observed. We present scattering data from solid density carbon plasmas with >1 × 10^5 photons in the elastic peak that validate the capability of single-shot characterization of warm dense matter and the ability to use this scattering source at future free electron lasers and for fusion experiments at the National Ignition Facility (NIF), LLNL.
NASA Astrophysics Data System (ADS)
Opachich, Y. P.; Heeter, R. F.; Barrios, M. A.; Garcia, E. M.; Craxton, R. S.; King, J. A.; Liedahl, D. A.; McKenty, P. W.; Schneider, M. B.; May, M. J.; Zhang, R.; Ross, P. W.; Kline, J. L.; Moore, A. S.; Weaver, J. L.; Flippo, K. A.; Perry, T. S.
2017-06-01
Direct drive implosions of plastic capsules have been performed at the National Ignition Facility to provide a broad-spectrum (500-2000 eV) X-ray continuum source for X-ray transmission spectroscopy. The source was developed for the high-temperature plasma opacity experimental platform. Initial experiments using 2.0 mm diameter polyalpha-methyl styrene capsules with ˜20 μm thickness have been performed. X-ray yields of up to ˜1 kJ/sr have been measured using the Dante multichannel diode array. The backlighter source size was measured to be ˜100 μm FWHM, with ˜350 ps pulse duration during the peak emission stage. Results are used to simulate transmission spectra for a hypothetical iron opacity sample at 150 eV, enabling the derivation of photometrics requirements for future opacity experiments.
Operation of the CESR-TA vertical beam size monitor at Eb = 4 GeV
NASA Astrophysics Data System (ADS)
Alexander, J. P.; Conolly, C.; Edwards, E.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.
2015-10-01
We describe operation of the CESR-TA vertical beam size monitor (xBSM) with e± beams at Eb = 4 GeV. The xBSM measures vertical beam size by imaging synchrotron radiation x-rays through an optical element onto a detector array of 32 InGaAs photodiodes with 50 μm pitch. The device has previously been used successfully to measure vertical beam sizes of 10-100 μm on a bunch-by-bunch, turn-by-turn basis at e± beam energies of ~2 GeV and source magnetic fields below 2.8 kG, for which the detector required calibration for incident x-rays of 1-5 keV. At Eb = 4.0 GeV and B = 4.5 kG, however, the incident synchrotron radiation spectrum extends to ~20 keV, requiring calibration of the detector response in that regime. Such a calibration is described and then used to analyze data taken with several different thicknesses of filters in front of the detector. We obtain a relative precision of better than 4% on beam size measurement from 15 to 100 μm over several different ranges of x-ray energy, including both 1-12 keV and 6-17 keV. The response of an identical detector, but tilted vertically by 60° in order to increase magnification without a longer beamline, is measured and shown to improve x-ray detection above 4 keV without compromising sensitivity to beam size. We also investigate operation of a coded aperture using gold masking backed by synthetic diamond.
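As a simplified illustration of extracting a size from such a diode array, a centroid/rms estimator over the 32 photodiode channels (the real xBSM analysis fits a point-response function and accounts for optical magnification, both of which this sketch omits; the synthetic profile is an assumption for demonstration):

```python
import math

PITCH_UM = 50.0  # photodiode pitch, as in the xBSM detector

def rms_width_um(signal, pitch_um=PITCH_UM):
    """Intensity-weighted rms width (in μm at the detector plane) of a
    profile sampled by a linear diode array."""
    total = sum(signal)
    centroid = sum(i * s for i, s in enumerate(signal)) / total
    var = sum((i - centroid) ** 2 * s for i, s in enumerate(signal)) / total
    return pitch_um * math.sqrt(var)

# Synthetic Gaussian image: sigma = 2 pixels, centered mid-array.
profile = [math.exp(-((i - 15.5) ** 2) / (2.0 * 2.0 ** 2)) for i in range(32)]
width = rms_width_um(profile)
```

The beam size at the source is then the detector-plane width divided by the optical magnification, which is why the tilted-detector geometry mentioned above (increasing magnification without a longer beamline) matters.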
The Chandra Source Catalog : Automated Source Correlation
NASA Astrophysics Data System (ADS)
Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed for a different number of times at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
NASA Astrophysics Data System (ADS)
Jeyakumar, S.
2016-06-01
The dependence of the turnover frequency on the linear size is presented for a sample of Gigahertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency on the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolves strongly with the linear size. Optical depth effects have been included in the 3D radio source model of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.
Abendroth, Jan; McCormick, Michael S.; Edwards, Thomas E.; Staker, Bart; Loewen, Roderick; Gifford, Martin; Rifkin, Jeff; Mayer, Chad; Guo, Wenjin; Zhang, Yang; Myler, Peter; Kelley, Angela; Analau, Erwin; Hewitt, Stephen Nakazawa; Napuli, Alberto J.; Kuhn, Peter; Ruth, Ronald D.; Stewart, Lance J.
2010-01-01
Structural genomics discovery projects require ready access to both X-ray and NMR instrumentation which support the collection of experimental data needed to solve large numbers of novel protein structures. The most productive X-ray crystal structure determination laboratories make extensive frequent use of tunable synchrotron X-ray light to solve novel structures by anomalous diffraction methods. This requires that frozen cryo-protected crystals be shipped to large government-run synchrotron facilities for data collection. In an effort to eliminate the need to ship crystals for data collection, we have developed the first laboratory-scale synchrotron light source capable of performing many of the state-of-the-art synchrotron applications in X-ray science. This Compact Light Source is a first-in-class device that uses inverse Compton scattering to generate X-rays of sufficient flux, tunable wavelength and beam size to allow high-resolution X-ray diffraction data collection from protein crystals. We report on benchmarking tests of X-ray diffraction data collection with hen egg white lysozyme, and the successful high-resolution X-ray structure determination of the Glycine cleavage system protein H from Mycobacterium tuberculosis using diffraction data collected with the Compact Light Source X-ray beam. PMID:20364333
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick; Olley, Jon
2013-04-01
Moreton Bay, in South East Queensland, Australia, is a Ramsar wetland of international significance. A decline of the bay's ecosystem health has been primarily attributed to sediments and nutrients from catchment sources. Sediment budgets for three catchments indicated gully erosion dominates the supply of sediment in Knapp Creek and the Upper Bremer River whereas erosion from cultivated soils is the primary sediment source in Blackfellow Creek. Sediment tracing with fallout-radionuclides confirmed subsoil erosion processes dominate the supply of sediment in Knapp Creek and the Upper Bremer River whereas in Blackfellow Creek cultivated and subsoil sources contribute >90% of sediments. Other sediment properties are required to determine the relative sediment contributions of channel bank, gully and cultivated sources in these catchments. The potential of total organic carbon (TOC), total nitrogen (TN), and carbon and nitrogen stable isotopes (δ13C, δ15N) to conservatively discriminate between subsoil sediment sources is presented. The conservativeness of these sediment properties was examined through evaluating particle size variations in depth core soil samples and investigating whether they remain constant in source soils over two sampling occasions. Varying conservative behavior and source discrimination was observed. TN in the
Soultan, Alaaeldin; Safi, Kamran
2017-01-01
Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution modelling (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap), respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
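Of the evaluation metrics mentioned, the True Skill Statistic is simple enough to state in a few lines; a sketch for binary presence/absence predictions (illustrative, not the authors' evaluation code):

```python
def true_skill_statistic(y_true, y_pred):
    """TSS = sensitivity + specificity - 1, computed from binary
    presence (1) / absence (0) labels; ranges from -1 to +1,
    with 0 indicating no skill beyond chance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0
```

Unlike overall accuracy, TSS is insensitive to the prevalence of presences in the evaluation data, which is one reason it is favoured for SDM evaluation.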
NASA Astrophysics Data System (ADS)
Massin, F.; Malcolm, A. E.
2017-12-01
Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long period waveform inversion (for moderate size sources with sufficient signal to noise ratio) and body-wave first motion polarity or amplitude ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift and stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body-waves is selected for all stations, multiplied by their corresponding theoretical polarity and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift and stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to the data coverage, coverage gap, signal to noise ratio, travel-time picking errors and non-double couple component. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body-wavelets, with gap below 150 degrees, signal to noise ratio over 1 and arrival time error below a fifth of the period (0.2T) of the body-wave. 
We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
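The polarity-corrected stacking step described above can be sketched as follows (the synthetic wavelets and the toy alternating polarity pattern are illustrative, not data or names from the study):

```python
import numpy as np

def stack_power(wavelets, polarities):
    """Power of the polarity-corrected stack. For a trial mechanism, each
    first-period body wavelet is multiplied by its theoretical polarity and
    summed; a coherent stack (correct mechanism) has much higher power than
    an incoherent one, so the power acts as a likelihood proxy."""
    stack = np.zeros_like(wavelets[0])
    for w, p in zip(wavelets, polarities):
        stack = stack + p * w
    return float(np.sum(stack ** 2))

# 15 synthetic one-period wavelets with alternating true polarities.
t = np.linspace(0.0, 1.0, 50)
base = np.sin(2.0 * np.pi * t)
true_pol = np.array([1, -1] * 7 + [1], dtype=float)
observed = [p * base for p in true_pol]

coherent = stack_power(observed, true_pol)        # correct trial mechanism
incoherent = stack_power(observed, np.ones(15))   # wrong trial mechanism
```

Scanning this power over a grid of trial mechanisms, and normalizing, gives the probabilistic mechanism estimate without any polarity picking.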
NASA Astrophysics Data System (ADS)
Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David
2018-05-01
In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and rapidly modulating advection event impacting Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ionic chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated based on both chemistry and size distribution time evolution. Hourly back-trajectories (BT) provided the best results in comparison to 6 h or 24 h based calculations.
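The PSCF calculation itself is straightforward to sketch: for each grid cell, the ratio of back-trajectory endpoints associated with high-concentration samples to all endpoints falling in that cell (the grid size, day labels, and high-day criterion below are illustrative assumptions, not the paper's settings):

```python
from collections import defaultdict

def pscf(endpoints, high_days, grid=1.0):
    """Potential Source Contribution Function on a lat/lon grid.

    endpoints: iterable of (day, lat, lon) back-trajectory endpoints
    high_days: set of days whose measured concentration exceeded the
               chosen criterion (commonly the 75th percentile)
    Returns {cell: m/n}, where n counts all endpoints in the cell and
    m counts endpoints from high-concentration days."""
    n = defaultdict(int)
    m = defaultdict(int)
    for day, lat, lon in endpoints:
        cell = (int(lat // grid), int(lon // grid))
        n[cell] += 1
        if day in high_days:
            m[cell] += 1
    return {cell: m[cell] / n[cell] for cell in n}

# Toy example: two endpoints over one cell, one over another.
endpoints = [("d1", 43.2, 12.1), ("d2", 43.4, 12.3), ("d1", 40.0, 20.5)]
result = pscf(endpoints, high_days={"d1"})
```

Operational PSCF analyses usually also apply a weighting function that down-weights cells visited by only a few endpoints, to suppress spurious high ratios.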
Power conversion distribution system using a resonant high-frequency AC link
NASA Technical Reports Server (NTRS)
Sood, P. K.; Lipo, T. A.
1986-01-01
Static power conversion systems based on a resonant high-frequency (HF) link offer a significant reduction in the size and weight of the equipment over that achieved with conventional approaches, especially when multiple sources and loads are to be integrated. A faster system response and the absence of audible noise are the other principal characteristics of such systems. A conversion configuration based on an HF link which is suitable for applications requiring distributed power is proposed.
Engineering and Design: Civil Works Cost Engineering
1994-03-31
labor cost requirements are broken into tasks of work. Each task is usually performed by a labor crew. Crews may vary in size and mix of skills. The...requested in advance of the expected purchase date. Suppliers are reluctant to guarantee future prices and often will only quote current prices. It may be...unit cost is the overhead cost for the item. g. Sources for Pricing. The Cost Engineer must rely on judgement, historical data, and current labor market
Policy issues and data communications for NASA earth observation missions until 1985
NASA Technical Reports Server (NTRS)
Corte, A. B.; Warren, C. J.
1975-01-01
The series of LANDSAT sensors with the highest potential data rates of the missions were examined. An examination of LANDSAT imagery uses shows that relatively few require transmission of the full resolution data on a repetitive quasi real time basis. Accuracy of global crop size forecasting can possibly be improved through information derived from LANDSAT imagery. A current forecasting experiment uses the imagery for crop area estimation only, yield being derived from other data sources.
Radioisotope Stirling Engine Powered Airship for Low Altitude Operation on Venus
NASA Technical Reports Server (NTRS)
Colozza, Anthony J.
2012-01-01
The feasibility of a Stirling engine powered airship for the near-surface exploration of Venus was evaluated. The heat source for the Stirling engine was limited to 10 general purpose heat source (GPHS) blocks. The baseline airship utilized hydrogen as the lifting gas, and the electronics and payload were enclosed in a cooled, insulated pressure vessel to maintain the internal temperature at 320 K and 1 bar pressure. The propulsion system consisted of an electric motor driving a propeller. An analysis was set up to size an airship that could operate near the Venus surface based on the available thermal power. The atmospheric conditions on Venus were modeled and used in the analysis. The analysis was an iterative process between sizing the airship to carry a specified payload and the power required to operate the electronics, payload and cooling system as well as to provide power to the propulsion system to overcome the drag on the airship. A baseline configuration was determined that could meet the power requirements and operate near the Venus surface. From this baseline design, additional trades were made to see how other factors, such as the internal temperature of the payload chamber and the flight altitude, affected the design. In addition, other lifting methods were evaluated, such as an evacuated chamber, heated atmospheric gas and augmented heated lifting gas. However, none of these methods proved viable.
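The core of the sizing iteration, balancing buoyant lift against carried mass and propulsive power against drag, can be sketched with rough near-surface Venus numbers (the densities, envelope fraction, drag coefficient, area and efficiency below are assumed illustrative values, not figures from the study):

```python
def balloon_volume_m3(payload_kg, rho_atm=65.0, rho_gas=3.0, envelope_frac=0.2):
    """Envelope volume so that buoyancy lifts payload plus envelope mass.
    Defaults are rough near-surface Venus estimates: CO2 at ~92 bar, ~735 K
    (~65 kg/m^3) versus H2 at the same conditions (~3 kg/m^3);
    envelope_frac is envelope mass as a fraction of displaced mass."""
    net_lift_per_m3 = rho_atm - rho_gas - envelope_frac * rho_atm
    return payload_kg / net_lift_per_m3

def drag_power_w(v_m_s, rho_atm=65.0, cd=0.03, area_m2=5.0, prop_eff=0.7):
    """Shaft power to overcome aerodynamic drag at cruise speed v_m_s."""
    return 0.5 * rho_atm * cd * area_m2 * v_m_s ** 3 / prop_eff

vol = balloon_volume_m3(100.0)   # volume for a 100 kg payload
power = drag_power_w(1.0)        # power at 1 m/s cruise
```

The iteration closes when the propulsion and cooling power implied by the resulting airship size stays within the thermal budget of the 10 GPHS blocks; the dense Venus atmosphere keeps the required volume small but makes drag power climb steeply with speed (as v cubed).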
Food for the Future: A Study of Insects as a Protein Source
NASA Astrophysics Data System (ADS)
Riggs, S.
2017-12-01
This study is designed to identify a sustainable, organic food source containing the proper amino acids, minerals, and protein to sustain human life on Earth, as the current economy and environment will not be able to meet future needs. The hypotheses are that if available protein is increased in an insect's diet, then its nutritional value will increase to fulfill a human's daily protein requirement in one serving size or less for each species tested, and that if there is a higher content of protein in the insects, then food created with it will receive higher ratings. Protein supplements were added to the insects' natural diet to increase nutritional value. Protein value in the insects increased to fulfill a human's daily dietary protein requirement in a third of a serving size. Biuret and absorption spectrometry testing demonstrates this correlation. The increased body protein in the insects shows a positive correlation with the first hypothesis. In week one, protein values unexpectedly doubled and tripled in some species. After three weeks, protein still continued increasing. There was high success in increasing the protein value in the different species of insects chosen. Is there a taste benefit with a higher content of protein in the insects? Over 55% of participants rated the brownies with more protein higher than the control groups, and overall 88% preferred brownies with insects as opposed to without, supporting the second hypothesis.
Low-temperature nitridation of manganese and iron oxides using NaNH2 molten salt.
Miura, Akira; Takei, Takahiro; Kumada, Nobuhiro
2013-10-21
Manganese and iron nitrides are important functional materials, but their synthesis from oxides often requires high temperatures. Herein, we show a novel metathesis method for manganese and iron nitrides by low-temperature nitridation of their oxides using NaNH2 molten salt as the nitrogen source in an autoclave at 240 °C. With this method, nitridation of micrometer-sized oxide particles preserved their initial morphologies, but the size of the primary particles decreased. The thermodynamic driving force is considered to be the conversion of oxides to sodium hydroxide, and the kinetics of nitridation are improved by the decrease of particle size and the low melting point of NaNH2. The technique developed here has the advantages of low reaction temperature, reduced consumption of ammonia, use of nonspecialized equipment, and facile control of the reactions for producing nitrides from oxides.
Future oil and gas: Can Iran deliver?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takin, M.
1996-11-01
Iran's oil and gas production and exports constitute the country's main source of foreign exchange earnings. The future level of these earnings will depend on oil prices, global demand for Iranian exports, the country's productive capability and domestic consumption. The size of Iranian oil reserves suggests that, in principle, present productive capacity could be maintained and expanded. However, the greatest share of production in coming years still will come from fields that have already produced for several decades. In spite of significant remaining reserves, these fields are not nearly as prolific as they were in their early years. The operations required for further development are now more complicated and, in particular, more costly. These fields' size also implies that improving production, and instituting secondary and tertiary recovery methods (such as gas injection), will require mega-scale operations. This article discusses future oil and gas export revenues from the Islamic Republic of Iran, emphasizing the country's future production and commenting on the effects of proposed US sanctions.
Development of high sensitivity and high speed large size blank inspection system LBIS
NASA Astrophysics Data System (ADS)
Ohara, Shinobu; Yoshida, Akinori; Hirai, Mitsuo; Kato, Takenori; Moriizumi, Koichi; Kusunose, Haruhiko
2017-07-01
The production of high-resolution flat panel displays (FPDs) for mobile phones today requires the use of high-quality large-size photomasks (LSPMs). Organic light emitting diode (OLED) displays use several transistors on each pixel for precise current control and, as such, the mask patterns for OLED displays are denser and finer than the patterns for previous-generation displays throughout the entire mask surface. It is therefore strongly demanded that mask patterns be produced with high fidelity and free of defects. To enable the production of a high quality LSPM in a short lead time, manufacturers need a high-sensitivity, high-speed mask blank inspection system that meets the requirements of advanced LSPMs. Lasertec has developed a large-size blank inspection system called LBIS, which achieves high sensitivity based on a laser-scattering technique. LBIS employs a high power laser as its inspection light source. LBIS's delivery optics, including a scanner and F-Theta scan lens, focus the light from the source linearly on the surface of the blank. Its specially-designed optics collect the light scattered by particles and defects generated during the manufacturing process, such as scratches, on the surface and guide it to photomultiplier tubes (PMTs) with high efficiency. Multiple PMTs are used on LBIS for the stable detection of scattered light, which may be distributed at various angles due to irregular shapes of defects. LBIS captures 0.3 μm PSL at a detection rate of over 99.5% with uniform sensitivity. Its inspection time is 20 minutes for a G8 blank and 35 minutes for G10. The differential interference contrast (DIC) microscope on the inspection head of LBIS captures high-contrast review images after inspection. The images are classified automatically.
NASA Astrophysics Data System (ADS)
Silver, J. A.; Bomse, D. S.; Massick, S. M.; Zondlo, M. A.
2003-12-01
Tropospheric ammonia plays important roles in the nucleation, growth, composition, and chemistry of aerosol particles. Unfortunately, high frequency and sensitive measurements of gas phase ammonia are lacking in most airborne-based field campaigns. Chemical ionization mass spectrometers (CIMS) have shown great promise for ammonia measurements, but CIMS instruments typically consume large amounts of power, are highly labor intensive, and are very heavy for most airborne platforms. These characteristics of CIMS instruments severely limit their potential deployment on smaller and lighter aircraft, despite the strong desire for ammonia measurements in atmospheric chemistry field campaigns. To this end, a CIMS ammonia instrument for light aircraft is being developed using a double-focusing, miniature mass spectrometer. The size of the mass spectrometer, comparable to a small apple, allows for higher operating pressures (0.1 mTorr) and lower pumping requirements. Power usage, including pumps and electronics, is estimated to be around 300 W, and the overall instrument including pumps, electronics, and permeation cells is expected to be about the size of a small monitor. The ion source uses americium-241 to generate protonated water ions which proton transfer to form ammonium ions. The ion source is made with commercially available ion optics to minimize machining costs. Mass spectra over its working range (~5–120 amu) are well represented by Gaussian shaped peaks. By examining the peak widths as a function of mass location, the resolution of the instrument was determined experimentally to be around 110 (m/Δm). The sensitivity, selectivity, power requirements, size, and performance characteristics of the miniature mass spectrometer will be described along with the possibilities for CIMS measurements on light aircraft.
Almiron-Roig, Eva; Aitken, Amanda; Galloway, Catherine
2017-01-01
Context: Dietary assessment in minority ethnic groups is critical for surveillance programs and for implementing effective interventions. A major challenge is the accurate estimation of portion sizes for traditional foods and dishes. Objective: The aim of this systematic review was to assess records published up to 2014 describing a portion-size estimation element (PSEE) applicable to the dietary assessment of UK-residing ethnic minorities. Data sources, selection, and extraction: Electronic databases, internet sites, and theses repositories were searched, generating 5683 titles, from which 57 eligible full-text records were reviewed. Data analysis: Forty-two publications about minority ethnic groups (n = 20) or autochthonous populations (n = 22) were included. The most common PSEEs (47%) were combination tools (eg, food models and portion-size lists), followed by portion-size lists in questionnaires/guides (19%) and image-based and volumetric tools (17% each). Only 17% of PSEEs had been validated against weighed data. Conclusions: When developing ethnic-specific dietary assessment tools, it is important to consider customary portion sizes by sex and age, traditional household utensil usage, and population literacy levels. Combining multiple PSEEs may increase accuracy, but such methods require validation. PMID:28340101
Production of EUV mask blanks with low killer defects
NASA Astrophysics Data System (ADS)
Antohe, Alin O.; Kearney, Patrick; Godwin, Milton; He, Long; John Kadaksham, Arun; Goodwin, Frank; Weaver, Al; Hayes, Alan; Trigg, Steve
2014-04-01
For full commercialization, extreme ultraviolet lithography (EUVL) technology requires the availability of EUV mask blanks that are free of defects. This remains one of the main impediments to the implementation of EUV at the 22 nm node and beyond. Consensus is building that a few small defects can be mitigated during mask patterning, but defects over 100 nm (SiO2 equivalent) in size are considered potential "killer" defects or defects large enough that the mask blank would not be usable. The current defect performance of the ion beam sputter deposition (IBD) tool will be discussed and the progress achieved to date in the reduction of large size defects will be summarized, including a description of the main sources of defects and their composition.
A microwave interferometer for small and tenuous plasma density measurements.
Tudisco, O; Lucca Fabris, A; Falcetta, C; Accatino, L; De Angelis, R; Manente, M; Ferri, F; Florean, M; Neri, C; Mazzotta, C; Pavarin, D; Pollastrone, F; Rocchi, G; Selmo, A; Tasinato, L; Trezzolani, F; Tuccillo, A A
2013-03-01
The non-intrusive density measurement of the thin plasma produced by a mini-helicon space thruster (HPH.com project) is a challenge, due to the broad density range (between 10^16 m^-3 and 10^19 m^-3) and the small size of the plasma source (2 cm in diameter). A microwave interferometer has been developed for this purpose. Due to the small size of the plasma, the probing beam wavelength must be small (λ = 4 mm); thus, a very high sensitivity interferometer is required in order to observe the lower density values. A low-noise digital phase detector with a phase noise of 0.02° has been used, corresponding to a density of 0.5 × 10^16 m^-3.
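The phase-to-density relation underlying such an instrument can be sketched with the standard interferometry formula Δφ = r_e λ ∫n_e dl. The snippet below is a simplified slab estimate (uniform plasma over the 2 cm diameter, geometry factors neglected), so it agrees with the quoted 0.5 × 10^16 m^-3 sensitivity only to within a small factor:

```python
# Order-of-magnitude sketch of microwave interferometer sensitivity,
# using the standard relation delta_phi = r_e * lambda * integral(n_e dl).
# The 0.02 deg noise floor and 2 cm path length are taken from the abstract.
import math

R_E = 2.818e-15          # classical electron radius, m
WAVELENGTH = 4e-3        # probing beam wavelength, m
PATH_LENGTH = 0.02       # plasma diameter, m

def min_detectable_density(phase_noise_deg):
    """Smallest line-averaged density resolvable at the given phase noise."""
    phase_noise_rad = math.radians(phase_noise_deg)
    return phase_noise_rad / (R_E * WAVELENGTH * PATH_LENGTH)

n_min = min_detectable_density(0.02)
print(f"{n_min:.2e} m^-3")   # of order 10^15 m^-3, below the 10^16 m^-3 floor
```

The slab estimate lands about a factor of three below the paper's quoted value, which is plausible given the neglected beam and plasma geometry.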
NASA Astrophysics Data System (ADS)
Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.
2016-10-01
We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphics processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium-size GPU cluster of 64 NVIDIA Tesla K20m GPUs, and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loznikov, V. M., E-mail: loznikov@yandex.ru; Erokhin, N. S.; Zol’nikova, N. N.
A three-component phenomenological model describing the specific features of the spectrum of cosmic-ray protons and helium nuclei in the rigidity range of 30–2×10^5 GV is proposed. The first component corresponds to the constant background; the second, to the variable "soft" (30–500 GV) heliospheric source; and the third, to the variable "hard" (0.5–200 TV) source located inside a local bubble. The existence and variability of both sources are provided by the corresponding "surfatron accelerators," whose operation requires the presence of an extended region with an almost uniform (in both magnitude and direction) magnetic field, orthogonally (or obliquely) to which electromagnetic waves propagate. The maximum energy to which cosmic rays can be accelerated is determined by the source size. The soft source with a size of ∼100 AU is located at the periphery of the heliosphere, behind the front of the solar wind shock wave. The hard source with a size of >0.1 pc is located near the boundary of an interstellar cloud at a distance of ∼0.01 pc from the Sun. The presence of a kink in the rigidity spectra of p and He near 230 GV is related to the variability of the physical conditions in the acceleration region and depends on the relation between the amplitudes and power-law exponents in the dependences of the background, soft heliospheric source, and hard near-galactic source. The ultrarelativistic acceleration of p and He by an electromagnetic wave propagating in space plasma across the external magnetic field is numerically analyzed. Conditions for particle trapping by the wave and the dynamics of the particle velocity and momentum components are considered. The calculations show that, in contrast to electrons and positrons (e^+), the trapped protons relatively rapidly escape from the effective potential well and cease to accelerate. Due to this effect, the p and He spectra are softer than that of e^+.
The possibility that the spectra of accelerated protons deviate from standard power-law dependences due to the surfatron mechanism is discussed.
The recent and future health burden of air pollution apportioned across U.S. sectors.
Fann, Neal; Fulcher, Charles M; Baker, Kirk
2013-04-16
Recent risk assessments have characterized the overall burden of recent PM2.5 and ozone levels on public health, but generally not the variability of these impacts over time or by sector. Using photochemical source apportionment modeling and a health impact function, we attribute PM2.5 and ozone air quality levels, population exposure and health burden to 23 industrial point, area, mobile and international emission sectors in the Continental U.S. in 2005 and 2016. Our modeled policy scenarios account for a suite of emission control requirements affecting many of these sectors. Between these two years, the number of PM2.5 and ozone-related deaths attributable to power plants and mobile sources falls from about 68,000 (90% confidence interval from 48,000 to 87,000) to about 36,000 (90% confidence intervals from 26,000 to 47,000). Area source mortality risk grows slightly between 2005 and 2016, due largely to population growth. Uncertainties relating to the timing and magnitude of the emission reductions may affect the size of these estimates. The detailed sector-level estimates of the size and distribution of mortality and morbidity risk suggest that the air pollution mortality burden has fallen over time but that many sectors continue to pose a substantial risk to human health.
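The mortality attribution described above combines modeled exposure with a health impact function. A minimal sketch of the log-linear form commonly used in such air-quality benefits assessments follows; the numerical inputs are illustrative placeholders, not the paper's actual data:

```python
# Hedged sketch of a log-linear health impact function of the kind used in
# air-pollution burden assessments. All numbers below are illustrative.
import math

def attributable_deaths(baseline_rate, population, beta, delta_conc):
    """Deaths attributable to a pollutant increment delta_conc (ug/m^3).

    baseline_rate : annual baseline mortality rate (deaths per person)
    beta          : concentration-response coefficient per ug/m^3
    """
    return baseline_rate * population * (1.0 - math.exp(-beta * delta_conc))

# Illustrative inputs: 1 million people, 0.8% baseline annual mortality,
# beta ~ 0.0058 per ug/m^3 (roughly a 6% risk increase per 10 ug/m^3 PM2.5),
# and a 5 ug/m^3 sector-attributable concentration.
d = attributable_deaths(0.008, 1_000_000, 0.0058, 5.0)
print(round(d))   # prints 229
```

Summing such estimates over grid cells and sectors, with source-apportioned delta_conc fields from the photochemical model, yields sector-level burdens like those quoted in the abstract.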
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 μm in size. Distributions of particles ranging up to 500 μm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Does source population size affect performance in new environments?
Yates, Matthew C; Fraser, Dylan J
2014-01-01
Small populations are predicted to perform poorly relative to large populations when experiencing environmental change. To explore this prediction in nature, data from reciprocal transplant, common garden, and translocation studies were compared meta-analytically. We contrasted changes in performance resulting from transplantation to new environments among individuals originating from different sized source populations from plants and salmonids. We then evaluated the effect of source population size on performance in natural common garden environments and the relationship between population size and habitat quality. In ‘home-away’ contrasts, large populations exhibited reduced performance in new environments. In common gardens, the effect of source population size on performance was inconsistent across life-history stages (LHS) and environments. When transplanted to the same set of new environments, small populations either performed equally well or better than large populations, depending on life stage. Conversely, large populations outperformed small populations within native environments, but only at later life stages. Population size was not associated with habitat quality. Several factors might explain the negative association between source population size and performance in new environments: (i) stronger local adaptation in large populations and antagonistic pleiotropy, (ii) the maintenance of genetic variation in small populations, and (iii) potential environmental differences between large and small populations. PMID:25469166
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
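The core of the approach above is conservative spatial averaging: fine-grid film-cooling quantities are integrated and redeposited as volumetric source terms on a grid too coarse to resolve the hole. A minimal sketch of that averaging step (grid sizes and the 2-D field are illustrative stand-ins, not the paper's solver):

```python
# Minimal sketch of turning fine-grid flow quantities into coarse-grid
# volumetric source terms by conservative block averaging.
import numpy as np

def coarsen_to_source_terms(fine, factor):
    """Average a 2-D fine-grid field over factor x factor blocks.

    The block mean, applied uniformly over each coarse cell, conserves the
    integrated quantity (mass, momentum, energy) on the coarse grid.
    """
    ny, nx = fine.shape
    assert ny % factor == 0 and nx % factor == 0
    return fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)     # stand-in for a detailed flow field
coarse = coarsen_to_source_terms(fine, 2)
print(coarse)                            # 2x2 field of block means
print(fine.mean() == coarse.mean())      # integrated quantity preserved -> True
```

In the study itself the averaged hole-exit mass, momentum, energy, and turbulence fluxes are then distributed over a wall-normal distance of order the hole diameter, which is what the near-wall versus distributed source-term comparison tests.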
Determining the sources of fine-grained sediment using the Sediment Source Assessment Tool (Sed_SAT)
Gorman Sanisaca, Lillian E.; Gellis, Allen C.; Lorenz, David L.
2017-07-27
A sound understanding of sources contributing to instream sediment flux in a watershed is important when developing total maximum daily load (TMDL) management strategies designed to reduce suspended sediment in streams. Sediment fingerprinting and sediment budget approaches are two techniques that, when used jointly, can qualify and quantify the major sources of sediment in a given watershed. The sediment fingerprinting approach uses trace element concentrations from samples in known potential source areas to determine a clear signature of each potential source. A mixing model is then used to determine the relative source contribution to the target suspended sediment samples. The computational steps required to apportion sediment for each target sample are quite involved and time intensive, a problem the Sediment Source Assessment Tool (Sed_SAT) addresses. Sed_SAT is a user-friendly statistical model that guides the user through the necessary steps in order to quantify the relative contributions of sediment sources in a given watershed. The model is written using the statistical software R (R Core Team, 2016b) and utilizes Microsoft Access® as a user interface, but requires no prior knowledge of R or Microsoft Access® to run the model successfully. Sed_SAT identifies outliers, corrects for differences in size and organic content in the source samples relative to the target samples, evaluates the conservative behavior of tracers used in fingerprinting by applying a “Bracket Test,” identifies tracers with the highest discriminatory power, and provides robust error analysis through a Monte Carlo simulation following the mixing model. Quantifying sediment source contributions using the sediment fingerprinting approach provides local, State, and Federal land management agencies with important information needed to implement effective strategies to reduce sediment.
Sed_SAT is designed to assist these agencies in applying the sediment fingerprinting approach to quantify sediment sources in the sediment TMDL framework.
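The unmixing step at the heart of sediment fingerprinting reduces, in the simplest case, to a linear mixing model. The toy below is not Sed_SAT (which handles many tracers, corrections, and Monte Carlo error analysis) but illustrates the two-source, one-tracer case with invented concentrations:

```python
# Toy illustration (not Sed_SAT itself) of the unmixing step in sediment
# fingerprinting: with two sources and one conservative tracer, the source
# proportion follows from a linear mixing model. Tracer values are invented.
def two_source_mixing(c_target, c_source_a, c_source_b):
    """Fraction from source A, solving c_t = p*c_a + (1 - p)*c_b for p."""
    p = (c_target - c_source_b) / (c_source_a - c_source_b)
    if not 0.0 <= p <= 1.0:
        raise ValueError("tracer non-conservative or sources mischaracterized")
    return p

# e.g. cropland soil at 40 mg/kg, streambank at 10 mg/kg, target at 22 mg/kg
p_a = two_source_mixing(22.0, 40.0, 10.0)
print(p_a)   # 0.4 -> 40% cropland, 60% streambank
```

With many tracers and sources the system is overdetermined and is solved by constrained least squares, which is where the tracer-selection and error-analysis machinery described above becomes essential.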
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-07-09
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
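The problem the abstract identifies, per-packet tokens breaking when a downstream hop re-fragments, can be illustrated with a generic construction: authenticate the code image itself rather than any particular packet framing. This sketch is not one of the paper's three schemes; the key, image, and packet sizes are invented:

```python
# Generic sketch (not the paper's schemes): a MAC over the whole code image
# survives re-fragmentation to a different packet size at each hop, unlike
# tokens bound to a fixed framing. Key and image below are invented.
import hashlib, hmac

KEY = b"shared-network-key"          # assumption: symmetric key pre-deployed

def tag_image(image: bytes) -> bytes:
    """MAC over the complete code image, independent of packet boundaries."""
    return hmac.new(KEY, image, hashlib.sha256).digest()

def fragment(image: bytes, size: int):
    """Re-packetize the image for the current link quality."""
    return [image[i:i + size] for i in range(0, len(image), size)]

image = bytes(range(256)) * 4        # stand-in for a new code image
tag = tag_image(image)

# Hop 1 sends 64-byte packets; hop 2 re-fragments to 32 bytes.
reassembled = b"".join(fragment(b"".join(fragment(image, 64)), 32))
print(hmac.compare_digest(tag, tag_image(reassembled)))   # True
```

A whole-image MAC defers verification until reassembly; the paper's contribution is providing per-hop verifiability with small energy cost, which this sketch does not attempt.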
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-01-01
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616
FIRST BEAM TESTS OF THE APS MBA UPGRADE ORBIT FEEDBACK CONTROLLER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sereno, N. S.; Arnold, N.; Brill, A.
The new orbit feedback system required for the APS multi-bend achromat (MBA) ring must meet challenging beam stability requirements. The AC stability requirement is to correct rms beam motion to 10% of the rms beam size at the insertion device source points from 0.01 to 1000 Hz. The vertical plane represents the biggest challenge for AC stability, which is required to be 400 nm rms for a 4 micron vertical beam size. In addition, long-term drift over a period of 7 days is required to be 1 micron or less at insertion device BPMs and 2 microns for arc BPMs. We present test results of the MBA prototype orbit feedback controller (FBC) in the APS storage ring. In this test, four insertion device BPMs were configured to send data to the FBC for processing into four fast corrector setpoints. The configuration of four BPMs and four fast correctors creates a 4-bump, and the configuration of fast correctors is similar to what will be implemented in the MBA ring. We report on performance benefits of increasing the sampling rate by a factor of 15 to 22.6 kHz over the existing APS orbit feedback system, limitations due to existing storage ring hardware, and extrapolation to the MBA orbit feedback design. FBC architecture, signal flow, and processing design will also be discussed.
Recent progress in X-ray optics at the ESRF
NASA Astrophysics Data System (ADS)
Freund, A.
2003-03-01
It is the task of x-ray optics to adapt the raw beam generated by modern sources such as synchrotron storage rings to a great variety of experimental requirements in terms of intensity, spot size, polarization, and other parameters. The very high quality of synchrotron radiation (source size of a few microns and beam divergence of a few micro-radians) and the extreme x-ray flux (power of several hundred watts in a few square mm) make this task quite difficult. In particular, the heat load aspect is very important in the conditioning of the raw x-ray power. Cryogenically cooled silicon crystals and water-cooled diamond crystals can presently fulfil this task, but limits will soon be reached and new schemes and materials must be envisioned. A major tendency of instrument improvement has always been to concentrate more photons into a smaller spot utilizing a whole variety of focusing devices such as Fresnel zone plates, refractive lenses, and systems based on bent surfaces, for example Kirkpatrick-Baez systems. Apart from the resistance of the sample, the ultimate limits are determined by the source size and strength on one side, by materials properties, cooling, mounting, and bending schemes on the other side, and fundamentally by the diffraction process. There is also the important aspect of coherence, which can be both a nuisance and a blessing for the experiments, in particular for imaging techniques. Its conservation puts additional constraints on the quality of the optical elements. A review of recent progress in this field is given.
Subsurface energy storage and transport for solar-powered geysers on Triton
NASA Technical Reports Server (NTRS)
Kirk, Randolph L.; Soderblom, Laurence A.; Brown, Robert H.
1990-01-01
The location of active geyser-like eruptions and related features close to the current subsolar latitude on Triton suggests a solar energy source for these phenomena. Solid-state greenhouse calculations have shown that sunlight can generate substantially elevated subsurface temperatures. A variety of models for the storage of solar energy in a subgreenhouse layer and for the supply of gas and energy to a geyser are examined. 'Leaky greenhouse' models with only vertical gas transport are inconsistent with the observed upper limit on geyser radius of about 1.5 km. However, lateral transport of energy by gas flow in a porous N2 layer with a block size on the order of a meter can supply the required amount of gas to a source region about 1 km in radius. The decline of gas output to steady state may occur over a period comparable with the inferred active geyser lifetime of 5 earth years. The required subsurface permeability may be maintained by thermal fracturing of the residual N2 polar cap. A lower limit on geyser source radius of about 50 to 100 m predicted by a theory of negatively buoyant jets is not readily attained.
SEGY to ASCII: Conversion and Plotting Program
Goldman, Mark R.
1999-01-01
This report documents a computer program to convert standard 4-byte, IBM floating point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded by any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1, and use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu Plotting the seismic data requires the GMT plotting package, which may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/
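The central conversion such a program performs is decoding 4-byte IBM hexadecimal floats (sign bit, 7-bit excess-64 base-16 exponent, 24-bit fraction) into native values. A hedged sketch of that decoding, independent of the report's actual C++ source:

```python
# Sketch of 4-byte IBM hexadecimal floating-point decoding, the core of any
# SEGY-to-ASCII converter: value = (-1)^sign * 0.fraction * 16^(exponent-64).
import struct

def ibm32_to_float(word: bytes) -> float:
    """Decode one big-endian 4-byte IBM float."""
    (u,) = struct.unpack(">I", word)
    sign = -1.0 if u >> 31 else 1.0
    exponent = (u >> 24) & 0x7F          # excess-64, base 16
    fraction = (u & 0x00FFFFFF) / float(1 << 24)
    return sign * fraction * 16.0 ** (exponent - 64)

print(ibm32_to_float(bytes.fromhex("41100000")))   # 1.0
print(ibm32_to_float(bytes.fromhex("C2760000")))   # -118.0
```

Applying this decoder trace-by-trace, then writing x, y, amplitude triples, reproduces the xyz output format the report describes.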
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
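The defining correction can be made concrete: the NFD is the FDD with the geometric 1/r² falloff divided out. The sketch below shows that correction only; the d_max, SSD, and FDD values are illustrative, and the paper's seven-parameter generating function is not reproduced:

```python
# Sketch of the inverse-square correction defining the net fractional depth
# dose (NFD). Depth-dose numbers below are illustrative, not measured data.
def net_fdd(fdd, depth_cm, dmax_cm, ssd_cm):
    """NFD = FDD with the geometric 1/r^2 falloff divided out.

    Distances are measured from the source: SSD + depth to the point of
    interest, SSD + d_max to the normalization depth.
    """
    return fdd * ((ssd_cm + depth_cm) / (ssd_cm + dmax_cm)) ** 2

# Illustrative 6 MV-like example: FDD 67.3% at 10 cm depth,
# d_max 1.5 cm, SSD 100 cm.
print(round(net_fdd(0.673, 10.0, 1.5, 100.0), 3))   # about 0.79
```

Removing the purely geometric component in this way is what makes one analytical description serve for FDD, TAR, TMR, and TPR: those quantities differ mainly in the inverse-square and normalization factors reapplied afterwards.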
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
DOE Office of Scientific and Technical Information (OSTI.GOV)
van de Geijn, J.; Fraass, B.A.
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; Ravi, Koustuban; Fallahi, Arya; Moriena, Gustavo; Dwayne Miller, R. J.; Kärtner, Franz X.
2015-01-01
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeV m−1 gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. These ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams. PMID:26439410
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; ...
2015-10-06
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeV m−1 gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. As a result, these ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams.
NASA Technical Reports Server (NTRS)
Alvarez, H.
1976-01-01
We present preliminary results on the apparent angular size of the sources of four type III bursts observed between 3500 and 50 kHz from the IMP-6 spacecraft. The observations were made with a dipole rotating in the plane of the ecliptic where the sources are assumed to be. The apparent angular sizes obtained are unexpectedly large. We discuss different explanations for the results. It seems that the scattering of radio waves by electron density inhomogeneities is the most likely cause. We report a temporal increase of the apparent angular size of the source during the burst lifetime for some bursts. From its characteristics it appears to be a real effect.
Surgical Ablation of Atrial Fibrillation Using Energy Sources.
Brick, Alexandre Visconti; Braile, Domingo Marcolino
2015-01-01
Surgical ablation, concomitant with other operations, is an option for treatment in patients with chronic atrial fibrillation. The aim of this study is to present a literature review on surgical ablation of atrial fibrillation in patients undergoing cardiac surgery, considering energy sources and return to sinus rhythm. A comprehensive survey was performed in the literature on surgical ablation of atrial fibrillation considering energy sources, sample size, study type, outcome (early and late), and return to sinus rhythm. Analyzing studies with immediate results (n=5), the percentage of return to sinus rhythm ranged from 73% to 96%, while those with long-term results (n=20) (from 12 months on) ranged from 62% to 97.7%. In both groups, patients who underwent ablation showed subsequent clinical improvement, regardless of the energy source used. Surgical ablation of atrial fibrillation is essential for the treatment of this arrhythmia. With current technology, it may be performed minimally invasively, making it mandatory to attempt a procedure to restore sinus rhythm in patients requiring heart surgery.
A Compact, High-Flux Cold Atom Beam Source
NASA Technical Reports Server (NTRS)
Kellogg, James R.; Kohel, James M.; Thompson, Robert J.; Aveline, David C.; Yu, Nan; Schlippert, Dennis
2012-01-01
The performance of cold atom experiments relying on three-dimensional magneto-optical trap techniques can be greatly enhanced by employing a high-flux cold atom beam to obtain high atom loading rates while maintaining low background pressures in the UHV MOT (ultra-high vacuum magneto-optical trap) regions. Several techniques exist for generating slow beams of cold atoms. However, one of the technically simplest approaches is a two-dimensional (2D) MOT. Such an atom source typically employs at least two orthogonal trapping beams, plus an additional longitudinal "push" beam to yield maximum atomic flux. A 2D atom source was created with angled trapping collimators that not only trap atoms in two orthogonal directions, but also provide a longitudinal pushing component that eliminates the need for an additional push beam. This development reduces the overall package size, which, in turn, makes the 2D trap simpler and requires less total optical power. The atom source is more compact than a previously published effort, and has greater than an order of magnitude improved loading performance.
High duty cycle inverse Compton scattering X-ray source
Ovodenko, A.; Agustsson, R.; Babzien, M.; ...
2016-12-22
Inverse Compton Scattering (ICS) is an emerging compact X-ray source technology, where the small source size and high spectral brightness are of interest for a multitude of applications. However, to satisfy practical flux requirements, a high-repetition-rate ICS system needs to be developed. To this end, this article reports the experimental demonstration of a high-peak-brightness ICS source operating in a burst mode at 40 MHz. A pulse-train interaction has been achieved by recirculating a picosecond CO2 laser pulse inside an active optical cavity synchronized to the electron beam. The pulse-train ICS performance has been characterized at 5 and 15 pulses per train and compared to single-pulse operation under the same operating conditions. Lastly, with the observed near-linear X-ray photon yield gain due to recirculation, as well as noticeably higher operational reliability, the burst-mode ICS offers great potential for practical scalability towards high duty cycles.
Efficient, High-Power Mid-Infrared Laser for National Security and Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiani, Leily S.
The LLNL fiber laser group developed a unique short-wave-infrared, high-pulse-energy, high-average-power fiber-based laser. This unique laser source has been used in combination with a nonlinear frequency converter to generate wavelengths useful for remote sensing and other applications in the mid-wave infrared (MWIR). Sources with high average power and high efficiency in this MWIR wavelength region are not yet available with the size, weight, and power requirements or energy efficiency necessary for future deployment. The LLNL-developed Fiber Laser Pulsed Source (FiLPS) design was adapted to Erbium-doped silica fibers for 1.55 μm pumping of Cadmium Silicon Phosphide (CSP). We have demonstrated, for the first time, optical parametric amplification of 2.4 μm light via difference frequency generation using CSP with an Erbium-doped fiber source. In addition, for efficiency comparison purposes, we also demonstrated direct optical parametric generation (OPG) as well as optical parametric oscillation (OPO).
Magnitude, moment, and measurement: The seismic mechanism controversy and its resolution.
Miyake, Teru
This paper examines the history of two related problems concerning earthquakes, and the way in which a theoretical advance was involved in their resolution. The first problem is the development of a physical, as opposed to empirical, scale for measuring the size of earthquakes. The second problem is that of understanding what happens at the source of an earthquake. There was a controversy about what the proper model for the seismic source mechanism is, which was finally resolved through advances in the theory of elastic dislocations. These two problems are linked, because the development of a physically-based magnitude scale requires an understanding of what goes on at the seismic source. I will show how the theoretical advances allowed seismologists to re-frame the questions they were trying to answer, so that the data they gathered could be brought to bear on the problem of seismic sources in new ways. Copyright © 2017 Elsevier Ltd. All rights reserved.
Population demographics and genetic diversity in remnant and translocated populations of sea otters
Bodkin, James L.; Ballachey, Brenda E.; Cronin, M.A.; Scribner, K.T.
1999-01-01
The effects of small population size on genetic diversity and subsequent population recovery are theoretically predicted, but few empirical data are available to describe those relations. We use data from four remnant and three translocated sea otter (Enhydra lutris) populations to examine relations among magnitude and duration of minimum population size, population growth rates, and genetic variation. Mitochondrial (mt)DNA haplotype diversity was correlated with the number of years at minimum population size (r = -0.741, p = 0.038) and minimum population size (r = 0.709, p = 0.054). We found no relation between population growth and haplotype diversity, although growth was significantly greater in translocated than in remnant populations. Haplotype diversity in populations established from two sources was higher than in a population established from a single source and was higher than in the respective source populations. Haplotype frequencies in translocated populations of founding sizes of 4 and 28 differed from expected, indicating genetic drift and differential reproduction between source populations, whereas haplotype frequencies in a translocated population with a founding size of 150 did not. Relations between population demographics and genetic characteristics suggest that genetic sampling of source and translocated populations can provide valuable inferences about translocations.
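The reported correlations are ordinary product-moment statistics, so the analysis can be sketched with a plain Pearson coefficient. The data below are hypothetical stand-ins for the seven populations, chosen only to mimic the reported negative trend, not values from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical illustration: haplotype diversity tends to fall the longer
# a population remains at its minimum size (cf. the reported r = -0.741).
years_at_minimum = [5, 10, 20, 40, 60, 80, 100]
haplotype_diversity = [0.62, 0.58, 0.55, 0.41, 0.38, 0.30, 0.25]
r = pearson_r(years_at_minimum, haplotype_diversity)
print(round(r, 3))
```

With only seven populations, as here, significance tests on such an r have little power, which is consistent with the borderline p-values the abstract reports.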
Common display performance requirements for military and commercial aircraft product lines
NASA Astrophysics Data System (ADS)
Hoener, Steven J.; Behrens, Arthur J.; Flint, John R.; Jacobsen, Alan R.
2001-09-01
Obtaining high quality Active Matrix Liquid Crystal (AMLCD) glass to meet the needs of the commercial and military aerospace business is a major challenge, at best. With the demise of all domestic sources of AMLCD substrate glass, the industry is now focused on overseas sources, which are primarily producing glass for consumer electronics. Previous experience with ruggedizing commercial glass leads to the expectation that the aerospace industry can leverage off the commercial market. The problem remains, while the commercial industry is continually changing and improving its products, the commercial and military aerospace industries require stable and affordable supplies of AMLCD glass for upwards of 20 years to support production and maintenance operations. The Boeing Engineering and Supplier Management Process Councils have chartered a group of displays experts from multiple aircraft product divisions within the Boeing Company, the Displays Process Action Team (DPAT), to address this situation from an overall corporate perspective. The DPAT has formulated a set of Common Displays Performance Requirements for use across the corporate line of commercial and military aircraft products. Though focused on the AMLCD problem, the proposed common requirements are largely independent of display technology. This paper describes the strategy being pursued within the Boeing Company to address the AMLCD supply problem and details the proposed implementation process, centered on common requirements for both commercial and military aircraft displays. Highlighted in this paper are proposed common, or standard, display sizes and the other major requirements established by the DPAT, along with the rationale for these requirements.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
Macroalgae as a Biomass Feedstock: A Preliminary Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roesijadi, Guritno; Jones, Susanne B.; Snowden-Swan, Lesley J.
2010-09-26
A thorough analysis of macroalgae as a biofuels feedstock is warranted due to the size of this biomass resource and the need to consider all potential sources of feedstock to meet current biomass production goals. Understanding how to harness this untapped biomass resource will require additional research and development. A detailed assessment of environmental resources, cultivation and harvesting technology, conversion to fuels, connectivity with existing energy supply chains, and the associated economic and life cycle analyses will facilitate evaluation of this potentially important biomass resource.
The nuclear window to the extragalactic universe
NASA Astrophysics Data System (ADS)
Erdmann, M.; Müller, G.; Urban, M.; Wirtz, M.
2016-12-01
We investigate two recent parameterizations of the galactic magnetic field with respect to their impact on cosmic nuclei traversing the field. We present a comprehensive study of the size of angular deflections, dispersion in the arrival probability distributions, multiplicity in the images of arrival on Earth, variance in field transparency, and influence of the turbulent field components. To remain restricted to ballistic deflections, a cosmic nucleus with energy E and charge Z should have a rigidity above E / Z = 6 EV. In view of the differences resulting from the two field parameterizations as a measure of current knowledge in the galactic field, this rigidity threshold may have to be increased. For a point source search with E/Z ≥ 60 EV, field uncertainties increase the required signal events for discovery moderately for sources in the northern and southern regions, but substantially for sources near the galactic disk.
Opachich, Y. P.; Heeter, R. F.; Barrios, M. A.; ...
2017-06-08
Direct drive implosions of plastic capsules have been performed at the National Ignition Facility to provide a broad-spectrum (500–2000 eV) X-ray continuum source for X-ray transmission spectroscopy. The source was developed for the high-temperature plasma opacity experimental platform. Initial experiments using 2.0 mm diameter polyalpha-methyl styrene capsules with ~20 μm thickness have been performed. X-ray yields of up to ~1 kJ/sr have been measured using the Dante multichannel diode array. The backlighter source size was measured to be ~100 μm FWHM, with ~350 ps pulse duration during the peak emission stage. Lastly, these results are used to simulate transmission spectra for a hypothetical iron opacity sample at 150 eV, enabling the derivation of photometrics requirements for future opacity experiments.
Opachich, Y P; Heeter, R F; Barrios, M A; Garcia, E M; Craxton, R S; King, J A; Liedahl, D A; McKenty, P W; Schneider, M B; May, M J; Zhang, R; Ross, P W; Kline, J L; Moore, A S; Weaver, J L; Flippo, K A; Perry, T S
2017-06-01
Direct drive implosions of plastic capsules have been performed at the National Ignition Facility to provide a broad-spectrum (500-2000 eV) X-ray continuum source for X-ray transmission spectroscopy. The source was developed for the high-temperature plasma opacity experimental platform. Initial experiments using 2.0 mm diameter polyalpha-methyl styrene capsules with ∼20 μm thickness have been performed. X-ray yields of up to ∼1 kJ/sr have been measured using the Dante multichannel diode array. The backlighter source size was measured to be ∼100 μm FWHM, with ∼350 ps pulse duration during the peak emission stage. Results are used to simulate transmission spectra for a hypothetical iron opacity sample at 150 eV, enabling the derivation of photometrics requirements for future opacity experiments.
PhySIC: a veto supertree method with desirable properties.
Ranwez, Vincent; Berry, Vincent; Criscuolo, Alexis; Fabre, Pierre-Henri; Guillemot, Sylvain; Scornavacca, Celine; Douzery, Emmanuel J P
2007-10-01
This paper focuses on veto supertree methods; i.e., methods that aim at producing a conservative synthesis of the relationships agreed upon by all source trees. We propose desirable properties that a supertree should satisfy in this framework, namely the non-contradiction property (PC) and the induction property (PI). The former requires that the supertree does not contain relationships that contradict one or a combination of the source topologies, whereas the latter requires that all topological information contained in the supertree is present in a source tree or collectively induced by several source trees. We provide simple examples to illustrate their relevance and that allow a comparison with previously advocated properties. We show that these properties can be checked in polynomial time for any given rooted supertree. Moreover, we introduce the PhySIC method (PHYlogenetic Signal with Induction and non-Contradiction). For k input trees spanning a set of n taxa, this method produces a supertree that satisfies the above-mentioned properties in O(kn³ + n⁴) computing time. The polytomies of the produced supertree are also tagged by labels indicating areas of conflict as well as those with insufficient overlap. As a whole, PhySIC enables the user to quickly summarize consensual information of a set of trees and localize groups of taxa for which the data require consolidation. Lastly, we illustrate the behaviour of PhySIC on primate data sets of various sizes, and propose a supertree covering 95% of all primate extant genera. The PhySIC algorithm is available at http://atgc.lirmm.fr/cgi-bin/PhySIC.
Gyrotron-driven high current ECR ion source for boron-neutron capture therapy neutron generator
NASA Astrophysics Data System (ADS)
Skalyga, V.; Izotov, I.; Golubev, S.; Razin, S.; Sidorov, A.; Maslennikova, A.; Volovecky, A.; Kalvas, T.; Koivisto, H.; Tarvainen, O.
2014-12-01
Boron-neutron capture therapy (BNCT) is a promising treatment method for radiation-resistant tumors. Unfortunately, its development is strongly held back by several physical and medical problems. Neutron sources for BNCT are currently limited to nuclear reactors and accelerators. For the wider spread of BNCT investigations, a more compact and inexpensive neutron source would be preferable. In the present paper, an approach to creating a compact D-D neutron generator based on a high-current ECR ion source is suggested. Results on the production of dense proton beams are presented. The possibility of forming ion beams with current densities up to 600 mA/cm2 is demonstrated. Estimations based on the obtained experimental results show that a neutron target bombarded by such deuteron beams would theoretically yield a neutron flux density up to 6·10^10 cm^-2/s. Thus, a neutron generator based on a high-current deuteron ECR source with powerful plasma heating by gyrotron radiation could fulfill the BNCT requirements at a significantly lower price, smaller size and greater ease of operation in comparison with existing reactors and accelerators.
Upgrade of the BATMAN test facility for H- source development
NASA Astrophysics Data System (ADS)
Heinemann, B.; Fröschle, M.; Falter, H.-D.; Fantz, U.; Franzen, P.; Kraus, W.; Nocentini, R.; Riedl, R.; Ruf, B.
2015-04-01
The development of a radio frequency (RF) driven source for negative hydrogen ions for the neutral beam heating devices of fusion experiments has been successfully carried out at IPP since 1996 on the test facility BATMAN. The required ITER parameters have been achieved with the prototype source consisting of a cylindrical driver on the back side of a racetrack like expansion chamber. The extraction system, called "Large Area Grid" (LAG), was derived from a positive ion accelerator from ASDEX Upgrade (AUG) using its aperture size (ø 8 mm) and pattern but replacing the first two electrodes and masking down the extraction area to 70 cm2. BATMAN is a well diagnosed and highly flexible test facility which will be kept operational in parallel to the half-size ITER source test facility ELISE for further developments to improve the RF efficiency and the beam properties. It is therefore planned to upgrade BATMAN with a new ITER-like grid system (ILG) representing almost one ITER beamlet group, namely 5 × 14 apertures (ø 14 mm). In addition to the standard three-grid extraction system, a repeller electrode upstream of the grounded grid can optionally be installed, positively charged against it by 2 kV. This is intended to affect the onset of the space charge compensation downstream of the grounded grid and to reduce the backstreaming of positive ions from the drift space backwards into the ion source. For magnetic filter field studies a plasma grid current up to 3 kA will be available as well as permanent magnets embedded into a diagnostic flange or in an external magnet frame. Furthermore, different source vessels and source configurations are under discussion for BATMAN, e.g. using the AUG type racetrack RF source as driver instead of the circular one or modifying the expansion chamber for a more flexible position of the external magnet frame.
MSAViewer: interactive JavaScript visualization of multiple sequence alignments.
Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E; Rost, Burkhard; Goldberg, Tatyana
2016-11-15
The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is 'web ready': written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh. © The Author 2016. Published by Oxford University Press.
MSAViewer: interactive JavaScript visualization of multiple sequence alignments
Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E.; Rost, Burkhard; Goldberg, Tatyana
2016-01-01
Summary: The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is ‘web ready’: written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. Availability and Implementation: The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh PMID:27412096
Design and implementation of current fed DC-DC converter for PHEV application using renewable source
NASA Astrophysics Data System (ADS)
Milind Metha, Manish; Tutki, Sanjay; Rajan, Aju; Elangovan, D.; Arunkumar, G.
2017-11-01
As fossil fuels deplete day by day, renewable energy sources have come into use and have evolved considerably in recent years. To increase efficiency and productivity in hybrid vehicles, the less efficient petrol and diesel IC engines need to be replaced with new, efficient converters fed by renewable energy sources. This must be done with three main factors in mind: cost, efficiency and reliability. Launched and upcoming PHEVs use converters with output voltages of around 380 V to 400 V and power ratings between 2.4 kW and 2.8 kW. The main goal of this paper is to design an efficient converter while considering factors such as cost and size. In this paper, a two-stage DC-DC converter is proposed that boosts a 24 V photovoltaic input to an output voltage of 400 V and meets a power demand of 250 W, since only one panel is used in this work. The paper discusses in detail why and how the current-fed DC-DC converter is used together with a voltage doubler, reducing the transformer turns and thereby the overall size of the product. Simulation and hardware results are presented, along with calculations of the duty cycle required for the firing sequence at different transformer turns ratios.
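As a rough illustration of the duty-cycle calculations mentioned above, one common idealized continuous-conduction relation for a current-fed boost stage feeding a voltage doubler is Vout = 2·n·Vin/(1 − D). The paper's actual topology and firing scheme may differ, so the relation, the turns ratios and the function below are assumptions for illustration only.

```python
def required_duty_cycle(v_in, v_out, n_turns_ratio):
    """Idealized duty cycle for a current-fed boost stage feeding a
    voltage doubler: v_out = 2 * n * v_in / (1 - D) (CCM, lossless).
    Returns None if the requested gain is unreachable (D outside (0, 1))."""
    d = 1.0 - (2.0 * n_turns_ratio * v_in) / v_out
    return d if 0.0 < d < 1.0 else None

# 24 V panel input, 400 V bus, sweep of hypothetical turns ratios
for n in (2, 3, 4, 5, 6):
    d = required_duty_cycle(24.0, 400.0, n)
    print(n, None if d is None else round(d, 3))
```

Under this assumed relation, a higher turns ratio trades transformer size for a lower switch duty cycle, which is the design trade-off the abstract alludes to.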
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base and for monitoring and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study underscores the need for improved approaches to cost-effective sampling.
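The Bayesian model itself is beyond the scope of an abstract, but the role the intra-class correlation plays can be illustrated with the classical design-effect formula, n = deff · z²p(1 − p)/e² with deff = 1 + (m − 1)·ICC. This textbook sketch is not the paper's model; the prevalences, margins and cluster sizes below are hypothetical.

```python
from math import ceil

def cluster_survey_sample_size(p, margin, icc, cluster_size, z=1.96):
    """Required sample size for estimating a prevalence p to within
    +/- margin under cluster sampling (illustrative textbook formula,
    not the paper's Bayesian model).
    deff = 1 + (m - 1) * ICC inflates the simple-random-sample size."""
    n_srs = (z ** 2) * p * (1 - p) / margin ** 2
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_srs * deff)

# Hypothetical numbers: with a margin fixed relative to prevalence,
# the required sample size grows as prevalence declines.
for prev in (0.30, 0.10, 0.05):
    print(prev, cluster_survey_sample_size(prev, margin=0.2 * prev,
                                           icc=0.05, cluster_size=25))
```

This reproduces the qualitative finding of the abstract: holding relative precision constant, surveys in low-prevalence settings need substantially more children sampled.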
Plasma Physics Challenges of MM-to-THz and High Power Microwave Generation
NASA Astrophysics Data System (ADS)
Booske, John
2007-11-01
Homeland security and military defense technology considerations have stimulated intense interest in mobile, high power sources of millimeter-wave to terahertz regime electromagnetic radiation, from 0.1 to 10 THz. While sources at the low frequency end, i.e., the gyrotron, have been deployed or are being tested for diverse applications such as WARLOC radar and active denial systems, the challenges for higher frequency sources have yet to be completely met for applications including noninvasive sensing of concealed weapons and dangerous agents, high-data-rate communications, and high resolution spectroscopy and atmospheric sensing. The compact size requirements for many of these high frequency sources require minuscule, micro-fabricated slow-wave circuits with high rf ohmic losses. This necessitates electron beams with not only very small transverse dimensions but also very high current density for adequate gain. Thus, the emerging family of mm-to-THz e-beam-driven vacuum electronics devices share many of the same plasma physics challenges that currently confront ``classic'' high power microwave (HPM) generators [1] including bright electron sources, intense beam transport, energetic electron interaction with surfaces and rf air breakdown at output windows. Multidimensional theoretical and computational models are especially important for understanding and addressing these challenges. The contemporary plasma physics issues, recent achievements, as well as the opportunities and outlook on THz and HPM will be addressed. [1] R.J. Barker, J.H. Booske, N.C. Luhmann, and G.S. Nusinovich, Modern Microwave and Millimeter-Wave Power Electronics (IEEE/Wiley, 2005).
NASA Astrophysics Data System (ADS)
Alyafei, Nora
Renewable energy (RE) sources are becoming popular for power generation due to advances in renewable energy technologies and their ability to reduce the problem of global warming. However, their supply varies in availability (such as sun and wind) and the required load demand fluctuates. Thus, to overcome the uncertainty of RE power sources, they can be combined with storage devices and conventional energy sources in a Hybrid Power System (HPS) to satisfy the demand load at any time. Recently, RE systems have received high interest because of their positive benefits, such as renewable availability and CO2 emissions reductions. The optimal design of a hybrid renewable energy system is mostly defined by economic criteria, but there are also technical and environmental criteria to be considered to improve decision-making. In this study, three main renewable sources, photovoltaic arrays (PV), wind turbine generators (WG) and waste boilers (WB), are integrated with diesel generators and batteries to design a hybrid system that supplies the required demand of a remote area in Qatar using a heuristic approach. The method uses typical-year data to calculate the hourly output power of the PV, WG and WB throughout the year. Then, different combinations of renewable energy sources with battery storage are proposed to match hourly demand during the year. The design that satisfies the desired level of loss of power supply, CO2 emissions and minimum cost is considered the best design.
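A minimal sketch of the hourly matching described above, under assumed dispatch rules (renewables serve the load first, surplus charges the battery, deficits discharge it, and unmet load counts toward the loss-of-power-supply probability). The dispatch rules, profiles and battery parameters are hypothetical, not the study's.

```python
def simulate_hps(demand, renewable, battery_capacity, eta=0.9):
    """Hour-by-hour heuristic dispatch sketch: returns the
    loss-of-power-supply probability (LPSP) over the horizon."""
    soc = 0.0      # battery state of charge, starts empty
    unmet = 0.0    # energy demand that could not be served
    total = 0.0
    for load, gen in zip(demand, renewable):
        total += load
        balance = gen - load
        if balance >= 0:                       # surplus -> charge battery
            soc = min(battery_capacity, soc + balance * eta)
        else:                                  # deficit -> discharge
            draw = min(soc, -balance)
            soc -= draw
            unmet += (-balance) - draw         # remainder is lost load
    return unmet / total

# Toy 6-hour profile (hypothetical kWh values)
demand = [3, 3, 4, 5, 4, 3]
solar_wind = [5, 4, 2, 2, 3, 6]
print(round(simulate_hps(demand, solar_wind, battery_capacity=4.0), 3))
```

In a full design loop, this simulation would be repeated over a typical year for each candidate PV/WG/WB/battery combination, keeping the configurations whose LPSP, emissions and cost meet the targets.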
NASA Astrophysics Data System (ADS)
Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.
2016-12-01
Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes at the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach.
Such an approach could be coupled to more detailed models of hillslope processes in future to derive integrated models of hillslope production and fluvial transport processes, which is particularly useful to identify sediment provenance in poorly monitored river basins.
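The inverse Monte Carlo screening described above can be sketched in a few lines. This is a hypothetical toy model, not the CASCADE code: the grain-size range, the three-reach capacity profile, and the acceptance tolerance are all illustrative assumptions.

```python
import random

def min_downstream_capacity(capacities):
    # Supply attributable to a source is capped by the smallest
    # transport capacity encountered along its downstream path.
    return min(capacities)

def inverse_monte_carlo(n_runs, observed_flux, tol, seed=0):
    # Draw random source grain sizes, infer the implied supply, and
    # keep only realizations consistent with the sedimentary record.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_runs):
        d50 = rng.uniform(0.1, 64.0)  # source grain size in mm (toy range)
        # Toy capacity profile: capacity decreases with grain size and
        # varies across three downstream reaches.
        capacities = [100.0 / d50, 80.0 / d50, 120.0 / d50]
        supply = min_downstream_capacity(capacities)
        if abs(supply - observed_flux) <= tol:
            accepted.append((d50, supply))
    return accepted
```

As in the study, only a small fraction of the random initializations survives the comparison against the record, and it is that surviving subset that carries information about source properties.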
On the use of high-gradient magnetic force field in capturing airborne particles
Cheng, Mengdawn; Murphy, Bart L.; Moon, Ji Won; ...
2018-06-01
Airborne particles in the environment are generally smaller than a couple of microns. The use of magnetic force to collect aerosol particles has therefore not been as popular as other means. Billions of airborne particles emitted by a host of man-made sources are smaller than 1 µm and possess some magnetic susceptibility. We are thus interested in the use of high-gradient magnetic collection to extract the magnetic fraction of an aerosol population. In this study, we report that the magnetic force is the dominant force in the collection of ferromagnetic particles of mobility-equivalent size larger than or equal to 50 nm in a high-gradient permanent-magnet aerosol collector, while the diffusiophoretic force is responsible for particles smaller than 10 nm. Both forces compete for particles between these two sizes in the magnetic aerosol collector designed for this study. To enable effective collection of aerosol particles across the entire size spectrum, from a few nanometers to tens of microns, the ORNL-designed high-gradient magnetic collector would require an engineered matrix, so the matrix design becomes application specific. Irrespective of the collection efficiency, the use of permanent magnets to collect magnetic particles is feasible and also highly selective, because it tunes into the magnetic susceptibility of the particles as well as their size. Lastly, the use of permanent magnets enables the collector to be operated at a minimal power requirement, which is a critical factor in long-term field operation.
Petterson, S; Roser, D; Deere, D
2015-09-01
It is proposed that the next revision of the Australian Drinking Water Guidelines will include 'health-based targets', where the required level of potable water treatment quantitatively relates to the magnitude of source water pathogen concentrations. To quantify likely Cryptosporidium concentrations in southern Australian surface source waters, the databases for 25 metropolitan water supplies with good historical records, representing a range of catchment sizes, land use and climatic regions were mined. The distributions and uncertainty intervals for Cryptosporidium concentrations were characterized for each site. Then, treatment targets were quantified applying the framework recommended in the World Health Organization Guidelines for Drinking-Water Quality 2011. Based on total oocyst concentrations, and not factoring in genotype or physiological state information as it relates to infectivity for humans, the best estimates of the required level of treatment, expressed as log10 reduction values, ranged among the study sites from 1.4 to 6.1 log10. Challenges associated with relying on historical monitoring data for defining drinking water treatment requirements were identified. In addition, the importance of quantitative microbial risk assessment input assumptions on the quantified treatment targets was investigated, highlighting the need for selection of locally appropriate values.
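The WHO-style treatment target described above is essentially a log10 ratio of the source-water concentration to an acceptable finished-water concentration. A minimal sketch; the concentrations in the checks below are illustrative, not the study's values.

```python
import math

def required_lrv(source_conc, target_conc):
    # Required treatment performance as a log10 reduction value (LRV):
    # how many orders of magnitude of oocyst removal/inactivation are
    # needed to bring source water down to the acceptable target.
    return math.log10(source_conc / target_conc)
```

For example, illustrative concentrations of 25 versus 1 oocysts per unit volume imply about 1.4 log10, while 1000 versus 0.001 imply 6.0, spanning the kind of 1.4 to 6.1 log10 range reported among the study sites.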
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutka, Michael S.; Carpenter, Bryce D.; Gehrels, Neil
2017-02-01
Quasi-simultaneous observations of the Flat Spectrum Radio Quasar PKS 2326−502 were carried out in the γ-ray, X-ray, UV, optical, near-infrared, and radio bands. Using these observations, we are able to characterize the spectral energy distribution (SED) of the source during two flaring and one quiescent γ-ray states. These data were used to constrain one-zone leptonic models of the SEDs of each flare and investigate the physical conditions giving rise to them. While modeling one flare required only changes in the electron spectrum compared to the quiescent state, modeling the other flare required changes in both the electron spectrum and the size of the emitting region. These results are consistent with an emerging pattern of two broad classes of flaring states seen in blazars. Type 1 flares are explained by changes solely in the electron distribution, whereas type 2 flares require a change in an additional parameter. This suggests that different flares, even in the same source, may result from different physical conditions or different regions in the jet.
Simmons, Kaitlyn E.; Hoffman, Christy L.
2016-01-01
Long-distance dog transfer programs are a topic of burgeoning interest in the animal welfare community, but little research has focused on such programs. This exploratory study, which surveyed 193 individuals associated with animal shelter and rescue organizations in the United States, evaluated factors that impacted organizations’ decisions to transfer in dogs over long distances (>100 miles) and assessed what criteria were commonly valued by destination organizations. Specifically, we examined the following aspects of long-distance transfer programs: (1) logistics of long-distance dog transfers; (2) factors impacting dog selection; (3) medical requirements; (4) partnerships formed between source and destination organizations; and (5) perceptions of long-distance dog transfer programs by individuals affiliated with the destination organizations. This study revealed that many logistical considerations factor into transfer decisions and the formation of healthy partnerships between source and destination organizations. Participants indicated their organization’s willingness to receive dogs of various sizes, coat colors and ages, but organizations often had restrictions regarding the breeds they would accept. Study findings indicate some organizations have strict quarantine policies and pre-transfer medical requirements, while others have no such requirements. PMID:26848694
Lithium-Ion Batteries for Aerospace Applications
NASA Technical Reports Server (NTRS)
Surampudi, S.; Halpert, G.; Marsh, R. A.; James, R.
1999-01-01
This presentation reviews: (1) the goals and objectives, (2) the NASA and Air Force requirements, (3) the potential near term missions, (4) the management approach, (5) the technical approach, and (6) the program road map. The objectives of the program are to: (1) develop high specific energy and long life lithium ion cells and smart batteries for aerospace and defense applications, (2) establish domestic production sources, and (3) demonstrate technological readiness for various missions. The management approach is to encourage the teaming of universities, R&D organizations, and battery manufacturing companies, to build on existing commercial and government technology, and to develop two sources for manufacturing cells and batteries. The technical approach includes: (1) develop advanced electrode materials and electrolytes to achieve improved low temperature performance and long cycle life, (2) optimize cell design to improve specific energy, cycle life, and safety, (3) establish manufacturing processes to ensure predictable performance, (4) develop aerospace lithium ion cells in various AH sizes and voltages, (5) develop electronics for smart battery management, (6) develop a performance database required for various applications, and (7) demonstrate technology readiness for the various missions. Charts which review the requirements for the Li-ion battery development program are presented.
Dutka, Michael S.; Carpenter, Bryce D.; Ojha, Roopesh; ...
2017-01-30
We present quasi-simultaneous observations of the Flat Spectrum Radio Quasar PKS 2326-502 carried out in the γ-ray, X-ray, UV, optical, near-infrared, and radio bands. Using these observations, we are able to characterize the spectral energy distribution (SED) of the source during two flaring and one quiescent γ-ray states. These data were used to constrain one-zone leptonic models of the SEDs of each flare and investigate the physical conditions giving rise to them. While modeling one flare required only changes in the electron spectrum compared to the quiescent state, modeling the other flare required changes in both the electron spectrum and the size of the emitting region. These results are consistent with an emerging pattern of two broad classes of flaring states seen in blazars. Type 1 flares are explained by changes solely in the electron distribution, whereas type 2 flares require a change in an additional parameter. Finally, this suggests that different flares, even in the same source, may result from different physical conditions or different regions in the jet.
NASA Astrophysics Data System (ADS)
Wason, H.; Herrmann, F. J.; Kumar, R.
2016-12-01
Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step ahead from multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and speeding up acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We reconstruct conventional seismic data from the jittered data and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms.
We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable survey.
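The practical advantage of jittered over fully random subsampling, a bounded maximum gap between consecutive shots, can be shown with a small sketch. The grid spacing and jitter fraction below are illustrative, not survey parameters from the study.

```python
import random

def jittered_shot_times(n_shots, grid_dt, jitter_frac, seed=0):
    # One shot per coarse grid cell, perturbed within +/- jitter_frac
    # of the cell width. Unlike uniform random subsampling, the gap
    # between consecutive shots is bounded by (1 + 2*jitter_frac)*grid_dt.
    rng = random.Random(seed)
    return [(k + 0.5 + rng.uniform(-jitter_frac, jitter_frac)) * grid_dt
            for k in range(n_shots)]
```

With a jitter fraction of 0.25, every gap lies between 0.5 and 1.5 grid intervals, which is exactly the gap control that localized sparsifying transforms need for reliable wavefield reconstruction.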
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzger, Brian D.; Margalit, Ben; Berger, Edo
Subarcsecond localization of the repeating fast radio burst FRB 121102 revealed its coincidence with a dwarf host galaxy and a steady (“quiescent”) nonthermal radio source. We show that the properties of the host galaxy are consistent with those of long-duration gamma-ray bursts (LGRB) and hydrogen-poor superluminous supernovae (SLSNe-I). Both LGRBs and SLSNe-I were previously hypothesized to be powered by the electromagnetic spin-down of newly formed, strongly magnetized neutron stars with millisecond birth rotation periods (“millisecond magnetars”). This motivates considering a scenario whereby the repeated bursts from FRB 121102 originate from a young magnetar remnant embedded within a young hydrogen-poor supernova (SN) remnant. Requirements on the gigahertz free–free optical depth through the expanding SN ejecta (accounting for photoionization by the rotationally powered magnetar nebula), energetic constraints on the bursts, and constraints on the size of the quiescent source all point to an age of less than a few decades. The quiescent radio source can be attributed to synchrotron emission from the shock interaction between the fast outer layer of the supernova ejecta and the surrounding wind of the progenitor star, or it can arise from deeper within the magnetar wind nebula, as outlined in Metzger et al. Alternatively, the radio emission could be an orphan afterglow from an initially off-axis LGRB jet, though this might require the source to be too young. The young age of the source can be tested by searching for a time derivative of the dispersion measure and the predicted fading of the quiescent radio source. We propose future tests of the SLSNe-I/LGRB/FRB connection, such as searches for FRBs from nearby SLSNe-I/LGRBs on timescales of decades after their explosions.
SDTM - SYSTEM DESIGN TRADEOFF MODEL FOR SPACE STATION FREEDOM RELEASE 1.1
NASA Technical Reports Server (NTRS)
Chamberlin, R. G.
1994-01-01
Although extensive knowledge of space station design exists, the information is widely dispersed. The Space Station Freedom Program (SSFP) needs policies and procedures that ensure the use of consistent design objectives throughout its organizational hierarchy. The System Design Tradeoff Model (SDTM) produces information that can be used for this purpose. SDTM is a mathematical model of a set of possible designs for Space Station Freedom. Using the SDTM program, one can find the particular design which provides specified amounts of resources to Freedom's users at the lowest total (or life cycle) cost. One can also compare alternative design concepts by changing the set of possible designs, while holding the specified user services constant, and then comparing costs. Finally, both costs and user services can be varied simultaneously when comparing different designs. SDTM selects its solution from a set of feasible designs. Feasibility constraints include safety considerations, minimum levels of resources required for station users, budget allocation requirements, time limitations, and Congressional mandates. The total, or life cycle, cost includes all of the U.S. costs of the station: design and development, purchase of hardware and software, assembly, and operations throughout its lifetime. The SDTM development team has identified, for a variety of possible space station designs, the subsystems that produce the resources to be modeled. The team has also developed formulas for the cross consumption of resources by other resources, as functions of the amounts of resources produced. SDTM can find the values of station resources, so that subsystem designers can choose new design concepts that further reduce the station's life cycle cost. The fundamental input to SDTM is a set of formulas that describe the subsystems which make up a reference design. Most of the formulas identify how the resources required by each subsystem depend upon the size of the subsystem. 
Some of the formulas describe how the subsystem costs depend on size. The formulas can be complicated and nonlinear (if nonlinearity is needed to describe how designs change with size). SDTM's outputs are amounts of resources, life-cycle costs, and marginal costs. SDTM will run on IBM PC/XTs, ATs, and 100% compatibles with 640K of RAM and at least 3Mb of fixed-disk storage. A printer which can print in 132-column mode is also required, and a mathematics co-processor chip is highly recommended. This code is written in Turbo C 2.0. However, since the developers used a modified version of the proprietary Vitamin C source code library, the complete source code is not available. The executable is provided, along with all non-proprietary source code. This program was developed in 1989.
Characterization of indigenous chicken production systems in Kenya.
Okeno, Tobias O; Kahi, Alexander K; Peters, Kurt J
2012-03-01
Indigenous chicken (IC) and their production systems were characterized to understand how the whole system operates for purposes of identifying threats and opportunities for holistic improvement. A survey involving 594 households was conducted in six counties with the highest population of IC in Kenya using structured questionnaires. Data on IC farmers' management practices were collected and analysed and inbreeding levels calculated based on the effective population size. Indigenous chicken were ranked highest as a source of livestock income by households in medium- to high-potential agricultural areas, but trailed goats in arid and semi-arid areas. The production system practised was mainly low-input and small-scale free range, with mean flock size of 22.40 chickens per household. The mean effective population size was 16.02, translating to high levels of inbreeding (3.12%). Provision for food and cash income were the main reasons for raising IC, whilst high mortality due to diseases, poor nutrition, housing and marketing channels were the major constraints faced by farmers. Management strategies targeting improved healthcare, nutrition and housing require urgent mitigation measures, whilst rural access road network needs to be developed for ease of market accessibility. Sustainable genetic improvement programmes that account for farmers' multiple objectives, market requirements and the production circumstances should be developed for a full realization of IC productivity.
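The reported inbreeding level follows from the standard approximation ΔF = 1/(2Ne); with the mean effective population size of 16.02 this reproduces the 3.12% quoted above. A minimal check:

```python
def inbreeding_rate(effective_population_size):
    # Standard rate-of-inbreeding approximation per generation:
    # dF = 1 / (2 * Ne).
    return 1.0 / (2.0 * effective_population_size)
```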
Controlling free flight of a robotic fly using an onboard vision sensor inspired by insect ocelli
Fuller, Sawyer B.; Karpelson, Michael; Censi, Andrea; Ma, Kevin Y.; Wood, Robert J.
2014-01-01
Scaling a flying robot down to the size of a fly or bee requires advances in manufacturing, sensing and control, and will provide insights into mechanisms used by their biological counterparts. Controlled flight at this scale has previously required external cameras to provide the feedback to regulate the continuous corrective manoeuvres necessary to keep the unstable robot from tumbling. One stabilization mechanism used by flying insects may be to sense the horizon or Sun using the ocelli, a set of three light sensors distinct from the compound eyes. Here, we present an ocelli-inspired visual sensor and use it to stabilize a fly-sized robot. We propose a feedback controller that applies torque in proportion to the angular velocity of the source of light estimated by the ocelli. We demonstrate theoretically and empirically that this is sufficient to stabilize the robot's upright orientation. This constitutes the first known use of onboard sensors at this scale. Dipteran flies use halteres to provide gyroscopic velocity feedback, but it is unknown how other insects such as honeybees stabilize flight without these sensory organs. Our results, using a vehicle of similar size and dynamics to the honeybee, suggest how the ocelli could serve this role. PMID:24942846
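The control law described, torque proportional to the estimated angular velocity of the light source, amounts to rate damping on a bearing estimate. A schematic sketch; the two-sensor bearing estimate and the gain value are illustrative simplifications of the paper's three-sensor ocelli design.

```python
def light_bearing(left, right):
    # Toy bearing estimate from two opposing light sensors: the
    # normalized intensity difference approximates the tilt angle
    # toward or away from the light source.
    return (left - right) / (left + right)

def damping_torque(angle_now, angle_prev, dt, gain):
    # Command a torque opposing the estimated angular velocity of the
    # light source: pure rate damping on the bearing estimate.
    omega = (angle_now - angle_prev) / dt
    return -gain * omega
```

At each control step the robot differentiates successive bearing estimates and applies the opposing torque; no absolute attitude reference is needed, matching the paper's claim that velocity feedback alone suffices to stabilize the upright orientation.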
NASA Astrophysics Data System (ADS)
Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald
2009-08-01
The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2017-11-01
Active cloaking in its basic form requires that the extinction cross-section (or energy efficiency) from a radiating body vanishes. In this analysis, this physical effect is demonstrated for an active cylindrically radiating acoustic source in a non-viscous fluid, undergoing periodic axisymmetric harmonic vibrations near a rigid corner (i.e., quarter-space). The rigorous multipole expansion method in cylindrical coordinates, the method of images, and the addition theorem of cylindrical wave functions are used to derive closed-form mathematical expressions for the radiating, amplification, and extinction cross-sections of the active source. Numerical computations are performed assuming monopole and dipole modal oscillations of the circular source. The results reveal some of the situations where the extinction energy efficiency factor of the active source vanishes depending on its size and location with respect to the rigid corner, thus, achieving total invisibility. Moreover, the extinction energy efficiency factor varies between positive or negative values. These effects also occur for higher-order modal oscillations of the active source. The results find potential applications in the development of acoustic cloaking devices and invisibility in underwater acoustics or other areas.
VizieR Online Data Catalog: VLA-COSMOS 3 GHz Large Project (Smolcic+, 2017)
NASA Astrophysics Data System (ADS)
Smolcic, V.; Novak, M.; Bondi, M.; Ciliegi, P.; Mooley, K. P.; Schinnerer, E.; Zamorani, G.; Navarrete, F.; Bourke, S.; Karim, A.; Vardoulaki, E.; Leslie, S.; Delhaize, J.; Carilli, C. L.; Myers, S. T.; Baran, N.; Delvecchio, I.; Miettinen, O.; Banfield, J.; Balokovic, M.; Bertoldi, F.; Capak, P.; Frail, D. A.; Hallinan, G.; Hao, H.; Herrera Ruiz, N.; Horesh, A.; Ilbert, O.; Intema, H.; Jelic, V.; Klockner, H.-R.; Krpan, J.; Kulkarni, S. R.; McCracken, H.; Laigle, C.; Middleberg, E.; Murphy, E.; Sargent, M.; Scoville, N. Z.; Sheth, K.
2016-10-01
The catalog contains sources selected down to a 5σ (σ~2.3 uJy/beam) threshold. This catalog can be used for statistical analyses, accompanied with the corrections given in the data & catalog release paper. All completeness & bias corrections and source counts presented in the paper were calculated using this sample. The total fraction of spurious sources in the COSMOS 2 sq.deg. is below 2.7% within this catalog. However, an increase of spurious sources of up to 24% at the lowest signal-to-noise ratios warrants a more conservative threshold of S/N>=5.5 for single component sources (MULTI=0). The total fraction of spurious sources in the COSMOS 2 sq.deg. within such a selected sample is below 0.4%, and the fraction of spurious sources is below 3% even at the lowest S/N (=5.5). Catalog Notes: 1. Maximum ID is 10966 although there are 10830 sources. Some IDs were removed by joining them into multi-component sources. 2. Peak surface brightness of sources [uJy/beam] is not reported, but can be obtained by multiplying SNR with RMS. 3. High NPIX usually indicates extended or very bright sources. 4. Reported positional errors on resolved and extended sources should be considered lower limits. 5. Multicomponent sources have errors and S/N column values set to -99.0 Additional data information: Catalog date: 21-Mar-2016 Source extractor: BLOBCAT v1.2 (http://blobcat.sourceforge.net/) Observations: 384 hours, VLA, S-band (2-4GHz), A+C array, 192 pointings Imaging software: CASA v4.2.2 (https://casa.nrao.edu/) Imaging algorithm: Multiscale multifrequency synthesis on single pointings Mosaic size: 30000x30000 pixels (3.3 GB) Pixel size: 0.2x0.2 arcsec2 Median rms noise in the COSMOS 2 sq.deg.: 2.3uJy/beam Beam is circular with FWHM=0.75 arcsec Bandwidth-smearing peak correction: 0% (no corrections applied) Resolved criteria: Sint/Speak>1+6*snr^(-1.44) Total area covered: 2.6 sq.deg. (1 data file).
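Two of the catalog conventions above can be applied directly: peak surface brightness is recoverable as SNR times RMS (catalog note 2), and a source counts as resolved when Sint/Speak exceeds the S/N-dependent envelope. A sketch, with made-up flux values in the checks:

```python
def peak_brightness(snr, rms):
    # Catalog note 2: peak surface brightness [uJy/beam] = SNR * RMS.
    return snr * rms

def is_resolved(s_int, s_peak, snr):
    # Resolved criterion quoted in the catalog information:
    # Sint/Speak > 1 + 6 * snr^(-1.44)
    return s_int / s_peak > 1.0 + 6.0 * snr ** (-1.44)
```

The envelope tightens toward 1 as S/N grows, so a bright source needs only a small integrated-to-peak flux excess to be flagged resolved, while a faint source needs a large one.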
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter
2016-07-26
This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. The use of reactive power control alone to regulate the voltage is not always an optimal solution, since R/X is large in distribution systems. In this paper, the minimum size and the best place of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
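The analytical core of the approach, using the network impedance matrix rather than iterative power flow, rests on the linearized relation ΔV ≈ Z·ΔI. A minimal sketch; the impedance values and injections in the check are illustrative, not taken from the IEEE 14-bus case.

```python
def voltage_deviation(z_row, delta_injections):
    # Linearized voltage change at one bus: the corresponding row of
    # the network impedance matrix dotted with the change in nodal
    # current injections (dV ~ Z @ dI), no recursive power flow needed.
    return sum(z * di for z, di in zip(z_row, delta_injections))
```

A solar injection at one bus can then be offset analytically by choosing battery injections that drive the deviation back inside voltage limits, which is what makes a closed-form sizing and placement solution possible.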
NASA Astrophysics Data System (ADS)
Mazzaracchio, Antonio; Marchetti, Mario
2010-03-01
Implicit ablation and thermal response software was developed to analyse and size charring ablative thermal protection systems for entry vehicles. A statistical monitor integrated into the tool, which uses the Monte Carlo technique, allows a simulation to run over stochastic series. This performs an uncertainty and sensitivity analysis, which estimates the probability of maintaining the temperature of the underlying material within specified requirements. This approach and the associated software are primarily helpful during the preliminary design phases of spacecraft thermal protection systems. They are proposed as an alternative to traditional approaches, such as the Root-Sum-Square method. The developed tool was verified by comparing the results with those from previous work on thermal protection system probabilistic sizing methodologies, which are based on an industry standard high-fidelity ablation and thermal response program. New case studies were analysed to establish thickness margins on sizing heat shields that are currently proposed for vehicles using rigid aeroshells for future aerocapture missions at Neptune, and identifying the major sources of uncertainty in the material response.
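The contrast drawn above between the Monte Carlo approach and the traditional Root-Sum-Square method can be sketched as follows. The uncertainty magnitudes, sample count, and quantile are illustrative assumptions, not values from the tool.

```python
import random

def rss_margin(sigmas, k=3.0):
    # Root-Sum-Square: combine independent 1-sigma uncertainty terms
    # and apply a fixed k-sigma multiplier.
    return k * sum(s * s for s in sigmas) ** 0.5

def monte_carlo_margin(nominal, sigmas, n=20000, quantile=0.999, seed=0):
    # Monte Carlo: sample every uncertainty source, build the output
    # distribution, and read the margin off a high quantile -- this
    # also yields the probability of meeting a temperature requirement
    # and a sensitivity ranking of the sources.
    rng = random.Random(seed)
    samples = sorted(nominal + sum(rng.gauss(0.0, s) for s in sigmas)
                     for _ in range(n))
    return samples[int(quantile * n) - 1] - nominal
```

For Gaussian, independent inputs the two margins are close; the Monte Carlo route pays off when distributions are non-Gaussian or the response is nonlinear, which is the situation in ablator sizing.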
IN SITU MEASUREMENTS OF THE SIZE AND DENSITY OF TITAN AEROSOL ANALOGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoerst, S. M.; Tolbert, M. A., E-mail: sarah.horst@colorado.edu
2013-06-10
The organic haze produced from complex CH{sub 4}/N{sub 2} chemistry in the atmosphere of Titan plays an important role in processes that occur in the atmosphere and on its surface. The haze particles act as condensation nuclei and are therefore involved in Titan's methane hydrological cycle. They also may behave like sediment on Titan's surface and participate in both fluvial and aeolian processes. Models that seek to understand these processes require information about the physical properties of the particles, including their size and density. Although measurements obtained by Cassini-Huygens have placed constraints on the size of the haze particles, their densities remain unknown. We have conducted a series of Titan atmosphere simulation experiments and measured the size, number density, and particle density of Titan aerosol analogs, or tholins, for CH{sub 4} concentrations from 0.01% to 10% using two different energy sources, spark discharge and UV. We find that the densities currently in use by many Titan models are higher than the measured densities of our tholins.
NASA Astrophysics Data System (ADS)
Wimmer, C.; Fantz, U.; Aza, E.; Jovović, J.; Kraus, W.; Mimo, A.; Schiesko, L.
2017-08-01
The Neutral Beam Injection (NBI) system for fusion devices like ITER and, beyond ITER, DEMO requires large scale sources for negative hydrogen ions. BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with the prototype source for the ITER NBI (1/8 the size of the ITER source), dedicated to physical investigations due to its flexible access for diagnostics and exchange of source components. The required amount of negative ions is produced by surface conversion of hydrogen atoms or ions on caesiated surfaces. Several diagnostic tools (Optical Emission Spectroscopy, Cavity Ring-Down Spectroscopy for H-, Langmuir probes, Tunable Diode Laser Absorption Spectroscopy for Cs) allow the determination of plasma parameters in the ion source. Plasma parameters for two modifications of the standard prototype source have been investigated: Firstly, a second Cs oven has been installed in the bottom part of the back plate in addition to the regularly used oven in the top part of the back plate. Evaporation from the top oven only can lead to a vertically asymmetric Cs distribution in front of the plasma grid. Using both ovens, a symmetric Cs distribution can be reached - however, in most cases no significant change of the extracted ion current has been determined for varying Cs symmetry if the source is well-conditioned. Secondly, BATMAN has been equipped with a much larger, racetrack-shaped RF driver (area of 32×58 cm2) instead of the cylindrical RF driver (diameter of 24.5 cm). The main idea is that one racetrack driver could substitute for two cylindrical drivers in larger sources with increased reliability and power efficiency. For the same applied RF power, the electron density is lower in the racetrack driver due to its five times higher volume. The fraction of hydrogen atoms to molecules, however, is at a similar level or even slightly higher, which is a promising result for application in larger sources.
NASA Astrophysics Data System (ADS)
Siebach, K. L.; Baker, M. B.; Grotzinger, J. P.; McLennan, S. M.; Gellert, R.; Thompson, L. M.; Hurowitz, J.
2017-12-01
Mineral distribution patterns in sediments of the Bradbury group in Gale crater, interpreted from observations by the Mars Science Laboratory rover Curiosity, show the importance of transport mechanics in source-to-sink processes on Mars. The Bradbury group comprises basalt-derived mudstones to conglomerates exposed along the modern floor of Gale crater and analyzed along a 9-km traverse of the Curiosity rover. Over 110 bulk chemistry analyses of the rocks were acquired, along with two XRD mineralogical analyses of the mudstone. These rocks are uniquely suited for analysis of source-to-sink processes because they exhibit a wide range of compositions, but (based on multiple chemical weathering proxies) they appear to have experienced negligible cation loss during weathering and erosion. Chemical variations between analyses correlate with sediment grain size, with coarser-grained rocks enriched in the plagioclase components SiO2, Al2O3, and Na2O, and finer-grained rocks enriched in components of mafic minerals, consistent with grain-size sorting of mineral fractions during sediment transport. Further geochemical and mineralogical modeling supports the importance of mineral fractionation: even though the limited XRD data suggest that some fraction (if not all) of the rocks contain clays and an amorphous component, models show that 90% of the compositions measured are consistent with sorting of primary igneous minerals from a plagioclase-phyric subalkaline basalt (i.e., no corrections for cation loss are required). The distribution of K2O, modeled as a potassium feldspar component, is an exception to the major-element trends because it does not correlate with grain size, but has an elevation-dependent signal likely correlated with the introduction of a second source material.
However, the dominant compositional trends within the Bradbury group sedimentary rocks are correlated with grain size and consistent with mineral fractionation of minimally-weathered plagioclase-phyric basalts; the plagioclase phenocrysts settle into coarser deposits and the finer deposits are dominated by mafic minerals.
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2007-07-10
The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge of source scaling at small magnitudes (i.e., mb < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies taken from half-way around the world at inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and then in future years we will apply them both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally, by sufficient stations to give good azimuthal coverage, and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects.
In contrast, coda waves average radiation from all directions so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct wave methods for the best recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time-functions by frequency division. If an earthquake and EGF event pair are able to produce a clear, time-domain source pulse then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than direct S-waves for 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes whereas the event-pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation.
We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
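The EGF spectral-ratio idea described above can be sketched numerically, assuming the standard Brune (omega-square) source model; the moments and corner frequencies below are illustrative values, not results from this study:

```python
import numpy as np

def brune_spectrum(f, moment, fc):
    """Omega-square (Brune) far-field displacement source spectrum."""
    return moment / (1.0 + (f / fc) ** 2)

def spectral_ratio(f, m_big, fc_big, m_small, fc_small):
    """Ratio of a target-event spectrum to an EGF-event spectrum;
    path and site terms cancel when the two events are co-located."""
    return brune_spectrum(f, m_big, fc_big) / brune_spectrum(f, m_small, fc_small)

f = np.logspace(-2, 3, 500)                      # frequency, Hz
ratio = spectral_ratio(f, 1e17, 0.5, 1e14, 8.0)  # illustrative moments (N m) and corners (Hz)
# Low-frequency plateau approaches the moment ratio (1000 here);
# high-frequency plateau approaches the moment ratio times (fc_big / fc_small)**2.
```

Fitting this two-corner ratio shape to observed direct-wave or coda amplitude ratios is what yields the corner frequencies, and hence stress drops, for the event pair.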
Quantifying the source-sink balance and carbohydrate content in three tomato cultivars.
Li, Tao; Heuvelink, Ep; Marcelis, Leo F M
2015-01-01
Supplementary lighting is frequently applied in the winter season for crop production in greenhouses. The effect of supplementary lighting on plant growth depends on the balance between assimilate production in source leaves and the overall capacity of the plants to use assimilates. This study aims to quantify the source-sink balance and carbohydrate content of three tomato cultivars differing in fruit size, and to investigate to what extent the source/sink ratio correlates with the potential fruit size. Cultivars Komeet (large size), Capricia (medium size), and Sunstream (small size, cherry tomato) were grown from 16 August to 21 November, with crop management similar to commercial practice. Supplementary lighting (High Pressure Sodium lamps, photosynthetically active radiation at 1 m below lamps was 162 μmol photons m(-2) s(-1); maximum 10 h per day depending on solar irradiance level) was applied from 19 September onward. Source strength was estimated from total plant growth rate using periodic destructive plant harvests in combination with the crop growth model TOMSIM. Sink strength was estimated from the potential fruit growth rate, which was determined by non-destructively measuring the fruit growth rate at non-limiting assimilate supply, growing only one fruit on each truss. Carbohydrate content in leaves and stems was periodically determined. During the early growth stage, 'Komeet' and 'Capricia' showed sink limitation and 'Sunstream' was close to sink limitation. During this stage reproductive organs had hardly formed or were still small and natural irradiance was high (early September) compared to winter months. Subsequently, during the fully fruiting stage all three cultivars were strongly source-limited as indicated by the low source/sink ratio (average source/sink ratio from 50 days after planting onward was 0.17, 0.22, and 0.33 for 'Komeet,' 'Capricia,' and 'Sunstream,' respectively).
This was further confirmed by the fact that pruning half of the fruits hardly influenced net leaf photosynthesis rates. Carbohydrate content in leaves and stems increased linearly with the source/sink ratio. We conclude that during the early growth stage under high irradiance, tomato plants are sink-limited and that the level of sink limitation differs between cultivars but it is not correlated with their potential fruit size. During the fully fruiting stage tomato plants are source-limited and the extent of source limitation of a cultivar is positively correlated with its potential fruit size.
Flame-vortex interactions imaged in microgravity
NASA Technical Reports Server (NTRS)
Driscoll, James F.; Dahm, Werner J. A.; Sichel, Martin
1995-01-01
The scientific objective is to obtain high quality color-enhanced digital images of a vortex exerting aerodynamic strain on premixed and nonpremixed flames with the complicating effects of buoyancy removed. The images will provide universal (buoyancy free) scaling relations that are required to improve several types of models of turbulent combustion, including KIVA-3, discrete vortex, and large-eddy simulations. The images will be used to help quantify several source terms in the models, including those due to flame stretch, flame-generated vorticity, flame curvature, and preferential diffusion, for a range of vortex sizes and flame conditions. The experiment is an ideal way to study turbulence-chemistry interactions and isolate the effect of vortices of different sizes and strengths in a repeatable manner. A parallel computational effort is being conducted which considers full chemistry and preferential diffusion.
CatSim: a new computer assisted tomography simulation environment
NASA Astrophysics Data System (ADS)
De Man, Bruno; Basu, Samit; Chandra, Naveen; Dunham, Bruce; Edic, Peter; Iatrou, Maria; McOlash, Scott; Sainath, Paavana; Shaughnessy, Charlie; Tower, Brendon; Williams, Eugene
2007-03-01
We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.
Pencil-like mm-size electron beams produced with linear inductive voltage adders
NASA Astrophysics Data System (ADS)
Mazarakis, M. G.; Poukey, J. W.; Rovang, D. C.; Maenchen, J. E.; Cordova, S. R.; Menge, P. R.; Pepping, R.; Bennett, L.; Mikkelson, K.; Smith, D. L.; Halbleib, J.; Stygar, W. A.; Welch, D. R.
1997-02-01
We present the design, analysis, and results of the high brightness electron beam experiments currently under investigation at Sandia National Laboratories. The anticipated beam parameters are the following: energy 12 MeV, current 35-40 kA, rms radius 0.5 mm, and pulse duration 40 ns full width at half-maximum. The accelerator is SABRE, a pulsed linear inductive voltage adder modified to higher impedance, and the electron source is a magnetically immersed foilless electron diode. 20-30 T solenoidal magnets are required to insulate the diode and contain the beam to its extremely small-sized (1 mm) envelope. These experiments are designed to push the technology to produce the highest possible electron current in a submillimeter radius beam. Design, numerical simulations, and experimental results are presented.
An improved radiation metric. [for radiation pressure in strong gravitational fields]
NASA Technical Reports Server (NTRS)
Noerdlinger, P. D.
1976-01-01
An improved radiation metric is obtained in which light rays make a small nonzero angle with the radius, thus representing a source of finite size. Kaufmann's previous solution is criticized. The stabilization of a scatterer near a source of gravitational field and radiation is slightly enhanced for sources of finite size.
NASA Technical Reports Server (NTRS)
Spence, Rodney L.
1993-01-01
The important principles of direct- and heterodyne-detection optical free-space communications are reviewed. Signal-to-noise-ratio (SNR) and bit-error-rate (BER) expressions are derived for both the direct-detection and heterodyne-detection optical receivers. For the heterodyne system, performance degradation resulting from received-signal and local-oscillator-beam misalignment and laser phase noise is analyzed. Determination of interfering background power from local and extended background sources is discussed. The BER performance of direct- and heterodyne-detection optical links in the presence of Rayleigh-distributed random pointing and tracking errors is described. Finally, several optical systems employing Nd:YAG, GaAs, and CO2 laser sources are evaluated and compared to assess their feasibility in providing high-data-rate (10- to 1000-Mbps) Mars-to-Earth communications. It is shown that the root mean square (rms) pointing and tracking accuracy is a critical factor in defining the system transmitting laser-power requirements and telescope size and that, for a given rms error, there is an optimum telescope aperture size that minimizes the required power. The results of the analysis conducted indicate that, barring the achievement of extremely small rms pointing and tracking errors (less than 0.2 microrad), the two most promising types of optical systems are those that use an Nd:YAG laser (lambda = 1.064 microns) and high-order pulse-position modulation (PPM) and direct detection, and those that use a CO2 laser (lambda = 10.6 microns) and phase-shift-keying homodyne modulation and coherent detection. For example, for a PPM order of M = 64 and an rms pointing accuracy of 0.4 microrad, an Nd:YAG system can be used to implement a 100-Mbps Mars link with a 40-cm transmitting telescope, a 20-W laser, and a 10-m receiving photon bucket.
Under the same conditions, a CO2 system would require 3-m transmitting and receiving telescopes and a 32-W laser to implement such a link. Other types of optical systems, such as semiconductor laser systems, are impractical in the presence of large rms pointing errors because of the high power requirements of the 100-Mbps Mars link, even when optimal-size telescopes are used.
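The telescope-size trade discussed above follows from a standard free-space link budget with diffraction-limited antenna gains. A minimal sketch, using illustrative numbers loosely inspired by the Nd:YAG case (20 W laser, 0.4 m transmitter, 10 m photon bucket, a nominal 2.3e11 m Mars-Earth range) rather than the paper's actual link parameters:

```python
import math

def received_power(p_tx_W, d_tx_m, d_rx_m, wavelength_m, range_m,
                   eff_tx=0.5, eff_rx=0.5):
    """Idealized free-space optical link budget: diffraction-limited
    telescope gains G = (pi * D / wavelength)**2 and free-space path loss."""
    g_tx = (math.pi * d_tx_m / wavelength_m) ** 2
    g_rx = (math.pi * d_rx_m / wavelength_m) ** 2
    path_loss = (wavelength_m / (4.0 * math.pi * range_m)) ** 2
    return p_tx_W * eff_tx * g_tx * eff_rx * g_rx * path_loss

p_rx = received_power(20.0, 0.4, 10.0, 1.064e-6, 2.3e11)  # watts at the detector
```

The assumed 0.5 optics efficiencies are placeholders; pointing loss, which the abstract identifies as the critical factor, would multiply this result further.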
NASA Astrophysics Data System (ADS)
Collier, Jordan; Filipovic, Miroslav; Norris, Ray; Chow, Kate; Huynh, Minh; Banfield, Julie; Tothill, Nick; Sirothia, Sandeep Kumar; Shabala, Stanislav
2014-04-01
This proposal is a continuation of an extensive project (the core of Collier's PhD) to explore the earliest stages of AGN formation, using Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources. Both are widely believed to represent the earliest stages of radio-loud AGN evolution, with GPS sources preceding CSS sources. In this project, we plan to (a) test this hypothesis, (b) place GPS and CSS sources into an evolutionary sequence with a number of other young AGN candidates, and (c) search for evidence of the evolving accretion mode. We will do this using high-resolution radio observations, with a number of other multiwavelength age indicators, of a carefully selected complete faint sample of 80 GPS/CSS sources. Analysis of the C2730 ELAIS-S1 data shows that we have so far met our goals, resolving the jets of 10/49 sources, and measuring accurate spectral indices from 0.843-10 GHz. This particular proposal is to almost triple the sample size by observing an additional 80 GPS/CSS sources in the Chandra Deep Field South (arguably the best-studied field) and allow a turnover frequency - linear size relation to be derived at >10-sigma. Sources found to be unresolved in our final sample will subsequently be observed with VLBI. Comparing those sources resolved with ATCA to the more compact sources resolved with VLBI will give a distribution of source sizes, helping to answer the question of whether all GPS/CSS sources grow to larger sizes.
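The spectral indices mentioned above are typically computed as two-point power-law slopes between flux-density measurements; a minimal sketch, with made-up flux densities at the proposal's 0.843-10 GHz frequencies, illustrating the peaked GPS spectral shape:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point radio spectral index alpha, defined by S proportional to nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# A rising (inverted) index below the turnover and a steep negative index
# above it is the classic GPS signature (flux densities here are invented).
alpha_low = spectral_index(40.0, 0.843, 60.0, 2.1)   # below the spectral peak
alpha_high = spectral_index(60.0, 2.1, 20.0, 10.0)   # above the spectral peak
```

Measuring the turnover frequency this way for each source, together with the resolved angular size, is what feeds the turnover frequency - linear size relation the proposal aims to derive.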
Venturi vacuum systems for hypobaric chamber operations.
Robinson, R; Swaby, G; Sutton, T; Fife, C; Powell, M; Butler, B D
1997-11-01
Physiological studies of the effects of high altitude on man often require the use of a hypobaric chamber to simulate the reduced ambient pressures. Typical "altitude" chambers in use today require complex mechanical vacuum systems to evacuate the chamber air, either directly or via a reservoir system. Use of these pumps adds to the cost of both chamber procurement and maintenance, and service of these pumps requires trained support personnel and regular upkeep. In this report we describe the use of venturi vacuum pumps to perform the function of mechanical vacuum pumps for human and experimental altitude chamber operations. Advantages of the venturi pumps include their relatively low procurement cost, small size and light weight, ease of installation and plumbing, lack of moving parts, and independence from electrical power sources, fossil fuels, and lubricants. Conversion of three hyperbaric chambers to combined hyper/hypobaric use is described.
Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi
2015-12-15
Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in a Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. Crustal elements of Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles (<1.1 μm). These toxic metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and air mass back trajectory model offered a robust technique for distinguishing the main sources for airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those in other Asian sites, the implementation of strict environmental standards in China is required to reduce the amounts of these hazardous pollutants released into the atmosphere.
Copyright © 2015 Elsevier B.V. All rights reserved.
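The >100-fold crustal enrichment quoted above for the toxic metals is conventionally computed as an enrichment factor (EF) against a crustal reference element such as Al; the concentrations below are invented purely for illustration, not taken from the study:

```python
def enrichment_factor(x_aerosol, ref_aerosol, x_crust, ref_crust):
    """Crustal enrichment factor: (X/ref) in the aerosol divided by (X/ref)
    in the crust. EF near 1 suggests a crustal (soil dust) origin, while
    EF >> 10 points to anthropogenic sources such as combustion or smelting."""
    return (x_aerosol / ref_aerosol) / (x_crust / ref_crust)

# Illustrative: Pb and Al in fine aerosol (ng m-3) versus assumed
# upper-crust abundances (mg kg-1), with Al as the reference element
ef_pb = enrichment_factor(120.0, 800.0, 17.0, 80400.0)
```

Computing EF per size bin is one way such studies separate the crustal coarse mode from the anthropogenic submicron mode.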
Beamline 10.3.2 at ALS: a hard X-ray microprobe for environmental and materials sciences.
Marcus, Matthew A; MacDowell, Alastair A; Celestre, Richard; Manceau, Alain; Miller, Tom; Padmore, Howard A; Sublett, Robert E
2004-05-01
Beamline 10.3.2 at the ALS is a bend-magnet line designed mostly for work on environmental problems involving heavy-metal speciation and location. It offers a unique combination of X-ray fluorescence mapping, X-ray microspectroscopy and micro-X-ray diffraction. The optics allow the user to trade spot size for flux in a size range of 5-17 microm in an energy range of 3-17 keV. The focusing uses a Kirkpatrick-Baez mirror pair to image a variable-size virtual source onto the sample. Thus, the user can reduce the effective size of the source, thereby reducing the spot size on the sample, at the cost of flux. This decoupling from the actual source also allows for some independence from source motion. The X-ray fluorescence mapping is performed with a continuously scanning stage which avoids the time overhead incurred by step-and-repeat mapping schemes. The special features of this beamline are described, and some scientific results shown.
Experimental Study of Radiation Efficiency from an Ingested Source inside a Human Body Model*.
Chan, Yawen; -H Meng, Max; Wu, K-L; Wang, Xiaona
2005-01-01
The attenuation of the human body trunk over the frequency range 100 MHz to 6 GHz from an internal source was estimated using a simplified experimental model. Antennas were placed in the model, which was filled with distilled water, 0.9% NaCl saline solution, and porcine body tissue alternately to determine the attenuation of the system. Saline has greater attenuation than water due to its higher conductivity, while porcine body tissue has attenuation bounded by saline solution and water. Estimated attenuation values at the four ISM bands, 434 MHz, 915 MHz, 2.45 GHz and 5.8 GHz, are given, and all of these bands satisfied the safety and sensitivity requirements of a biomedical telemetry system. 915 MHz and 2.45 GHz are good choices for the wireless link because they allow a relatively larger electrical size of RF components such as antennas. In addition, with the growth in wireless LAN and Bluetooth technology, miniaturized antennas, camera modules, and other RF devices have been developed which can be employed in biomedical ingested or implanted devices. This paper gives a reference for the attenuation values of a human body trunk of average size. It should be noted that the attenuation values can differ with body size and body composition, and therefore the values in this paper serve as a reference only.
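Attenuation figures like those above are conventionally reported in decibels from the ratio of injected to received power; a trivial sketch with invented power levels (the study's actual values are not reproduced here):

```python
import math

def attenuation_db(p_in_w, p_out_w):
    """Path attenuation in decibels between injected and received RF power."""
    return 10.0 * math.log10(p_in_w / p_out_w)

# Illustrative: 1 mW radiated by the ingested antenna and 1 uW reaching
# the external antenna corresponds to 30 dB of body attenuation.
loss_db = attenuation_db(1e-3, 1e-6)
```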
Cooney, Daniel J; Hickey, Anthony J
2008-01-01
The influence of diesel exhaust particles (DEP) on the lungs and heart is currently a topic of great interest in inhalation toxicology. Epidemiological data and animal studies have implicated airborne particulate matter and DEP in increased morbidity and mortality due to a number of cardiopulmonary diseases including asthma, chronic obstructive pulmonary disorder, and lung cancer. The pathogeneses of these diseases are being studied using animal models and cell culture techniques. Real-time exposures to freshly combusted diesel fuel are complex and require significant infrastructure including engine operations, dilution air, and monitoring and control of gases. A method of generating DEP aerosols from a bulk source in an aerodynamic size range similar to atmospheric DEP would be a desirable and useful alternative. Metered dose inhaler technology was adopted to generate aerosols from suspensions of DEP in the propellant hydrofluoroalkane 134a. Inertial impaction data indicated that the particle size distributions of the generated aerosols were trimodal, with count median aerodynamic diameters less than 100 nm. Scanning electron microscopy of deposited particles showed tightly aggregated particles, as would be expected from an evaporative process. Chemical analysis indicated that there were no major changes in the mass proportion of 2 specific aromatic hydrocarbons (benzo[a]pyrene and benzo[k]fluoranthene) in the particles resulting from the aerosolization process. PMID:19337412
NASA Astrophysics Data System (ADS)
Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.
2018-05-01
Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Because of the very small time steps used (on the order of the inverse plasma frequency) and the fine mesh size, the computational requirements can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has increased strongly. Together with a massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large RF-driven negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source.
The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced as well as the coupling to codes describing the whole source (PIC codes or fluid codes). Presented and discussed are different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion as well as selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. The recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.
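The time-step constraint mentioned above (steps on the order of the inverse plasma frequency) can be made concrete with a short sketch; the density below is an assumed, ion-source-like value, and the 0.2 stability margin is a common rule of thumb for explicit PIC schemes rather than a figure from this work:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F m-1
Q_E = 1.602176634e-19     # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def plasma_frequency(n_e_m3):
    """Electron plasma (angular) frequency omega_pe = sqrt(n e^2 / (eps0 m_e))."""
    return math.sqrt(n_e_m3 * Q_E ** 2 / (EPS0 * M_E))

# Explicit PIC schemes typically require dt * omega_pe to stay well below ~0.2.
n_e = 1e18                         # m-3, assumed electron density
omega_pe = plasma_frequency(n_e)   # rad s-1
dt_max = 0.2 / omega_pe            # picosecond-scale time step
```

Picosecond-scale steps over centimeter-scale 3D domains illustrate why massive parallelization was needed before full extraction-region simulations became feasible.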
Challenges for Synchrotron X-Ray Optics
NASA Astrophysics Data System (ADS)
Freund, Andreas K.
2002-12-01
It is the task of x-ray optics to adapt the raw beam generated by modern sources such as synchrotron storage rings to a great variety of experimental requirements in terms of intensity, spot size, polarization and other parameters. The very high quality of synchrotron radiation (source size of a few microns and beam divergence of a few micro-radians) and the extreme x-ray flux (power of several hundred Watts in a few square mm) make this task quite difficult. In particular the heat load aspect is very important in the conditioning process of the brute x-ray power to make it suitable for being used on the experimental stations. Cryogenically cooled silicon crystals and water-cooled diamond crystals can presently fulfill this task, but limits will soon be reached and new schemes and materials must be envisioned. A major tendency of instrument improvement has always been to concentrate more photons into a smaller spot utilizing a whole variety of focusing devices such as Fresnel zone plates, refractive lenses and systems based on bent surfaces, for example, Kirkpatrick-Baez systems. Apart from the resistance of the sample, the ultimate limits are determined by the source size and strength on one side, by materials properties, cooling, mounting and bending schemes on the other side, and fundamentally by the diffraction process. There is also the important aspect of coherence that can be both a nuisance and a blessing for the experiments, in particular for imaging techniques. Its conservation puts additional constraints on the quality of the optical elements. The overview of the present challenges includes the properties of present x-ray sources and also mentions aspects of future sources such as the "ultimate" storage ring and free-electron lasers.
These challenges range from the thermal performances of monochromators to the surface quality of mirrors, from coherence preservation of modern multilayers to short pulse preservation by crystals, and from micro- and nano-focusing techniques to the accuracy and stability of mechanical supports.
Source-water susceptibility assessment in Texas—Approach and methodology
Ulery, Randy L.; Meyer, John E.; Andren, Robert W.; Newson, Jeremy K.
2011-01-01
Public water systems provide potable water for the public's use. The Safe Drinking Water Act amendments of 1996 required States to prepare a source-water susceptibility assessment (SWSA) for each public water system (PWS). States were required to determine the source of water for each PWS, the origin of any contaminant of concern (COC) monitored or to be monitored, and the susceptibility of the public water system to COC exposure, to protect public water supplies from contamination. In Texas, the Texas Commission on Environmental Quality (TCEQ) was responsible for preparing SWSAs for the more than 6,000 public water systems, representing more than 18,000 surface-water intakes or groundwater wells. The U.S. Geological Survey (USGS) worked in cooperation with TCEQ to develop the Source Water Assessment Program (SWAP) approach and methodology. Texas' SWAP meets all requirements of the Safe Drinking Water Act and ultimately provides the TCEQ with a comprehensive tool for protection of public water systems from contamination by up to 247 individual COCs. TCEQ staff identified both the list of contaminants to be assessed and contaminant threshold values (THR) to be applied. COCs were chosen because they were regulated contaminants, were expected to become regulated contaminants in the near future, or were unregulated but thought to represent long-term health concerns. THRs were based on maximum contaminant levels from U.S. Environmental Protection Agency (EPA)'s National Primary Drinking Water Regulations. For reporting purposes, COCs were grouped into seven contaminant groups: inorganic compounds, volatile organic compounds, synthetic organic compounds, radiochemicals, disinfection byproducts, microbial organisms, and physical properties. 
Expanding on the TCEQ's definition of susceptibility, subject-matter expert working groups formulated the SWSA approach based on assumptions that natural processes and human activities contribute COCs in quantities that vary in space and time; that increased levels of COC-producing activities within a source area may increase susceptibility to COC exposure; and that natural and manmade conditions within the source area may increase, decrease, or have no observable effect on susceptibility to COC exposure. Incorporating these assumptions, eight SWSA components were defined: identification, delineation, intrinsic susceptibility, point- and nonpoint-source susceptibility, contaminant occurrence, area-of-primary influence, and summary components. Spatial datasets were prepared to represent approximately 170 attributes or indicators used in the assessment process. These primarily were static datasets (approximately 46 gigabytes (GB) in size). Selected datasets such as PWS surface-water-intake or groundwater-well locations and potential source of contamination (PSOC) locations were updated weekly. Completed assessments were archived, and that database is approximately 10 GB in size. SWSA components currently (2011) are implemented in the Source Water Assessment Program-Decision Support System (SWAP-DSS) computer software, specifically developed to produce SWSAs. On execution of the software, the components work to identify the source of water for the well or intake, assess intrinsic susceptibility of the water-supply source, assess susceptibility to contamination with COCs from point and nonpoint sources, identify any previous detections of COCs from existing water-quality databases, and summarize the results.
Each water-supply source's susceptibility is assessed, source results are weighted by source capacity (when a PWS has multiple sources), and results are combined into a single SWSA for the PWS. SWSA reports are generated using the software; during 2003, more than 6,000 reports were provided to PWS operators and the public. The ability to produce detailed or summary reports for individual sources, and detailed or summary reports for a PWS, by COC or COC group was a unique capability of SWAP-DSS. In 2004, the TCEQ began a rotating schedule for SWSAs wherein one-third of PWSs statewide would be assessed annually, or sooner if protection-program activities deemed it necessary, and that schedule has continued to the present. Cooperative efforts by the TCEQ and the USGS for SWAP software maintenance and enhancements ended in 2011, with the TCEQ assuming responsibility for all tasks.
Modelling of caesium dynamics in the negative ion sources at BATMAN and ELISE
NASA Astrophysics Data System (ADS)
Mimo, A.; Wimmer, C.; Wünderlich, D.; Fantz, U.
2017-08-01
The knowledge of Cs dynamics in negative hydrogen ion sources is a primary issue in achieving the ITER requirements for the Neutral Beam Injection (NBI) systems, i.e. one-hour operation with an accelerated ion current of 40 A of D- and a ratio between negative ions and co-extracted electrons below one. Production of negative ions is mostly achieved by conversion of hydrogen/deuterium atoms on a converter surface, which is caesiated in order to reduce the work function and increase the conversion efficiency. Understanding the Cs transport and redistribution mechanisms inside the source is necessary for achieving high performance. Cs dynamics was therefore investigated by means of numerical simulations performed with the Monte Carlo transport code CsFlow3D. Simulations of the prototype source (1/8 of the ITER NBI source size) have shown that the plasma distribution inside the source has the major effect on Cs dynamics during the pulse: asymmetry of the plasma parameters leads to asymmetry in the Cs distribution in front of the plasma grid. The simulated time traces and the general simulation results are in agreement with the experimental measurements. Simulations performed for the ELISE testbed (half of the ITER NBI source size) have shown an effect of the vacuum-phase duration on the amount and stability of Cs during the pulse. The sputtering of Cs due to back-streaming ions was reproduced by the simulations and is in agreement with the experimental observations; this can become a critical issue during long pulses, especially in the case of continuous extraction as foreseen for ITER. These results and the acquired knowledge of Cs dynamics will be useful for better management of Cs and thus for reducing its consumption, in the direction of the demonstration fusion power plant DEMO.
The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.
ERIC Educational Resources Information Center
McCain, Thomas A.; Wakshlag, Jacob J.
The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…
Gallagher, Ruth W; Polanin, Joshua R
2015-02-01
Increasing professional nurses' and nursing students' cultural competence has been identified as one way to decrease the disparity of care for vulnerable and minority groups, but the effectiveness of training programs to increase competence remains equivocal. The purpose of this project is to synthesize educational interventions designed to increase cultural competence in professional nurses and nursing students. A systematic review and meta-analysis was conducted to synthesize all existing studies on increasing cultural competence. Comprehensive search and screening procedures were conducted to locate all cultural competence interventions implemented with professional nurses and nursing students. Two independent researchers screened and coded the included studies. Effect sizes were calculated for each study, and a random-effects meta-analysis was conducted. A total of 25 studies were included in the review. Two independent syntheses were conducted given the disparate nature of the effect size metrics. For the synthesis of treatment-control designed studies, the results revealed a non-statistically significant increase in cultural competence (g¯=.38, 95% CI: -.05, .79, p=.08). Moderator analyses indicated significant variation as a function of the measurements, participant types, and funding source. The pretest-posttest effect size synthesis revealed a significant increase in overall cultural competence (g¯=.45, 95% CI: .24, .66, p<.01). Moderator analyses indicated, however, that the effect sizes varied as functions of the measurement, funding source, and publication type. Interventions to increase cultural competence have shown varied effectiveness. Greater research is required to improve these interventions and promote cultural competence. Copyright © 2014 Elsevier Ltd. All rights reserved.
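As a concrete illustration of the random-effects synthesis used in reviews like this one, the sketch below implements DerSimonian-Laird pooling in Python. It is not the authors' analysis code, and the example effect sizes and variances are hypothetical.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.

    Returns the pooled effect and an approximate 95% confidence interval.
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sw  # fixed-effect mean
    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight with tau^2 added to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the studies are homogeneous (Q below its degrees of freedom), tau^2 truncates to zero and the result coincides with the fixed-effect estimate.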
NASA Astrophysics Data System (ADS)
Bello, Dhimiter; Wardle, Brian L.; Yamamoto, Namiko; Guzman deVilloria, Roberto; Garcia, Enrique J.; Hart, Anastasios J.; Ahn, Kwangseog; Ellenbecker, Michael J.; Hallock, Marilyn
2009-01-01
This study investigated airborne exposures to nanoscale particles and fibers generated during dry and wet abrasive machining of two three-phase advanced composite systems containing carbon nanotubes (CNTs), micron-diameter continuous fibers (carbon or alumina), and thermoset polymer matrices. Exposures were evaluated with a suite of complementary instruments, including real-time particle number concentration and size distribution (0.005-20 μm), electron microscopy, and integrated sampling for fibers and respirable particulate at the source and breathing zone of the operator. Wet cutting, the usual procedure for such composites, did not produce exposures significantly different from background, whereas dry cutting, without any emissions controls, provided a worst-case exposure and is the focus of this article. Overall particle release levels, peaks in the size distribution of the particles, and surface area of released particles (including size distribution) were not significantly different for composites with and without CNTs. The majority of released particle surface area originated from the respirable (1-10 μm) fraction, whereas the nano fraction contributed 10% of the surface area. CNTs, either individual or in bundles, were not observed in extensive electron microscopy of collected samples. The peak mean number concentration for dry cutting was composite dependent and varied over an order of magnitude, with the highest values, >1 × 10^6 particles cm^-3, at the source for thicker laminates. Concentrations of respirable fibers for dry cutting at the source ranged from 2 to 4 fibers cm^-3 depending on the composite type. Further investigation is required and underway to determine the effects of various exposure determinants, such as specimen and tool geometry, on particle release and the effectiveness of controls.
Transportation and utilization of aggregates for road construction
NASA Astrophysics Data System (ADS)
Fladvad, Marit; Wigum, Børge Johannes; Aurstad, Joralf
2017-04-01
Road construction relies on non-renewable aggregate resources as the main construction material. Sources of high-quality aggregate are scattered, and requirements for aggregate quality can cause long transport distances between quarry and road construction site. In European countries, the average aggregate consumption per capita is 5 tonnes per year (European Aggregates Association, 2016), while the corresponding figure for Norway is 11 tonnes (Neeb, 2015). Half the Norwegian aggregate production (sand, gravel and crushed rock) is used for road construction. In Norway, aggregate resources have been considered abundant. However, stricter requirements for aggregate quality and increased concern for sustainability and environmental issues have spurred focus on reducing transport lengths through better utilization of local aggregate materials. In this research project, information about pavement design and aggregate quality requirements was gathered from a questionnaire sent to selected experts from the World Road Association (PIARC), the European Committee for Standardization (CEN), and the Nordic Road Association (NVF). The gathered data were compared to identify differences and similarities in aggregate use among the participating countries. Further, the data were compared to known data from Norway regarding (1) the amount of aggregates required for a road structure and (2) aggregate transport lengths and related costs. A total of 18 countries participated in the survey, represented by either road authorities, research institutions, or contractors. There are large variations in practice for aggregate use among the represented countries, and the selection of countries is sufficient to illustrate a variety of pavement designs, aggregate sizes, and quality requirements for road construction. There are considerable differences in both pavement thickness and aggregate sizes used in the studied countries.
Total thicknesses for pavement structures vary from 220 mm to 2400 mm, and aggregate sizes for unbound materials vary from 19 mm to 600 mm. These results imply great differences in the amount of aggregate transported to road construction sites. Another important factor is the distance between the construction sites and the aggregate sources. For many projects, especially in countries that need to import aggregates, aggregate transport will have a considerable impact on the sustainability assessment of the construction projects. If pavement design can be altered with the goal of achieving better utilization of local aggregates through adaptation to the quality of local aggregates, aggregate transportation can be reduced. Reduced transport will alter the economic balance of a project, allowing reallocation of costs from transport to e.g. improved aggregate production. The overall result can be more profitable construction projects and a more sustainable development of road structures.
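The spread in pavement thickness reported above translates directly into aggregate demand per kilometre of road. A back-of-envelope sketch; the 8 m carriageway width and 2.2 t/m3 in-place bulk density are assumptions for illustration, not figures from the study.

```python
def aggregate_tonnes_per_km(thickness_m, width_m, density_t_per_m3=2.2):
    """Rough mass of aggregate per kilometre of road: the volume of the
    pavement layer (thickness x width x 1000 m) times an assumed bulk density."""
    return thickness_m * width_m * 1000.0 * density_t_per_m3
```

At the extremes of the reported range, a 220 mm structure needs on the order of 4,000 t/km while a 2400 mm structure needs over 40,000 t/km, an order-of-magnitude difference in haulage.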
Interactive archives of scientific data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.
1994-01-01
A focus on qualitative methods of presenting data shows that visualization provides a mechanism for browsing independent of the source of data and is an effective alternative to traditional image-based browsing of image data. To be generally applicable, however, such visualization methods must be based upon an underlying data model with support for a broad class of data types and structures. Interactive, near-real-time browsing for data sets of interesting size today requires a browse server of considerable power. A symmetric multiprocessor with very high internal and external bandwidth demonstrates the feasibility of this concept. Although this technology is likely to be available on the desktop within a few years, the increase in the size and complexity of archived data will continue to exceed the capacity of 'workstation' systems. Hence, a higher class of performance, especially in bandwidth, will generally be required for on-demand browsing. A few experiments with differing digital compression techniques indicate that an MPEG-1 implementation within the context of a high-performance browse server (i.e., parallelized) is a practical method of converting a browse product to a form suitable for network or CD-ROM distribution.
2017-01-01
Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which range from 1,000 to 10,000 μm2 in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, whether this mechanism is sufficient to explain cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing the ploidy profile and cell size distribution using a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model predicting the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profiles in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the assumption that cell size is determined according to ploidy level to examine cell size distribution. This analysis revealed that cell size is enlarged exponentially, by a factor of 1.5 with every endoreduplication round. Because this theoretical simulation successfully recapitulated experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue.
More generally, this study contributes to a quantitative understanding whereby stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
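The two ingredients identified above, Poissonian endocycle counts and a 1.5-fold size boost per round, can be combined in a short simulation sketch. This is illustrative Python, not the authors' model code; the rate constant, elapsed time, and base cell size are hypothetical.

```python
import math
import random

def simulate_cells(n_cells, rate, t, base_size=1000.0, boost=1.5, rng=None):
    """Sample endoreduplication counts from a Poisson process with mean
    rate * t, then map each count k to a cell size base_size * boost**k."""
    rng = rng or random.Random(0)
    lam = rate * t
    sizes = []
    for _ in range(n_cells):
        # Knuth's Poisson sampler (adequate for small lambda)
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        sizes.append(base_size * boost ** k)
    return sizes
```

With rate * t = 2, the simulated sizes span roughly an order of magnitude, mirroring the 1,000 to 10,000 μm2 range reported for the epidermal cells.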
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Lan; Hill, K. W.; Bitter, M.
Here, a high spatial resolution of a few μm is often required for probing small-scale high-energy-density plasmas using high resolution x-ray imaging spectroscopy. This resolution can be achieved by adjusting system magnification to overcome the inherent limitation of the detector pixel size. Laboratory experiments investigating the relation between spatial resolution and system magnification for a spherical crystal spectrometer are presented. Tungsten Lβ2 x rays from a tungsten-target micro-focus x-ray tube were diffracted by a Ge 440 crystal, which was spherically bent to a radius of 223 mm, and imaged onto an x-ray CCD with 13-μm pixel size. The source-to-crystal (p) and crystal-to-detector (q) distances were varied to produce spatial magnifications (M = q/p) ranging from 2 to 10. The inferred instrumental spatial width decreases with increasing system magnification M. However, the experimental measurement at each M is larger than the theoretical value of pixel size divided by M. Future work will focus on investigating possible broadening mechanisms that limit the spatial resolution.
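The pixel-size limit that the experiment compares against can be written down directly: at magnification M = q/p, one detector pixel back-projects to pixel/M at the object plane. A minimal sketch using the 13-μm pixel from the abstract; the specific p and q values below are hypothetical numbers chosen to give M = 10.

```python
def detector_limited_resolution(pixel_um, p_mm, q_mm):
    """Pixel-size contribution to object-plane spatial resolution for an
    imaging system with magnification M = q / p: a feature of size
    pixel / M at the object maps onto one detector pixel."""
    M = q_mm / p_mm
    return M, pixel_um / M
```

For p = 100 mm and q = 1000 mm this gives M = 10 and a 1.3-μm pixel-limited width; the measured instrumental widths reported above exceed this ideal value, which motivates the search for additional broadening mechanisms.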
Design of small Stirling dynamic isotope power system for robotic space missions
NASA Technical Reports Server (NTRS)
Bents, D. J.; Schreiber, J. G.; Withrow, C. A.; Mckissock, B. I.; Schmitz, P. C.
1992-01-01
Design of a multihundred-watt Dynamic Isotope Power System (DIPS) based on the U.S. Department of Energy (DOE) General Purpose Heat Source (GPHS) and small (multihundred-watt) free-piston Stirling engine (FPSE) technology is being pursued as a potential lower cost alternative to radioisotope thermoelectric generators (RTG's). The design is targeted at the power needs of future unmanned deep space and planetary surface exploration missions ranging from scientific probes to Space Exploration Initiative precursor missions. Power levels for these missions are less than a kilowatt. Unlike previous DIPS designs, which were based on turbomachinery conversion (e.g. Brayton), this small Stirling DIPS can be advantageously scaled down to multihundred-watt unit size while preserving size and mass competitiveness with RTG's. Preliminary characterization of units in the output power range of 200-600 We indicates that, on an electrical-watt basis, the GPHS/small Stirling DIPS will be roughly equivalent to an advanced RTG in size and mass but will require less than a third of the isotope inventory.
Micro- and nano-hydroxyapatite as active reinforcement for soft biocomposites.
Munarin, F; Petrini, P; Gentilini, R; Pillai, R S; Dirè, S; Tanzi, M C; Sglavo, V M
2015-01-01
Pectin-based biocomposite hydrogels were produced by internal gelation, using different hydroxyapatite (HA) powders from a commercial source or synthesized by the wet chemical method. HA possesses the double functionality of cross-linking agent and inorganic reinforcement. The mineralogical composition, grain size, specific surface area, and microstructure of the hydroxyapatite powders are shown to strongly influence the properties of the biocomposites. Specifically, the grain size and specific surface area of the HA powders are strictly correlated with the gelling time and rheological properties of the hydrogels at room temperature. Pectin pH is also significant for the formation of ionic cross-links and therefore for the stability of the hydrogels at higher temperatures. The obtained results point out that micrometric-size hydroxyapatite can be proposed for applications which require rapid gelling kinetics and improved mechanical properties; conversely, the nanometric hydroxyapatite synthesized in the present work seems the best choice to obtain homogeneous hydrogels with more easily controlled gelling kinetics. Copyright © 2014 Elsevier B.V. All rights reserved.
The site, size, spatial stability, and energetics of an X-ray flare kernel
NASA Technical Reports Server (NTRS)
Petrasso, R.; Gerassimenko, M.; Nolte, J.
1979-01-01
The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10^24 ergs/sec is minimally required.
Multifrequency observations of a solar microwave burst with two-dimensional spatial resolution
NASA Technical Reports Server (NTRS)
Gary, Dale E.; Hurford, G. J.
1990-01-01
Frequency-agile interferometry observations using three baselines and the technique of frequency synthesis were used to obtain two-dimensional positions of multiple microwave sources at several frequency ranges in a solar flare. Source size and brightness temperature spectra were obtained near the peak of the burst. The size spectrum shows that the source size decreases rapidly with increasing frequency, but the brightness temperature spectrum can be well-fitted by gyrosynchrotron emission from a nonthermal distribution of electrons with power-law index of 4.8. The spatial structure of the burst showed several characteristics in common with primary/secondary bursts discussed by Nakajima et al. (1985). A source of coherent plasma emission at low frequencies is found near the secondary gyrosynchrotron source, associated with the leader spots of the active region.
Modeling the X-Ray Process, and X-Ray Flaw Size Parameter for POD Studies
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2014-01-01
Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called a probability of detection (POD) study. In many instances, the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaves like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors that are not accounted for in the flaw size parameter, such as set-up parameters (X-ray energy, exposure, detector sensitivity, and material type), may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conducting POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of the effects of interactions between various geometric parameters on flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.
Jones, Hayley E.; Martin, Richard M.; Lewis, Sarah J.; Higgins, Julian P.T.
2017-01-01
Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. PMID:28453179
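The contour construction described above can be sketched for the simplest case. Under a one-sample z-test approximation (an assumed design for illustration; the paper derives contours for several simple designs), the effect size consistent with a given two-sided P value, direction, and sample size is:

```python
import math
from statistics import NormalDist

def implied_effect(p_two_sided, n, direction=1):
    """Standardized effect size consistent with a two-sided P value and
    total sample size under a one-sample z-test approximation:
    |d| = z_(1 - p/2) / sqrt(n). Sweeping n at fixed p traces one
    albatross-plot contour."""
    z = NormalDist().inv_cdf(1.0 - p_two_sided / 2.0)
    return direction * z / math.sqrt(n)
```

Plotting sample size against signed P value for each study, with these curves overlaid for a few reference effect sizes, reproduces the basic geometry of the albatross plot.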
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long
2014-11-01
The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
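A minimal sketch of the element-size rule described above. The linear blend from a fine wall size to a coarser lumen-center size is an assumed functional form for illustration only; the abstract states the dependence on wall distance and the two endpoint sizes but not the actual sizing function.

```python
def element_size(d_wall, lumen_radius, h_wall, h_center):
    """Target element edge length at wall distance d_wall, blending
    linearly from h_wall at the airway wall to h_center at the lumen
    center (d_wall >= lumen_radius clamps to h_center)."""
    t = min(max(d_wall / lumen_radius, 0.0), 1.0)
    return h_wall + t * (h_center - h_wall)
```

Evaluating such a function on a coarse background mesh is how Gmsh and TetGen are typically told to grade element sizes from the wall toward the lumen center.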
Terahertz imaging with compressed sensing and phase retrieval.
Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M
2008-05-01
We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels that define the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
NASA Astrophysics Data System (ADS)
Xu, Chang-Qing; Gan, Yi; Sun, Jian
2012-03-01
Laser displays require red, green and blue (RGB) laser sources, each with low cost, high wall-plug efficiency, and small size. However, semiconductor chips that directly emit green light with sufficient power and efficiency are not currently available on the market. A practical solution to the "green" bottleneck is to employ diode-pumped solid-state laser (DPSSL) technology, in which a frequency-doubling crystal is used. In this paper, recent progress on MgO-doped periodically poled lithium niobate (MgO:PPLN) frequency-doubling optical chips is presented. It is shown that MgO:PPLN can satisfy all of the requirements for laser displays and is ready for mass production.
Suspended sediments from upstream tributaries as the source of downstream river sites
NASA Astrophysics Data System (ADS)
Haddadchi, Arman; Olley, Jon
2014-05-01
Understanding the efficiency with which sediment eroded from different sources is transported to the catchment outlet is a key knowledge gap that is critical to our ability to accurately target and prioritise management actions to reduce sediment delivery. Sediment fingerprinting has proven to be an efficient approach to determine the sources of sediment. This study examines the suspended sediment sources in Emu Creek catchment, south eastern Queensland, Australia. In addition to collecting suspended sediments from different stream sites after the confluence of tributaries and at the outlet of the catchment, time-integrated suspended samples from upper tributaries were used as the sources of sediment, instead of hillslope and channel bank samples. In total, 35 time-integrated samplers were used to compute the contribution of suspended sediments from different upstream waterways to the downstream sediment sites. Three size fractions of material, fine sand (63-210 μm), silt (10-63 μm), and fine silt and clay (<10 μm), were used to examine the effect of particle size on the contribution of upstream sediments as the sources of sediment after river confluences. Samples were then analysed by ICP-MS and ICP-OES to determine 41 sediment fingerprint properties. According to the results of the Student's t-distribution mixing model, small creeks in the middle and lower parts of the catchment were the major sources in the different size fractions, especially in the silt (10-63 μm) samples. Gowrie Creek, which covers the southern upstream part of the catchment, was a major contributor at the outlet of the catchment in the finest size fraction (<10 μm). Large differences between the contributions of suspended sediments from upper tributaries in the different size fractions necessitate the selection of an appropriate size fraction for sediment tracing in the catchment and highlight the major effect of particle size on the movement and deposition of sediments.
Po-210 and Pb-210 as atmospheric tracers and global atmospheric Pb-210 fallout: a review.
Baskaran, M
2011-05-01
Over the past ∼ 5 decades, the distribution of (222)Rn and its progenies (mainly (210)Pb, (210)Bi and (210)Po) has provided a wealth of information as tracers to quantify several atmospheric processes, including: i) source tracking and transport time scales of air masses; ii) the stability and vertical movement of air masses; iii) removal rate constants and residence times of aerosols; iv) chemical behavior of analog species; and v) washout ratios and deposition velocities of aerosols. Most of these applications require that the sources and sink terms of these nuclides are well characterized. Utility of (210)Pb, (210)Bi and (210)Po as atmospheric tracers requires that data on the (222)Rn emanation rates are well documented. Due to low concentrations of (226)Ra in surface waters, the (222)Rn emanation rate from the continents is about two orders of magnitude higher than that of the ocean. This has led to distinctly higher (210)Pb concentrations in continental air masses compared to oceanic air masses. The highly varying concentrations of (210)Pb in air, as well as the depositional fluxes, have yielded insight into the sources and transit times of aerosols. In an ideal enclosed air mass (closed system with respect to these nuclides), the residence times of aerosols obtained from the activity ratios of (210)Pb/(222)Rn, (210)Bi/(210)Pb, and (210)Po/(210)Pb are expected to agree with each other, but a large number of studies have indicated discordance between the residence times obtained from these three pairs. Recent results from the distribution of these nuclides in size-fractionated aerosols appear to yield consistent residence times in smaller-size aerosols, possibly suggesting that larger-size aerosols are derived from resuspended dust.
The residence times calculated from the (210)Pb/(222)Rn, (210)Bi/(210)Pb, and (210)Po/(210)Pb activity ratios published since the 1970s are compared to those obtained in size-fractionated aerosols in this decade, and possible reasons for the discordance are discussed with some key recommendations for future studies. The existing global atmospheric inventory data of (210)Pb are re-evaluated and a 'global curve' for the depositional fluxes of (210)Pb is established. A current global budget for atmospheric (210)Po and (210)Pb is also established. The relative importance of dry fallout of (210)Po and (210)Pb at different latitudes is evaluated. The global values for the deposition velocities of aerosols using (210)Po and (210)Pb are synthesized. Copyright © 2010 Elsevier Ltd. All rights reserved.
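The aerosol residence times discussed above come from a standard steady-state relation: for a daughter/parent pair with daughter decay constant lambda_d and aerosol removal residence time tau, the activity ratio is R = lambda_d tau / (1 + lambda_d tau), so tau = R / (lambda_d (1 - R)). A sketch of this inversion; the example activity ratio in the test is hypothetical.

```python
import math

def residence_time_days(activity_ratio, daughter_half_life_days):
    """Mean aerosol residence time from a daughter/parent activity ratio R,
    assuming steady state with only decay and first-order removal:
    R = lambda_d * tau / (1 + lambda_d * tau)  =>
    tau = R / (lambda_d * (1 - R))."""
    lam = math.log(2.0) / daughter_half_life_days
    return activity_ratio / (lam * (1.0 - activity_ratio))
```

For example, a (210)Bi/(210)Pb ratio of 0.2 with the 5.01-day (210)Bi half-life implies a residence time of about 1.8 days; the discordance noted in the abstract arises when the three pairs give inconsistent values of tau for the same air mass.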
System-size and beam energy dependence of the space-time extent of the pion emission source
NASA Astrophysics Data System (ADS)
Pak, Robert; Phenix Collaboration
2014-09-01
Two-pion interferometry measurements are used to extract the Gaussian source radii Rout, Rside and Rlong of the pion emission sources produced in d+Au, Cu+Cu and Au+Au collisions for several beam collision energies in the PHENIX experiment. The extracted radii, which are compared to recent STAR and ALICE data, show characteristic scaling patterns as a function of the initial transverse geometric size of the collision system and the transverse mass of the emitted pion pairs. These scaling patterns indicate a linear dependence of Rside on the initial transverse size, as well as a smaller freeze-out size for the d+Au system. Mathematical combinations of the extracted radii generally associated with the emission source duration and expansion rate exhibit non-monotonic behavior, suggesting a change in the expansion dynamics over this beam energy range.
Nutritional genomics: defining the dietary requirement and effects of choline.
Zeisel, Steven H
2011-03-01
As it becomes evident that single nucleotide polymorphisms (SNPs) in humans can create metabolic inefficiencies, it is reasonable to ask whether such SNPs influence dietary requirements. Epidemiologic studies that examine SNPs relative to risks for diseases are common, but there are few examples of clinically sized nutrition studies that examine how SNPs influence metabolism. Studies on how SNPs influence the dietary requirement for choline provide a model for how we might begin examining the effects of SNPs on nutritional phenotypes using clinically sized studies (clinical nutrigenomics). Most men and postmenopausal women develop liver or muscle dysfunction when deprived of dietary choline. More than one-half of premenopausal women may be resistant to choline deficiency-induced organ dysfunction, because estrogen induces the gene [phosphatidylethanolamine-N-methyltransferase (PEMT)] that catalyzes endogenous synthesis of phosphatidylcholine, which can subsequently yield choline. Those premenopausal women who do require a dietary source of choline have a SNP in PEMT, making them unresponsive to estrogen induction of PEMT. It is important to recognize differences in dietary requirements for choline in women because, during pregnancy, maternal dietary choline modulates fetal brain development in rodent models. Because choline metabolism and folate metabolism intersect at the methylation of homocysteine, manipulations that limit folate availability also increase the use of choline as a methyl donor. People with SNPs in MTHFD1 (a gene of folate metabolism that controls the use of folate as a methyl donor) are more likely to develop organ dysfunction when deprived of choline; their dietary requirement is increased because of an increased need for choline as a methyl donor.
13 CFR 121.411 - What are the size procedures for SBA's Section 8(d) Subcontracting Program?
Code of Federal Regulations, 2013 CFR
2013-01-01
... SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards... maintaining a small business source list. Even though a concern is on a small business source list, it must still qualify and self-certify as a small business at the time it submits its offer as a section 8(d...
13 CFR 121.411 - What are the size procedures for SBA's Section 8(d) Subcontracting Program?
Code of Federal Regulations, 2011 CFR
2011-01-01
... SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards... maintaining a small business source list. Even though a concern is on a small business source list, it must still qualify and self-certify as a small business at the time it submits its offer as a section 8(d...
13 CFR 121.411 - What are the size procedures for SBA's Section 8(d) Subcontracting Program?
Code of Federal Regulations, 2012 CFR
2012-01-01
... SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards... maintaining a small business source list. Even though a concern is on a small business source list, it must still qualify and self-certify as a small business at the time it submits its offer as a section 8(d...
Validation of the Transient Structural Response of a Threaded Assembly: Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott W.; Hemez, Francois M.; Robertson, Amy N.
2004-04-01
This report explores the application of model validation techniques in structural dynamics. The problem of interest is the propagation of an explosive-driven mechanical shock through a complex threaded joint. The study serves the purpose of assessing whether validating a large-size computational model is feasible, which unit experiments are required, and where the main sources of uncertainty reside. The results documented here are preliminary, and the analyses are exploratory in nature. The results obtained to date reveal several deficiencies of the analysis, to be rectified in future work.
Optimal fusion offset in splicing photonic crystal fibers
NASA Astrophysics Data System (ADS)
Jin, Wa; Bi, Weihong; Fu, Guangwei
2013-08-01
Heat transfer in the fusion splicing of photonic crystal fibers (PCFs) is very complicated because different air-hole structures and sizes require different fusion splicing powers and heat-source offsets. Based on these heat transfer characteristics, this paper focuses on the optimal splicing offset for splicing single-mode fibers (SMFs) and PCFs with CO2 laser irradiation. Theory and experiments both show that the research results can effectively predict the optimal fusion splicing offset and guide practical splicing between PCFs and SMFs.
The DZERO Level 3 Data Acquisition System
NASA Astrophysics Data System (ADS)
Angstadt, R.; Brooijmans, G.; Chapin, D.; Clements, M.; Cutts, D.; Haas, A.; Hauser, R.; Johnson, M.; Kulyavtsev, A.; Mattingly, S. E. K.; Mulders, M.; Padley, P.; Petravick, D.; Rechenmacher, R.; Snyder, S.; Watts, G.
2004-06-01
The DZERO experiment began Run II data taking at Fermilab in spring 2001. The physics program of the experiment requires the Level 3 data acquisition (DAQ) system to handle average event sizes of 250 kilobytes at a rate of 1 kHz. The system routes and transfers event fragments of approximately 1-20 kilobytes from 63 VME crate sources to any of approximately 100 processing nodes. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single board computers (SBCs). The system has been in full operation since spring 2002.
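The quoted figures imply an easily checked aggregate bandwidth. A quick sketch of the arithmetic (the even spread of events across the processing nodes is an assumption for illustration, not a statement about the actual routing):

```python
event_size_kb = 250   # average Level 3 event size (kilobytes)
event_rate_hz = 1000  # Level 3 input rate (1 kHz)
nodes = 100           # approximate number of processing nodes

aggregate_mb_s = event_size_kb * event_rate_hz / 1000.0  # total through the switch
per_node_mb_s = aggregate_mb_s / nodes                   # if events spread evenly
per_node_hz = event_rate_hz / nodes                      # events/s per node

# aggregate_mb_s -> 250.0 MB/s, per_node_mb_s -> 2.5 MB/s, per_node_hz -> 10.0
```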
NASA Astrophysics Data System (ADS)
Srivastava, Arun; Gupta, Sandeep; Jain, V. K.
2009-03-01
A study of the wintertime size distribution and source apportionment of total suspended particulate matter (TSPM) and associated heavy metal concentrations has been carried out for the city of Delhi. This study is important from the point of view of the introduction of compressed natural gas (CNG) as an alternative to diesel fuel in the public transport system in 2001 to reduce the pollution level. TSPM was collected using a five-stage cascade impactor at six sites in the winters of 2005-06. The results of the size distribution indicate that a major portion (~40%) of the TSPM concentration is in the form of PM0.7 (< 0.7 μm). Similar trends were observed with most of the heavy metals associated with the various size fractions of TSPM. A very good correlation between the coarse and fine size fractions of TSPM was observed. It was also observed that metals associated with coarse particles are more likely to correlate with other metals than are those associated with fine particles. Source apportionment was carried out separately in the coarse and fine size modes of TSPM with the Chemical Mass Balance Receptor Model (CMB8) as well as by Principal Component Analysis (PCA) in SPSS. Source apportionment by PCA reveals that there are two major sources (possibly vehicular and crustal re-suspension) in both the coarse and fine size fractions. Results obtained by CMB8 show the dominance of vehicular pollutants and crustal dust in the fine and coarse size modes, respectively. Noticeably, the dominance of vehicular pollutants is now confined to the fine size only, whereas during the pre-CNG era it dominated both the coarse and fine size modes. Increases of 42.5, 44.4, 48.2, 38.6 and 38.9% in the concentrations of TSPM, PM10.9, coarse particles, fine particles and lead, respectively, were observed from the pre-CNG (2001) to the post-CNG (2005-06) period.
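Source apportionment by PCA, as used above, reduces to an eigen-decomposition of the correlation matrix of the measured concentrations; the number of large eigenvalues suggests the number of major sources. A minimal sketch (the input matrix is synthetic, not the Delhi data):

```python
import numpy as np

def pca_explained_variance(X):
    """Explained-variance ratios from an eigen-decomposition of the
    correlation matrix of standardized concentrations (rows = samples,
    columns = species)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    evals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]  # descending
    return evals / evals.sum()

# two strongly collinear "species" -> one dominant component (synthetic values)
X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]])
ratios = pca_explained_variance(X)
```

With real multi-species data, components are then interpreted by their loadings (e.g. crustal vs. vehicular markers).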
Ree, Moonhor
2014-05-01
For advanced functional polymers such as biopolymers, biomimetic polymers, brush polymers, star polymers, dendritic polymers, and block copolymers, information about their surface structures, morphologies, and atomic structures is essential for understanding their properties and investigating their potential applications. Grazing incidence X-ray scattering (GIXS) has become established over the last 15 years as the most powerful, versatile, and nondestructive tool for determining these structural details when performed with the aid of an advanced third-generation synchrotron radiation source with high flux, high energy resolution, energy tunability, and small beam size. One particular merit of this technique is that GIXS data can be obtained facilely for material specimens of any size, type, or shape. However, GIXS data analysis requires an understanding of GIXS theory and of refraction and reflection effects, and for any given material specimen, the best methods for extracting the form factor and the structure factor from the data need to be established. GIXS theory is reviewed here from the perspective of practical GIXS measurements and quantitative data analysis. In addition, schemes are discussed for the detailed analysis of GIXS data for the various self-assembled nanostructures of functional homopolymers, brush, star, and dendritic polymers, and block copolymers. Moreover, enhancements to the GIXS technique are discussed that can significantly improve its structure analysis by using new synchrotron radiation sources such as third-generation X-ray sources with picosecond pulses and partial coherence and fourth-generation X-ray laser sources with femtosecond pulses and full coherence. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
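One concrete piece of the refraction effects mentioned above is the correction of the out-of-plane scattering vector for refraction at the film surface. A sketch of the commonly used correction for scattering from below the surface, with illustrative angle values (this is a generic textbook-style formula, not a specific expression from this review):

```python
import math

def qz_corrected(wavelength_nm, alpha_i_deg, alpha_f_deg, alpha_c_deg):
    """Refraction-corrected out-of-plane scattering vector q_z (nm^-1):
        q_z = (2*pi/lambda) * (sqrt(sin^2 a_f - sin^2 a_c)
                               + sqrt(sin^2 a_i - sin^2 a_c))
    where a_i, a_f are the incident and exit angles and a_c is the
    critical angle of the film."""
    k = 2.0 * math.pi / wavelength_nm
    sin2 = lambda a: math.sin(math.radians(a)) ** 2
    sc = sin2(alpha_c_deg)
    return k * (math.sqrt(max(sin2(alpha_f_deg) - sc, 0.0))
                + math.sqrt(max(sin2(alpha_i_deg) - sc, 0.0)))

# with a_c = 0 this reduces to the uncorrected q_z
qz = qz_corrected(0.1, 0.2, 1.0, 0.15)
```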
Evidence Integration in Natural Acoustic Textures during Active and Passive Listening.
Górska, Urszula; Rupp, Andre; Boubenec, Yves; Celikel, Tansu; Englitz, Bernhard
2018-01-01
Many natural sounds can be well described on a statistical level, for example, wind, rain, or applause. Even though the spectro-temporal profile of these acoustic textures is highly dynamic, changes in their statistics are indicative of relevant changes in the environment. Here, we investigated the neural representation of change detection in natural textures in humans, and specifically addressed whether active task engagement is required for the neural representation of this change in statistics. Subjects listened to natural textures whose spectro-temporal statistics were modified at variable times by a variable amount. Subjects were instructed to either report the detection of changes (active) or to passively listen to the stimuli. A subset of passive subjects had performed the active task before (passive-aware vs passive-naive). Psychophysically, longer exposure to pre-change statistics was correlated with faster reaction times and better discrimination performance. EEG recordings revealed that the build-up rate and size of parieto-occipital (PO) potentials reflected change size and change time. Reduced effects were observed in the passive conditions. While P2 responses were comparable across conditions, slope and height of PO potentials scaled with task involvement. Neural source localization identified a parietal source as the main contributor of change-specific potentials, in addition to more limited contributions from auditory and frontal sources. In summary, the detection of statistical changes in natural acoustic textures is predominantly reflected in parietal locations both on the skull and source level. The scaling in magnitude across different levels of task involvement suggests a context-dependent degree of evidence integration.
NASA Astrophysics Data System (ADS)
Pushkarev, A. B.; Kovalev, Y. Y.
2015-10-01
We have measured the angular sizes of radio cores of active galactic nuclei (AGNs) and analysed their sky distributions and frequency dependences to study synchrotron opacity in AGN jets and the strength of angular broadening in the interstellar medium. We have used archival very long baseline interferometry (VLBI) data of more than 3000 compact extragalactic radio sources observed at frequencies, ν, from 2 to 43 GHz to measure the observed angular size of VLBI cores. We have found a significant increase in the angular sizes of the extragalactic sources seen through the Galactic plane (|b| ≲ 10°) at 2, 5 and 8 GHz, about one-third of which show significant scattering. These sources are mainly detected in directions to the Galactic bar, the Cygnus region and a region with galactic longitudes 220° ≲ l ≲ 260° (the Fitzgerald window). The strength of interstellar scattering of the AGNs is found to correlate with the Galactic Hα intensity, free-electron density and Galactic rotation measure. The dependence of scattering strengths on source redshift is insignificant, suggesting that the dominant scattering screens are located in our Galaxy. The observed angular size of Sgr A* is found to be the largest among thousands of AGNs observed over the sky; we discuss possible reasons for this strange result. Excluding extragalactic radio sources with significant scattering, we find that the angular size of opaque cores in AGNs scales typically as ν⁻¹, confirming predictions of a conical synchrotron jet model with equipartition.
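The ν⁻¹ scaling of opaque core sizes amounts to a power-law index of −1 in a log-log fit of angular size against frequency. A sketch with synthetic data (the frequencies and normalization below are illustrative, not the survey's values):

```python
import numpy as np

def powerlaw_index(freqs_ghz, sizes_mas):
    """Least-squares slope k of theta ∝ nu^k, fitted in log-log space."""
    k, _intercept = np.polyfit(np.log(freqs_ghz), np.log(sizes_mas), 1)
    return k

# synthetic core sizes following theta = 2.0 * nu^-1 (mas vs. GHz)
nu = np.array([2.3, 5.0, 8.4, 15.3, 43.0])
theta = 2.0 * nu ** -1.0
k = powerlaw_index(nu, theta)  # recovers k = -1 for this exact power law
```

Real measurements scatter around the law, so k comes with a fit uncertainty; the sketch only shows the estimator.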
First results from the new RIKEN superconducting electron cyclotron resonance ion source (invited).
Nakagawa, T; Higurashi, Y; Ohnishi, J; Aihara, T; Tamura, M; Uchiyama, A; Okuno, H; Kusaka, K; Kidera, M; Ikezawa, E; Fujimaki, M; Sato, Y; Watanabe, Y; Komiyama, M; Kase, M; Goto, A; Kamigaito, O; Yano, Y
2010-02-01
The next generation heavy ion accelerator facility, such as the RIKEN radio isotope (RI) beam factory, requires an intense beam of high charged heavy ions. In the past decade, performance of the electron cyclotron resonance (ECR) ion sources has been dramatically improved with increasing the magnetic field and rf frequency to enhance the density and confinement time of plasma. Furthermore, the effects of the key parameters (magnetic field configuration, gas pressure, etc.) on the ECR plasma have been revealed. Such basic studies give us how to optimize the ion source structure. Based on these studies and modern superconducting (SC) technology, we successfully constructed the new 28 GHz SC-ECRIS, which has a flexible magnetic field configuration to enlarge the ECR zone and to optimize the field gradient at ECR point. Using it, we investigated the effect of ECR zone size, magnetic field configuration, and biased disk on the beam intensity of the highly charged heavy ions with 18 GHz microwaves. In this article, we present the structure of the ion source and first experimental results with 18 GHz microwave in detail.
Comparison of two optimized readout chains for low light CIS
NASA Astrophysics Data System (ADS)
Boukhayma, A.; Peizerat, A.; Dupret, A.; Enz, C.
2014-03-01
We compare the noise performance of two optimized readout chains that are based on 4T pixels and feature the same bandwidth of 265 kHz (enough to read 1 megapixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier and a single-slope analog-to-digital converter performing correlated double sampling (CDS). In one case the pixel operates in the source follower configuration, and in the common source configuration in the other. Based on analytical noise calculations for both readout chains, an optimization methodology is presented. Analytical results are confirmed by transient simulations using a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e-. Both optimized readout chains show the same input-referred 1/f noise. The common source based readout chain shows better performance for thermal noise and requires a smaller silicon area. We discuss the possible drawbacks of the common source configuration and provide the reader with a comparative table between the two readout chains. The table contains several variants (column amplifier gain, in-pixel transistor sizes and type).
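Referring output noise to the input through the conversion gain is the arithmetic behind the sub-0.4 e⁻ figure. A one-line sketch (the 60 µV output noise is an assumed illustration; only the 160 µV/e⁻ gain comes from the text):

```python
def input_referred_noise_e(noise_uv_rms, conversion_gain_uv_per_e):
    """Input-referred noise in electrons RMS: output-referred RMS noise
    voltage divided by the pixel conversion gain."""
    return noise_uv_rms / conversion_gain_uv_per_e

# e.g. 60 uV RMS total output noise with a 160 uV/e- conversion gain
n_e = input_referred_noise_e(60.0, 160.0)  # 0.375 e- RMS, below 0.4
```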
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
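The effect of an exponential growth function on point spacing away from a source can be sketched as a simple marching loop: each layer's spacing is the previous spacing times a growth factor. This is an illustrative model only, not VGRID's actual growth function:

```python
def spacing_at_distance(s0, growth, d):
    """March away from a source with initial spacing s0, multiplying the
    spacing by `growth` per layer; return the spacing reached at distance d."""
    s, x = s0, 0.0
    while x + s < d:
        x += s
        s *= growth
    return s

# spacing doubles each layer: layers at x = 0, 1, 3, 7, ...
s = spacing_at_distance(1.0, 2.0, 3.5)  # 4.0 at this distance
```

A smoother analytic growth function gives the same qualitative behavior (fine cells near sources, coarse cells in the far field) while avoiding abrupt size jumps between layers.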
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
2010-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
Transfer of management training from alternative perspectives.
Taylor, Paul J; Russ-Eft, Darlene F; Taylor, Hazel
2009-01-01
One hundred seven management training evaluations were meta-analyzed to compare effect sizes for the transfer of managerial training derived from different rating sources (self, superior, peer, and subordinate) and broken down by both study- and training-related variables. For studies as a whole, and interpersonal management skills training studies in particular, transfer effects based on trainees' self-ratings, and to a lesser extent ratings from their superiors, were largest and most varied across studies. In contrast, transfer effects based on peer ratings, and particularly subordinate ratings, were substantially smaller and more homogeneous. This pattern was consistent across different sources of studies, features of evaluation design, and within a subset of 14 studies that each included all 4 rating sources. Across most rating sources, transfer of training was greatest for studies conducted in nonmilitary settings, when raters were likely to have known whether the manager being rated had attended training, when criteria were targeted to training content, when training content was derived from an analysis of tasks and skill requirements, and when training included opportunities for practice. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
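Pooling transfer effects across studies, as in the meta-analysis above, is commonly done with fixed-effect inverse-variance weighting. A sketch under that assumption (the effect sizes and variances below are hypothetical, and the paper's exact weighting scheme is not reproduced here):

```python
def pooled_effect(d_values, variances):
    """Fixed-effect inverse-variance pooled effect size: each study is
    weighted by 1/variance, so more precise studies count more."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, d_values)) / sum(weights)

# three hypothetical transfer-of-training effects with their variances
d_pooled = pooled_effect([0.5, 0.3, 0.8], [0.04, 0.02, 0.08])
```

Computing this separately per rating source (self, superior, peer, subordinate) is what allows the kind of source-by-source comparison the study reports.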
Size Matters: What Are the Characteristic Source Areas for Urban Planning Strategies?
Fan, Chao; Myint, Soe W.; Wang, Chenghao
2016-01-01
Urban environmental measurements and observational statistics should reflect the properties generated over an adjacent area of adequate length where homogeneity is usually assumed. The determination of this characteristic source area, which gives sufficient representation of the horizontal coverage of a sensing instrument or the fetch of transported quantities, is of critical importance to guide the design and implementation of urban landscape planning strategies. In this study, we aim to unify two different methods for estimating source areas, viz. the statistical correlation method commonly used by geographers for landscape fragmentation and the mechanistic footprint model used by meteorologists for atmospheric measurements. Good agreement was found in the intercomparison of the source areas estimated by the two methods, based on 2-m air temperature measurements collected using a network of weather stations. The results can be extended to shed new light on urban planning strategies, such as the use of urban vegetation for heat mitigation. In general, a sizable patch of landscape is required in order to play an effective role in regulating the local environment, with the required size proportional to the height at which stakeholders' interests are mainly concerned. PMID:27832111
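The statistical correlation method mentioned above can be sketched as scanning buffer radii around each station and picking the radius where the temperature-land-cover correlation is strongest. All data below are synthetic, and the radii and cooling coefficient are illustrative:

```python
import numpy as np

def characteristic_radius(temps, veg_by_radius, radii):
    """Radius at which |Pearson r| between station temperatures and
    buffer-averaged vegetation fraction peaks."""
    best_r, best_radius = 0.0, radii[0]
    for radius in radii:
        r = np.corrcoef(temps, veg_by_radius[radius])[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_radius = r, radius
    return best_radius

rng = np.random.default_rng(0)
# synthetic vegetation fractions for 30 stations at three buffer radii (m)
veg = {r: rng.uniform(0.0, 1.0, 30) for r in (100, 300, 500)}
# temperatures driven by the 300 m buffer (5 K cooling span) plus noise
temps = 30.0 - 5.0 * veg[300] + rng.normal(0.0, 0.2, 30)
```

The footprint-model approach replaces this empirical scan with an analytical estimate from measurement height and atmospheric conditions; the study's point is that the two agree.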
Kumar, Pawan; Kumar, Sushil; Yadav, Sudesh
2018-02-01
Size distribution, water-soluble inorganic ions (WSII), and organic carbon (OC) and elemental carbon (EC) in size-segregated aerosols were investigated during a year-long sampling campaign in 2010 over New Delhi. Among the size fractions of PM10, PM0.95 was the dominant fraction (45%), followed by PM3-7.2 (20%), PM7.2-10 (15%), PM0.95-1.5 (10%), and PM1.5-3 (10%). All size fractions exceeded the ambient air quality standards of India for PM2.5. Annual average mass size distributions of ions were specific to size and ion(s); Ca2+, Mg2+, K+, NO3-, and Cl- followed a bimodal distribution, while SO42- and NH4+ showed one mode, in PM0.95. The concentrations of secondary WSII (NO3-, SO42-, and NH4+) increased in winter due to the closed and moist atmosphere, whereas open atmospheric conditions in summer led to the dispersal of pollutants. NH4+ and Ca2+ were the dominant neutralizing ions, but in different size fractions. The summertime dust transport from the upwind region by S-SW winds resulted in significantly high concentrations of PM0.95, PM3-7.2 and PM7.2-10. This indicates that dust generated in the Thar Desert is transported downwind in a size-selective manner. The mixing of different sources (geogenic, coal combustion, biomass burning, plastic burning, incinerators, and vehicular emission sources) for soluble ions in the different size fractions was noticed in the principal component analysis. Total carbon (TC = EC + OC) constituted 8-31% of the total PM0.95 mass, and OC dominated over EC. Among EC, char (EC1) dominated over soot (EC2 + EC3). The high SOC contribution (82%) to OC and an OC/EC ratio of 2.7 suggested a possible role of mineral dust and high photochemical activity in SOC production. The mass concentrations of aerosols and WSII and their contributions to each size fraction of PM10 are governed by the nature of the sources, the emission strength of the source(s), and seasonality in meteorological parameters.
NASA Astrophysics Data System (ADS)
Johnson, E. R.; Rowland, R. D.; Protokowicz, J.; Inamdar, S. P.; Kan, J.; Vargas, R.
2016-12-01
Extreme storm events have tremendous erosive energy which is capable of mobilizing vast amounts of material from watershed sources into fluvial systems. This complex mixture of sediment and particulate organic matter (POM) is a nutrient source, and has the potential to impact downstream water quality. The impact of POM on receiving aquatic systems can vary not only by the total amount exported but also by the various sources involved and the particle sizes of POM. This study examines the composition of POM in potential sources and within-event POM by: (1) determining the amount and quality of dissolved organic matter (DOM) that can be leached from coarse, medium and fine particle classes; (2) assessing the C and N content and isotopic character of within-event POM; and (3) coupling physical and chemical properties to evaluate storm event POM influence on stream water. Storm event POM samples and source sediments were collected from a forested headwater catchment (second order stream) in the Piedmont region of Maryland. Samples were sieved into three particle classes - coarse (2mm-1mm), medium (1mm-250µm) and fine (<250µm). Extractions were performed for three particle class sizes and the resulting fluorescent organic matter was analyzed. Carbon (C) and Nitrogen (N) amount, C:N ratio, and isotopic analysis of 13C and 15N were performed on solid state event and source material. Future work will include examination of microbial communities associated with POM particle size classes. Physical size class separation of within-event POM exhibited differences in C:N ratios, δ15N composition, and extracted DOM lability. Smaller size classes exhibited lower C:N ratios, more enriched δ15N and more recalcitrant properties in leached DOM. Source material had varying C:N ratios and contributions to leached DOM. These results indicate that both source and size class strongly influence the POM contribution to fluvial systems during large storm events.
Robust reflective pupil slicing technology
NASA Astrophysics Data System (ADS)
Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.
2014-07-01
Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS™), a robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications, performing the same spectral narrowing as a slit without throwing away light at the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement, so HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices.
Finally, this same technology is directly applicable to multi-fiber spectrometers to achieve similar enhancement. HTVS also provides the ability to anamorphically "stretch" the slit image in long-slit spectrometers, allowing the instrument designer to optimize the plate scale in the dispersion axis and cross-dispersion axes independently without sacrificing spatial information. This allows users to widen the input slit, with the associated gain of throughput and loss of spatial selectivity, while maintaining the spectral resolution of the spectrometer system. This "stretching" places increased requirements on detector focal plane height, as with image slicing techniques, but provides additional degrees of freedom to instrument designers to build the best possible spectrometer systems. We discuss the details of this technology for an astronomical context, covering the applicability from small telescope mounted spectrometers through long-slit imagers and radial-velocity engines. This powerful tool provides additional degrees of freedom when designing a spectrometer, enabling instrument designers to further optimize systems for the required scientific goals.
Hydrogen Generation Through Renewable Energy Sources at the NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Colozza, Anthony; Prokopius, Kevin
2007-01-01
An evaluation of the potential for generating high pressure, high purity hydrogen at the NASA Glenn Research Center (GRC) was performed. This evaluation was based on producing hydrogen utilizing a prototype Hamilton Standard electrolyzer that is capable of producing hydrogen at 3000 psi. The present state of the electrolyzer system was determined to identify the refurbishment requirements. The power for operating the electrolyzer would be produced through renewable power sources. Both wind and solar were considered in the analysis. The solar power production capability was based on the existing solar array field located at NASA GRC. The refurbishment and upgrade potential of the array field was determined and the array output was analyzed with various levels of upgrades throughout the year. The total available monthly and yearly energy from the array was determined. A wind turbine was also sized for operation. This sizing evaluated the wind potential at the site and produced an operational design point for the wind turbine. Commercially available wind turbines were evaluated to determine their applicability to this site. The system installation and power integration were also addressed. This included items such as housing the electrolyzer, power management, water supply, gas storage, cooling and hydrogen dispensing.
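Turning an available renewable energy budget into a hydrogen production rate is a one-line calculation once a specific energy for the electrolyzer is assumed. Both numbers below are illustrative assumptions, not figures from the report:

```python
def daily_h2_kg(array_energy_kwh_per_day, electrolyzer_kwh_per_kg=53.4):
    """Hydrogen produced per day from the available array energy, assuming a
    nominal electrolyzer specific energy (53.4 kWh/kg is a typical real-world
    value including inefficiencies; the LHV of hydrogen alone is ~33.3 kWh/kg)."""
    return array_energy_kwh_per_day / electrolyzer_kwh_per_kg

kg_per_day = daily_h2_kg(267.0)  # 267 kWh/day is a hypothetical array output
```

The same relation, run month by month against the array's energy profile, yields the seasonal hydrogen production estimate the study needs.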
Irei, Satoshi
2016-01-01
Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatility nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Artificial sample extracts were analyzed directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for the variation of instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall the methodology was valid to within an uncertainty of ~30%. Measurements of airborne particulate matter (PM) filter samples showed a strong correlation among the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that these markers were concentrated in PM smaller than 0.4 μm aerodynamic diameter. The observations were consistent with expectations about their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511
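The injection-bracketing correction described above amounts to normalizing each sample response by a response factor computed from the adjacent standard run. A minimal sketch, assuming a linear detector response through the origin (the function and variable names are illustrative, not from the paper):

```python
def sensitivity_corrected_amount(sample_peak_area, standard_peak_area,
                                 standard_amount_ng):
    """Correct a GC-MS quantification for run-to-run sensitivity drift
    using the standard mixture injected right after the sample."""
    response_factor = standard_peak_area / standard_amount_ng  # area per ng
    return sample_peak_area / response_factor                  # ng in sample
```

If matrix effects halve the instrument sensitivity, the standard's peak area halves too, and the reported amount stays unchanged.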
El-Kassaby, Yousry A; Funda, Tomas; Lai, Ben S K
2010-01-01
The impact of female reproductive success on the mating system, gene flow, and genetic diversity of the filial generation was studied using a random sample of 801 bulk seed from a 49-clone Pseudotsuga menziesii seed orchard. We used microsatellite DNA fingerprinting and pedigree reconstruction to assign each seed's maternal and paternal parents and directly estimated clonal reproductive success, selfing rate, and the proportion of seed sired by outside pollen sources. Unlike most family-array mating system and gene flow studies conducted on natural and experimental populations, which use an equal number of seeds per maternal genotype and thus generate unbiased inferences only on male reproductive success, the random sample we used was representative of the entire seed crop and therefore provided a unique opportunity to draw unbiased inferences on both female and male reproductive success variation. Selfing rate and the number of seeds sired by outside pollen sources were found to be a function of female fertility variation. This variation also substantially and negatively affected female effective population size. Additionally, the results provided convincing evidence that the use of clone size as a proxy to fertility is questionable and requires further consideration.
A practical and systematic review of Weibull statistics for reporting strengths of dental materials
Quinn, George D.; Quinn, Janet B.
2011-01-01
Objectives: To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data: References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources: Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study selection: The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of “equivalent volumes” to compare measured strengths obtained from different test configurations. Conclusions: Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
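The two-parameter Weibull analysis underlying such strength studies is commonly fit by linearizing the CDF, ln ln(1/(1−F)) = m ln σ − m ln σ₀, and regressing on ranked strengths. A minimal sketch using median-rank probability estimates (the estimator choice is an assumption, not taken from the paper):

```python
import math

def weibull_fit(strengths):
    """Least-squares fit of the Weibull modulus m and characteristic
    strength s0 on the linearized CDF, with median ranks F_i = (i+0.5)/n."""
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(-math.log(1.0 - (i + 0.5) / n)) for i in range(n)]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    s0 = math.exp(xbar - ybar / m)   # from intercept = -m * ln(s0)
    return m, s0
```

A high modulus m indicates a narrow flaw-size distribution and therefore reproducible strengths; brittle dental ceramics typically show m in the single digits to low tens.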
Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans; Swertz, Morris A
2016-07-15
While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
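The kind of transformation MOLGENIS/connect generates, unit conversion feeding a derived variable such as BMI, can be illustrated as follows. This is a sketch of the pattern only, not the system's actual generated code or API:

```python
def to_kg(value, unit):
    """Normalize a weight value to kilograms."""
    factors = {"kg": 1.0, "g": 0.001, "lb": 0.45359237}
    return value * factors[unit]

def to_m(value, unit):
    """Normalize a height value to meters."""
    factors = {"m": 1.0, "cm": 0.01, "in": 0.0254}
    return value * factors[unit]

def derive_bmi(weight, weight_unit, height, height_unit):
    """BMI = weight[kg] / height[m]^2 -- a 'complex conversion pattern'
    pooling two source attributes with heterogeneous units."""
    w = to_kg(weight, weight_unit)
    h = to_m(height, height_unit)
    return round(w / h ** 2, 1)
```

Two source records in different unit systems then map onto the same target attribute, e.g. `derive_bmi(70, "kg", 175, "cm")` and `derive_bmi(154, "lb", 70, "in")` give essentially the same BMI.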
The Potential for Harvesting Energy from the Movement of Trees
McGarry, Scott; Knight, Chris
2011-01-01
Over the last decade, wireless devices have decreased in size and power requirements. These devices generally use batteries as a power source but can employ additional means of power, such as solar, thermal or wind energy. However, sensor networks are often deployed in conditions of minimal lighting and thermal gradient such as densely wooded environments, where even normal wind energy harvesting is limited. In these cases a possible source of energy is from the motion of the trees themselves. We investigated the amount of energy and power available from the motion of a tree in a sheltered position, during Beaufort 4 winds. We measured the work performed by the tree to lift a mass, we measured horizontal acceleration of free movement, and we determined the angular deflection of the movement of the tree trunk, to determine the energy and power available to various types of harvesting devices. We found that the amount of power available from the tree, as demonstrated by lifting a mass, compares favourably with the power required to run a wireless sensor node. PMID:22163695
NERVA-Derived Nuclear Thermal Propulsion Dual Mode Operation
NASA Astrophysics Data System (ADS)
Zweig, Herbert R.; Hundal, Rolv
1994-07-01
Generation of electrical power using the nuclear heat source of a NERVA-derived nuclear thermal rocket engine is presented. A 111,200 N thrust engine defined in a study for NASA-LeRC in FY92 is the reference engine for a three-engine vehicle for which a 50 kWe capacity is required. Processes are described for energy extraction from the reactor and for converting the energy to electricity. The tie tubes which support the reactor fuel elements are the source of thermal energy. The study focuses on process systems using Stirling cycle energy conversion operating at 980 K and an alternate potassium-Rankine system operating at 1,140 K. Consideration is given to the effects of the power production on turbopump operation, ZrH moderator dissociation, creep strain in the tie tubes, hydrogen permeation through the containment materials, requirements for a backup battery system, and the effects of potential design changes on reactor size and criticality. Nuclear considerations include changing tie tube materials to TZM, changing the moderator to low vapor-pressure yttrium hydride, and changing the fuel form from graphite matrix to a carbon-carbide composite.
Enhanced labelling on alcoholic drinks: reviewing the evidence to guide alcohol policy.
Martin-Moreno, Jose M; Harris, Meggan E; Breda, Joao; Møller, Lars; Alfonso-Sanchez, Jose L; Gorgojo, Lydia
2013-12-01
Consumer and public health organizations have called for better labelling on alcoholic drinks. However, there is a lack of consensus about the best elements to include. This review summarizes alcohol labelling policy worldwide and examines available evidence to support enhanced labelling. A literature review was carried out in June-July 2012 on Scopus using the key word 'alcohol' combined with 'allergens', 'labels', 'nutrition information', 'ingredients', 'consumer information' and/or 'warning'. Articles discussing advertising and promotion of alcohol were excluded. A search through Google and the System for Grey Literature in Europe (SIGLE) identified additional sources on alcohol labelling policies, mainly from governmental and organizational websites. Five elements were identified as potentially useful to consumers: (i) a list of ingredients, (ii) nutritional information, (iii) serving size and servings per container, (iv) a definition of 'moderate' intake and (v) a health warning. Alcohol labelling policy with regard to these aspects is quite rudimentary in most countries, with few requiring a list of ingredients or health warnings, and none requiring basic nutritional information. Only one country (Australia) requires serving size and servings per container to be displayed. Our study suggests that there are both potential advantages and disadvantages to providing consumers with more information about alcohol products. Current evidence seems to support prompt inclusion of a list of ingredients, nutritional information (usually only kcal) and health warnings on labels. Standard drink and serving size is useful only when combined with other health education efforts. A definition of 'moderate intake' and recommended drinking guidelines are best suited to other contexts.
Propagation properties of cylindrical sinc Gaussian beam
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.; Bayraktar, Mert
2016-09-01
We investigate the propagation properties of a cylindrical sinc Gaussian beam in turbulent atmosphere. Since an analytic solution is not readily derivable, the study is carried out with the aid of random phase screens. Evolutions of the beam intensity profile, beam size and kurtosis parameter are analysed. It is found that on the source plane, the cylindrical sinc Gaussian beam has a dark hollow appearance, where side lobes also start to emerge with increasing width parameter and Gaussian source size. During propagation, beams with small width and Gaussian source size exhibit off-axis behaviour, losing the dark hollow shape and accumulating the intensity asymmetrically on one side, whereas those with large width and Gaussian source size retain the dark hollow appearance even at long propagation distances. Beams with large widths are seen to expand more in beam size than those with small widths, and the structure constant values chosen do not alter this situation. The kurtosis parameters of the beams with small widths are seen to be larger than those of the beams with large widths; again, the choice of the structure constant does not change this trend.
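The beam size and kurtosis parameter tracked in such phase-screen studies are second- and fourth-moment quantities of the intensity distribution. A minimal 1-D sketch (the 2-D case used with phase screens is analogous, moment by moment):

```python
def beam_moments(xs, intensity):
    """Second-moment beam size and kurtosis parameter of a 1-D intensity
    profile: width = sqrt(<(x-xc)^2>), K = <(x-xc)^4> / <(x-xc)^2>^2,
    with moments weighted by the intensity."""
    total = sum(intensity)
    xc = sum(x * i for x, i in zip(xs, intensity)) / total   # centroid
    m2 = sum((x - xc) ** 2 * i for x, i in zip(xs, intensity)) / total
    m4 = sum((x - xc) ** 4 * i for x, i in zip(xs, intensity)) / total
    return m2 ** 0.5, m4 / m2 ** 2
```

For a Gaussian profile K = 3; flatter (dark hollow) profiles give K < 3 and more peaked ones K > 3, which is what makes the kurtosis parameter a useful shape diagnostic during propagation.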
Spot size measurement of a flash-radiography source using the pinhole imaging method
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Chen, Nan; Cheng, Jin-Ming; Xie, Yu-Tong; Liu, Yun-Long; Long, Quan-Hong
2016-07-01
The spot size of the X-ray source is a key parameter of a flash-radiography facility and is usually quoted as an evaluation of its resolving power. The pinhole imaging technique is applied to measure the spot size of the Dragon-I linear induction accelerator, by which a two-dimensional spatial distribution of the source spot is obtained. Experimental measurements of the spot image are performed while the transportation and focusing of the electron beam are tuned by adjusting the currents of solenoids in the downstream section. Two spot-size measures are calculated and discussed: the full-width at half maximum, and the size defined from the spatial frequency at which the modulation transfer function falls to half its peak value.
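A full-width-at-half-maximum estimate from a sampled spot profile can be sketched as follows, interpolating linearly at the two half-maximum crossings (a minimal sketch, not the facility's actual analysis code):

```python
def fwhm(xs, ys):
    """Full-width at half maximum of a sampled 1-D profile, with linear
    interpolation at the half-maximum crossings."""
    half = max(ys) / 2.0

    def crossing(pairs):
        # first index pair whose values straddle the half-maximum level
        for i, j in pairs:
            if (ys[i] - half) * (ys[j] - half) <= 0 and ys[i] != ys[j]:
                t = (half - ys[i]) / (ys[j] - ys[i])
                return xs[i] + t * (xs[j] - xs[i])
        raise ValueError("profile never crosses half maximum")

    n = len(ys)
    left = crossing([(i, i + 1) for i in range(n - 1)])          # scan from left
    right = crossing([(i, i + 1) for i in range(n - 2, -1, -1)]) # scan from right
    return right - left
```

In practice the 2-D pinhole image is first reduced to a lineout (or its MTF computed), and the same crossing logic applies along each axis.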
NASA Astrophysics Data System (ADS)
Abbaszadeh, Shiva; Karim, Karim S.; Karanassios, Vassili
2013-05-01
Traditionally, samples are collected on-site (i.e., in the field) and are shipped to a lab for chemical analysis. An alternative is offered by portable chemical analysis instruments that can be used on-site. Many analytical measurements by optical emission spectrometry require light sources and spectral lines that are in the ultraviolet (UV, ~200 nm - 400 nm wavelength) region of the spectrum. For such measurements, a portable, battery-operated, fiber-optic spectrometer equipped with an un-cooled, linear, solid-state detector may be used. To take full advantage of the advanced measurement capabilities offered by state-of-the-art solid-state detectors, cooling of the detector is required. But cooling and other thermal management hamper portability and use on-site because they add size and weight and increase electrical power requirements. To address these considerations, an alternative was implemented, as described here. Specifically, a microfabricated solid-state detector for measurement of UV photons will be described. Unlike solid-state detectors developed on crystalline silicon, this miniaturized and low-cost detector utilizes amorphous selenium (a-Se) as its photosensitive material. Due to its low dark current, this detector does not require cooling, and it is thus better suited for portable use and for chemical measurements on-site. In this paper, a microplasma is used as a light source of UV photons for the a-Se detector. Spectra acquired using the microplasma light source are compared with those obtained with a portable, fiber-optic spectrometer equipped with a Si-based 2080-element detector. Analytical performance obtained by introducing ng-amounts of analytes into the microplasma will also be described.
Investigating the unification of LOFAR-detected powerful AGN in the Boötes field
NASA Astrophysics Data System (ADS)
Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.
2017-08-01
Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 1025.5 W Hz-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
Performance of a Low Activity Beta-Sensitive Sr-90 Water Monitor for Fukushima
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zickefoose, J.; Bronson, F.; Ilie, G.
There are large volumes of contaminated water from the stabilization efforts at the damaged Fukushima nuclear power plants. This water is being processed to remove radioactivity for eventual release to the environment. The system operator requires an on-line, continuously operating system to confirm that the clean-up system is working properly and to provide prompt feedback of the results. While gamma-emitting nuclides allow for the straightforward approach of gamma spectroscopy to identify and quantify radioactivity in water, pure beta-emitting nuclides such as Sr-90 pose a challenging problem. The relatively short range of beta radiation in water requires optimization of the measurement geometry in terms of the source-detector distance and source-detector interface while retaining a background sensitivity low enough to meet the Minimum Detectable Concentration (MDC) of 10 Bq/kg in 180 minutes. This issue is complicated by the continuum nature of the beta spectrum, which does not allow for simple nuclide identification. The use of the Monte Carlo code MCNP to estimate system performance before prototyping vastly increases the success of the end product. Various parameters such as detector size and thickness, water chamber size, and water chamber construction materials were evaluated to help choose the optimum geometry. The final design was a system consisting of two large-area (16 x 35 cm), thin (0.15 mm) plastic scintillators placed very close to a sealed water chamber. The size of the chamber was optimized to obtain the maximum efficiency for the nuclide being measured (Sr/Y-90) but to minimize the efficiency for possible interferences (Ru/Rh-106, Cs-137). A thin carbon fiber window was selected with adequate material and thickness to contain the water under pressure, but also thin enough (0.5 mm) to allow enough beta radiation to pass through to the active detector volume. The entire measurement geometry is then housed in a thick lead shield to reduce contributions from external sources to an acceptable level. Data acquisition is accomplished through customized application-specific software that allows for long counting times to attain a low MDC, but also simultaneously provides alarms on short averaging times to achieve a fast response to sudden changes in activity concentration. Multiple monitors are then linked to supervisory software where real-time data and alarms are available for analysis in remote locations. The system also allows for remote operation of the unit; check sources, background checks, system settings and more may be accessed remotely. Testing of the production devices has shown that we can achieve the 10 Bq/kg MDC requirement for Sr-90 in equilibrium with Y-90 with a count time of approximately 20 minutes. (authors)
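MDC requirements like the one quoted above are conventionally evaluated with a Currie-style detection limit. A minimal sketch; the L_D = 2.71 + 4.65√B form is the standard Currie approximation, and the parameter values in the usage note are illustrative, not figures from this system:

```python
import math

def mdc_bq_per_kg(bkg_cps, eff, t_s, mass_kg):
    """Currie-style Minimum Detectable Concentration.

    bkg_cps  -- background count rate (counts/s)
    eff      -- net counting efficiency (counts per decay)
    t_s      -- counting time (s)
    mass_kg  -- water mass in the sensitive volume (kg)
    """
    b = bkg_cps * t_s                    # expected background counts
    ld = 2.71 + 4.65 * math.sqrt(b)      # detection limit in counts
    return ld / (eff * t_s * mass_kg)    # Bq/kg
```

Since L_D grows only as √t while the denominator grows as t, the MDC falls roughly as 1/√t, which is why a longer count time (or a lower background, hence the lead shield) drives the achievable MDC down.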
Kim, Yong Ho; Krantz, Q Todd; McGee, John; Kovalcik, Kasey D; Duvall, Rachelle M; Willis, Robert D; Kamal, Ali S; Landis, Matthew S; Norris, Gary A; Gilmour, M Ian
2016-11-01
The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM₂.₅) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers, and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ∼2, ∼7, and ∼3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO₃⁻ levels in winter and high SO₄²⁻ levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., steel plant and coal-fired power plant), but their contributions were influenced by meteorological conditions and the emission sources' operating conditions. Taken together, the year-long PM collection and data analysis provide valuable insights into the characteristics and sources of PM impacting the Cleveland airshed in both the urban center and the rural upwind background locations. These data will be used to classify the PM samples for toxicology studies to determine which PM sources, species, and size fractions are of greatest health concern. Copyright © 2016 Elsevier Ltd. All rights reserved.
Physics and engineering design of the accelerator and electron dump for SPIDER
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.
2011-06-01
The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full-size ion source with low voltage extraction called SPIDER and a full-size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H⁻ and in a later stage D⁻ ions) from an ITER-size ion source. The main requirements of this experiment are a H⁻/D⁻ extracted current density larger than 355/285 A m⁻², an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.
Open-Source Conceptual Sizing Models for the Hyperloop Passenger Pod
NASA Technical Reports Server (NTRS)
Chin, Jeffrey C.; Gray, Justin S.; Jones, Scott M.; Berton, Jeffrey J.
2015-01-01
Hyperloop is a new mode of transportation proposed as an alternative to California's high speed rail project, with the intended benefits of higher performance at lower overall costs. It consists of a passenger pod traveling through a tube under a light vacuum and suspended on air bearings. The pod travels up to transonic speeds, resulting in a 35 minute travel time on the intended route between Los Angeles and San Francisco. Of the two variants outlined, the smaller system includes a 1.1 meter tall passenger capsule traveling through a 2.2 meter tube at 700 miles per hour. The passenger pod features water-based heat exchangers as well as an on-board compression system that reduces the aerodynamic drag as it moves through the tube. Although the original proposal looks very promising, it assumes that tube and pod dimensions are independently sizable without fully acknowledging the constraints of the compressor system on the pod geometry. This work focuses on the aerodynamic and thermodynamic interactions between the two largest systems: the tube and the pod. Using open-source toolsets, a new sizing method is developed based on one-dimensional thermodynamic relationships that accounts for the strong interactions between these sub-systems. These additional considerations require a tube nearly twice the size originally considered and limit the maximum pod travel speed to about 620 miles per hour. Although the results indicate that Hyperloop will need to be larger and slightly slower than originally intended, the estimated travel time only increases by approximately five minutes, so the overall performance is not dramatically affected. In addition, the proposed on-board heat exchanger is not an ideal solution to achieve reasonable equilibrium air temperatures within the tube. Removal of this subsystem represents a potential reduction in weight, energy requirements and complexity of the pod. In light of these findings, the core concept remains a compelling possibility, although additional engineering and economic analyses are certainly needed before a more complete design can be developed.
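One way to see the tube-pod coupling that drives the resizing is the classic Kantrowitz (choking) limit from quasi-one-dimensional compressible flow: above a blockage-dependent Mach number, the annular gap between pod and tube can no longer pass the oncoming flow. A hedged sketch under isentropic, γ = 1.4 assumptions (illustrative, not the paper's actual sizing model):

```python
GAMMA = 1.4

def area_ratio(mach):
    """Isentropic A/A* as a function of Mach number (quasi-1-D, gamma=1.4)."""
    g = GAMMA
    term = (2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * mach ** 2)
    return term ** ((g + 1.0) / (2.0 * (g - 1.0))) / mach

def kantrowitz_limit_mach(blockage):
    """Highest subsonic pod Mach number before the pod-tube gap chokes,
    where blockage = A_pod / A_tube. Bisection on the subsonic branch,
    where A/A* decreases monotonically toward 1 at Mach 1."""
    target = 1.0 / (1.0 - blockage)  # tube-to-gap area ratio the flow must tolerate
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid) > target:
            lo = mid  # still below the choking limit
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The trade-off is direct: a fatter pod (or smaller tube) raises the blockage and pushes the choking limit to lower speeds, which is why relieving it requires either a larger tube or an on-board compressor that swallows part of the flow.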
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Duderstadt, E. C.; Wein, D.; Titran, R. H.
1978-01-01
A Mini Brayton space power generation system required the development of a Columbium alloy heat exchanger to transfer heat from a radioisotope heat source to a He/Xe working fluid. A light-weight design featured the simultaneous diffusion welding of 148 longitudinal fins in an annular heat exchanger about 9-1/2 in. in diameter, 13-1/2 in. in length and 1/4 in. in radial thickness. To complete the heat exchanger, additional gas ducting elements and attachment supports were added by GTA welding in a vacuum-purged inert atmosphere welding chamber. The development required the modification of an existing large size hot isostatic press to achieve HIP capabilities of 2800 F and 10,000 psi for at least 3 hr. Excellent diffusion welds were achieved in a high-quality component which met all system requirements.
Compact Laser System for Field Deployable Ultracold Atom Sensors
NASA Astrophysics Data System (ADS)
Pino, Juan; Luey, Ben; Anderson, Mike
2013-05-01
As ultracold atom sensors begin to make their way to the field, there is a growing need for small, accurate, and robust laser systems to cool and manipulate atoms for sensing applications such as magnetometers, gravimeters, atomic clocks and inertial sensors. In this poster we present a laser system for Rb, roughly the size of a paperback novel, capable of generating and controlling light sufficient for the most complicated of cold atom sensors. The system includes >100 dB of non-mechanical optical shuttering, the ability to create short, microsecond pulses, a demux stage to port light onto different optical paths, and an atomically referenced, frequency-agile laser source. We will present data on the system and its Size, Weight and Power (SWaP) requirements, as well as laser stability and performance. This work was funded under DARPA.
NASA Astrophysics Data System (ADS)
Gallovič, F.
2017-09-01
Strong ground motion simulations require physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From earthquake physics point of view, the model includes positive correlation between slip and rise time as found in dynamic source simulations. Rupture velocity and rise time follows local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data due to the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces well the observed data including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on the scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. 
I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinement of GMPEs' functional forms.
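The fractal subsource construction described in this abstract lends itself to a short sketch. The following is an illustrative reconstruction, not the authors' code: subsource radii are drawn from a fractal number-size distribution N(>R) ∝ R^-D with D = 2 (the scaling used in omega-squared composite source models); the radius bounds and subsource count are assumed values.

```python
import numpy as np

def draw_subsource_radii(n, r_min, r_max, dim=2.0, seed=0):
    """Draw subsource radii from a fractal number-size distribution
    N(>R) ~ R**-dim, truncated to [r_min, r_max], via inverse-CDF
    sampling of the density p(R) ~ R**-(dim + 1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a, b = r_min ** -dim, r_max ** -dim
    return (a - u * (a - b)) ** (-1.0 / dim)

# dim=2 gives the number-size scaling of omega-squared composite
# source models; the bounds (in km) are illustrative only
radii = draw_subsource_radii(5000, r_min=0.5, r_max=10.0)
```

Inverse-CDF sampling keeps the subsource count exact; rejection sampling would work equally well for constrained subsource placements.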
AIRBORNE PARTICLE SIZES AND SOURCES FOUND IN INDOOR AIR
The paper summarizes results of a literature search into the sources, sizes, and concentrations of particles in indoor air, including the various types: plant, animal, mineral, combustion, home/personal care, and radioactive aerosols. This information, presented in a summary figu...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.
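The Hall–Petch-type relation invoked above has a simple closed form, σ_y = σ₀ + k·d^(-1/2), with the governing length scale here being a dislocation-source length rather than a grain size. The sketch below uses purely illustrative constants, not the paper's fitted values:

```python
import math

def hall_petch(d_um, sigma0=30.0, k=120.0):
    """Hall-Petch-type size dependence of yield strength:
    sigma_y = sigma0 + k / sqrt(d).  Here d is the governing
    characteristic length (micrometers); sigma0 (MPa) and
    k (MPa*um^0.5) are illustrative constants, not fitted values."""
    return sigma0 + k / math.sqrt(d_um)

# strength rises steeply as the characteristic length shrinks
for d in (10.0, 1.0, 0.1):
    print(d, hall_petch(d))
```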
Rostad, C.E.; Rees, T.F.; Daniel, S.R.
1998-01-01
An on-board technique was developed that combined discharge-weighted pumping to a high-speed continuous-flow centrifuge for isolation of the particulate-sized material with ultrafiltration for isolation of colloid-sized material. To address whether these processes changed the particle sizes during isolation, samples of particles in suspension were collected at various steps in the isolation process to evaluate changes in particle size. Particle sizes were determined using laser light-scattering photon correlation spectroscopy and indicated no change in size during the colloid isolation process. Mississippi River colloid particle sizes from twelve sites from Minneapolis to below New Orleans were compared with sizes from four tributaries and three seasons, and from predominantly autochthonous sources upstream to more allochthonous sources downstream. © 1998 John Wiley & Sons, Ltd.
Head-mounted LED for optogenetic experiments of freely-behaving animal
NASA Astrophysics Data System (ADS)
Kwon, Ki Yong; Gnade, Andrew G.; Rush, Alexander D.; Patten, Craig D.
2016-03-01
Recent developments in optogenetics have demonstrated the ability to target specific types of neurons with sub-millisecond temporal precision via direct optical stimulation of genetically modified neurons in the brain. In most applications, the beam of a laser is coupled to an optical fiber, which guides and delivers the optical power to the region of interest. Light emitting diodes (LEDs) are an alternative light source for optogenetics, and they provide many advantages over a laser-based system including cost, size, illumination stability, and fast modulation. Their compact size and low power consumption make LEDs suitable light sources for a wireless optogenetic stimulation system. However, the coupling efficiency of an LED's output light into an optical fiber is lower than that of a laser due to its noncollimated output light. In a typical chronic optogenetic experiment, the output of the light source is transmitted to the brain through a patch cable and a fiber stub implant, and this configuration requires two fiber-to-fiber couplings. Attenuation within the patch cable is a potential source of optical power loss. In this study, we report and characterize a recently developed light delivery method for freely-behaving animal experiments. We have developed a head-mounted light source that maximizes the coupling efficiency of an LED light source by eliminating the need for a fiber optic cable. This miniaturized LED is designed to couple directly to the fiber stub implant. Depending on the desired optical power output, the head-mounted LED can be controlled by either a tethered (high power) or battery-powered wireless (moderate power) controller. In the tethered system, the LED is controlled through a 40-gauge micro-coaxial cable which is thinner, more flexible, and more durable than a fiber optic cable. The battery-powered wireless system uses either infrared or radio frequency transmission to achieve real-time control. 
Optical, electrical, mechanical, and thermal characteristics of the head-mounted LED were evaluated.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
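The Tikhonov-regularized minimum-norm step at the heart of this lambda comparison can be sketched in a few lines. This is a generic toy implementation with a random leadfield and one active source, not the authors' simulation pipeline; the regularization enters through the parameter `lam`:

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam):
    """Tikhonov-regularized minimum-norm inverse:
    s = L^T (L L^T + lam * I)^-1 b  (lam absorbs the noise scaling)."""
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, data)

# toy example: 10 sensors, 50 candidate sources, one active source
rng = np.random.default_rng(1)
L = rng.standard_normal((10, 50))
s_true = np.zeros(50)
s_true[7] = 1.0
b = L @ s_true + 0.01 * rng.standard_normal(10)
s_hat = minimum_norm_estimate(L, b, lam=0.1)
```

Sweeping `lam` in this sketch is the analogue of the lambda search described in the abstract: a small `lam` fits the measured data closely, while a large `lam` smooths the source estimate.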
Accelerator-driven transmutation of spent fuel elements
Venneri, Francesco; Williamson, Mark A.; Li, Ning
2002-01-01
An apparatus and method is described for transmuting higher actinides, plutonium and selected fission products in a liquid-fuel subcritical assembly. Uranium may also be enriched, thereby providing new fuel for use in conventional nuclear power plants. An accelerator provides the additional neutrons required to perform the processes. The size of the accelerator needed to complete fuel cycle closure depends on the neutron efficiency of the supported reactors and on the neutron spectrum of the actinide transmutation apparatus. Treatment of spent fuel from light water reactors (LWRs) using uranium-based fuel will require the largest accelerator power, whereas neutron-efficient high temperature gas reactors (HTGRs) or CANDU reactors will require the smallest accelerator power, especially if thorium is introduced into the newly generated fuel according to the teachings of the present invention. Fast spectrum actinide transmutation apparatus (based on liquid-metal fuel) will take full advantage of the accelerator-produced source neutrons and provide maximum utilization of the actinide-generated fission neutrons. However, near-thermal transmutation apparatus will require lower standing
NASA Astrophysics Data System (ADS)
Khizhanok, Andrei
Development of a compact source of high-spectral-brilliance, high-pulse-frequency gamma rays has been within the scope of Fermi National Accelerator Laboratory for quite some time. The main goal of the project is to develop a setup to support gamma-ray detection tests and gamma-ray spectroscopy. Potential applications include, but are not limited to, nuclear astrophysics, nuclear medicine, and oncology (the 'gamma knife'). The present work covers multiple interconnected stages of development of the interaction region to ensure high levels of structural strength and vibrational resistance. Inverse Compton scattering is a complex phenomenon in which a charged particle transfers part of its energy to a photon. It requires extreme precision, as the interaction point is estimated to be 20 μm across. The slightest deflection of the mirrors will reduce the conversion effectiveness by orders of magnitude. For acceptable conversion efficiency, the laser cavity must also have a finesse value >1000, which requires a trade-off between the size, mechanical stability, complexity, and price of the setup. This work focuses on the advantages and weak points of different interaction-region designs, as well as an in-depth description of the analyses performed. These include laser cavity amplification and finesse estimates, natural frequency mapping, and harmonic analysis. Structural analysis is required because the interaction must occur under high vacuum conditions.
Whitman, Richard L.; Nevers, Meredith B.
2004-01-01
Monitoring beaches for recreational water quality is becoming more common, but few sampling designs or policy approaches have evaluated the efficacy of monitoring programs. The authors intensively sampled water for E. coli (N=1770) at 63rd Street Beach, Chicago for 6 months in 2000 in order to (1) characterize spatial-temporal trends, (2) determine between- and within-transect variation, and (3) estimate sample size requirements and determine sampling reliability. E. coli counts were highly variable within and between sampling sites but spatially and diurnally autocorrelated. Variation in counts decreased with water depth and time of day. The required number of samples for 70% precision around the critical closure level was high (i.e., 6 within-transect or 24 between-transect replicates). Since spatial replication may be cost prohibitive, composite sampling is an alternative once sources of error have been well defined. The results suggest that beach monitoring programs may be requiring too few samples to fulfill the desired management objectives. As the national recreational water quality database is developed, it is important that sampling strategies are empirically derived from a thorough understanding of the sources of variation and the reliability of collected data. Greater monitoring efficacy will yield better policy decisions, risk assessments, programmatic goals, and future usefulness of the information.
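As a rough illustration of why replicate requirements grow so quickly for highly variable counts like these, a normal-theory sample-size formula can be used; the coefficient of variation and precision below are assumed values, not the study's estimates:

```python
import math

def n_samples(cv, rel_precision, z=1.96):
    """Replicates needed for the sample mean to fall within
    +/- rel_precision of the true mean with ~95% confidence:
    n = (z * CV / rel_precision)^2 (normal-theory approximation;
    CV = sd/mean, illustrative only)."""
    return math.ceil((z * cv / rel_precision) ** 2)

# highly variable counts (CV ~ 0.8) at 30% allowable relative error
print(n_samples(cv=0.8, rel_precision=0.3))  # -> 28
```

Tightening the allowable error (or raising the CV) drives the required replicate count up quadratically, which is the pattern the abstract reports.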
Sohn, Martin Y; Barnes, Bryan M; Silver, Richard M
2018-03-01
Accurate optics-based dimensional measurements of features sized well below the diffraction limit require a thorough understanding of the illumination within the optical column and of the three-dimensional scattered fields that contain the information required for quantitative metrology. Scatterfield microscopy can pair simulations with angle-resolved tool characterization to improve agreement between the experiment and calculated libraries, yielding sub-nanometer parametric uncertainties. Optimized angle-resolved illumination requires bi-telecentric optics, in which a telecentric sample plane is defined by a Köhler illumination configuration together with a telecentric conjugate back focal plane (CBFP) of the objective lens; scanning an aperture or an aperture source at the CBFP allows control of the illumination beam angle at the sample plane with minimal distortion. A bi-telecentric illumination system has been designed that enables angle-resolved illumination in both aperture- and source-scanning modes while yielding low distortion and chief-ray parallelism. The optimized design features a maximum chief-ray angle at the CBFP of 0.002° and maximum wavefront deviations of less than 0.06 λ for angle-resolved illumination beams at the sample plane, holding promise for high-quality angle-resolved illumination for improved measurements of deep-subwavelength structures using deep-ultraviolet light.
Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm
Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed
2008-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while also overcoming the physics limit. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
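The calibration step, fitting a parametric curve to a rotating-pin sinogram with a downhill-simplex ("Amoeba") search, can be sketched with synthetic data. The simple sinusoidal pin trace and all parameter values here are illustrative, not the VRX geometry model:

```python
import numpy as np
from scipy.optimize import minimize

def pin_trace(params, angles):
    """Idealized pin sinogram: detector position of a rotating pin
    at radius r, phase phi, about detector-center offset `center`."""
    r, phi, center = params
    return center + r * np.sin(angles + phi)

# synthetic "measured" sinogram from a known geometry, plus noise
angles = np.linspace(0, 2 * np.pi, 180, endpoint=False)
truth = (12.0, 0.7, 64.0)
rng = np.random.default_rng(0)
measured = pin_trace(truth, angles) + 0.05 * rng.standard_normal(angles.size)

# Nelder-Mead ("Amoeba") minimization of the sum-of-squares misfit
cost = lambda p: np.sum((pin_trace(p, angles) - measured) ** 2)
fit = minimize(cost, x0=(10.0, 0.0, 60.0), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
```

Sensitivity to starting conditions, noted in the abstract, shows up directly here: `x0` must land in the basin of the correct minimum for the simplex to recover the true parameters.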
NASA Astrophysics Data System (ADS)
Sakamoto, Kimiko M.; Laing, James R.; Stevens, Robin G.; Jaffe, Daniel A.; Pierce, Jeffrey R.
2016-06-01
Biomass-burning aerosols have a significant effect on global and regional aerosol climate forcings. To model the magnitude of these effects accurately requires knowledge of the size distribution of the emitted and evolving aerosol particles. Current biomass-burning inventories do not include size distributions, and global and regional models generally assume a fixed size distribution for all biomass-burning emissions. However, biomass-burning size distributions evolve in the plume due to coagulation and net organic aerosol (OA) evaporation or formation, and the plume processes occur on spatial scales smaller than global/regional-model grid boxes. The extent of this size-distribution evolution is dependent on a variety of factors relating to the emission source and atmospheric conditions. Therefore, accurately accounting for biomass-burning aerosol size in global models requires an effective aerosol size distribution that accounts for this sub-grid evolution and can be derived from available emission-inventory and meteorological parameters. In this paper, we perform a detailed investigation of the effects of coagulation on the aerosol size distribution in biomass-burning plumes. We compare the effect of coagulation to that of OA evaporation and formation. We develop coagulation-only parameterizations for effective biomass-burning size distributions using the SAM-TOMAS large-eddy simulation plume model. For the most sophisticated parameterization, we use the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) to build a parameterization of the aged size distribution based on the SAM-TOMAS output and seven inputs: emission median dry diameter, emission distribution modal width, mass emissions flux, fire area, mean boundary-layer wind speed, plume mixing depth, and time/distance since emission. This parameterization was tested against an independent set of SAM-TOMAS simulations and yields R2 values of 0.83 and 0.89 for Dpm and modal width, respectively. 
The size distribution is particularly sensitive to the mass emissions flux, fire area, wind speed, and time, and we provide simplified fits of the aged size distribution to just these input variables. The simplified fits were tested against 11 aged biomass-burning size distributions observed at the Mt. Bachelor Observatory in August 2015. The simple fits captured over half of the variability in observed Dpm and modal width even though the freshly emitted Dpm and modal widths were unknown. These fits may be used in global and regional aerosol models. Finally, we show that coagulation generally leads to greater changes in the particle size distribution than OA evaporation/formation does, using estimates of OA production/loss from the literature.
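The dominant sub-grid process identified here, coagulation, can be illustrated with the monodisperse constant-kernel Smoluchowski solution. This toy model is far simpler than the SAM-TOMAS simulations, and the plume parameters below are assumed orders of magnitude, not values from the paper:

```python
def coagulate(n0, d0, kernel, t):
    """Monodisperse Smoluchowski coagulation with constant kernel K:
    dN/dt = -0.5 * K * N**2  ->  N(t) = n0 / (1 + 0.5 * K * n0 * t).
    Mass conservation then grows the diameter as d = d0 * (n0/N)**(1/3)."""
    n_t = n0 / (1.0 + 0.5 * kernel * n0 * t)
    d_t = d0 * (n0 / n_t) ** (1.0 / 3.0)
    return n_t, d_t

# dense young smoke plume (assumed): 1e5 cm^-3 of 100 nm particles,
# constant kernel K ~ 1e-9 cm^3 s^-1, aged for one hour
n_t, d_t = coagulate(n0=1e5, d0=100.0, kernel=1e-9, t=3600.0)
```

Even this crude estimate shows the key qualitative behavior: number concentration falls and the median diameter grows with plume age, faster for denser plumes, which is why mass emissions flux and fire area dominate the fitted parameterization.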
Testing exposure of a jet engine to a dilute volcanic-ash cloud
NASA Astrophysics Data System (ADS)
Guffanti, M.; Mastin, L. G.; Schneider, D. J.; Holliday, C. R.; Murray, J. J.
2013-12-01
An experiment to test the effects of volcanic-ash ingestion by a jet engine is being planned for 2014 by a consortium of U.S. Government agencies and engine manufacturers, under the auspices of NASA's Vehicle Integrated Propulsion Research Program. The experiment, using a 757-type engine, will be an on-ground, on-wing test carried out at Edwards Air Force Base, California. The experiment will involve the use of advanced jet-engine sensor technology for detecting and diagnosing engine health. A primary test objective is to determine the effect on the engine of many hours of exposure to ash concentrations (1 and 10 mg/cu m) representative of ash clouds many hundreds to >1000 km from a volcanic source, an aviation environment of great interest since the 2010 Eyjafjallajökull, Iceland, eruption. A natural volcanic ash will be used; candidate sources are being evaluated. Data from previous ash/aircraft encounters, as well as published airborne measurements of the Eyjafjallajökull ash cloud, suggest the ash used should be composed primarily of glassy particles of andesitic to rhyolitic composition (SiO2 of 57-77%), with some mineral crystals, and a few tens of microns in size. Collected ash will be commercially processed to less than 63 microns in size, with the expectation that the ash particles will be further pulverized to smaller sizes in the engine during the test. For the nominally planned 80-hour test at multiple ash-concentration levels, roughly 500 kg of processed (appropriately sized) ash will need to be introduced into the engine core. Although volcanic ash clouds commonly contain volcanic gases such as sulfur dioxide, testing will not include volcanic gas or aerosol interactions, as these present complex processes beyond the scope of the planned experiment. The viscous behavior of ash particles in the engine is a key issue in the experiment. 
The small glassy ash particles are expected to soften in the engine's hot combustion chamber, then stick to cooler parts of the turbine. Composition (primarily silica content) and dissolved water content, both of which affect the softening temperature of silicate melts, will be taken into account when evaluating candidate ash sources, although the practicalities of collecting, shipping, and processing a substantial amount of ash are a major decision factor in source selection.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariantly lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariantly increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
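The reverse catalytic model with a change point that underlies such a calculator can be sketched as follows. The piecewise solution is a generic reconstruction of the standard model (dP/da = λ(1−P) − ρP, with λ dropping from λ1 to λ2 at the change point), and the rates below are illustrative, not estimates from the paper:

```python
import numpy as np

def seroprev(age, lam1, lam2, rho, change):
    """Reverse catalytic age-seroprevalence for a survey taken
    `change` years after the seroconversion rate dropped from
    lam1 to lam2 (seroreversion rate rho).  Piecewise solution of
    dP/da = lam*(1 - P) - rho*P."""
    age = np.asarray(age, dtype=float)
    eq1, eq2 = lam1 / (lam1 + rho), lam2 / (lam2 + rho)
    # exposure before the change: (age - change) years at rate lam1
    pre = eq1 * (1.0 - np.exp(-(lam1 + rho) * np.clip(age - change, 0, None)))
    # then min(age, change) years at the reduced rate lam2
    dt = np.minimum(age, change)
    return eq2 + (pre - eq2) * np.exp(-(lam2 + rho) * dt)

ages = np.arange(1, 41)
p_drop = seroprev(ages, lam1=0.1, lam2=0.02, rho=0.01, change=5.0)
p_flat = seroprev(ages, lam1=0.1, lam2=0.1, rho=0.01, change=5.0)
```

Simulating binomial serostatus data from `p_drop` versus `p_flat` and comparing the fitted likelihoods is the core of the simulation-based power calculation the abstract describes.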
What is the effect of area size when using local area practice style as an instrument?
Brooks, John M; Tang, Yuexin; Chapman, Cole G; Cook, Elizabeth A; Chrischilles, Elizabeth A
2013-08-01
Discuss the tradeoffs inherent in choosing a local area size when using a measure of local area practice style as an instrument in instrumental variable estimation when assessing treatment effectiveness. Assess the effectiveness of angiotensin converting-enzyme inhibitors and angiotensin receptor blockers on survival after acute myocardial infarction for Medicare beneficiaries using practice style instruments based on different-sized local areas around patients. We contrasted treatment effect estimates using different local area sizes in terms of the strength of the relationship between local area practice styles and individual patient treatment choices; and indirect assessments of the assumption violations. Using smaller local areas to measure practice styles exploits more treatment variation and results in smaller standard errors. However, if treatment effects are heterogeneous, the use of smaller local areas may increase the risk that local practice style measures are dominated by differences in average treatment effectiveness across areas and bias results toward greater effectiveness. Local area practice style measures can be useful instruments in instrumental variable analysis, but the use of smaller local area sizes to generate greater treatment variation may result in treatment effect estimates that are biased toward higher effectiveness. Assessment of whether ecological bias can be mitigated by changing local area size requires the use of outside data sources. Copyright © 2013 Elsevier Inc. All rights reserved.
Negative ion-driven associated particle neutron generator
Antolak, A. J.; Leung, K. N.; Morse, D. H.; ...
2015-10-09
We describe an associated particle neutron generator that employs a negative ion source to produce high neutron flux from a small source size. Negative ions produced in an rf-driven plasma source are extracted through a small aperture to form a beam which bombards a positively biased, high-voltage target electrode. Electrons co-extracted with the negative ions are removed by a permanent-magnet electron filter. The use of negative ions enables high neutron output (100% atomic ion beam), high-quality imaging (small neutron source size), and reliable operation (no high-voltage breakdowns). The neutron generator can operate in either pulsed or continuous-wave (cw) mode and has been demonstrated to produce 10^6 D-D n/s (equivalent to ~10^8 D-T n/s) from a 1 mm-diameter neutron source size to facilitate high-fidelity associated particle imaging.
Fabrication of Pt nanowires with a diffraction-unlimited feature size by high-threshold lithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Li, E-mail: lil@cust.edu.cn, E-mail: wangz@cust.edu.cn, E-mail: kq-peng@bnu.edu.cn; Zhang, Ziang; Yu, Miao
2015-09-28
Although the nanoscale world can already be observed at a diffraction-unlimited resolution using far-field optical microscopy, making the step from microscopy to lithography still requires a suitable photoresist material system. In this letter, we consider the threshold to be a region with a width characterized by the extreme feature size obtained using a Gaussian beam spot. By narrowing such a region through improvement of the threshold sensitization to intensity in a high-threshold material system, the minimal feature size becomes smaller. By using platinum as the negative photoresist, we demonstrate that high-threshold lithography can be used to fabricate nanowire arrays with a scalable resolution along the axial direction of the linewidth from the micro- to the nanoscale using a nanosecond-pulsed laser source with a wavelength λ0 = 1064 nm. The minimal feature size is only several nanometers (below λ0/100). Compared with conventional polymer resist lithography, the advantages of high-threshold lithography are sharper pinpoints of laser intensity triggering the threshold response and also higher robustness allowing for large-area exposure by a less-expensive nanosecond-pulsed laser.
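The diffraction-unlimited feature size follows from the threshold response to a Gaussian spot: only the region where the fluence F(r) = F0·exp(−2r²/w²) exceeds the threshold Fth is written, giving a linewidth d = w·sqrt(2·ln(F0/Fth)) that shrinks to zero as F0 approaches Fth. The sketch below uses illustrative numbers, not the experimental parameters:

```python
import math

def threshold_linewidth(w, peak, threshold):
    """Width of the region of a Gaussian spot F(r) = peak*exp(-2 r^2/w^2)
    that exceeds the response threshold:
    d = w * sqrt(2 * ln(peak/threshold)).
    Shrinks far below the spot size w as peak approaches threshold."""
    if peak <= threshold:
        return 0.0
    return w * math.sqrt(2.0 * math.log(peak / threshold))

# a 1064 nm source focused to an (assumed) 1 um 1/e^2 spot radius,
# operated just 0.1% above threshold
print(threshold_linewidth(1000.0, peak=1.001, threshold=1.0))  # ~45 nm
```

This is why threshold sharpness, rather than the focused spot size, sets the resolution floor in high-threshold lithography.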
Muddled or mixed? Inferring palaeoclimate from size distributions of deep-sea clastics
NASA Astrophysics Data System (ADS)
Weltje, G. J.; Prins, M. A.
2003-04-01
One of the outstanding problems of palaeoclimate reconstruction from physico-chemical properties of terrigenous deep-sea sediments is the fact that most basin fills are mixtures of sediment populations derived from different sources and transported to the site of deposition by different mechanisms. Conventional approaches to palaeoclimate reconstruction from deep-sea sediments, which ignore this common fact, often fail to recognise the true significance of variations in sediment properties. We formulate a set of requirements that each proposed palaeoenvironmental indicator should fulfil, and focus on the intrinsic coupling between grain size and chemical composition. A critical review of past achievements in grain-size analysis is given to provide a starting point for a conceptual model of spatio-temporal grain-size variation in terms of dynamic populations. Each dynamic population results from a characteristic combination of production and transport mechanisms that corresponds to a distinct subpopulation in the data analysed. The mathematical-statistical equivalent of the conceptual model may be solved by means of the end-member modelling algorithm EMMA. Applications of the model to several ocean basins are discussed, as well as methods to examine the validity of the palaeoclimate reconstructions.
Chemical Composition and Source Apportionment of Size ...
The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers, and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ~2, ~7, and ~3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO3- levels in winter and high SO42- levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., steel plant and coa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.
The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing 137Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 and 40 mm in height and between 8 and 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg in July 2006. The obtained results were better for the sources with 29 mm radius, showing relative bias less than 5%, and for the sources with 10 mm height, showing relative bias less than 6%. In comparison with the results obtained in the work where the method was presented, the majority of these results show excellent agreement.
Effective doping of low energy ions into superfluid helium droplets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Chen, Lei; Freund, William M.
2015-08-21
We report a facile method of doping cations from an electrospray ionization (ESI) source into superfluid helium droplets. By decelerating and stopping the ion pulse of reserpine and substance P from an ESI source in the path of the droplet beam, about 10{sup 4} ion-doped droplets (one ion per droplet) can be recorded, corresponding to a pickup efficiency of nearly 1 out of 1000 ions. We attribute the success of this simple approach to the long residence time of the cations in the droplet beam. The resulting size of the doped droplets, on the order of 10{sup 5} atoms/droplet, is measured using deflection and retardation methods. Our method does not require an ion trap in the doping region, which significantly simplifies the experimental setup and procedure for future spectroscopic and diffraction studies.
NASA Technical Reports Server (NTRS)
Castro, Stephanie L.; Bailey, Sheila G.; Raffaelle, Ryne P.; Banger, Kulbinder K.; Hepp, Aloysius F.
2002-01-01
Single-source precursors are molecules which contain all the necessary elements for synthesis of a desired material. Thermal decomposition of the precursor results in the formation of the material with the correct stoichiometry, as a nanocrystalline powder or a thin film. Nanocrystalline materials hold potential as components of next-generation photovoltaic (PV) devices. Presented here are the syntheses of CuInS2 and CuInSe2 nanocrystals from the precursors (PPh3)2CuIn(SEt)4 and (PPh3)2CuIn(SePh)4, respectively. The size of the nanocrystals varies with the reaction temperature; a minimum of 200 C is required for the formation of the smallest CuInS2 crystals (approximately 1.6 nm diameter); at 300 C, crystals are approximately 7 nm in diameter.
SOURCE STRENGTHS OF ULTRAFINE AND FINE PARTICLES DUE TO COOKING WITH A GAS STOVE
Cooking, particularly frying, is an important source of particles indoors. Few studies have measured a full range of particle sizes, including ultrafine particles, produced during cooking. In this study, semicontinuous instruments with fine size discriminating ability were us...
A determination of the mass of Sagittarius A* from its radio spectral and source size measurements
NASA Technical Reports Server (NTRS)
Melia, Fulvio; Jokipii, J. R.; Narayanan, Ajay
1992-01-01
There is growing evidence that Sgr A* may be a million solar mass black hole accreting from the Galactic center wind. A consideration of the spectral and source size characteristics associated with this process can offer at least two distinct means of inferring the mass M, complementing the more traditional dynamical arguments. We show that M is unmistakably correlated with both the radio spectral index and the critical wavelength below which the intrinsic source size dominates over the angular broadening due to scattering in the interstellar medium. Current observations can already rule out a mass much in excess of 2 x 10^6 solar masses and suggest a likely value close to 1 x 10^6 solar masses, in agreement with an earlier study matching the radio and high-energy spectral components. We anticipate that such a mass may be confirmed with the next generation of source-size observations using milliarcsecond angular resolution at 0.5 - 1 cm wavelengths.
Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G
2017-06-01
Long-term and synchronous monitoring of PM 10 and PM 2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM 10 , and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM 2.5 . Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM 10 , and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM 2.5 . The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the highest to both PM 10 (12.7%) and PM 2.5 (9.7%) in Chengdu, followed by the crustal dust from south-east (SE) for PM 10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM 2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.
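At its core, the receptor modeling described above factorizes a species-by-sample concentration matrix into non-negative source profiles and contributions. The following is a minimal unweighted NMF sketch using plain Lee-Seung multiplicative updates; it is a simplified stand-in for PMF, which additionally weights each matrix element by its measurement uncertainty, and all function and variable names are illustrative, not from the paper.

```python
import numpy as np

def nmf_factorize(X, k, n_iter=500, seed=0):
    """Minimal unweighted NMF sketch: X ~ G @ F with all entries non-negative.
    X is species-by-sample (or sample-by-species) data, k is the assumed
    number of source categories. PMF proper additionally divides residuals
    by per-element uncertainties; that weighting is omitted here."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 1e-3   # source contributions
    F = rng.random((k, m)) + 1e-3   # source profiles
    eps = 1e-12                     # guard against division by zero
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates for the Frobenius objective
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F
```

The non-negativity constraint is what makes the recovered factors interpretable as physical source profiles and contributions.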
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
Gravitationally bound BCS state as dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Stephon; Cormack, Sam, E-mail: stephon_alexander@brown.edu, E-mail: samuel.c.cormack.gr@dartmouth.edu
2017-04-01
We explore the possibility that fermionic dark matter undergoes a BCS transition to form a superfluid. This requires an attractive interaction between fermions and we describe a possible source of this interaction induced by torsion. We describe the gravitating fermion system with the Bogoliubov-de Gennes formalism in the local density approximation. We solve the Poisson equation along with the equations for the density and gap energy of the fermions to find a self-gravitating, superfluid solution for dark matter halos. In order to produce halos the size of dwarf galaxies, we require a particle mass of ∼ 200 eV. We find a maximum attractive coupling strength before the halo becomes unstable. If dark matter halos do have a superfluid component, this raises the possibility that they contain vortex lines.
VLBI observations of galactic nuclei at 18 centimeters - NGC 1052, NGC 4278, M82, and M104
NASA Technical Reports Server (NTRS)
Shaffer, D. B.; Marscher, A. P.
1979-01-01
Compact radio sources about a light year in size have been detected in the nuclei of the galaxies NGC 1052, NGC 3034 (M82), NGC 4278, and NGC 4594 (M104) at a wavelength of 18 cm. The compact nucleus detected in M81 at 6 cm was not seen at 18 cm. The compact source in M82 is unique among extragalactic sources in its size-spectrum relationship. It is either broadened by scattering within M82 or it lies behind, and is absorbed by, an H II region. In these galaxies, the size of the nuclear radio source at 18 cm is larger than it is at higher frequencies. The nucleus of the giant radio galaxy DA 240 was not detected.
Effects of changes in size, speed and distance on the perception of curved 3D trajectories
Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.
2012-01-01
Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
Harrison, Sean; Jones, Hayley E; Martin, Richard M; Lewis, Sarah J; Higgins, Julian P T
2017-09-01
Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. Copyright © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
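The contour construction the albatross plot relies on can be sketched for one simple assumed design: a two-sample comparison of means with equal group sizes under a normal approximation, where the standard error of the standardized mean difference d is roughly sqrt(4/n). The function name and the choice of design are illustrative assumptions, not the authors' implementation.

```python
from math import sqrt
from statistics import NormalDist

def effect_size_contour(p_one_sided, n_total):
    """Approximate standardized mean difference d that would produce the
    given one-sided P value in a two-sample comparison of means with
    equal group sizes (normal approximation): d = z_(1-p) * sqrt(4/n)."""
    z = NormalDist().inv_cdf(1.0 - p_one_sided)  # z-score for the P value
    return z * sqrt(4.0 / n_total)               # SE of d with n/2 per group
```

Evaluating this over a grid of sample sizes for fixed P values (e.g. 0.05, 0.01, 0.001) traces out the contours against which each study's (n, P) point is plotted, giving the approximate view of underlying effect sizes the abstract describes.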
Beam Stability R&D for the APS MBA Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sereno, Nicholas S.; Arnold, Ned D.; Bui, Hanh D.
2015-01-01
Beam diagnostics required for the APS multi-bend achromat (MBA) are driven by ambitious beam stability requirements. The major AC stability challenge is to correct rms beam motion to 10% of the rms beam size at the insertion device source points from 0.01 to 1000 Hz. The vertical plane represents the biggest challenge for AC stability, which is required to be 400 nm rms for a 4-micron vertical beam size. In addition to AC stability, long-term drift over a period of seven days is required to be 1 micron or less. Major diagnostics R&D components include improved rf beam position processing using commercially available FPGA-based BPM processors, new X-ray beam position monitors based on hard X-ray fluorescence from copper and Compton scattering off diamond, mechanical motion sensing to detect and correct long-term vacuum chamber drift, a new feedback system featuring a tenfold increase in sampling rate, and a several-fold increase in the number of fast correctors and BPMs in the feedback algorithm. Feedback system development represents a major effort, and we are pursuing development of a novel algorithm that integrates orbit correction for both slow and fast correctors down to DC simultaneously. Finally, a new data acquisition system (DAQ) is being developed to simultaneously acquire streaming data from all diagnostics as well as the feedback processors for commissioning and fault diagnosis. Results of studies and the design effort are reported.
Phase Imaging using Focusing Polycapillary Optics
NASA Astrophysics Data System (ADS)
Bashir, Sajid
The interaction of X rays in the diagnostic energy range with soft tissues can be described by Compton scattering and by the complex refractive index, which together characterize the attenuation properties of the tissue and the phase imparted to X rays passing through it. Many soft tissues exhibit extremely similar attenuation, so that their discrimination using conventional radiography, which generates contrast in an image through differential attenuation, is challenging. However, these tissues will impart phase differences significantly greater than attenuation differences to the X rays passing through them, so that phase-contrast imaging techniques can enable their discrimination. A major limitation to the widespread adoption of phase-contrast techniques is that phase contrast requires significant spatial coherence of the X-ray beam, which in turn requires specialized sources. For tabletop sources, this often requires a small (usually in the range of 10-50 micron) X-ray source. In this work, polycapillary optics were employed to create a small secondary source from a large-spot rotating anode. Polycapillary optics consist of arrays of small hollow glass tubes through which X rays can be guided by total internal reflection from the tube walls. By tapering the tubes to guide the X rays to a point, they can be focused to a small spot which can be used as a secondary source. The polycapillary optic was first aligned with the X-ray source. The spot size was measured using a computed radiography image plate. Images were taken at a variety of optic-to-object and object-to-detector distances, and phase-contrast edge enhancement was observed. Conventional absorption images were also acquired at small object-to-detector distances for comparison. Background division was performed to remove strong non-uniformity due to the optics. Differential phase contrast reconstruction demonstrates promising preliminary results. This manuscript is divided into six chapters.
The second chapter describes the limitations of conventional imaging methods and benefits of the phase imaging. Chapter three covers different types of X-ray photon interactions with matter. Chapter four describes the experimental set-up and different types of images acquired along with their analysis. Chapter five summarizes the findings in this project and describes future work as well.
Dependence of Microlensing on Source Size and Lens Mass
NASA Astrophysics Data System (ADS)
Congdon, A. B.; Keeton, C. R.
2007-11-01
In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars, and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.
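The source-size dependence described above can be illustrated in miniature with the standard single point-mass lens: the point-source magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) diverges as the source approaches the lens axis, and averaging A over a finite source disk smooths that peak. This is textbook microlensing, not the authors' random star-field simulation, and all names are illustrative.

```python
import math
import random

def point_lens_magnification(u):
    """Point-source magnification of a single point-mass lens at impact
    parameter u, in units of the Einstein radius."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def finite_source_magnification(u0, source_radius, n=20000, seed=1):
    """Monte Carlo average of the magnification over a uniform source disk
    of the given radius (Einstein radii) centered at impact parameter u0:
    a toy illustration of how finite source size smooths the magnification."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # draw a uniformly distributed point inside the source disk
        r = source_radius * math.sqrt(rng.random())
        th = 2.0 * math.pi * rng.random()
        x = u0 + r * math.cos(th)
        y = r * math.sin(th)
        total += point_lens_magnification(math.hypot(x, y))
    return total / n
```

For a source disk larger than the Einstein radius, the averaged magnification is far below the point-source peak, which is the same qualitative effect that makes the width of the star-field magnification distribution shrink with increasing source size.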
NASA Astrophysics Data System (ADS)
Smith, David R.; Gowda, Vinay R.; Yurduseven, Okan; Larouche, Stéphane; Lipworth, Guy; Urzhumov, Yaroslav; Reynolds, Matthew S.
2017-01-01
Wireless power transfer (WPT) has been an active topic of research, with a number of WPT schemes implemented in the near-field (coupling) and far-field (radiation) regimes. Here, we consider a beamed WPT scheme based on a dynamically reconfigurable source aperture transferring power to receiving devices within the Fresnel region. In this context, the dynamic aperture resembles a reconfigurable lens capable of focusing power to a well-defined spot, whose dimension can be related to a point spread function. The necessary amplitude and phase distribution of the field imposed over the aperture can be determined in a holographic sense, by interfering a hypothetical point source located at the receiver location with a plane wave at the aperture location. While conventional technologies, such as phased arrays, can achieve the required control over phase and amplitude, they typically do so at a high cost; alternatively, metasurface apertures can achieve dynamic focusing with potentially lower cost. We present an initial tradeoff analysis of the Fresnel region WPT concept assuming a metasurface aperture, relating the key parameters such as spot size, aperture size, wavelength, and focal distance, as well as reviewing system considerations such as the availability of sources and power transfer efficiency. We find that approximate design formulas derived from the Gaussian optics approximation provide useful estimates of system performance, including transfer efficiency and coverage volume. The accuracy of these formulas is confirmed through numerical studies.
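The Gaussian-optics design formulas mentioned above relate spot size, aperture size, wavelength, and focal distance; a representative one-line estimate is the focused spot radius w_f ≈ λd / (π w_a) for an aperture of effective radius w_a focusing at distance d. The function and parameter names below are illustrative assumptions, not the paper's notation.

```python
from math import pi

def fresnel_spot_radius(wavelength, aperture_radius, focal_distance):
    """Gaussian-optics estimate of the focal spot radius for a focusing
    aperture: w_f ~ lambda * d / (pi * w_a). All quantities in the same
    length unit (e.g. meters)."""
    return wavelength * focal_distance / (pi * aperture_radius)
```

The inverse dependence on aperture radius captures the basic tradeoff in the abstract: a larger dynamic aperture (or shorter wavelength, or shorter focal distance) yields a tighter power-transfer spot.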
Optoelectronic microdevices for combined phototherapy
NASA Astrophysics Data System (ADS)
Zharov, Vladimir P.; Menyaev, Yulian A.; Hamaev, V. A.; Antropov, G. M.; Waner, Milton
2000-03-01
In photomedicine, radiation delivery to local zones through optical fibers can in some cases be replaced by direct placement of tiny optical sources, such as semiconductor microlasers or light diodes, in the required zones of the ears, nostrils, larynx, nasopharynx, cochlea, or alimentary tract. Our study focuses on the creation of optoelectronic microdevices for local phototherapy and functional imaging using reflected light. The phototherapeutic micromodule consists of a light source, a microprocessor, and miniature optics, with different kinds of power supply: from autonomous operation with built-in batteries to remote supply using a pulsed magnetic field and super-small coils. The developed prototype photomodule has a size of φ8 x 16 mm and operates with a built-in battery and light diode for up to several hours at average powers from several tenths of a mW to a few mW. Preliminary clinical tests of the developed physiotherapeutic micromodules, in stomatology for treating inflammation and in otolaryngology for treating tonsillitis and otitis, are presented. The developed implanted electro-optical sources, with a typical size of φ4 x 0.8 mm and remote supply, were used for optical stimulation of photosensitive retina structures and electrostimulation of the visual nerve. In this scheme a superminiature coil with 30 integrated electrical levels was used. Such devices were implanted in the eyes of 175 patients with different vision problems during clinical trials at the Institute of Eye Surgery in Moscow. For functional imaging of layered skin structure, LED arrays coupled to photodiode arrays were developed. The possibilities of this device for studying drug diffusion and visualizing small veins are discussed.
Liu, Shuxin; Wang, Haibin; Yin, Hengbo; Wang, Hong; He, Jichuan
2014-03-01
Carbon-coated LiFePO4 (LiFePO4/C) nanocomposite materials were successfully synthesized by a sol-gel method. The microstructure and morphology of the LiFePO4/C nanocomposites were characterized by X-ray diffraction, Raman spectroscopy, and scanning electron microscopy. The results showed that carbon layers formed from different dispersants and carbon sources had different degrees of graphitization, and that sugar decomposed to form more graphite-like carbon. The carbon source and heat-treatment temperature had some effect on particle size and morphology; the sample LFP-S700, synthesized by adding sugar as the carbon source at 700 degrees C, had smaller particle size, uniform size distribution, and spherical shape. The electrochemical behavior of the LiFePO4/C nanocomposites was analyzed using galvanostatic measurements and cyclic voltammetry (CV). The results showed that sample LFP-S700 had higher discharge specific capacities, a higher apparent lithium ion diffusion coefficient, and lower charge transfer resistance. The excellent electrochemical performance of sample LFP-S700 could be attributed to its high degree of carbon graphitization, smaller particle size, and uniform size distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldib, A; Chibani, O; Chen, L
Purpose: Tremendous technological developments have been made in conformal therapy techniques with linear accelerators, while less attention has been paid to cobalt-60 units. The aim of the current study is to explore the dosimetric benefits of a novel rotating gamma ray system enhanced with interchangeable source sizes and a multi-leaf collimator (MLC). Material and Methods: CybeRT is a novel rotating gamma ray machine with a ring gantry that ensures an iso-center accuracy of less than 0.3 mm. The new machine has a 70 cm source axial distance, allowing for improved penumbra compared to conventional machines. MCBEAM was used to simulate cobalt-60 beams from the CybeRT head, while the MCPLAN code was used for modeling the MLC and for phantom/patient dose calculation. The CybeRT collimation will incorporate a system allowing for interchanging source sizes. In this work we created phase space files for 1 cm and 2 cm source sizes. Evaluation of the system was done by comparing CybeRT beams with 6 MV beams in a water phantom and in patient geometry. Treatment plans were compared based on isodose distributions and dose volume histograms. Results: Penumbra for the 1 cm source was comparable to that from 6 MV, on the order of 6 mm for a 10×10 cm{sup 2} field size at the depth of maximum dose. This can be ascribed to cobalt-60 beams producing lower-energy secondary electrons. Although the 2 cm source has a larger penumbra, it could still be used for large targets with a proportionally increased dose rate. For large lung targets, the difference between cobalt and 6 MV plans is clinically insignificant. Our preliminary results showed that interchanging source sizes will allow cobalt beams to deliver volumetric arc therapy for both small lesions and large tumors. Conclusion: The CybeRT system will be a cost-effective machine capable of performing advanced radiation therapy treatments of both small tumors and large target volumes.
Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering
Geist, Eric L.
2012-01-01
Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
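Estimating a best-fit power-law exponent for a size distribution, as done above for tsunami amplitudes, is commonly carried out with the maximum-likelihood (Hill) estimator for a Pareto tail above a threshold. This is the generic estimator, not the modified Pareto fit used in the chapter, and the names are illustrative.

```python
import math

def powerlaw_alpha_mle(sizes, x_min):
    """Maximum-likelihood (Hill) estimate of the power-law exponent alpha
    for sizes x >= x_min, assuming a Pareto density p(x) ~ x^(-alpha):
    alpha = 1 + n / sum(ln(x_i / x_min))."""
    tail = [x for x in sizes if x >= x_min]
    n = len(tail)
    if n == 0:
        raise ValueError("no observations above x_min")
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)
```

Applying such an estimator station by station is one way to obtain the marked variation in best-fit exponents across tide-gauge records that the chapter contrasts with the nearly constant exponent for inter-plate thrust earthquakes.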
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. 
Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
NASA Technical Reports Server (NTRS)
Bloemhof, E. E.; Danen, R. M.; Gwinn, C. R.
1996-01-01
We describe how high spatial resolution imaging of circumstellar dust at a wavelength of about 10 micron, combined with knowledge of the source spectral energy distribution, can yield useful information about the sizes of the individual dust grains responsible for the infrared emission. Much can be learned even when only upper limits to source size are available. In parallel with high-resolution single-telescope imaging that may resolve the more extended mid-infrared sources, we plan to apply these less direct techniques to interpretation of future observations from two-element optical interferometers, where quite general arguments may be made despite only crude imaging capability. Results to date indicate a tendency for circumstellar grain sizes to be rather large compared to the Mathis-Rumpl-Nordsieck size distribution traditionally thought to characterize dust in the general interstellar medium. This may mean that processing of grains after their initial formation and ejection from circumstellar atmospheres adjusts their size distribution to the ISM curve; further mid-infrared observations of grains in various environments would help to confirm this conjecture.
75 FR 43107 - Revocation of Requirements for Full-Size Baby Cribs and Non-Full-Size Baby Cribs
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
... Improvement Act of 2008 (``CPSIA'') requires the United States Consumer Product Safety Commission (``CPSC'' or... CONSUMER PRODUCT SAFETY COMMISSION 16 CFR Parts 1508 and 1509 [CPSC Docket No. CPSC-2010-0075] Revocation of Requirements for Full-Size Baby Cribs and Non-Full-Size Baby Cribs AGENCY: Consumer Product...
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
Apparatus for electroplating particles of small dimension
Yu, C.M.; Illige, J.D.
1980-09-19
The thickness, uniformity, and surface smoothness requirements for surface coatings of glass microspheres for use as targets for laser fusion research are critical. Because of their minute size, the microspheres are difficult to manipulate and control in electroplating systems. The electroplating apparatus of the present invention addresses these problems by providing a cathode cell having a cell chamber, a cathode, and an anode electrically isolated from each other and connected to an electrical power source. During the plating process, the cathode is controllably vibrated, along with solution pulsing, to maintain the particles in random free motion so as to attain the desired properties.
Pulsed beam of extremely large helium droplets
NASA Astrophysics Data System (ADS)
Kuma, Susumu; Azuma, Toshiyuki
2017-12-01
We generated a pulsed helium droplet beam with average droplet diameters of up to 2 μm using a solenoid pulsed valve operated at temperatures as low as 7 K. The droplet diameter was controllable over two orders of magnitude, or six orders of magnitude in the number of atoms per droplet, by lowering the valve temperature from 21 to 7 K. A sudden change in droplet size attributed to the so-called "supercritical expansion," which is necessary to obtain micrometer-scale droplets, was observed for the first time in pulsed mode. This beam source is beneficial for experiments that require extremely large helium droplets in intense, pulsed form.
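The two-orders-of-magnitude span in diameter corresponds to six orders of magnitude in atom number because the atom count scales with droplet volume, N ∝ d³. A minimal sketch of that scaling, assuming the bulk liquid-helium number density (a value not stated in the abstract):

```python
import math

# Assumed bulk liquid-helium number density (~145 kg/m^3 divided by the
# He atomic mass): about 2.18e28 atoms per m^3.
RHO_N = 2.18e28

def atoms_in_droplet(d_m):
    """Number of He atoms in a spherical droplet of diameter d_m (meters)."""
    return math.pi / 6.0 * d_m**3 * RHO_N

# a hundred-fold diameter change is a 1e6-fold change in atom number
ratio = atoms_in_droplet(2e-6) / atoms_in_droplet(2e-8)
```

For a 2 μm droplet this estimate gives on the order of 10¹¹ atoms, consistent with "extremely large" droplets.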
Feasibility Study of Thin Film Thermocouple Piles
NASA Technical Reports Server (NTRS)
Sisk, R. C.
2001-01-01
Historically, thermopile detectors, generators, and refrigerators based on bulk materials have been used to measure temperature, generate power for spacecraft, and cool sensors for scientific investigations. New potential uses of small, low-power, thin film thermopiles are in the area of microelectromechanical systems since power requirements decrease as electrical and mechanical machines shrink in size. In this research activity, thin film thermopile devices are fabricated utilizing radio frequency sputter coating and photoresist lift-off techniques. Electrical characterizations are performed on two designs in order to investigate the feasibility of generating small amounts of power, utilizing any available waste heat as the energy source.
OEM fiber laser rangefinder for long-distance measurement
NASA Astrophysics Data System (ADS)
Corman, Alexandre; Chiquet, Frédéric; Avisse, Thomas; Le Flohic, Marc
2015-05-01
SensUp designs and manufactures electro-optical systems based on laser technology, in particular fiber lasers. That kind of source provides significant peak power together with very high repetition rates, thus combining characteristics of the two main technologies in the telemetry field today: laser diodes and solid-state lasers. The OEM (Original Equipment Manufacturer) fiber Laser RangeFinder (LRF) described below aims to meet the SWaP (Size, Weight, and Power) requirements of military markets and may prove to be a real alternative to the technologies usually used in range-finding systems.
Read margin analysis of crossbar arrays using the cell-variability-aware simulation method
NASA Astrophysics Data System (ADS)
Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon
2018-02-01
This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array because the read margin depends on the number of word lines and bit lines. However, an excessively high CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
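The dependence of read margin on array size can be illustrated without a full circuit simulator. The sketch below is not the authors' MATLAB tool: it is a simplified Python Monte Carlo assuming a resistive crossbar with the classic three-segment sneak-path approximation (all unselected cells in the low-resistance state, unselected lines floating) and lognormal cell-to-cell variability; all resistance and voltage values are hypothetical:

```python
import numpy as np

def read_margin(n, r_lrs=1e4, r_hrs=1e6, sigma=0.1, r_load=1e4,
                v_read=1.0, trials=2000):
    """Mean worst-case read margin of an n x n resistive crossbar.

    Three-segment sneak-path approximation (all unselected cells in the
    low-resistance state, unselected lines floating) with lognormal
    cell-to-cell variability of relative spread sigma.
    """
    rng = np.random.default_rng(0)            # fixed seed: repeatable sketch
    margins = np.empty(trials)
    for t in range(trials):
        # each sneak-path segment is (n - 1) variable LRS cells in parallel
        seg = r_lrs * rng.lognormal(0.0, sigma, size=(3, n - 1))
        r_seg = (1.0 / seg).sum(axis=1) ** -1  # per-segment parallel value
        r_sneak = r_seg.sum()                  # three segments in series
        v = {}
        for state, r_cell in (("hrs", r_hrs), ("lrs", r_lrs)):
            r_eff = 1.0 / (1.0 / r_cell + 1.0 / r_sneak)   # cell || sneak path
            v[state] = v_read * r_load / (r_load + r_eff)  # sense-node divider
        margins[t] = v["lrs"] - v["hrs"]       # LRS reads high, HRS reads low
    return margins.mean()

small, large = read_margin(8), read_margin(64)
```

As the array grows, the sneak-path resistance falls roughly as 1/(n-1), so the HRS and LRS sense voltages converge and the margin collapses; the cell variability adds spread on top of that trend.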
NASA Astrophysics Data System (ADS)
Anoshkin, A. N.; Osokin, V. M.; Tretyakov, A. A.; Potrakhov, N. N.; Bessonov, V. B.
2017-02-01
Using the example of a straightener blade made of polymer composite materials, the article discusses the advantages of microfocus X-ray imaging for nondestructive testing of aviation products. The basic types of characteristic defects occurring in parts of this type, both during manufacture and during operation, are described: interlayer delamination, pores, and wrinkles. The distinguishing feature of microfocus X-ray imaging is the use of radiation sources with a focal spot size of less than 100 μm. This makes it possible to increase the level of detail and therefore to minimize the size of defects detectable in transmission. On the basis of experimental studies, radiographic signatures of the major defect types typical of products made of polymeric composite materials were defined. The personnel time required for high-resolution X-ray recording and evaluation of the test results was also estimated.
Evidence of microbeads from personal care product contaminating the sea.
Cheung, Pui Kwan; Fok, Lincoln
2016-08-15
Plastic microbeads in personal care products have been identified as a source of marine pollution. Yet, their presence in the environment is rarely reported. During two surface manta trawls in the coastal waters of Hong Kong, eleven blue, spherical microbeads were captured. Their diameters ranged from 0.332 to 1.015 mm. These microbeads were similar in colour, shape, and size to those identified and extracted from a facial scrub available on the local market. The FT-IR spectrum of the captured microbeads also matched those from the facial scrub. It was likely that the floating microbeads at the sea surface originated from a facial scrub and had bypassed or escaped the sewage treatment system in Hong Kong. Timely voluntary or legislative actions are required to prevent more microbeads from entering the aquatic environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleeman, M.J.; Schauer, J.J.; Cass, G.R.
A dilution source sampling system is augmented to measure the size-distributed chemical composition of fine particle emissions from air pollution sources. Measurements are made using a laser optical particle counter (OPC), a differential mobility analyzer/condensation nucleus counter (DMA/CNC) combination, and a pair of microorifice uniform deposit impactors (MOUDIs). The sources tested with this system include wood smoke (pine, oak, eucalyptus), meat charbroiling, and cigarettes. The particle mass distributions from all wood smoke sources have a single mode that peaks at approximately 0.1-0.2 μm particle diameter. The smoke from meat charbroiling shows a major peak in the particle mass distribution at 0.1-0.2 μm particle diameter, with some material present at larger particle sizes. Particle mass distributions from cigarettes peak between 0.3 and 0.4 μm particle diameter. Chemical composition analysis reveals that particles emitted from the sources tested here are largely composed of organic compounds. Noticeable concentrations of elemental carbon are found in the particles emitted from wood burning. The size distributions of the trace species emissions from these sources also are presented, including data for Na, K, Ti, Fe, Br, Ru, Cl, Al, Zn, Ba, Sr, V, Mn, Sb, La, Ce, as well as sulfate, nitrate, and ammonium ion when present in statistically significant amounts. These data are intended for use with air quality models that seek to predict the size distribution of the chemical composition of atmospheric fine particles.
A novel injection-locked amplitude-modulated magnetron at 1497 MHz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neubauer, Michael; Wang, Haipeng
2015-12-15
Thomas Jefferson National Accelerator Facility (JLab) uses low efficiency klystrons in the CEBAF machine. In the older portion they operate at 30% efficiency with a tube mean time between failure (MTBF) of five to six years. A highly efficient source (>55-60%) must provide a high degree of backwards compatibility, both in size and voltage requirements, to replace the klystron presently used at JLab, while providing energy savings. Muons, Inc. is developing a highly reliable, highly efficient RF source based upon a novel injection-locked amplitude modulated (AM) magnetron with a lower total cost of ownership, >80% efficiency, and MTBF of six to seven years. The design of the RF source is based upon a single injection-locked magnetron system at 8 kW capable of operating up to 13 kW, using the magnetron magnetic field to achieve the AM required for backwards compatibility to compensate for microphonics and beam loads. A novel injection-locked 1497 MHz 8 kW AM magnetron with a trim magnetic coil was designed and its operation numerically simulated during the Phase I project. The low-level RF system to control the trim field and magnetron anode voltage was designed and modeled for operation at the modulation frequencies of the microphonics. A plan for constructing a prototype magnetron and control system was developed.
Active implant for optoacoustic natural sound enhancement
NASA Astrophysics Data System (ADS)
Mohrdiek, S.; Fretz, M.; Jose James, R.; Spinola Durante, G.; Burch, T.; Kral, A.; Rettenmaier, A.; Milani, R.; Putkonen, M.; Noell, W.; Ortsiefer, M.; Daly, A.; Vinciguerra, V.; Garnham, C.; Shah, D.
2017-02-01
This paper summarizes the results of an EU project called ACTION: ACTive Implant for Optoacoustic Natural sound enhancement. The project is based on a recent discovery that relatively low levels of pulsed infrared laser light are capable of triggering activity in hair cells of the partially hearing (hearing impaired) cochlea and vestibule. The aim here is the development of a self-contained, smart, highly miniaturized system to provide optoacoustic stimuli directly from an array of miniature light sources in the cochlea. Optoacoustic compound action potentials (oaCAP) are generated by the light source fully inserted into the unmodified cochlea. Previously, the same could only be achieved with external light sources connected to a fiber optic light guide. This feat is achieved by integrating custom-made VCSEL arrays at a wavelength of about 1550 nm onto small flexible substrates. The laser light is collimated by a specially designed silicon-based ultra-thin lens (165 μm thick) to reach the energy density required for the generation of oaCAP signals. A dramatic miniaturization of the packaging technology is also required. A long-term biocompatible and hermetic sapphire housing with a size of less than 1 cubic millimeter and miniature Pt/PtIr feedthroughs is developed, using a low temperature laser assisted process for sealing. A biofouling thin film protection layer is developed to prevent fibrinogen deposition and cell growth on the system.
NASA Astrophysics Data System (ADS)
Schaetz, Thomas; Hay, Bernd; Walden, Lars; Ziegler, Wolfram
1999-04-01
With the ongoing shrinking of design rules, the complexity of photomasks increases continuously. Features are getting smaller and denser, and their characterization requires sophisticated procedures. Checking the deviation from the target value and the linewidth variation is no longer sufficient. In addition, measurements of corner rounding and line-end shortening are necessary to define the pattern fidelity on the mask; otherwise, printing results will not be satisfactory. Contacts and small features suffer mainly from imaging inaccuracies. The size of the contacts, for example, may come out too small on the photomask and therefore reduce the process window in lithography. In order to meet customer requirements for pattern fidelity, a measurement algorithm and a measurement procedure need to be introduced and specifications defined. In this paper, different approaches are compared that allow an automatic qualification of photomasks by optical light microscopy based on a MueTec CD-metrology system, the newly developed MueTec 2030UV, provided with a 365 nm light source. The i-line illumination makes it possible to resolve features down to 0.2 micrometers in size with good repeatability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Lan, E-mail: lgao@pppl.gov; Hill, K. W.; Bitter, M.
A high spatial resolution of a few μm is often required for probing small-scale high-energy-density plasmas using high resolution x-ray imaging spectroscopy. This resolution can be achieved by adjusting system magnification to overcome the inherent limitation of the detector pixel size. Laboratory experiments on investigating the relation between spatial resolution and system magnification for a spherical crystal spectrometer are presented. Tungsten Lβ2 rays from a tungsten-target micro-focus x-ray tube were diffracted by a Ge 440 crystal, which was spherically bent to a radius of 223 mm, and imaged onto an x-ray CCD with 13-μm pixel size. The source-to-crystal (p) and crystal-to-detector (q) distances were varied to produce spatial magnifications (M = q/p) ranging from 2 to 10. The inferred instrumental spatial width decreases with increasing system magnification M. However, the experimental measurement at each M is larger than the theoretical value of pixel size divided by M. Future work will focus on investigating possible broadening mechanisms that limit the spatial resolution.
Spatial resolution of a spherical x-ray crystal spectrometer at various magnifications
Gao, Lan; Hill, K. W.; Bitter, M.; ...
2016-08-23
Here, a high spatial resolution of a few μm is often required for probing small-scale high-energy-density plasmas using high resolution x-ray imaging spectroscopy. This resolution can be achieved by adjusting system magnification to overcome the inherent limitation of the detector pixel size. Laboratory experiments on investigating the relation between spatial resolution and system magnification for a spherical crystal spectrometer are presented. Tungsten Lβ2 rays from a tungsten-target micro-focus x-ray tube were diffracted by a Ge 440 crystal, which was spherically bent to a radius of 223 mm, and imaged onto an x-ray CCD with 13-μm pixel size. The source-to-crystal (p) and crystal-to-detector (q) distances were varied to produce spatial magnifications (M = q/p) ranging from 2 to 10. The inferred instrumental spatial width decreases with increasing system magnification M. However, the experimental measurement at each M is larger than the theoretical value of pixel size divided by M. Future work will focus on investigating possible broadening mechanisms that limit the spatial resolution.
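The detector-limited resolution quoted in the abstract follows from simple geometry: with magnification M = q/p, one CCD pixel back-projects to (pixel size)/M at the source plane. A quick check of the numbers (the p and q values below are hypothetical; only their ratio matters):

```python
# Detector-limited resolution of the spherical crystal spectrometer:
# with spatial magnification M = q/p, one CCD pixel back-projects to
# (pixel size) / M at the source plane.
PIXEL_UM = 13.0          # CCD pixel size from the abstract

def source_plane_width_um(p_mm, q_mm, pixel_um=PIXEL_UM):
    m = q_mm / p_mm                  # M = q/p
    return pixel_um / m

# hypothetical p, q pairs; only the ratio M matters here
width_at_m2 = source_plane_width_um(100.0, 200.0)    # M = 2
width_at_m10 = source_plane_width_um(100.0, 1000.0)  # M = 10
```

So the M = 2 to 10 range spans theoretical source-plane widths of 6.5 μm down to 1.3 μm, which is why the measured widths exceeding pixel/M point to additional broadening mechanisms.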
Advantages offered by high average power picosecond lasers
NASA Astrophysics Data System (ADS)
Moorhouse, C.
2011-03-01
As electronic devices shrink in size to reduce material costs, device size, and weight, thinner materials are also utilized. Feature sizes are also decreasing, which is pushing manufacturers towards single-step laser direct-write processes as an attractive alternative to conventional, multiple-step photolithography, eliminating process steps and the cost of chemicals. The fragile nature of these thin materials makes them difficult to machine either mechanically or with conventional nanosecond-pulsewidth, Diode Pumped Solid State (DPSS) lasers. Picosecond laser pulses can cut materials with reduced damage regions and selectively remove thin films owing to the reduced thermal effects of the shorter pulsewidth. Also, the high repetition rate allows high speed processing for industrial applications. Selective removal of thin films for OLED patterning, silicon solar cells, and flat panel displays is discussed, as well as laser cutting of transparent materials with low melting points such as Polyethylene Terephthalate (PET). For many of these thin film applications, where low pulse energy and high repetition rate are required, throughput can be increased by a novel technique that uses multiple beams from a single laser source, which is outlined here.
A geometric approach to identify cavities in particle systems
NASA Astrophysics Data System (ADS)
Voyiatzis, Evangelos; Böhm, Michael C.; Müller-Plathe, Florian
2015-11-01
The implementation of a geometric algorithm to identify cavities in particle systems in an open-source python program is presented. The algorithm makes use of the Delaunay space tessellation. The present python software is based on platform-independent tools, leading to a portable program. Its successful execution provides information concerning the accessible volume fraction of the system, the size and shape of the cavities and the group of atoms forming each of them. The program can be easily incorporated into the LAMMPS software. An advantage of the present algorithm is that no a priori assumption on the cavity shape has to be made. As an example, the cavity size and shape distributions in a polyethylene melt system are presented for three spherical probe particles. This paper serves also as an introductory manual to the script. It summarizes the algorithm, its implementation, the required user-defined parameters as well as the format of the input and output files. Additionally, we demonstrate possible applications of our approach and compare its capability with the ones of well documented cavity size estimators.
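The tessellation step can be sketched with SciPy's Delaunay wrapper. This is not the authors' program: it is a minimal Python illustration that marks a tetrahedron as accessible when its circumsphere can hold the probe plus the particle radius, and merges face-adjacent accessible tetrahedra by flood fill; uniform particle radii are assumed for simplicity:

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(pts):
    """Circumsphere radius of a tetrahedron given as a 4 x 3 array."""
    a, b, c, d = pts
    A = 2.0 * np.array([b - a, c - a, d - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
    center = np.linalg.solve(A, rhs)    # point equidistant from all vertices
    return np.linalg.norm(center - a)

def find_cavities(points, r_particle, r_probe):
    """Group Delaunay tetrahedra whose circumsphere admits the probe.

    A tetrahedron counts as accessible when its circumradius exceeds
    r_particle + r_probe; face-adjacent accessible tetrahedra are merged
    into one cavity by flood fill. Returns a list of sets of simplex ids.
    """
    tri = Delaunay(points)
    ok = np.array([circumradius(points[s]) > r_particle + r_probe
                   for s in tri.simplices])
    seen, cavities = set(), []
    for start in np.flatnonzero(ok):
        if start in seen:
            continue
        stack, cavity = [start], set()
        while stack:                      # flood fill over shared faces
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            cavity.add(s)
            stack += [nb for nb in tri.neighbors[s] if nb != -1 and ok[nb]]
        cavities.append(cavity)
    return cavities
```

Because the cavities are unions of tetrahedra, no a priori cavity shape is imposed, matching the spirit of the algorithm described above.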
The Visibility of Earth Transits
NASA Technical Reports Server (NTRS)
Castellano, Tim; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
The recent detection of planetary transits of the solar-like star HD 209458 at a distance of 47 parsecs suggests that transits can reveal the presence of Jupiter-size planetary companions in the solar neighborhood. Recent space-based transit searches have achieved photometric precision within an order of magnitude of that required to detect the much smaller transit signal of an earth-size planet around a solar-size star. Laboratory experiments in the presence of realistic noise sources have shown that CCDs can achieve photometric precision adequate to detect the 9.6E-5 dimming of the Sun due to a transit of the Earth. Space-based solar irradiance monitoring has shown that the intrinsic variability of the Sun would not preclude such a detection. Transits of the Sun by the Earth would be detectable by observers that reside within a narrow band of sky positions near the ecliptic plane, if the observers possess current Earth epoch levels of technology and astronomical expertise. A catalog of candidate target stars, their properties, and simulations of the photometric Earth transit signal detectability at each target is presented.
The Visibility of Earth Transits
NASA Technical Reports Server (NTRS)
Castellano, Timothy P.; Doyle, Laurance; McIntosh, Dawn; DeVincenzi, Donald (Technical Monitor)
2000-01-01
The recent photometric detection of planetary transits of the solar-like star HD 209458 at a distance of 47 parsecs suggests that transits can reveal the presence of Jupiter-size planetary companions in the solar neighborhood. Recent space-based transit searches have achieved photometric precision within an order of magnitude of that required to detect the much smaller transit signal of an earth-size planet across a solar-size star. Laboratory experiments in the presence of realistic noise sources have shown that CCDs can achieve photometric precision adequate to detect the 9.6E-5 dimming of the Sun due to a transit of the Earth. Space-based solar irradiance monitoring has shown that the intrinsic variability of the Sun would not preclude such a detection. Transits of the Sun by the Earth would be detectable by observers that reside within a narrow band of sky positions near the ecliptic plane, if the observers possess current Earth epoch levels of technology and astronomical expertise. A catalog of solar-like stars that satisfy the geometric condition for Earth transit visibility is presented.
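As a sanity check on the scale of the signal, the flat-disk (geometric) transit depth is just the ratio of the projected disk areas, which comes out near 8.4E-5, the same order as the 9.6E-5 figure quoted in these abstracts; the difference plausibly reflects effects such as limb darkening deepening a central transit, though that is an assumption not stated here:

```python
# Geometric (flat-disk) transit depth: fractional dimming equals the
# ratio of projected disk areas, (R_earth / R_sun)**2.
R_EARTH_KM = 6371.0     # mean Earth radius
R_SUN_KM = 695_700.0    # nominal solar radius

depth = (R_EARTH_KM / R_SUN_KM) ** 2
```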
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions.
Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
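The kind of adjustment the abstract describes can be approximated without the noncentrality-parameter machinery. The sketch below is not the authors' method: it uses the widely cited design-effect approximation DE = 1 + ((1 + CV²)·m̄ − 1)·ρ for variable cluster sizes to inflate a sample size computed for equal clusters; all numerical inputs in the usage line are illustrative:

```python
def adjusted_total_n(n_equal, mean_size, cv, icc):
    """Inflate an equal-cluster-size total sample size for size variation.

    n_equal   : total N required when every cluster has size mean_size
    cv        : coefficient of variation of cluster sizes (0 = equal)
    icc       : intracluster correlation coefficient (rho)
    Returns (adjusted total N, relative efficiency of unequal vs equal).
    """
    de_unequal = 1 + ((1 + cv**2) * mean_size - 1) * icc
    de_equal = 1 + (mean_size - 1) * icc
    rel_eff = de_equal / de_unequal        # < 1 whenever cv > 0
    return n_equal * de_unequal / de_equal, rel_eff

n_adj, eff = adjusted_total_n(n_equal=400, mean_size=20, cv=0.6, icc=0.05)
```

With CV = 0.6 and ρ = 0.05, a 400-person equal-cluster design inflates to roughly 474 participants, a relative efficiency near 0.84.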
Largo, Remo; Stolzmann, Paul; Fankhauser, Christian D; Poyet, Cédric; Wolfsgruber, Pirmin; Sulser, Tullio; Alkadhi, Hatem; Winklhofer, Sebastian
2016-06-01
This study investigates the capabilities of low tube voltage computed tomography (CT) and dual-energy CT (DECT) for predicting successful shock wave lithotripsy (SWL) of urinary stones in vitro. A total of 33 urinary calculi (six different chemical compositions; mean size 6 ± 3 mm) were scanned using a dual-source CT machine with single- (120 kVp) and dual-energy settings (80/150, 100/150 Sn kVp), resulting in six different datasets. The attenuation (Hounsfield Units) of calculi was measured on single-energy CT images and the dual-energy indices (DEIs) were calculated from DECT acquisitions. Calculi underwent SWL and the number of shock waves for successful disintegration was recorded. The prediction of the required number of shock waves from stone attenuation/DEI was calculated using regression analysis (adjusted for stone size and composition), and the correlation between CT attenuation/DEI and the number of shock waves was assessed for all datasets. The median number of shock waves for successful stone disintegration was 72 (interquartile range 30-361). CT attenuation/DEI of stones was a significant, independent predictor (P < 0.01) of the number of required shock waves, with the best prediction at 80 kVp (β estimate 0.576) (P < 0.05). Correlation coefficients between attenuation/DEI and the number of required shock waves ranged between ρ = 0.31 and 0.68, showing the best correlation at 80 kVp (P < 0.001). The attenuation of urinary stones at low tube voltage CT is the best predictor of successful stone disintegration, being independent of stone composition and size. DECT shows no added value for predicting the success of SWL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosensteel, B.A.
1996-03-01
Executive Order 11990, Protection of Wetlands, (May 24, 1977) requires that federal agencies avoid, to the extent possible, adverse impacts associated with the destruction and modification of wetlands and that they avoid direct and indirect support of wetlands development when there is a practicable alternative. In accordance with Department of Energy (DOE) Regulations for Compliance with Floodplains and Wetlands Environmental Review Requirements (Subpart B, 10 CFR 1022.11), surveys for wetland presence or absence were conducted in both the Melton Valley and the Bethel Valley Groundwater Operable Units (GWOU) on the DOE Oak Ridge Reservation (ORR) from October 1994 through September 1995. As required by the Energy and Water Development Appropriations Act of 1992, wetlands were identified using the criteria and methods set forth in the Wetlands Delineation Manual (Army Corps of Engineers, 1987). Wetlands were identified during field surveys that examined and documented vegetation, soils, and hydrologic evidence. Most of the wetland boundary locations and wetland sizes are approximate. Boundaries of wetlands in Waste Area Grouping (WAG) 2 and on the former proposed site of the Advanced Neutron Source in the upper Melton Branch watershed were located by civil survey during previous wetland surveys; thus, the boundary locations and areal sizes in these areas are accurate. The wetlands were classified according to the system developed by Cowardin et al. (1979) for wetland and deepwater habitats of the United States. A total of 215 individual wetland areas ranging in size from 0.002 ha to 9.97 ha were identified in the Bethel Valley and Melton Valley GWOUs. The wetlands are classified as palustrine forested broad-leaved deciduous (PFO1), palustrine scrub-shrub broad-leaved deciduous (PSS1), and palustrine persistent emergent (PEM1).
NASA Astrophysics Data System (ADS)
Tian, S. L.; Pan, Y. P.; Wang, Y. S.
2015-03-01
More size-resolved chemical information is needed before the physicochemical characteristics and sources of airborne particles can be understood, but this information remains unavailable in most regions of China due to a paucity of measurement data. In this study, we report a one-year observation of various chemical species in size-segregated particle samples collected in urban Beijing, a mega city that experiences severe haze episodes. In addition to fine particles, the measured particle size distributions showed high concentrations of coarse particles during the haze periods. The abundance and chemical compositions of the particles in this study were temporally and spatially variable, with major contributions from organic matter and secondary inorganic aerosols. The contribution of the organic matter to the mass decreased from 37.9 to 33.1%, whereas the total contribution of SO42-, NO3- and NH4+ increased from 19.1 to 32.3% on non-haze and haze days, respectively. Due to heterogeneous reactions and hygroscopic growth, the peaks in the size distributions of organic carbon, SO42-, NO3-, NH4+, Cl-, K+ and Cu shifted from 0.43-0.65 μm on non-haze days to 0.65-1.1 μm on haze days. Although the size distributions are similar for the heavy metals Pb, Cd and Tl during the observation period, their concentrations increased by a factor of more than 1.5 on haze days compared with non-haze days. We found that NH4+ with a size range of 0.43-0.65 μm, SO42- and NO3- with a size range of 0.65-1.1 μm and Ca2+ with a size range of 5.8-9 μm as well as the meteorological factors of relative humidity and wind speed were responsible for the haze pollution when the visibility was less than 15 km. Source apportionment using positive matrix factorization identified six common sources: secondary inorganic aerosols (26.1% for fine particles vs. 9.5% for coarse particles), coal combustion (19 vs. 23.6%), primary emissions from vehicles (5.9 vs. 8.0%), biomass burning (8.5 vs. 
2.9%), industrial pollution (6.3 vs. 8.5%) and mineral dust (16.1 vs. 35.1%). The first four factors were higher on haze days, while the latter factors were higher on non-haze days. The sources generally increased with decreasing size with the exception of mineral dust. However, two peaks were consistently found in the fine and coarse particles. The contributing sources also varied with the wind direction; coal and oil combustion products increased during southern flows, indicating that any mitigation strategy should consider the wind pattern, especially during the haze periods. The findings indicated that the PM2.5-based dataset is insufficient for the Chinese source control policy, and detailed size-resolved information is urgently needed to characterize the important sources in urban regions and better understand severe haze pollution.
Source and Size of Social Support Network on Sedentary Behavior Among Older Adults.
Loprinzi, Paul D; Crush, Elizabeth A
2018-01-01
To examine the association of source of social support and size of social support network on sedentary behavior among older adults. Cross-sectional. National Health and Nutrition Examination Survey 2003 to 2006. 2519 older adults (60+ years). Sedentary behavior was assessed via accelerometry over a 7-day period. Social support was assessed via self-report. Sources evaluated include spouse, son, daughter, sibling, neighbor, church member, and friend. Regarding size of social network, participants were asked, "In general, how many close friends do you have?" Multivariable linear regression. After adjustment, there was no evidence of an association between the size of social support network and sedentary behavior. With regard to specific sources of social support, spousal social support was associated with less sedentary behavior (β = -11.6; 95% confidence interval: -20.7 to -2.5), with evidence to suggest that this was only true for men. Further, an inverse association was observed between household size and sedentary behavior, with those having a greater number of individuals in the house having lower levels of sedentary behavior. These associations occurred independent of moderate-to-vigorous physical activity, age, gender, race-ethnicity, measured body mass index, total cholesterol, self-reported smoking status, and physician diagnosis of congestive heart failure, coronary artery disease, stroke, cancer, hypertension, or diabetes. Spouse-specific emotion-related social support (particularly for men) and household size were associated with less sedentary behavior.
7 CFR 51.1216 - Size requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Peaches Size § 51.1216 Size requirements. (a) The numerical count or a count-size... closed container shall be indicated on the container. (b) When the numerical count is not shown the...
7 CFR 51.1216 - Size requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Peaches Size § 51.1216 Size requirements. (a) The numerical count or a count-size... closed container shall be indicated on the container. (b) When the numerical count is not shown the...
7 CFR 51.1216 - Size requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Peaches Size § 51.1216 Size requirements. (a) The numerical count or a count-size... closed container shall be indicated on the container. (b) When the numerical count is not shown the...
Gene and protein nomenclature in public databases
Fundel, Katrin; Zimmer, Ralf
2006-01-01
Background Frequently, several alternative names are in use for biological objects such as genes and proteins. Applications like manual literature search, automated text-mining, named entity identification, gene/protein annotation, and linking of knowledge from different information sources require knowledge of all the names referring to a given gene or protein. Various organism-specific or general public databases aim at organizing knowledge about genes and proteins. These databases can be used for deriving gene and protein name dictionaries. So far, little is known about the differences between databases in terms of size, ambiguities and overlap. Results We compiled five gene and protein name dictionaries for each of the five model organisms (yeast, fly, mouse, rat, and human) from different organism-specific and general public databases. We analyzed the degree of ambiguity of gene and protein names within and between dictionaries and against a lexicon of common English words and domain-related non-gene terms, and we compared the different data sources in terms of the size of the extracted dictionaries and the overlap of synonyms between them. The study shows that the number of genes/proteins and synonyms covered in individual databases varies significantly for a given organism, and that the degree of ambiguity of synonyms varies significantly between different organisms. Furthermore, it shows that, despite considerable efforts of co-curation, the overlap of synonyms in different data sources is rather moderate and that the degree of ambiguity of gene names with common English words and domain-related non-gene terms varies depending on the considered organism. Conclusion In conclusion, these results indicate that the combination of data contained in different databases allows the generation of gene and protein name dictionaries that contain significantly more used names than dictionaries obtained from individual data sources.
Furthermore, curation of combined dictionaries considerably increases size and decreases ambiguity. The entries of the curated synonym dictionary are available for manual querying, editing, and PubMed- or Google-search via the ProThesaurus-wiki. For automated querying via custom software, we offer a web service and an exemplary client application. PMID:16899134
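As a hedged illustration of the dictionary combination and ambiguity analysis described in this abstract (the data structures, gene identifiers, and function names below are invented for the example, not taken from the study or from ProThesaurus), merging per-source synonym dictionaries and flagging names that map to more than one gene can be sketched as:

```python
# Illustrative sketch: combine {gene_id: synonyms} dictionaries from several
# sources, then count ambiguous names, i.e. synonyms referring to >1 gene.

def combine_dictionaries(dicts):
    """Merge several {gene_id: set_of_synonyms} dictionaries into one."""
    combined = {}
    for d in dicts:
        for gene_id, synonyms in d.items():
            combined.setdefault(gene_id, set()).update(synonyms)
    return combined

def ambiguous_synonyms(dictionary):
    """Return synonyms (case-insensitive) that refer to more than one gene."""
    owners = {}
    for gene_id, synonyms in dictionary.items():
        for name in synonyms:
            owners.setdefault(name.lower(), set()).add(gene_id)
    return {name for name, ids in owners.items() if len(ids) > 1}

# Toy data with invented identifiers:
src_a = {"G1": {"TP53", "p53"}, "G2": {"CDK1", "CDC2"}}
src_b = {"G1": {"TP53", "LFS1"}, "G3": {"CDC2"}}  # "CDC2" also names G3
merged = combine_dictionaries([src_a, src_b])
print(ambiguous_synonyms(merged))  # {'cdc2'}
```

The merged dictionary covers more synonyms than either source alone, while the ambiguity check exposes the intra-dictionary collisions the study quantifies.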
NASA Astrophysics Data System (ADS)
Ogulei, David; Hopke, Philip K.; Zhou, Liming; Patrick Pancras, J.; Nair, Narayanan; Ondov, John M.
Several multivariate data analysis methods have been applied to a combination of particle size and composition measurements made at the Baltimore Supersite. Partial least squares (PLS) was used to investigate the relationship (linearity) between number concentrations and the measured PM2.5 mass concentrations of chemical species. The data were obtained at the Ponca Street site and consisted of six days of measurements: 6, 7, 8, 18, and 19 July, and 21 August 2002. The PLS analysis showed that the covariance between the data sets could be explained by 10 latent variables (LVs), but only the first four of these were sufficient to establish the linear relationship between the two data sets; adding more LVs did not improve the model. The four LVs were found to better explain the covariance between the large-sized particles and the chemical species. A bilinear receptor model, PMF2, was then used to simultaneously analyze the size distribution and chemical composition data sets. The resolved sources were identified using information from number and mass contributions from each source (source profiles) as well as meteorological data. Twelve sources were identified: oil-fired power plant emissions, secondary nitrate I, local gasoline traffic, coal-fired power plant, secondary nitrate II, secondary sulfate, diesel emissions/bus maintenance, Quebec wildfire episode, nucleation, incinerator, airborne soil/roadway dust, and steel plant emissions. Local sources were mostly characterized by bi-modal number distributions. Regional sources were characterized by transport-mode particles (0.2-0.5 μm).
Comparison of hybrid receptor models to locate PCB sources in Chicago
NASA Astrophysics Data System (ADS)
Hsu, Ying-Kuang; Holsen, Thomas M.; Hopke, Philip K.
Results of three hybrid receptor models, potential source contribution function (PSCF), concentration weighted trajectory (CWT), and residence time weighted concentration (RTWC), were compared for locating polychlorinated biphenyl (PCB) sources contributing to the atmospheric concentrations in Chicago. Variations of these models, including PSCF using mean and 75% criterion concentrations, joint probability PSCF (JP-PSCF), changes of point filters and grid cell sizes for RTWC, and PSCF using wind trajectories started at different altitudes, are also discussed. Modeling results were relatively consistent between models; however, no single model provided as complete information as was obtained by using all of them. CWT and 75% PSCF appear able to distinguish between larger sources and moderate ones. RTWC resolved high-potential source areas. RTWC and JP-PSCF, pooling data from all sampling sites, removed the trailing effect often seen in PSCF modeling. PSCF results using average concentration criteria appear to identify both moderate and major sources. Each model has advantages and disadvantages; however, used in combination, they provide information that is not available if only one of them is used. For short-range atmospheric transport, PSCF results were consistent when using wind trajectories starting at different heights. Based on the archived PCB data, the modeling results indicate there is a large potential source area between Joliet and Kankakee, IL, and two moderate sources to the northwest and south of Chicago. On the south side of Chicago, in the neighborhood of Lake Calumet, several PCB sources were identified. Other unidentified potential source locations will require additional upwind/downwind field sampling to verify the modeling results.
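The basic PSCF statistic mentioned above has a simple form: for each grid cell (i, j), PSCF = m/n, where n is the number of trajectory endpoints falling in the cell and m is the number of those endpoints tied to samples exceeding a criterion concentration. A minimal sketch (grid cells, concentrations, and the criterion value are invented for illustration; this is not the authors' code):

```python
# Hedged sketch of the basic PSCF computation: cell value = m_ij / n_ij.

def pscf(endpoints, criterion):
    """endpoints: iterable of ((i, j) grid cell, sample concentration) pairs.
    Returns {cell: m/n}, where n counts all endpoints in the cell and m counts
    endpoints associated with concentrations above the criterion."""
    n, m = {}, {}
    for cell, conc in endpoints:
        n[cell] = n.get(cell, 0) + 1
        if conc > criterion:
            m[cell] = m.get(cell, 0) + 1
    return {cell: m.get(cell, 0) / n[cell] for cell in n}

# Toy data: two cells, four trajectory endpoints.
data = [((0, 0), 1.2), ((0, 0), 0.4), ((1, 2), 2.0), ((1, 2), 1.8)]
print(pscf(data, criterion=1.0))  # {(0, 0): 0.5, (1, 2): 1.0}
```

The choice of criterion (mean vs. 75th percentile of observed concentrations) is exactly the model variation the abstract compares.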
Preventing Molecular and Particulate Infiltration in a Confined Volume
NASA Technical Reports Server (NTRS)
Scialdone, John J.
1999-01-01
Contaminants from an instrument's self-generated sources or from sources external to the instrument may degrade its critical surfaces and/or create an environment which limits the instrument's intended performance. Analyses have been carried out on a method to investigate the required purging flow of clean, dry gas to prevent the ingestion of external contaminants into the instrument container volume. The pressure to be maintained and the required flow are examined in terms of their effectiveness in preventing gaseous and particulate contaminant ingestion and abatement of self-generated contaminants in the volume. The required venting area or the existing volume venting area is correlated to the volume to be purged, the allowable pressure differential across the volume, the external contaminant partial pressure, and the sizes of the ambient particulates. The diffusion of external water vapor into the volume while it was being purged was experimentally obtained in terms of an infiltration time constant. That data and the acceptable fraction of the outside pressure into the volume indicate the required flow of purge gas expressed in terms of volume change per unit time. The exclusion of particulates is based on the incoming velocity of the particles and the exit flow speed and density of the purge gas. The purging flow pressures needed to maintain the required flows through the vent passages are indicated. The purge gas must prevent or limit the entrance of the external contaminants to the critical locations of the instrument. It should also prevent self- contamination from surfaces, reduce material outgassing, and sweep out the outgassed products. Systems and facilities that can benefit from purging may be optical equipment, clinical facilities, manufacturing facilities, clean rooms, and other systems requiring clean environments.
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Perley, R. A.
2016-09-01
This is the first of two papers describing the observations and cataloguing of deep 3-GHz observations of the Lockman Hole North using the Karl G. Jansky Very Large Array. The aim of this paper is to investigate, through the use of simulated images, the uncertainties and accuracy of source-finding routines, as well as to quantify systematic effects due to resolution, such as source confusion and source size. While these effects are not new, this work is intended as a particular case study that can be scaled and translated to other surveys. We use the simulations to derive uncertainties in the fitted parameters, as well as bias corrections for the actual catalogue (presented in Paper II). We compare two different source-finding routines, OBIT and AEGEAN, and two different effective resolutions, 8 and 2.75 arcsec. We find that the two routines perform comparably well, with OBIT being slightly better at de-blending sources, but slightly worse at fitting resolved sources. We show that 30-70 per cent of sources are missed or fit inaccurately once the source size becomes larger than the beam, possibly explaining source count errors in high-resolution surveys. We also investigate the effect of blending, finding that any sources with separations smaller than the beam size are fit as single sources. We show that the use of machine-learning techniques can correctly identify blended sources up to 90 per cent of the time, and prior-driven fitting can lead to a 70 per cent improvement in the number of de-blended sources.
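The blending effect described above (sources separated by less than the beam size are fit as single sources) can be expressed as a simple geometric check. The sketch below is illustrative only; positions, the beam size, and the function name are invented, and the actual catalogue pipeline uses OBIT or AEGEAN rather than anything like this:

```python
# Toy sketch of the blending criterion: flag any pair of catalogue sources
# whose angular separation falls below the effective beam size.
import math

def blended_pairs(sources, beam_arcsec):
    """sources: list of (x, y) angular offsets in arcsec.
    Returns index pairs whose separation is smaller than the beam."""
    pairs = []
    for a in range(len(sources)):
        for b in range(a + 1, len(sources)):
            dx = sources[a][0] - sources[b][0]
            dy = sources[a][1] - sources[b][1]
            if math.hypot(dx, dy) < beam_arcsec:
                pairs.append((a, b))
    return pairs

positions = [(0.0, 0.0), (1.5, 0.0), (10.0, 10.0)]  # invented offsets, arcsec
print(blended_pairs(positions, beam_arcsec=2.75))   # [(0, 1)]
```

At the paper's 2.75-arcsec effective resolution the first two toy sources would be fit as one component, which is the situation the machine-learning de-blending step is meant to catch.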
Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M
2018-06-01
Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size, or odds of clustering, and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than source attribution methods for identifying transmission risk factors, but neither approach provides robust estimates of transmission risk ratios. Source attribution methods can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
A first near real-time seismology-based landquake monitoring system.
Chao, Wei-An; Wu, Yih-Min; Zhao, Li; Chen, Hongey; Chen, Yue-Gau; Chang, Jui-Ming; Lin, Che-Min
2017-03-02
Hazards from gravity-driven instabilities on hillslopes (termed 'landquakes' in this study) are an important problem facing us today. Rapid detection of landquake events is crucial for hazard mitigation and emergency response. Based on the real-time broadband data in Taiwan, we have developed a near real-time landquake monitoring system, which is a fully automatic process based on waveform inversion that yields source information (e.g., location and mechanism) and identifies the landquake source by examining waveform fitness for different types of source mechanisms. This system has been successfully tested offline using seismic records during the passage of the 2009 Typhoon Morakot in Taiwan and has been in online operation during the typhoon season in 2015. In practice, certain levels of station coverage (station gap < 180°), signal-to-noise ratio (SNR ≥ 5.0), and a threshold of event size (volume > 10⁶ m³ and area > 0.20 km²) are required to ensure good performance (fitness > 0.6 for successful source identification) of the system, which can be readily implemented in other places in the world with real-time seismic networks and high landquake activity.
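The operating thresholds quoted above amount to a simple gating rule. The sketch below restates them as code for clarity; the function name and the example numbers are ours, not part of the monitoring system:

```python
# Minimal restatement of the reported criteria for reliable landquake
# source identification: station gap < 180 deg, SNR >= 5.0,
# volume > 1e6 m^3, and area > 0.20 km^2.

def meets_requirements(station_gap_deg, snr, volume_m3, area_km2):
    """True when an event satisfies all four reported thresholds."""
    return (station_gap_deg < 180.0 and snr >= 5.0
            and volume_m3 > 1e6 and area_km2 > 0.20)

print(meets_requirements(120.0, 6.3, 2.5e6, 0.35))  # True
print(meets_requirements(200.0, 6.3, 2.5e6, 0.35))  # False (gap too large)
```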
SAPT units turn-on in an interference-dominant environment. [Stand Alone Pressure Transducer
NASA Technical Reports Server (NTRS)
Peng, W.-C.; Yang, C.-C.; Lichtenberg, C.
1990-01-01
A stand alone pressure transducer (SAPT) is a credit-card-sized smart pressure sensor inserted between the tile and the aluminum skin of a space shuttle. Reliably initiating the SAPT units via RF signals in a prelaunch environment is a challenging problem. Multiple-source interference may exist if more than one GSE (ground support equipment) antenna is turned on at the same time to meet the simultaneity requirement of 10 ms. A polygon model for orbiter, external tank, solid rocket booster, and tail service masts is used to simulate the prelaunch environment. Geometric optics is then applied to identify the coverage areas and the areas which are vulnerable to multipath and/or multiple-source interference. Simulation results show that the underside areas of an orbiter have incidence angles exceeding 80 deg. For multipath interference, both sides of the cargo bay areas are found to be vulnerable to a worst-case multipath loss exceeding 20 dB. Multiple-source interference areas are also identified. Mitigation methods for the coverage and interference problem are described. It is shown that multiple-source interference can be eliminated (or controlled) using the time-division-multiplexing method or the time-stamp approach.
The South Australian Safe Drinking Water Act: summary of the first year of operation.
Froscio, Suzanne M; Bolton, Natalie; Cooke, Renay; Wittholz, Michelle; Cunliffe, David
2016-06-01
The Safe Drinking Water Act 2011 was introduced in South Australia to provide clear direction to drinking water providers on how to achieve water safety. The Act requires drinking water providers to register with SA Health and develop a risk management plan (RMP) for their water supply that includes operational and verification monitoring plans and an incident notification and communication protocol. During the first year of operation, 212 drinking water providers registered under the Act, including one major water utility and a range of small to medium-sized providers in regional and remote areas of the State. Information was captured on the water source(s) used and on water treatment. Rainwater was the most frequently reported drinking water source (66%), followed by bore water (13%), on-supply or carting of mains water (13%), mixed source (rainwater with bore water backup) (6%), and surface water (3%). The majority of providers (91%) treated the water supply; 87% used disinfection. During the first year of operation, 16 water quality incidents, both microbial and chemical, were formally reported to SA Health. Case studies presented highlight how the RMPs are assisting drinking water providers to identify incidents of potential health concern and implement corrective actions.
Apparatus and Method for Increasing the Diameter of Metal Alloy Wires Within a Molten Metal Pool
Hartman, Alan D.; Argetsinger, Edward R.; Hansen, Jeffrey S.; Paige, Jack I.; King, Paul E.; Turner, Paul C.
2002-01-29
In a dip forming process the core material to be coated is introduced directly into a source block of coating material eliminating the need for a bushing entrance component. The process containment vessel or crucible is heated so that only a portion of the coating material becomes molten, leaving a solid portion of material as the entrance port of, and seal around, the core material. The crucible can contain molten and solid metals and is especially useful when coating core material with reactive metals. The source block of coating material has been machined to include a close tolerance hole of a size and shape to closely fit the core material. The core material moves first through the solid portion of the source block of coating material where the close tolerance hole has been machined, then through a solid/molten interface, and finally through the molten phase where the diameter of the core material is increased. The crucible may or may not require water-cooling depending upon the type of material used in crucible construction. The system may operate under vacuum, partial vacuum, atmospheric pressure, or positive pressure depending upon the type of source material being used.
Multi-keV X-ray area source intensity at SGII laser facility
NASA Astrophysics Data System (ADS)
Wang, Rui-rong; An, Hong-hai; Xie, Zhi-yong; Wang, Wei
2018-05-01
Experiments investigating the feasibility of multi-keV backlighters for several different metallic foil targets were performed at the Shenguang II (SGII) laser facility in China. Emission spectra in the energy range of 1.65-7.0 keV were measured with an elliptically bent crystal spectrometer, and the X-ray source size was measured with a pinhole camera. The X-ray intensity near 4.75 keV and the X-ray source size for titanium targets at different laser irradiances were studied. By adjusting the total laser energy at a fixed focal spot size, laser intensities in the range of 1.5-5.0 × 10¹⁵ W/cm² were achieved. The results show that the line emission intensity near 4.75 keV and the X-ray source size depend on the laser intensity and increase as the laser intensity increases. However, a "peak" in the X-ray intensity near 4.75 keV is observed at an irradiance of 4.0 × 10¹⁵ W/cm². For the employed experimental conditions, it was confirmed that the laser intensity can play a significant role in the development of an efficient multi-keV X-ray source. The experimental results for titanium indicate that the production of a large (~350 μm in diameter) intense backlighter source of multi-keV X-rays is feasible at the SGII facility.
[Principles of energy sources of totally implantable hearing aids for inner ear hearing loss].
Baumann, J W; Leysieffer, H
1998-02-01
A fully implantable hearing aid consists of a sound receptor (microphone), an electronic amplifier including active audio-signal processing, an electromechanical transducer (actuator) for stimulating the ear by vibration, and an energy source. The energy source may be either a primary cell or a rechargeable (secondary) cell. As the energy requirements of an implantable hearing aid are dependent on the operating principle of the actuator, the operating principles of electromagnetic and piezoelectric transducers were examined with respect to their relative power consumption. The analysis showed that the energy requirements of an implantable hearing aid are significantly increased when an electromagnetic transducer is used. The power consumption of a piezoelectric transducer was found to be less than that of the electronic components alone. The energy needed to run a fully implantable hearing aid under these conditions would be 38 mWh per day. Primary cells cannot provide the energy needed for a minimum operation time of 5 years (70 Wh), and therefore rechargeable cells must be used. A theoretical appraisal was carried out on nickel-cadmium, nickel-metal hydride, and lithium-ion cells to determine their suitability as well as to assess the risks associated with their use in an implant. Safety measures were drawn up from the results. Ni-MH cells were found to be the most suitable for use as an energy source for implantable hearing aids because they are more robust than Li-ion cells and their storage capacity is double that of Ni-Cd cells of similar size.
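The abstract's two energy figures are mutually consistent, which a back-of-envelope check confirms: 38 mWh per day over a 5-year minimum operating life comes to roughly 70 Wh.

```python
# Back-of-envelope check of the quoted energy budget: 38 mWh/day for 5 years.
daily_mwh = 38                 # mWh per day, from the abstract
years = 5
total_wh = daily_mwh * 365 * years / 1000  # convert mWh to Wh
print(total_wh)  # 69.35, consistent with the quoted ~70 Wh requirement
```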
NASA Astrophysics Data System (ADS)
Khojasteh, Malak; Kresin, Vitaly V.
2016-12-01
We describe the production of size selected manganese nanoclusters using a dc magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length) we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the atomic vapor, and argon also plays the crucial role in the formation of condensation nuclei via three-body collisions. However, neither the argon flow nor the discharge power have a strong effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, which is governed by the helium supply and the aggregation path length. The size of mass selected nanoclusters was verified by atomic force microscopy of deposited particles.
Controlling ZIF-67 crystals formation through various cobalt sources in aqueous solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Xiangli; Jiangsu Key Laboratory of Advanced Metallic Materials, Nanjing 211189; Xing, Tiantian
2016-03-15
Zeolitic imidazolate frameworks ZIF-67 were prepared under hydrothermal (120 °C) and non-hydrothermal (room temperature) conditions from various cobalt sources and 2-methylimidazolate (Hmim) in aqueous solution within 30 min. The particle size and morphology were found to be related to the reactivity of the cobalt salt, the Hmim/Co²⁺ molar ratio, and the experimental conditions. Using Co(NO₃)₂ as the cobalt source, small-sized ZIF-67 crystals with agglomeration were formed. For CoCl₂, small-sized rhombic dodecahedra were obtained, while large-sized crystals of rhombic dodecahedron structure were obtained from CoSO₄ and Co(OAc)₂. Under hydrothermal conditions, the size of the ZIF-67 crystals tended to be more uniform and the morphology more regular compared to the non-hydrothermal condition. This study provides a simple way to control the size and morphology of ZIF-67 crystals prepared in aqueous solution. - Graphical abstract: Zeolitic imidazolate frameworks ZIF-67 were prepared under hydrothermal (120 °C) and non-hydrothermal (room temperature) conditions from four different cobalt sources (Co(NO₃)₂, CoCl₂, CoSO₄ and Co(OAc)₂) in aqueous solution within 30 min. The particle size and morphology were found to be related to the reactivity of the cobalt salt, the Hmim/Co²⁺ molar ratio, and the experimental conditions. - Highlights: • The particle size and morphology were determined by the reactivity of the cobalt salt. • ZIF-67 could be prepared from CoSO₄ and Co(OAc)₂ at an Hmim/Co²⁺ molar ratio of 10. • Uniform and regular particles were obtained under hydrothermal conditions.
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2018-03-19
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.
2015-03-01
Aperture-averaged scintillation requires the evaluation of a rather complicated irradiance covariance function. Here we develop a much simpler numerical method based on our earlier-introduced semi-analytic approach. Using this method, we calculate the aperture-averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat-topped, and dark hollow beams. For comparison, the principles of equal source beam power and of normalizing the aperture-averaged scintillation with respect to received power are applied. Our results indicate that for fully coherent beams, upon adjusting the aperture sizes to capture 10 and 20% of the equal source power, the Gaussian beam needs the largest aperture opening, yielding the lowest aperture-averaged scintillation, whilst the opposite occurs for annular Gaussian and dark hollow beams. When assessed on the basis of received-power-normalized aperture-averaged scintillation at fixed propagation distance and aperture size, annular Gaussian and dark hollow beams appear to have the lowest scintillation. Just as in the case of point-like scintillation, partially coherent beams offer less aperture-averaged scintillation than fully coherent beams, but this performance improvement relies on larger aperture openings. Upon normalizing the aperture-averaged scintillation with respect to received power, fully coherent beams become more advantageous than partially coherent ones.
Precision and resolution in laser direct microstructuring with bursts of picosecond pulses
NASA Astrophysics Data System (ADS)
Mur, Jaka; Petkovšek, Rok
2018-01-01
Pulsed laser sources enable efficient material removal in a wide range of scientific and industrial applications. Commercially available laser systems in the field typically use a focused laser beam of 10-20 μm in diameter. In line with the ongoing trend toward miniaturization, we have developed a picosecond fiber-laser-based system combining fast beam deflection and tight focusing for material processing and optical applications. We have predicted and verified the system's precision, resolution, and minimum achievable feature size for material processing applications. The analysis of the laser performance requirements for specific high-precision laser processing applications is an important aspect of further development of the technique. We have predicted and experimentally verified that the maximal edge roughness of single-micrometer-sized features was below 200 nm, accounting for the laser's energy and positioning stability, beam deflection, the effect of spot spacing, and efficient isolation of mechanical vibrations. We have demonstrated that a novel fiber laser operating regime using bursts of pulses increases the laser energy stability. The results of our research improve the potential of fiber laser sources for material processing applications and facilitate their use by enabling operation at lower pulse energies in bursts as opposed to single-pulse regimes.
Yang, Manman; Wang, Zongyuan; Wang, Wei; Liu, Chang-Jun
2014-01-01
Argon glow discharge has been employed as a cheap, environmentally friendly, and convenient electron source for the simultaneous reduction of HAuCl4 and PdCl2 on an anodic aluminum oxide (AAO) substrate. Thermal imaging confirms that the synthesis operates at room temperature. The reduction is completed in a short time (30 min) under a pressure of approximately 100 Pa. This room-temperature electron reduction operates in a dry way and requires neither hydrogen nor extra heating nor a chemical reducing agent. Analyses using X-ray photoelectron spectroscopy (XPS) confirm that all the metallic ions have been reduced. Characterization with X-ray diffraction (XRD) and high-resolution transmission electron microscopy (HRTEM) shows that AuPd alloyed nanoparticles are formed. There also exist some highly dispersed Au and Pd monometallic particles that cannot be detected by XRD and transmission electron microscopy (TEM) because of their small particle sizes. The observed AuPd alloyed nanoparticles are spherical with an average size of 14 nm. No core-shell structure is observed. The room-temperature electron reduction can be operated on a larger scale. It is an easy way to synthesize AuPd alloyed nanoparticles.
NASA Astrophysics Data System (ADS)
Suh, Y.; Shin, K.
2011-12-01
Manila clams sampled on Seonjae Island, Korea, with shell lengths (SL) below 19.76 mm on average showed significantly depleted carbon and nitrogen isotope values (P < 0.05) by 0.80-1.41%. This size-related variation can be caused either by an altered carbon and nutrient source or by changes in isotopic incorporation rates and discrimination factors. In order to examine size-related diet shifts in manila clams, R. philippinarum of different sizes that had been fed constantly on known mixed microalgae for several months were sampled from the Incheon Fisheries Hatcheries Research Institute (IFHRI). These manila clams showed high intra-species variation in growth rate, with a maximum difference of approximately 2.30 cm. The smallest size groups (3.68 ± 0.17 mm and 6.88 ± 0.21 mm) obtained their nutrition from both P. tricornutum and aggregated organic matter consisting of dead or decomposed microalgae and other detritus. Bigger size groups (10.92 ± 0.34 mm and 14.81 ± 0.25 mm) obtained most of their energy from P. tricornutum and also from other phytoplankton, unlike the biggest size group (21.15 ± 1.02 mm), which fed mainly on fresh microalgae among all the diets offered. This variation in diet reveals that smaller clams mostly inhale dead or decomposed microalgae that sink to the bottom, while the bigger clams take up more fresh microalgae that are still alive. This variation in feeding behavior could have been caused by morphological constraints such as limited siphon length. The results suggest that manila clams above and below 19.76 mm on average have different feeding behaviors, and that P. tricornutum and I. galbana were the two most preferred diets for manila clams cultured at IFHRI. The fatty acid composition of manila clams in relation to size or growth rate suggests that fast-growing clams have rapid metabolism of fatty acids not required by the animals and an accumulation of the essential fatty acids (PUFA). In addition, their higher energy requirement and more active state of development would further diminish the lipid reserves of the species.
Modeling Explosion Induced Aftershocks
NASA Astrophysics Data System (ADS)
Kroll, K.; Ford, S. R.; Pitarka, A.; Walter, W. R.; Richards-Dinger, K. B.
2017-12-01
Many traditional earthquake-explosion discrimination tools are based on properties of the seismic waveform or its spectral components. Common discrimination methods include estimates of body wave amplitude ratios, surface wave magnitude scaling, moment tensor characteristics, and depth. Such methods are limited by station coverage and noise. Ford and Walter (2010) proposed an alternate discrimination method based on using properties of aftershock sequences as a means of earthquake-explosion differentiation. Previous studies have shown that explosion sources produce fewer aftershocks that are generally smaller in magnitude compared to aftershocks of similarly sized earthquake sources (Jarpe et al., 1994; Ford and Walter, 2010). It has also been suggested that explosion-induced aftershocks have smaller Gutenberg-Richter b-values (Ryall and Savage, 1969) and that their rates decay faster than a typical Omori-like sequence (Gross, 1996). To discern whether these observations are generally true of explosions or are related to specific site conditions (e.g., explosion proximity to active faults, tectonic setting, crustal stress magnitudes) would require a thorough global analysis. Such a study, however, is hindered both by the lack of evenly distributed explosion sources and by the limited availability of global seismicity data. Here, we employ two methods to test the efficacy of explosions at triggering aftershocks under a variety of physical conditions. First, we use the earthquake rate equations from Dieterich (1994) to compute the rate of aftershocks related to an explosion source assuming a simple spring-slider model. We compare seismicity rates computed with these analytical solutions to those produced by the 3D, multi-cycle earthquake simulator RSQSim. We explore the relationship between geological conditions and the characteristics of the resulting explosion-induced aftershock sequence.
We also test the hypothesis that aftershock generation depends on the frequency content of the passing dynamic seismic waves, as suggested by Parsons and Velasco (2009). Lastly, we compare all results for explosion-induced aftershocks with aftershocks generated by similarly sized earthquake sources. Prepared by LLNL under Contract DE-AC52-07NA27344.
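The Dieterich (1994) rate-and-state response to a sudden stress step, the analytical core of the first method above, can be sketched numerically; all parameter values below are illustrative, not those used in the study:

```python
import math

def dieterich_rate(t, delta_tau, a_sigma, t_a, r_bg=1.0):
    """Seismicity rate at time t after a stress step delta_tau
    (Dieterich 1994): a_sigma is the constitutive parameter A*sigma,
    t_a the aftershock decay time, r_bg the background rate."""
    return r_bg / (1.0 + (math.exp(-delta_tau / a_sigma) - 1.0)
                   * math.exp(-t / t_a))

# Illustrative values: 1 MPa step, A*sigma = 0.1 MPa, t_a = 100 days.
# The rate jumps by a factor exp(delta_tau / a_sigma) right after the
# step, then decays back toward the background rate r_bg.
rates = [dieterich_rate(t, 1.0, 0.1, 100.0) for t in (0.0, 10.0, 1000.0)]
```

Comparing such closed-form rates against the output of an earthquake simulator (here, RSQSim) is the comparison the abstract describes.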
ProFound: Source Extraction and Application to Modern Survey Data
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Davies, L. J. M.; Driver, S. P.; Koushan, S.; Taranu, D. S.; Casura, S.; Liske, J.
2018-05-01
We introduce PROFOUND, a source finding and image analysis package. PROFOUND provides methods to detect sources in noisy images, generate segmentation maps identifying the pixels belonging to each source, and measure statistics like flux, size, and ellipticity. These inputs are key requirements of PROFIT, our recently released galaxy profiling package, where the design aim is that these two software packages will be used in unison to semi-automatically profile large samples of galaxies. The key novel feature introduced in PROFOUND is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. We apply PROFOUND in a number of simulated and real-world cases, and demonstrate that it behaves reasonably given its stated design goals. In particular, it offers good initial parameter estimation for PROFIT, and also segmentation maps that follow the sometimes complex geometry of resolved sources, whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the PROFOUND and PROFIT pipeline, and adoption is being encouraged by publicly releasing the software for the open source R data analysis platform under an LGPL-3 license on GitHub (github.com/asgr/ProFound).
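The dilated-segmentation photometry idea can be illustrated with a minimal numpy sketch; this is not the PROFOUND implementation (the real package is written in R and iterates dilation until the flux converges), just the core idea of growing each segment to capture low-surface-brightness flux:

```python
import numpy as np

def dilate(mask):
    """One step of binary dilation with a cross-shaped structuring
    element, via shifts (a stand-in for scipy.ndimage.binary_dilation;
    wrap-around at the edges is ignored for this interior source)."""
    out = mask.copy()
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        out |= np.roll(mask, shift, axis=axis)
    return out

def dilated_flux(image, segmap, seg_id, n_dilate=1):
    """Sum one source's flux over its segment after dilating the
    segment to pick up faint wings, rather than using a fixed
    circular or elliptical aperture."""
    mask = segmap == seg_id
    for _ in range(n_dilate):
        mask = dilate(mask)
    return image[mask].sum()

# Toy image: a bright pixel whose faint wings fall outside the
# original one-pixel segment
img = np.zeros((7, 7))
img[3, 3] = 10.0
img[3, 2] = img[3, 4] = 1.0
seg = np.zeros((7, 7), dtype=int)
seg[3, 3] = 1
f0 = img[seg == 1].sum()        # fixed-segment flux misses the wings
f1 = dilated_flux(img, seg, 1)  # dilated-segment flux captures them
```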
Alexander, R.B.; Smith, R.A.; Schwarz, G.E.; Boyer, E.W.; Nolan, J.V.; Brakebill, J.W.
2008-01-01
Seasonal hypoxia in the northern Gulf of Mexico has been linked to increased nitrogen fluxes from the Mississippi and Atchafalaya River Basins, though recent evidence shows that phosphorus also influences productivity in the Gulf. We developed a spatially explicit and structurally detailed SPARROW water-quality model that reveals important differences in the sources and transport processes that control nitrogen (N) and phosphorus (P) delivery to the Gulf. Our model simulations indicate that agricultural sources in the watersheds contribute more than 70% of the delivered N and P. However, corn and soybean cultivation is the largest contributor of N (52%), followed by atmospheric deposition sources (16%); whereas P originates primarily from animal manure on pasture and rangelands (37%), followed by corn and soybeans (25%), other crops (18%), and urban sources (12%). The fraction of in-stream P and N load delivered to the Gulf increases with stream size, but reservoir trapping of P causes large local- and regional-scale differences in delivery. Our results indicate the diversity of management approaches required to achieve efficient control of nutrient loads to the Gulf. These include recognition of important differences in the agricultural sources of N and P, the role of atmospheric N, attention to P sources downstream from reservoirs, and better control of both N and P in close proximity to large rivers. © 2008 American Chemical Society.
An experimental MOSFET approach to characterize (192)Ir HDR source anisotropy.
Toye, W C; Das, K R; Todd, S P; Kenny, M B; Franich, R D; Johnston, P N
2007-09-07
The dose anisotropy around a (192)Ir HDR source in a water phantom has been measured using MOSFETs as relative dosimeters. In addition, modeling using the EGSnrc code has been performed to provide a complete dose distribution consistent with the MOSFET measurements. Doses around the Nucletron 'classic' (192)Ir HDR source were measured for a range of radial distances from 5 to 30 mm within a 40 x 30 x 30 cm^3 water phantom, using a TN-RD-50 MOSFET dosimetry system with an active area of 0.2 mm by 0.2 mm. For each successive measurement a linear stepper capable of movement in intervals of 0.0125 mm re-positioned the MOSFET at the required radial distance, while a rotational stepper enabled angular displacement of the source at intervals of 0.9 degrees. The source-dosimeter arrangement within the water phantom was modeled using the standardized cylindrical geometry of the DOSRZnrc user code. In general, the measured relative anisotropy at each radial distance from 5 mm to 30 mm is in good agreement with the EGSnrc simulations, benchmark Monte Carlo simulations, and TLD measurements where they exist. The experimental approach, employing a MOSFET detection system of small size, high spatial resolution, and fast read-out capability, allowed a practical determination of dose anisotropy around an HDR source.
Power-Combined GaN Amplifier with 2.28-W Output Power at 87 GHz
NASA Technical Reports Server (NTRS)
Fung, King Man; Ward, John; Chattopadhyay, Goutam; Lin, Robert H.; Samoska, Lorene A.; Kangaslahti, Pekka P.; Mehdi, Imran; Lambrigtsen, Bjorn H.; Goldsmith, Paul F.; Soria, Mary M.;
2011-01-01
Future remote sensing instruments will require focal plane spectrometer arrays with higher resolution at high frequencies. Among the major components of spectrometers are the local oscillator (LO) signal sources used to drive mixers that down-convert received radio-frequency (RF) signals to intermediate frequencies (IFs) for analysis. Advancing LO technology by increasing output power and efficiency and by reducing component size will improve performance and simplify the architecture of spectrometer array systems. W-band power amplifiers (PAs) are an essential element of current frequency-multiplied submillimeter-wave LO signal sources. This work utilizes GaN monolithic millimeter-wave integrated circuit (MMIC) PAs developed from a new HRL Laboratories LLC 0.15-μm gate length GaN semiconductor transistor. By additionally waveguide power-combining the PA MMIC modules, the researchers target the highest output power performance and efficiency in the smallest volume achievable for W-band.
Electron source for a mini ion trap mass spectrometer
Dietrich, Daniel D.; Keville, Robert F.
1995-01-01
An ion trap that operates in the regime between research ion traps, which can detect ions with a mass resolution of better than 1:10^9, and commercial mass spectrometers, which require 10^4 ions and offer resolutions of a few hundred. The power consumption is kept to a minimum by the use of permanent magnets and a novel electron gun design. By Fourier analyzing the ion cyclotron resonance signals induced in the trap electrodes, a complete mass spectrum can be detected in a single combined structure. An attribute of the ion trap mass spectrometer is that overall system size is drastically reduced by combining a unique electron source and mass analyzer/detector in a single device. This enables portable low-power mass spectrometers for the detection of environmental pollutants or illicit substances, as well as sensors for on-board diagnostics to monitor engine performance or for active feedback in any process involving exhausting waste products.
X-ray optics for the LAMAR facility, an overview. [Large Area Modular Array of Reflectors
NASA Technical Reports Server (NTRS)
Gorenstein, P.
1979-01-01
The paper surveys the Large Area Modular Array of Reflectors (LAMAR), the concept of which is based on meeting two major requirements in X-ray astronomy, large collecting area and moderately good or better angular resolution for avoiding source confusion and imaging source fields. It is shown that the LAMAR provides the same sensitivity and signal to noise in imaging as a single large telescope having the same area and angular resolution but is a great deal less costly to develop, construct, and integrate into a space mission. Attention is also given to the LAMAR modular nature which will allow for an evolutionary development from a modest size array on Spacelab to a Shuttle launched free flyer. Finally, consideration is given to manufacturing methods which show promise of making LAMAR meet the criteria of good angular resolution, relatively low cost, and capability for fast volume production.
Thermal energy storage for the Stirling engine powered automobile
NASA Technical Reports Server (NTRS)
Morgan, D. T. (Editor)
1979-01-01
A thermal energy storage (TES) system developed for use with the Stirling engine as an automotive power system has gravimetric and volumetric storage densities which are competitive with electric battery storage systems, meets all operational requirements for a practical vehicle, and can be packaged in compact sized automobiles with minimum impact on passenger and freight volume. The TES/Stirling system is the only storage approach for direct use of combustion heat from fuel sources not suitable for direct transport and use on the vehicle. The particular concept described is also useful for a dual mode TES/liquid fuel system in which the TES (recharged from an external energy source) is used for short duration trips (approximately 10 miles or less) and liquid fuel carried on board the vehicle used for long duration trips. The dual mode approach offers the potential of 50 percent savings in the consumption of premium liquid fuels for automotive propulsion in the United States.
Wireless power using magnetic resonance coupling for neural sensing applications
NASA Astrophysics Data System (ADS)
Yoon, Hargsoon; Kim, Hyunjung; Choi, Sang H.; Sanford, Larry D.; Geddis, Demetris; Lee, Kunik; Kim, Jaehwan; Song, Kyo D.
2012-04-01
Various wireless power transfer systems based on electromagnetic coupling have been investigated and applied in many biomedical applications including functional electrical stimulation systems and physiological sensing in humans and animals. By integrating wireless power transfer modules with wireless communication devices, electronic systems can deliver data and control system operation in untethered freely-moving conditions without requiring access through the skin, a potential source of infection. In this presentation, we will discuss a wireless power transfer module using magnetic resonance coupling that is specifically designed for neural sensing systems and in-vivo animal models. This research presents simple experimental set-ups and circuit models of magnetic resonance coupling modules and discusses advantages and concerns involved in positioning and sizing of source and receiver coils compared to conventional inductive coupling devices. Furthermore, the potential concern of tissue heating in the brain during operation of the wireless power transfer systems will also be addressed.
Can industry afford solar energy
NASA Astrophysics Data System (ADS)
Kreith, F.; Bezdek, R.
1983-03-01
Falling oil prices and conservation measures have reduced the economic impetus to develop new energy sources, thus decreasing the urgency for bringing solar conversion technologies to commercial readiness at an early date. However, the capability for solar to deliver thermal energy for industrial uses is proven. A year-round operation would be three times as effective as home heating, which is necessary only part of the year. Flat plate, parabolic trough, and solar tower power plant demonstration projects, though uneconomically operated, have revealed engineering factors necessary for successful use of solar-derived heat for industrial applications. Areas of concern have been categorized as technology comparisons, load temperatures, plant size, location, end-use, backup requirements, and storage costs. Tax incentives have, however, supported home heating and not industrial uses, and government subsidies have historically gone to conventional energy sources. Tax credit programs which could lead to a 20% market penetration by solar energy in the industrial sector by the year 2000 are presented.
X-ray spectroscopy of the super soft source RXJ0925.7-475
NASA Technical Reports Server (NTRS)
Ebisawa, Ken; Asai, Kazumi; Dotani, Tadayasu; Mukai, Koji; Smale, Alan
1996-01-01
The super soft source (SSS) RXJ0925.7-475 was observed with the Advanced Satellite for Cosmology and Astrophysics (ASCA) solid state spectrometer and its energy spectrum was analyzed. A simple black body model does not fit the data, and several absorption edges of ionized heavy elements are required. Without the addition of absorption edges, the best-fit black body radius and the estimated bolometric luminosity are 6800 (d/1 kpc) km and 1.2 x 10^37 (d/1 kpc)^2 erg/s, respectively. The introduction of absorption edges significantly reduces the best-fit radius and luminosity to 140 (d/1 kpc) km and 6 x 10^34 (d/1 kpc)^2 erg/s, respectively. This suggests that estimates of the emission region size and luminosity of SSS based on black body model fits to the observed data are not reliable.
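The black body radii quoted above follow from the standard relation L = 4πR²σT⁴; a minimal sketch of the conversion (the temperature used here is illustrative, not the fitted ASCA value):

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_radius_km(luminosity_erg_s, temperature_k):
    """Radius (km) of a sphere radiating luminosity L as a black body
    at temperature T, from L = 4*pi*R^2*sigma*T^4."""
    lum_w = luminosity_erg_s * 1e-7  # erg/s -> W
    r_m = math.sqrt(lum_w / (4.0 * math.pi * SIGMA_SB * temperature_k**4))
    return r_m / 1e3

# Illustrative: a 1.2e37 erg/s source at an assumed kT ~ 50 eV
# (T ~ 5.8e5 K), giving a radius of a few thousand km
r_km = blackbody_radius_km(1.2e37, 5.8e5)
```

The strong T⁻² dependence of the inferred radius is why adding absorption edges, which changes the effective temperature of the fit, moves the radius and luminosity by orders of magnitude.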
Environmental investigations using diatom microfossils
Smith, Kathryn E.L.; Flocks, James G.
2010-01-01
Diatoms are unicellular phytoplankton (microscopic plant-like organisms) with cell walls made of silica (called a frustule). They live in both freshwater and saltwater and can be found in just about every place on Earth that is wet. The shape and morphology of the diatom frustule, unique to each species, are used for identification. Due to the microscopic size of diatoms, high-power microscopy is required for diatom identification. Diatoms are vital to life on Earth. They are photosynthetic primary producers, using sunlight to create oxygen and organic carbon from carbon dioxide and water. They are a significant source of the oxygen we breathe, have a major impact on the global carbon cycle (Smetacek, 1999), and are a food source for many aquatic organisms (Mann, 1993). Diatom abundance has even been demonstrated to have an influence on the diversity of larger marine mammals, including whales (Marx and Uhen, 2010). Data on diatom abundance and diversity are extremely useful in environmental studies.
Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.
2018-05-11
Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.
Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook
NASA Astrophysics Data System (ADS)
Sobolev, Andrej M.; Gray, Malcolm D.
2012-07-01
Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is, a definition of the parameter space and the basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region); and propagation of maser radiation. Output of the maser computer modeling can take the following forms: exploration of parameter space (where do inversions appear in particular maser transitions and their combinations, which parameter values describe a 'typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; and predictions (new maser transitions, correlations in variability of different maser transitions, and the like). The described schemes (constituents and hierarchy) of the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.
CIRiS: Compact Infrared Radiometer in Space
NASA Astrophysics Data System (ADS)
Osterman, D. P.; Collins, S.; Ferguson, J.; Good, W.; Kampe, T.; Rohrschneider, R.; Warden, R.
2016-09-01
The Compact Infrared Radiometer in Space (CIRiS) is a thermal infrared radiometric imaging instrument under development by Ball Aerospace for a Low Earth Orbit mission on a CubeSat spacecraft. Funded by the NASA Earth Science Technology Office's In-Space Validation of Earth Science Technology (InVEST) program, the mission objective is technology demonstration for improved on-orbit radiometric calibration. The CIRiS calibration approach uses a scene select mirror to direct three calibration views to the focal plane array and to transfer the resulting calibrated response to earth images. The views to deep space and two blackbody sources, including one at a selectable temperature, provide multiple options for calibration optimization. Two new technologies, carbon nanotube blackbody sources and microbolometer focal plane arrays with reduced pixel sizes, enable improved radiometric performance within the constrained 6U CubeSat volume. The CIRiS instrument's modular design facilitates subsystem modifications as required by future mission requirements. CubeSat constellations of CIRiS and derivative instruments offer an affordable approach to achieving revisit times as short as one day for diverse applications including water resource and drought management, cloud, aerosol, and dust studies, and land use and vegetation monitoring. Launch is planned for 2018.
Yi, Eongyu; Hyde, Clare E; Sun, Kai; Laine, Richard M
2016-02-12
Fumed silica is produced in 1000-ton-per-year quantities by combusting SiCl4 in H2/O2 flames. Given that both SiCl4 and the combustion byproduct HCl are corrosive, toxic and polluting, this route to fumed silica requires extensive safeguards that may be obviated if an alternate route were found. Silica, including rice hull ash (RHA), can be directly depolymerized using hindered diols to generate distillable spirocyclic alkoxysilanes or Si(OEt)4. We report here the use of liquid-feed flame spray pyrolysis (LF-FSP) to combust the aforementioned precursors to produce fumed silica very similar to SiCl4-derived products. The resulting powders are amorphous and necked, with <50 nm average particle sizes and specific surface areas (SSAs) of 140-230 m^2 g^-1. The LF-FSP approach does not require the containment constraints of the SiCl4 process, and given that the RHA silica source is produced in million-ton-per-year quantities worldwide, the reported approach represents a sustainable, green and potentially lower-cost alternative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
KLYNAC: Compact linear accelerator with integrated power supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyzhenkov, Alexander
Accelerators and accelerator-based light sources have a wide range of applications in science, engineering technology and medicine. Today the scientific community is working towards improving the quality of the accelerated beam and its parameters while trying to develop technology for reducing accelerator size. This work describes a design of a compact linear accelerator (linac) prototype, the resonant Klynac device, which is a combined linear accelerator and its power supply - klystron. The intended purpose of a Klynac device is to provide a compact and inexpensive alternative to a conventional 1 to 6 MeV accelerator, which typically requires a separate RF source, an accelerator itself and all the associated hardware. Because the Klynac is a single structure, it has the potential to be much less sensitive to temperature variations than a system with separate klystron and linac. We start by introducing a simplified theoretical model for a Klynac device. We then demonstrate how a prototype is designed step-by-step using particle-in-cell simulation studies for mono-resonant and bi-resonant structures. Finally, we discuss design options from a stability point of view and required input power as well as behavior of competing modes for the actual built device.
NASA Astrophysics Data System (ADS)
Chauvin, A.; Monteil, M.; Bellizzi, S.; Côte, R.; Herzog, Ph.; Pachebat, M.
2018-03-01
A nonlinear vibroacoustic absorber (Nonlinear Energy Sink: NES), involving a clamped thin latex membrane, is assessed in the acoustic domain. This NES is here considered as a one-port acoustic system, analyzed at low frequencies and for increasing excitation levels. This dynamic and frequency range requires a suitable experimental technique, which is presented first. It involves a specific impedance tube able to deal with samples of sufficient size and to reach high sound levels with a guaranteed linear response thanks to a specific acoustic source. The identification method presented here requires a single pressure measurement and is calibrated from a set of known acoustic loads. The NES reflection coefficient is then estimated at increasing source levels, showing its strong level dependency. This is presented as a means to understand energy dissipation. The results of the experimental tests are first compared to a nonlinear viscoelastic model of the membrane absorber. In a second step, a family of one-degree-of-freedom models, treated as equivalent Helmholtz resonators, is identified from the measurements, allowing a parametric description of the NES behavior over a wide range of levels.
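A one-degree-of-freedom model of the family used here, treated as an equivalent Helmholtz resonator, has input impedance Z(ω) = R + jωM + 1/(jωC); its pressure reflection coefficient against the duct's characteristic impedance can be sketched as follows (all lumped-parameter values are illustrative, not the identified ones):

```python
import math

def reflection_coeff(freq_hz, R, M, C, z0):
    """Pressure reflection coefficient of a one-degree-of-freedom
    (Helmholtz-like) resonator with impedance Z = R + jwM + 1/(jwC),
    terminating a duct of characteristic impedance z0."""
    w = 2.0 * math.pi * freq_hz
    z = R + 1j * w * M + 1.0 / (1j * w * C)
    return (z - z0) / (z + z0)

# Illustrative lumped parameters; resonance at f0 = 1/(2*pi*sqrt(M*C)),
# where the reactances cancel, the impedance collapses to R, and the
# reflection dips (strongest absorption)
M, C, R, z0 = 1.0, 1e-4, 0.5, 10.0
f0 = 1.0 / (2.0 * math.pi * math.sqrt(M * C))
r_res = abs(reflection_coeff(f0, R, M, C, z0))      # dip at resonance
r_off = abs(reflection_coeff(5 * f0, R, M, C, z0))  # near-total reflection
```

Fitting such a model at each excitation level, with level-dependent R, M, C, gives the parametric description of the NES mentioned above.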
Multipurpose neutron generators based on the radio frequency quadrupole linear accelerator
NASA Astrophysics Data System (ADS)
Hamm, Robert W.
2000-12-01
Neutron generators based on the Radio Frequency Quadrupole (RFQ) accelerator are now used for a variety of applications. These compact linear accelerators can produce from 10^8 to more than 10^13 neutrons/second using either proton or deuteron beams to bombard beryllium targets. They exhibit long lifetimes at full output, as there is little target or beam degradation. Since they do not use radioactive materials, licensing requirements are less stringent than for isotopic sources or tritium sealed-tube generators. The light weight and compact size of these robust systems make them transportable. The low-divergence output beam from the RFQ also allows use of a remote target, which can reduce the size of the shielding and moderator. The RFQ linac can be designed with a wide range of output beam energies and used with other targets such as lithium and deuterium to produce a neutron spectrum tailored to a specific application. These pulsed systems are well suited for applications requiring a high peak neutron flux, including activation analysis of very short-lived reaction products. They can replace conventional sources in non-destructive testing applications such as thermal or fast neutron radiography, and can also be used for cancer therapy.
NASA Astrophysics Data System (ADS)
Liu, Hui; Rudd, Grant; Daly, Liam; Hempstead, Joshua; Liu, Yiran; Khan, Amjad P.; Mallidi, Srivalleesha; Thomas, Richard; Rizvi, Imran; Arnason, Stephen; Cuckov, Filip; Hasan, Tayyaba; Celli, Jonathan P.
2016-03-01
Photodynamic therapy (PDT) is a light-based modality that shows promise for adaptation and implementation as a cancer treatment technology in resource-limited settings. In this context PDT is particularly well suited for treatment of pre-cancer and early stage malignancy of the oral cavity, which present a major global health challenge but for which light delivery can be achieved without major infrastructure requirements. In recent reports we demonstrated that a prototype low-cost battery-powered 635 nm LED light source for ALA-PpIX PDT achieves tumoricidal efficacy in vitro and in vivo comparable to a commercial turn-key laser source. Here, building on these reports, we describe the further development of a prototype PDT device to enable intraoral light delivery, designed for ALA-PDT treatment of precancerous and cancerous lesions of the oral cavity. We evaluate light delivery via fiber bundles and customized 3D-printed light applicators for flexible delivery to lesions of varying size and position within the oral cavity. We also briefly address performance requirements (output power, stability, and light delivery) and present validation of the device for ALA-PDT treatment in monolayer squamous carcinoma cell cultures.
Klynac: Compact Linear Accelerator with Integrated Power Supply
NASA Astrophysics Data System (ADS)
Malyzhenkov, A. V.
Accelerators and accelerator-based light sources have a wide range of applications in science, engineering technology and medicine. Today the scientific community is working towards improving the quality of the accelerated beam and its parameters, while trying to develop technology for reducing accelerator size. This work describes a design of a compact linear accelerator (linac) prototype: resonant Klynac device, which is a combined linear accelerator and its power supply - klystron. The intended purpose of a Klynac device is to provide a compact and inexpensive alternative to a conventional 1 to 6 MeV accelerator, which typically requires a separate RF source, accelerator itself and all the associated hardware. Because the Klynac is a single structure, it has the potential to be much less sensitive to temperature variations than a system with separate klystron and linac. We start by introducing a simplified theoretical model for a Klynac device. We then demonstrate how a prototype is designed step-by-step using Particle-In-Cell simulation studies for mono-resonant and bi-resonant structures. Finally, we discuss design options from a stability point of view and required input power as well as behavior of competing modes for the actual built device.
Two-stream Convolutional Neural Network for Methane Emissions Quantification
NASA Astrophysics Data System (ADS)
Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.
2017-12-01
Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We built large leak datasets for training and evaluation by collecting about 20 videos (i.e., 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that the two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, achieves an accuracy of 58.3%. The integration of the two-stream ConvNets gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrices and visualization of key features, will be used to understand accuracy rates and occurrences of false positives.
The quantification algorithm can help find and fix super-emitters and improve the cost-effectiveness of leak detection and repair programs.
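One plausible reading of "integration of the two-stream ConvNets" is late fusion of the per-class scores from the two streams; the study's exact fusion scheme is not specified here, so this sketch is an assumption:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_streams(spatial_logits, temporal_logits, w_spatial=0.5):
    """Late fusion: average the per-class probabilities of the spatial
    and temporal streams, then pick the leak-size class."""
    probs = (w_spatial * softmax(spatial_logits)
             + (1.0 - w_spatial) * softmax(temporal_logits))
    return int(probs.argmax(axis=-1))

# Eight leak-size classes (0 to 140 MCFH, binned); toy logits where the
# spatial stream favors class 1 and the temporal stream leans to class 2
spatial = np.array([0.2, 1.5, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
temporal = np.array([0.1, 0.9, 1.1, 0.0, 0.0, 0.0, 0.0, 0.0])
pred = fuse_streams(spatial, temporal)
```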
NASA Astrophysics Data System (ADS)
Meskhidze, N.; Royalty, T. M.; Phillips, B.; Dawson, K. W.; Petters, M. D.; Reed, R.; Weinstein, J.; Hook, D.; Wiener, R.
2017-12-01
The accurate representation of aerosols in climate models requires direct ambient measurement of size- and composition-dependent particle production fluxes. Here we present the design, testing, and analysis of data collected with the first instrument capable of measuring hygroscopicity-based, size-resolved particle fluxes using a continuous-flow Hygroscopicity-Resolved Relaxed Eddy Accumulation (Hy-Res REA) technique. The different components of the instrument were extensively tested inside the US Environmental Protection Agency's Aerosol Test Facility for sea-salt and ammonium sulfate particle fluxes. The new REA system design does not require particle accumulation and therefore avoids the diffusional wall losses associated with long particle residence times inside the air collectors of traditional REA devices. The Hy-Res REA system used in this study includes a 3-D sonic anemometer, two fast-response solenoid valves, two Condensation Particle Counters (CPCs), a Scanning Mobility Particle Sizer (SMPS), and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA). A linear relationship was found between the sea-salt particle fluxes measured by the eddy covariance and REA techniques, with comparable theoretical (0.34) and measured (0.39) proportionality constants. The sea-salt particle detection limit of the Hy-Res REA flux system is estimated to be 6 x 10^5 m^-2 s^-1. For ammonium sulfate and sea-salt particles of comparable source strength and location, the continuous-flow Hy-Res REA instrument achieved better than 90% accuracy in measuring the sea-salt particle fluxes. In principle, the instrument can be applied to measure fluxes of particles of variable size and distinct hygroscopic properties (i.e., mineral dust, black carbon, etc.).
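The flux estimate behind the quoted 0.34/0.39 proportionality constants is the standard relaxed-eddy-accumulation relation F = b·σ_w·(C_up − C_down); a toy sketch with made-up numbers:

```python
import statistics

def rea_flux(w, conc, b=0.34):
    """Relaxed eddy accumulation: sort samples into updraft and
    downdraft reservoirs by the sign of the vertical wind w, then
    estimate the flux as F = b * sigma_w * (C_up - C_down)."""
    sigma_w = statistics.pstdev(w)
    c_up = statistics.mean(c for wi, c in zip(w, conc) if wi > 0)
    c_down = statistics.mean(c for wi, c in zip(w, conc) if wi <= 0)
    return b * sigma_w * (c_up - c_down)

# Toy series: updrafts systematically carry more particles, so the
# estimated flux is positive (upward)
w = [0.5, -0.5, 1.0, -1.0, 0.3, -0.3]      # vertical wind, m/s
conc = [120, 80, 140, 60, 110, 90]         # particle counts
flux = rea_flux(w, conc)
```

In the instrument described above, the fast solenoid valves perform this conditional sorting in real time, which is why no physical accumulation (and hence no wall-loss-prone residence time) is needed.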
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. HPC techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
Transport systems research vehicle color display system operations manual
NASA Technical Reports Server (NTRS)
Easley, Wesley C.; Johnson, Larry E.
1989-01-01
A recent upgrade of the Transport Systems Research Vehicle operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center has resulted in an all-glass panel in the research flight deck. Eight ARINC-D size CRT color displays make up the panel. A major goal of the display upgrade effort was ease of operation and maintenance of the hardware while maintaining the versatility needed for flight research. Software is the key to this required versatility and will be the area demanding the most detailed technical design expertise. This document is intended to serve as a single source of quick reference information needed for routine operation and system level maintenance. Detailed maintenance and modification of the display system will require specific design documentation and must be accomplished by individuals with specialized knowledge and experience.
Piotrowski, T; Rodrigues, G; Bajon, T; Yartsev, S
2014-03-01
Multi-institutional collaborations allow for more information to be analyzed but the data from different sources may vary in the subgroup sizes and/or conditions of measuring. Rigorous statistical analysis is required for pooling the data in a larger set. Careful comparison of all the components of the data acquisition is indispensable: identical conditions allow for enlargement of the database with improved statistical analysis, clearly defined differences provide opportunity for establishing a better practice. The optimal sequence of required normality, asymptotic normality, and independence tests is proposed. An example of analysis of six subgroups of position corrections in three directions obtained during image guidance procedures for 216 prostate cancer patients from two institutions is presented. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
FEM Modeling of a Magnetoelectric Transducer for Autonomous Micro Sensors in Medical Application
NASA Astrophysics Data System (ADS)
Yang, Gang; Talleb, Hakeim; Gensbittel, Aurélie; Ren, Zhuoxiang
2015-11-01
In the context of wireless and autonomous sensors, this paper presents the multiphysics modeling of an energy transducer based on magnetoelectric (ME) composite for biomedical applications. The study considers the power requirement of an implanted sensor, the communication distance, the size limit of the device for minimal invasive insertion as well as the electromagnetic exposure restriction of the human body. To minimize the electromagnetic absorption by the human body, the energy source is provided by an external reader emitting low frequency magnetic field. The modeling is carried out with the finite element method by solving simultaneously the multiple physics problems including the electric load of the conditioning circuit. The simulation results show that with the T-L mode of a trilayer laminated ME composite, the transducer can deliver the required energy in respecting different constraints.
Effect of beam types on the scintillations: a review
NASA Astrophysics Data System (ADS)
Baykal, Yahya; Eyyuboglu, Halil T.; Cai, Yangjian
2009-02-01
When different incidences are launched in atmospheric turbulence, it is known that the intensity fluctuations exhibit different characteristics. In this paper we review our work on the evaluation of the scintillation index of general beam types when such optical beams propagate in horizontal atmospheric links in the weak fluctuations regime. Variation of scintillation indices versus the source and medium parameters is examined for flat-topped-Gaussian, cosh-Gaussian, cos-Gaussian, annular, elliptical Gaussian, circular (i.e., stigmatic) and elliptical (i.e., astigmatic) dark hollow, lowest-order Bessel-Gaussian and laser array beams. For the flat-topped-Gaussian beam, scintillation is larger than the single Gaussian beam scintillation when the source sizes are much less than the Fresnel zone, but becomes smaller for source sizes much larger than the Fresnel zone. The cosh-Gaussian beam has lower on-axis scintillations at smaller source sizes and longer propagation distances as compared to Gaussian beams, where focusing imposes more reduction on the cosh-Gaussian beam scintillations than on those of the Gaussian beam. Intensity fluctuations of a cos-Gaussian beam show favorable behaviour against a Gaussian beam at lower propagation lengths. At longer propagation lengths, the annular beam becomes advantageous. In focused cases, the scintillation index of the annular beam is lower than the scintillation indices of Gaussian and cos-Gaussian beams starting at earlier propagation distances. Cos-Gaussian beams are advantageous at relatively large source sizes, while the reverse is valid for annular beams. Scintillations of a stigmatic or astigmatic dark hollow beam can be smaller when compared to stigmatic or astigmatic Gaussian, annular and flat-topped beams under conditions that are closely related to the beam parameters.
Intensity fluctuation of an elliptical Gaussian beam can also be smaller than that of a circular Gaussian beam, depending on the propagation length and the ratio of the beam waist size along the long axis to that along the short axis (i.e., astigmatism). Comparing against the fundamental Gaussian beam on an equal source size and equal power basis, it is observed that the scintillation index of the lowest-order Bessel-Gaussian beam is lower at large source sizes and large width parameters. However, for excessively large width parameters and beyond certain propagation lengths, the advantage of the lowest-order Bessel-Gaussian beam seems to be lost. Compared to the Gaussian beam, the laser array beam exhibits less scintillation at long propagation ranges and at some midrange radial displacement parameters. When compared among themselves, laser array beams tend to have reduced scintillations for larger numbers of beamlets, longer wavelengths, midrange radial displacement parameters, intermediate Gaussian source sizes, larger inner scales and smaller outer scales of turbulence. The number of beamlets used does not seem to be so effective in this improvement of the scintillations.
The measurement of acoustic properties of limited size panels by use of a parametric source
NASA Astrophysics Data System (ADS)
Humphrey, V. F.
1985-01-01
A method of measuring the acoustic properties of limited size panels immersed in water, with a truncated parametric array used as the acoustic source, is described. The insertion loss and reflection loss of thin metallic panels, typically 0.45 m square, were measured at normal incidence by using this technique. Results were obtained for a wide range of frequencies (10 to 100 kHz) and were found to be in good agreement with the theoretical predictions for plane waves. Measurements were also made of the insertion loss of aluminium, Perspex and G.R.P. panels for angles of incidence up to 50°. The broad bandwidth available from the parametric source permitted detailed measurements to be made over a wide frequency range using a single transmitting transducer. The small spot sizes obtainable with the parametric source also helped to reduce the significance of diffraction from edges of the panel under test.
Small-scale structure of the CO emission in S255 from lunar occultation observations
NASA Technical Reports Server (NTRS)
Schloerb, F. P.; Scoville, N. Z.
1980-01-01
Two lunar occultations of the S255 H II region/molecular cloud complex were observed in the 2.6 mm CO line during 1978 and 1979. The resolution obtained (between 4 arcsec and 7 arcsec) enables us to resolve bright sources that are much smaller than the 44 arcsec telescope beam. In addition to the large-scale structure (approximately 10 arcmin in size) seen in previous CO maps, the observations reveal two high-temperature emission regions in the cloud core associated with two compact infrared sources about 20 arcsec apart. The first CO hot spot is larger in size, with a Gaussian width of 41 ± 7 arcsec and a peak temperature of 65 K. Its center falls between the two small infrared sources S255 IRS1 and IRS2. The linear size and peak temperature of this source are remarkably similar to those in the Orion Kleinmann-Low nebula. The second source is revealed from a discontinuous change in the CO line flux as the lunar limb crossed S255 IRS1. The size of this component is less than 7 arcsec; its temperature must exceed 200 K. No evidence is found for exceptionally high temperatures at the boundary of the two H II regions crossed during the occultations.
13 CFR 121.304 - What are the size requirements for refinancing an existing SBA loan?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false What are the size requirements for refinancing an existing SBA loan? 121.304 Section 121.304 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Size Eligibility...
Kang, Xuming; Song, Jinming; Yuan, Huamao; Duan, Liqin; Li, Xuegang; Li, Ning; Liang, Xianmeng; Qu, Baoxiao
2017-09-01
Heavy metal contamination is an essential indicator of environmental health. In this work, one sediment core was used for the analysis of the speciation of heavy metals (Cr, Mn, Ni, Cu, Zn, As, Cd, and Pb) in Jiaozhou Bay sediments with different grain sizes. The bioavailability, sources, and ecological risk of heavy metals were also assessed on a centennial timescale. Heavy metals were enriched in the <63 µm grain-size fraction and were predominantly present in residual phases. Moreover, the mobility sequence based on the sum of the first three phases (for grain sizes of <63 µm) was Mn > Pb > Cd > Zn > Cu > Ni > Cr > As. Enrichment factors (EF) indicated that heavy metals in Jiaozhou Bay ranged from no enrichment to minor enrichment. The potential ecological risk index (RI) indicated that Jiaozhou Bay had been subject to low ecological risk, with an increasing trend since the 1940s owing to the increase of anthropogenic activities. The source analysis indicated that natural sources were the primary sources of heavy metals in Jiaozhou Bay and that anthropogenic sources have shown an increasing trend since the 1940s. The principal component analysis (PCA) indicated that Cr, Mn, Ni, Cu and Pb were primarily derived from natural sources and that Zn and Cd were influenced by the shipbuilding industry. Mn, Cu, Zn and Pb may originate from both natural and anthropogenic sources. As may be influenced by agricultural activities. Moreover, heavy metals in sediments of Jiaozhou Bay were clearly influenced by atmospheric deposition and river input. Copyright © 2017. Published by Elsevier Inc.
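The enrichment factor used above is conventionally the metal-to-reference-element ratio in the sample divided by the same ratio in background (crustal) material, EF = (M/Ref)_sample / (M/Ref)_background. A minimal sketch follows; the choice of reference element (commonly Al or Fe) and the numeric values are illustrative assumptions, not data from this study:

```python
def enrichment_factor(metal_sample, ref_sample, metal_background, ref_background):
    """Enrichment factor relative to a conservative reference element.

    All four arguments are concentrations in the same units; the
    reference element (e.g., Al or Fe) normalizes for grain-size and
    mineralogical effects. EF near 1 suggests a natural (crustal)
    source; larger values suggest anthropogenic enrichment.
    """
    return (metal_sample / ref_sample) / (metal_background / ref_background)

# Hypothetical example: Pb measured against Al
ef_pb = enrichment_factor(50.0, 5.0, 20.0, 4.0)  # (50/5) / (20/4)
```

Values between roughly 1 and 3 are often read as "no enrichment to minor enrichment," which is the interpretation band the abstract refers to.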
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Survey of Large Methane Emitters in North America
NASA Astrophysics Data System (ADS)
Deiker, S.
2017-12-01
It has been theorized that methane emissions in the oil and gas industry follow log normal or "fat tail" distributions, with large numbers of small sources for every very large source. Such distributions would have significant policy and operational implications. Unfortunately, by their very nature such distributions would require large sample sizes to verify. Until recently, such large-scale studies would be prohibitively expensive. The largest public study to date sampled 450 wells, an order of magnitude too low to effectively constrain these models. During 2016 and 2017, Kairos Aerospace conducted a series of surveys using the LeakSurveyor imaging spectrometer, mounted on light aircraft. This small, lightweight instrument was designed to rapidly locate large emission sources. The resulting survey covers over three million acres of oil and gas production. This includes over 100,000 wells, thousands of storage tanks and over 7,500 miles of gathering lines. This data set now allows us to probe the distribution of large methane emitters. Results of this survey, and implications for methane emission distribution, methane policy and LDAR, will be discussed.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated step-size control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
The energetics and mass structure of regions of star formation: S201
NASA Technical Reports Server (NTRS)
Thronson, H. A., Jr.; Smith, H. A.; Lada, C. J.; Glaccum, W.; Harper, D. A.; Loewenstein, R. F.; Smith, J.
1984-01-01
Theoretical predictions about dust and gas in star forming regions are tested by observing a 4 arcmin region surrounding the radio continuum source in S201. The object was mapped in two far infrared wavelengths and found to show significant extended emission. Under the assumption that the molecular gas is heated solely via thermal coupling with the dust, the volume density was mapped in S201. The ratios of infrared optical depth to CO column density were calculated for a number of positions in the source. Near the center of the cloud the values are found to be in good agreement with other determinations for regions with lower column density. In addition, the observations suggest significant molecular destruction in the outer parts of the object. Current models of gas heating were used to calculate a strong limit for the radius of the far infrared emitting grains, equal to or less than 0.15 micron. Grains of about this size are required by the observation of high temperature (T equal to or greater than 20 K) gas in many sources.
On Using Intensity Interferometry for Feature Identification and Imaging of Remote Objects
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Strekalov, Dmitry V.; Yu, Nan
2013-01-01
We derive an approximation to the intensity covariance function of two scanning pinhole detectors, facing a distant source (e.g., a star) being occluded partially by an absorptive object (e.g., a planet). We focus on using this technique to identify or image an object that is in the line-of-sight between a well-characterized source and the detectors. We derive the observed perturbation to the intensity covariance map due to the object, showing that under some reasonable approximations it is proportional to the real part of the Fourier transform of the source's photon-flux density times the Fourier transform of the object's intensity absorption. We highlight the key parameters impacting its visibility and discuss the requirements for estimating object-related parameters, e.g., its size, velocity or shape. We consider an application of this result to determining the orbit inclination of an exoplanet orbiting a distant star. Finally, motivated by the intrinsically weak nature of the signature, we study its signal-to-noise ratio and determine the impact of system parameters.
Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
Characterization of lunar ilmenite resources
NASA Astrophysics Data System (ADS)
Heiken, G. H.; Vaniman, D. T.
Ilmenite will be an important lunar resource, to be used mainly for oxygen production but also as a source of iron. Ilmenite abundances in high-Ti basaltic lavas are higher (9-19 vol pct) than in high-Ti mare soils (mostly less than 10 vol pct). This factor alone may make crushed high-Ti basaltic lavas most attractive as a target for ilmenite extraction. Concentration of ilmenite from either a crushed basalt or regolith requires size sorting to avoid polycrystalline fragments. In coarse-grained high-Ti basaltic lavas, about 60-80 percent of the ilmenite will consist of relatively 'clean' single crystals if the rocks are crushed to a size of 0.2 mm. Fine-grained high-Ti basalts, with thin skeletal or hopper-shaped ilmenites, would produce essentially no free or 'clean' ilmenite grains even if crushed to 0.15 mm and only about 7 percent free ilmenite if crushed to 0.05 mm. Data from the 2.8-m-thick regolith sampled by coring at the Apollo 17 site show that in even the most basalt-clast-rich and least mature stratigraphic intervals, free ilmenite grains make up less than 2 percent of the 0.02- to 0.2-mm size fraction and a mere 0.3 percent of the 0.2- to 2-mm size fraction.
Evaluation of Stony Coral Indicators for Coral Reef ...
Colonies of reef-building stony corals at 57 stations around St. Croix, U.S. Virgin Islands were characterized by species, size and percentage of living tissue. Taxonomic, biological and physical indicators of coral condition were derived from these measurements and assessed for their response to gradients of human disturbance. The purpose of the study was to identify indicators that could be used for regulatory assessments under authority of the Clean Water Act--this requires that indicators distinguish anthropogenic disturbances from natural variation. Stony coral indicators were tested for correlation with human disturbance across gradients located on three different sides of the island. At the most intensely disturbed location, five of eight primary indicators were highly correlated with distance from the source of disturbance: Coral taxa richness, average colony size, the coefficient of variation of colony size (an indicator of colony size heterogeneity), total topographic coral surface area, and live coral surface area. An additional set of exploratory indicators related to rarity, reproductive and spawning mode, and taxonomic identity were also screened for association with disturbance at the same location. For the other two locations, there were no significant changes in indicator values and therefore no discernible effects of human activity. Coral indicators demonstrated sufficient precision to detect levels of change that would be applicable in a regio
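One of the primary indicators above, the coefficient of variation of colony size (colony-size heterogeneity), is the sample standard deviation divided by the mean. A minimal sketch of that computation (variable names are assumptions; the input sizes are hypothetical):

```python
import statistics

def coefficient_of_variation(sizes):
    """Colony-size heterogeneity indicator: sample std dev / mean.

    Higher values indicate a mix of small and large colonies;
    lower values indicate colonies of roughly uniform size.
    """
    return statistics.stdev(sizes) / statistics.fmean(sizes)

# Hypothetical colony diameters (cm) at one station
cv = coefficient_of_variation([12.0, 30.0, 8.0, 45.0, 22.0])
```

Because it is dimensionless, the indicator can be compared across stations even when colony sizes differ in scale.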
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
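The estimator described above is N̂ = M / P̂. A common way to see how survey sample size and design effect drive the random error is a delta-method standard error with design-effect-inflated binomial variance for P̂, treating M as known exactly. This is a generic sketch of that reasoning, not the authors' method; the design effect value, the z-multiplier, and all numbers are assumptions:

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=2.0, z=1.96):
    """Multiplier-method population size estimate with an approximate CI.

    M : count of unique objects distributed (assumed known exactly)
    p_hat : proportion in the RDS survey reporting receipt of an object
    n : RDS survey sample size
    design_effect : variance inflation for the RDS design (assumption)
    z : normal quantile for the confidence level (1.96 -> ~95%)
    """
    N_hat = M / p_hat
    # Design-effect-inflated binomial variance of p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n
    # Delta method: d(M/p)/dp = -M/p^2, so SE(N_hat) = M * SE(p) / p^2
    se_N = M * math.sqrt(var_p) / p_hat ** 2
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# Hypothetical example: 500 objects distributed, 25% receipt among n=400
N_hat, ci = multiplier_estimate(500, 0.25, 400)
```

Rerunning with a smaller p_hat widens the interval sharply, which is the behavior the abstract highlights when P is low.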
Process system and method for fabricating submicron field emission cathodes
Jankowski, A.F.; Hayes, J.P.
1998-05-05
A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape. 3 figs.
Process system and method for fabricating submicron field emission cathodes
Jankowski, Alan F.; Hayes, Jeffrey P.
1998-01-01
A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape.
NASA Astrophysics Data System (ADS)
Li, Xiang; Jiang, Li; Hoa, Le Phuoc; Lyu, Yan; Xu, Tingting; Yang, Xin; Iinuma, Yoshiteru; Chen, Jianmin; Herrmann, Hartmut
2016-11-01
In this study, measurements of size-resolved sugar and nitrophenol concentrations and their distributions during Shanghai haze episodes were performed. The primary goal was to track their possible source categories and investigate the contribution of biological and biomass burning aerosols to urban haze events through regional transport. The results showed that levoglucosan had the highest concentration (40-852 ng m⁻³), followed by 4-nitrophenol (151-768 ng m⁻³), sucrose (38-380 ng m⁻³), 4-nitrocatechol (22-154 ng m⁻³), and mannitol (5-160 ng m⁻³). Size distributions showed that over 90% of the total levoglucosan and 4-nitrocatechol accumulated in the fine-particle size fraction (<2.1 μm), particularly in heavier haze periods. The back trajectories further supported the link between levoglucosan and biomass-burning particles, with higher values associated with air masses passing over biomass burning areas (fire spots) before reaching Shanghai. Other primary saccharide and nitrophenol species showed an unusually large peak in the coarse-mode size fraction (>2.1 μm), which can be correlated with emissions from local sources (biological emission). Principal component analysis (PCA) and positive matrix factorization (PMF) revealed four probable sources (biomass burning: 28%, airborne pollen: 25%, fungal spores: 24%, and combustion emission: 23%) responsible for urban haze events. Taken together, these findings provide useful insight into size-resolved source apportionment analysis via molecular markers for urban haze pollution events in Shanghai.
Measurement of daily size-fractionated ambient PM10 mass, metals, inorganic ions (nitrate and sulfate) and elemental and organic carbon were conducted at source (Downey) and receptor (Riverside) sites within the Los Angeles Basin. In addition to 24-h concentration m...
Increasing seed size and quality by manipulating BIG SEEDS 1 in legume species
USDA-ARS?s Scientific Manuscript database
Plant organs such as seeds are primary sources of food for both humans and animals. Seed size is one of the major agronomic traits that have been selected in crop plants during their domestication. Legume seeds are a major source of dietary proteins and oils. Here, we report a novel and conserved ro...
Edge systems in the deep ocean
NASA Astrophysics Data System (ADS)
Coon, Andrew; Earp, Samuel L.
2010-04-01
DARPA has initiated a program to explore persistent presence in the deep ocean. The deep ocean is difficult to access and presents a hostile environment. Persistent operations in the deep ocean will require new technology for energy, communications and autonomous operations. Several fundamental characteristics of the deep ocean shape any potential system architecture. The deep sea presents acoustic sensing opportunities that may provide significantly enhanced sensing footprints relative to sensors deployed at traditional depths. Communication limitations drive solutions towards autonomous operation of the platforms and automation of data collection and processing. Access to the seabed presents an opportunity for fixed infrastructure with no important limitations on size and weight. Difficult access and persistence impose requirements for long-life energy sources and potentially energy harvesting. The ocean is immense, so there is a need to scale the system footprint for presence over tens of thousands and perhaps hundreds of thousands of square nautical miles. This paper focuses on the aspect of distributed sensing, and the engineering of networks of sensors to cover the required footprint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24-hour period we were able to image a large-scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
System description for DART (Decision Analysis for Remediation Technologies)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nonte, J.; Bolander, T.; Nickelson, D.
1997-09-01
DART is a computer aided system populated with influence models to determine quantitative benefits derived by matching requirements and technologies. The DART database is populated with data from over 900 DOE sites from 10 Field Offices. These sites are either source terms, such as buried waste pits, or soil or groundwater contaminated plumes. The data, traceable to published documents, consists of site-specific data (contaminants, area, volume, depth, size, remedial action dates, site preferred remedial option), problems (e.g., offsite contaminant plume), and Site Technology Coordinating Group (STCG) need statements (also contained in the Ten-Year Plan). DART uses this data to calculate and derive site priorities, risk rankings, and site specific technology requirements. DART is also populated with over 900 industry and DOE SCFA technologies. Technology capabilities can be used to match technologies to waste sites based on the technology's capability to meet site requirements and constraints. Queries may be used to access, sort, roll-up, and rank site data. Data roll-ups may be graphically displayed.
Large Animal Models of an In Vivo Bioreactor for Engineering Vascularized Bone.
Akar, Banu; Tatara, Alexander M; Sutradhar, Alok; Hsiao, Hui-Yi; Miller, Michael; Cheng, Ming-Huei; Mikos, Antonios G; Brey, Eric M
2018-04-12
Reconstruction of large skeletal defects is challenging due to the requirement for large volumes of donor tissue and the often complex surgical procedures. Tissue engineering has the potential to serve as a new source of tissue for bone reconstruction, but current techniques are often limited in regard to the size and complexity of tissue that can be formed. Building tissue using an in vivo bioreactor approach may enable the production of appropriate amounts of specialized tissue, while reducing issues of donor site morbidity and infection. Large animals are required to screen and optimize new strategies for growing clinically appropriate volumes of tissues in vivo. In this article, we review both ovine and porcine models that serve as models of the technique proposed for clinical engineering of bone tissue in vivo. Recent findings with these systems are discussed, as well as the next steps required to use these models to develop clinically applicable tissue engineering applications.
THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodson, R.; Rioja, M.; Imai, H.
2013-06-15
High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI by the MultiView approach where multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching requiring a Control Moment Gyro, and the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions which implement the phase-referencing technique.
A study of payload specialist station monitor size constraints. [space shuttle orbiters
NASA Technical Reports Server (NTRS)
Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.
1975-01-01
Constraints on the CRT display size for the shuttle orbiter cabin are studied. The viewing requirements placed on these monitors were assumed to involve display of imaged scenes providing visual feedback during payload operations and display of alphanumeric characters. Data on target resolution, target recognition, and range rate detection by human observers were utilized to determine viewing requirements for imaged scenes. Field-of-view and acuity requirements for a variety of payload operations were obtained along with the necessary detection capability in terms of range-to-target size ratios. The monitor size necessary to meet the acuity requirements was established. An empirical test was conducted to determine required recognition sizes for displayed alphanumeric characters. The results of the test were used to determine the number of characters which could be simultaneously displayed based on the recognition size requirements using the proposed monitor size. A CRT display of 20 x 20 cm is recommended. A portion of the display area is used for displaying imaged scenes and the remaining display area is used for alphanumeric characters pertaining to the displayed scene. The entire display is used for the character-alone mode.
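The acuity-driven sizing in a study like this reduces to visual-angle arithmetic: a character of height h at viewing distance d subtends theta = 2*atan(h/2d). The sketch below is a generic illustration; the 700 mm viewing distance, 16 arcmin character angle, and the width/spacing ratios are assumed round numbers, not values from the report.

```python
import math

def char_height_mm(view_dist_mm, arcmin):
    """Character height subtending a visual angle theta at distance d: h = 2*d*tan(theta/2)."""
    theta_rad = math.radians(arcmin / 60.0)
    return 2.0 * view_dist_mm * math.tan(theta_rad / 2.0)

def chars_per_line(display_width_mm, char_h_mm, width_to_height=0.7, gap_frac=0.2):
    """Rough character count per line, assuming width ~0.7*height plus 20% spacing (illustrative)."""
    cell_mm = char_h_mm * width_to_height * (1.0 + gap_frac)
    return int(display_width_mm // cell_mm)

h = char_height_mm(700, 16)   # ~3.3 mm characters at an assumed 700 mm viewing distance
n = chars_per_line(200, h)    # characters fitting across a 20 cm display width
```

This kind of calculation, repeated for the scene-plus-text split described above, gives the simultaneous character count the study derives empirically.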
7 CFR 51.2284 - Size classification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classification. 51.2284 Section 51.2284...) Size Requirements § 51.2284 Size classification. The following classifications are provided to describe... of kernels in the lot shall conform to the requirements of the specified classification as defined...
7 CFR 51.2284 - Size classification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classification. 51.2284 Section 51.2284...) Size Requirements § 51.2284 Size classification. The following classifications are provided to describe... of kernels in the lot shall conform to the requirements of the specified classification as defined...
Implementation of Size-Dependent Local Diagnostic Reference Levels for CT Angiography.
Boere, Hub; Eijsvoogel, Nienke G; Sailer, Anna M; Wildberger, Joachim E; de Haan, Michiel W; Das, Marco; Jeukens, Cecile R L P N
2018-05-01
Diagnostic reference levels (DRLs) are established for standard-sized patients; however, patient dose in CT depends on patient size. The purpose of this study was to introduce a method for setting size-dependent local diagnostic reference levels (LDRLs) and to evaluate these LDRLs in comparison with size-independent LDRLs and with respect to image quality. One hundred eighty-four aortic CT angiography (CTA) examinations performed on either a second-generation or third-generation dual-source CT scanner were included; we refer to the second-generation dual-source CT scanner as "CT1" and the third-generation dual-source CT scanner as "CT2." The volume CT dose index (CTDIvol) and patient diameter (i.e., the water-equivalent diameter) were retrieved by dose-monitoring software. Size-dependent DRLs based on a linear regression of the CTDIvol versus patient size were set by scanner type. Size-independent DRLs were set by the 5th and 95th percentiles of the CTDIvol values. Objective image quality was assessed using the signal-to-noise ratio (SNR), and subjective image quality was assessed using a 4-point Likert scale. The CTDIvol depended on patient size and scanner type (R2 = 0.72 and 0.78, respectively; slope = 0.05 and 0.02 mGy/mm; p < 0.001). Of the outliers identified by size-independent DRLs, 30% (CT1) and 67% (CT2) were adequately dosed when considering patient size. Alternatively, 30% (CT1) and 70% (CT2) of the outliers found with size-dependent DRLs were not identified using size-independent DRLs. A negative correlation was found between SNR and CTDIvol (R2 = 0.36 for CT1 and 0.45 for CT2). However, all outliers had a subjective image quality score of sufficient or better. We introduce a method for setting size-dependent LDRLs in CTA. Size-dependent LDRLs are relevant for assessing the appropriateness of the radiation dose for an individual patient on a specific CT scanner.
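The size-dependent DRL construction described here amounts to regressing CTDIvol on patient diameter and flagging exams above a percentile band around the fitted line. A minimal sketch on synthetic data (only the ~0.05 mGy/mm slope is taken from the abstract; the diameters, noise level, and percentile choice are invented for illustration):

```python
import numpy as np

def size_dependent_drl(diam_mm, ctdi_mgy, pct=95):
    """Fit CTDIvol = slope*diameter + intercept, then shift the line up to the
    chosen percentile of the residuals to form a size-dependent reference level."""
    slope, intercept = np.polyfit(diam_mm, ctdi_mgy, 1)
    offset = np.percentile(ctdi_mgy - (slope * diam_mm + intercept), pct)
    return slope, intercept, offset

rng = np.random.default_rng(1)
diam = rng.uniform(250, 400, size=500)                  # synthetic water-equivalent diameters (mm)
ctdi = 0.05 * diam - 5.0 + rng.normal(0.0, 1.0, 500)    # synthetic doses; slope as reported
slope, intercept, offset = size_dependent_drl(diam, ctdi)
flagged = ctdi > slope * diam + intercept + offset      # exams above the size-dependent DRL
```

Unlike a flat 95th-percentile cut on CTDIvol alone, this flags a small patient given a mid-range dose and clears a large patient given the same dose, which is the behavior the study's outlier comparison illustrates.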
Lin, Jocelyn E; Hilborn, Ray; Quinn, Thomas P; Hauser, Lorenz
2011-12-01
Small populations can provide insights into ecological and evolutionary aspects of species distributions over space and time. In the Wood River system in Alaska, USA, small aggregates of Chinook (Oncorhynchus tshawytscha) and chum salmon (O. keta) spawn in an area dominated by sockeye salmon (O. nerka). Our objective was to determine whether these Chinook and chum salmon are reproductively isolated, self-sustaining populations, population sinks that produce returning adults but receive immigration, or strays from other systems that do not produce returning adults. DNA samples collected from adult chum salmon from 16 streams and Chinook salmon from four streams in the Wood River system over 3 years were compared to samples from large populations in the nearby Nushagak River system, a likely source of strays. For both species, microsatellite markers indicated no significant genetic differentiation between the two systems. Simulations of microsatellite data in a large source and a smaller sink population suggested that considerable immigration would be required to counteract the diverging effects of genetic drift and produce genetic distances as small as those observed, considering the small census sizes of the two species in the Wood River system. Thus, the Wood River system likely receives substantial immigration from neighbouring watersheds, such as the Nushagak River system, which supports highly productive runs. Although no data on population productivity in the Wood River system exist, our results suggest source-sink dynamics for the two species, a finding relevant to other systems where salmonid population sizes are limited by habitat factors. © 2011 Blackwell Publishing Ltd.
Nutrient bioassimilation capacity of aquacultured oysters: quantification of an ecosystem service.
Higgins, Colleen B; Stephenson, Kurt; Brown, Bonnie L
2011-01-01
Like many coastal zones and estuaries, the Chesapeake Bay has been severely degraded by cultural eutrophication. Rising implementation costs and difficulty achieving nutrient reduction goals associated with point and nonpoint sources suggest that approaches supplemental to source reductions may prove useful in the future. Enhanced oyster aquaculture has been suggested as one potential policy initiative to help rid the Bay waters of excess nutrients via harvest of bioassimilated nutrients. To assess this potential, total nitrogen (TN), total phosphorus (TP), and total carbon (TC) content were measured in oyster tissue and shell at two floating-raft cultivation sites in the Chesapeake Bay. Models were developed based on the common market measurement of total length (TL) for aquacultured oysters, which was strongly correlated to the TN (R2 = 0.76), TP (R2 = 0.78), and TC (R2 = 0.76) content per oyster tissue and shell. These models provide resource managers with a tool to quantify net nutrient removal. Based on model estimates, 10^6 harvest-sized oysters (76 mm TL) remove 132 kg TN, 19 kg TP, and 3823 kg TC. In terms of nutrients removed per unit area, oyster harvest is an effective means of nutrient removal compared with other nonpoint source reduction strategies. At a density of 286 oysters m^-2, assuming no mortality, harvest-size nutrient removal rates can be as high as 378 kg TN ha^-1, 54 kg TP ha^-1, and 10,934 kg TC ha^-1 for 76-mm oysters. Removing 1 t N from the Bay would require harvesting 7.7 million 76-mm TL cultivated oysters.
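The per-hectare and per-tonne figures quoted above follow from unit arithmetic on the per-million-oyster removal rate. A hedged back-of-envelope check using only the rounded numbers in the abstract (small discrepancies with the paper's exact model, e.g. 7.6 vs 7.7 million oysters per tonne, are expected from rounding):

```python
# Rounded figures from the abstract, not the paper's underlying model.
TN_KG_PER_MILLION = 132.0          # kg TN removed per 10^6 harvest-size (76 mm TL) oysters
density_per_m2 = 286               # stocking density, oysters per square metre

tn_per_oyster_g = TN_KG_PER_MILLION * 1000.0 / 1e6               # ~0.132 g TN per oyster
tn_per_ha_kg = tn_per_oyster_g * density_per_m2 * 10_000 / 1000.0  # kg TN per hectare
oysters_per_tonne_N = 1000.0 / TN_KG_PER_MILLION * 1e6           # oysters needed to remove 1 t N
```

The per-hectare result lands on the reported 378 kg TN ha^-1, confirming the abstract's figures are internally consistent.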
Mitra, Biplob; Wolfe, Chad; Wu, Sy-Juen
2018-05-01
The feasibility of dextrose monohydrate as a non-animal sourced diluent in high shear wet granulation (HSWG) tablet formulations was determined. Impacts of granulation solution amount and addition time, wet massing time, impeller speed, powder and solution binder, and dry milling speed and screen opening size on granule size, friability and density, and tablet solid fraction (SF) and tensile strength (TS) were evaluated. The stability of theophylline tablet TS, disintegration time (DT) and in vitro dissolution was also studied. Following post-granulation drying at 60 °C, dextrose monohydrate lost 9% water and converted into the anhydrate form. Higher granulation solution amounts and faster addition, faster impeller speeds, and solution binder produced larger, denser and stronger (less friable) granules. All granules were compressed into tablets with acceptable TS. Contrary to what is normally observed, denser and larger granules (at ≥21% water level) produced tablets with a higher TS. The TS of the weakest tablets increased the most after storage at both 25 °C/60% RH and 40 °C/75% RH. Tablet DT was higher for stronger granules and after storage. Dissolution profiles for tablets prepared with 21% or less water were comparable and did not change on stability. However, the dissolution profile for tablets prepared with 24% water was slower initially and continued to decrease on stability. The results indicate a granulation water amount of not more than 21% is required to achieve acceptable tablet properties. This study clearly demonstrated the utility of dextrose monohydrate as a non-animal sourced diluent in a HSWG tablet formulation.
Field mappers for laser material processing
NASA Astrophysics Data System (ADS)
Blair, Paul; Currie, Matthew; Trela, Natalia; Baker, Howard J.; Murphy, Eoin; Walker, Duncan; McBride, Roy
2016-03-01
The native shape of the single-mode laser beam used for high power material processing applications is circular with a Gaussian intensity profile. Manufacturers are now demanding the ability to transform the intensity profile and shape to be compatible with a new generation of advanced processing applications that require much higher precision and control. We describe the design, fabrication and application of a dual-optic, beam-shaping system for single-mode laser sources that transforms a Gaussian laser beam by remapping - hence field mapping - the intensity profile to create a wide variety of spot shapes including discs, donuts, XY separable and rotationally symmetric. The pair of optics transforms the intensity distribution and subsequently flattens the phase of the beam, with spot sizes and depth of focus close to that of a diffraction limited beam. The field mapping approach to beam-shaping is a refractive solution that does not add speckle to the beam, making it ideal for use with single mode laser sources, moving beyond the limits of conventional field mapping in terms of spot size and achievable shapes. We describe a manufacturing process for refractive optics in fused silica that uses a freeform direct-write process that is especially suited for the fabrication of this type of freeform optic. The beam-shaper described above was manufactured in conventional UV-fused silica using this process. The fabrication process generates a smooth surface (<1 nm RMS), leading to laser damage thresholds of greater than 100 J/cm2, which is well matched to high power laser sources. Experimental verification of the dual-optic field mapper is presented.
Code of Federal Regulations, 2010 CFR
2010-07-01
... RUBBER MANUFACTURING POINT SOURCE CATEGORY Small-Sized General Molded, Extruded, and Fabricated Rubber..., foam rubber backing, rubber cement-dipped goods, and retreaded tires by small-sized plants...
Code of Federal Regulations, 2010 CFR
2010-07-01
... RUBBER MANUFACTURING POINT SOURCE CATEGORY Large-Sized General Molded, Extruded, and Fabricated Rubber..., foam rubber backing, rubber cement-dipped goods, and retreaded tires by large-sized plants...
NASA Technical Reports Server (NTRS)
Pearl, J. C.; Sinton, W. M.
1982-01-01
The size and temperature, morphology and distribution, variability, possible absorption features, and processes of hot spots on Io are discussed, and an estimate of the global heat flux is made. Size and temperature information is deconvolved to obtain equivalent radius and temperature of hot spots, and simultaneously obtained Voyager thermal and imaging data are used to match hot sources with specific geologic features. In addition to their thermal output, it is possible that hot spots are also characterized by production of various gases and particulate materials; the spectral signature of SO2 has been seen. Origins for relatively stable, low-temperature sources, transient high-temperature sources, and relatively stable, high-temperature sources are discussed.
40 CFR 55.13 - Federal requirements that apply to OCS sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sources. 55.13 Section 55.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... sources. (a) The requirements of this section shall apply to OCS sources as set forth below. In the event that a requirement of this section conflicts with an applicable requirement of § 55.14 of this part and...
Locating an atmospheric contamination source using slow manifolds
NASA Astrophysics Data System (ADS)
Tang, Wenbo; Haller, George; Baik, Jong-Jin; Ryu, Young-Hee
2009-04-01
Finite-size particle motion in fluids obeys the Maxey-Riley equations, which become singular in the limit of infinitesimally small particle size. Because of this singularity, finding the source of a dispersed set of small particles is a numerically ill-posed problem that leads to exponential blowup. Here we use recent results on the existence of a slow manifold in the Maxey-Riley equations to overcome this difficulty in source inversion. Specifically, we locate the source of particles by projecting their dispersed positions on a time-varying slow manifold, and by advecting them on the manifold in backward time. We use this technique to locate the source of a hypothetical anthrax release in an unsteady three-dimensional atmospheric wind field in an urban street canyon.
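The final inversion step described here (advecting the projected positions in backward time) is ordinary ODE integration with a negative time step. A minimal sketch using classical RK4 in a toy steady 2D flow; the velocity field below is invented for illustration and stands in for the paper's urban street-canyon wind, and the slow-manifold projection itself is not reproduced.

```python
import numpy as np

def velocity(t, x):
    """Toy steady 2D flow standing in for the atmospheric wind field (assumption)."""
    return np.array([1.0 + 0.5 * x[1], 0.1 * np.sin(x[0])])

def advect(x_start, t_start, t_end, n_steps=2000):
    """Classical RK4 integration of dx/dt = v(t, x). Passing t_end < t_start
    gives a negative dt, i.e. backward-time advection toward the source."""
    dt = (t_end - t_start) / n_steps
    x, t = np.array(x_start, dtype=float), t_start
    for _ in range(n_steps):
        k1 = velocity(t, x)
        k2 = velocity(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = velocity(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = velocity(t + dt, x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return x

source = np.array([0.3, -0.2])
observed = advect(source, 0.0, 5.0)      # forward in time: dispersed position
recovered = advect(observed, 5.0, 0.0)   # backward in time: recover the release point
```

The point of the slow-manifold projection is that this backward integration stays well posed for fluid trajectories on the manifold, whereas integrating the full singular Maxey-Riley dynamics backward blows up.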
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
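The pure-Pareto starting point for such analyses has a closed-form maximum-likelihood exponent estimate (the Hill estimator), beta_hat = n / sum(ln(x_i / x_min)). A sketch on a synthetic catalog; the threshold and exponent values are illustrative, and the joint two-parameter tapered-Pareto estimation the study actually performs is more involved than this.

```python
import numpy as np

def pareto_mle_exponent(sizes, x_min):
    """Maximum-likelihood (Hill) estimate of the Pareto scaling exponent for
    event sizes at or above the measurement threshold x_min."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    return x.size / np.log(x / x_min).sum()

rng = np.random.default_rng(7)
beta_true, x_min = 1.0, 1.0
# Inverse-CDF sampling: if U ~ Uniform(0,1), then x_min * U**(-1/beta) is Pareto(beta).
catalog = x_min * rng.uniform(size=10_000) ** (-1.0 / beta_true)

beta_hat = pareto_mle_exponent(catalog, x_min)          # long catalog: stable estimate
short_hat = pareto_mle_exponent(catalog[:50], x_min)    # short catalog: much larger scatter
```

Repeating the short-catalog estimate over many draws reproduces the undersampling artifacts described above: individual short catalogs can look either tail-heavy or depleted relative to the parent distribution.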