Detecting fission from special nuclear material sources
Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA
2012-06-05
A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays the plot of the neutron distribution from the unknown source over a Poisson distribution and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate the unknown source in order to accentuate differences in neutron emission from the unknown source from Poisson distributions and/or environmental sources.
Quantum key distribution with an unknown and untrusted source
NASA Astrophysics Data System (ADS)
Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong
2008-05-01
The security of a standard bidirectional “plug-and-play” quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we solve this question directly by presenting the quantitative security analysis on a general class of QKD protocols whose sources are unknown and untrusted. The securities of standard Bennett-Brassard 1984 protocol, weak+vacuum decoy state protocol, and one-decoy state protocol, with unknown and untrusted sources are rigorously proved. We derive rigorous lower bounds to the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source.
Absolute nuclear material assay using count distribution (LAMBDA) space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-06-05
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Quantum key distribution with an unknown and untrusted source
NASA Astrophysics Data System (ADS)
Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong
2009-03-01
The security of a standard bi-directional ``plug & play'' quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis on a general class of QKD protocols whose sources are unknown and untrusted. The securities of standard BB84 protocol, weak+vacuum decoy state protocol, and one-decoy decoy state protocol, with unknown and untrusted sources are rigorously proved. We derive rigorous lower bounds to the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [4pt] [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A, 77:052327 (2008).
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Improved mapping of radio sources from VLBI data by least-square fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for producing improved mapping of radio sources from Very Long Base Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data is modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If researchers use the radio mapping source deviation to measure the closeness of this fit to the observed values, they are led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.
Fission meter and neutron detection using poisson distribution comparison
Rowland, Mark S; Snyderman, Neal J
2014-11-18
A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
Distributed control system for parallel-connected DC boost converters
Goldsmith, Steven
2017-08-15
The disclosed invention is a distributed control system for operating a DC bus fed by disparate DC power sources that service a known or unknown load. The voltage sources vary in v-i characteristics and have time-varying, maximum supply capacities. Each source is connected to the bus via a boost converter, which may have different dynamic characteristics and power transfer capacities, but are controlled through PWM. The invention tracks the time-varying power sources and apportions their power contribution while maintaining the DC bus voltage within the specifications. A central digital controller solves the steady-state system for the optimal duty cycle settings that achieve a desired power supply apportionment scheme for a known or predictable DC load. A distributed networked control system is derived from the central system that utilizes communications among controllers to compute a shared estimate of the unknown time-varying load through shared bus current measurements and bus voltage measurements.
Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.
Gutschwager, Berndt; Hollandt, Jörg
2015-12-20
We present a novel method of nonuniformity correction of imaging systems in a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution and only requires the sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these sequent images in relation to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogenous radiance distribution and a thermal imager of a predefined nonuniform focal plane array responsivity is presented.
Deconvolution Methods and Systems for the Mapping of Acoustic Sources from Phased Microphone Arrays
NASA Technical Reports Server (NTRS)
Humphreys, Jr., William M. (Inventor); Brooks, Thomas F. (Inventor)
2012-01-01
Mapping coherent/incoherent acoustic sources as determined from a phased microphone array. A linear configuration of equations and unknowns are formed by accounting for a reciprocal influence of one or more cross-beamforming characteristics thereof at varying grid locations among the plurality of grid locations. An equation derived from the linear configuration of equations and unknowns can then be iteratively determined. The equation can be attained by the solution requirement of a constraint equivalent to the physical assumption that the coherent sources have only in phase coherence. The size of the problem may then be reduced using zoning methods. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with a phased microphone array (microphones arranged in an optimized grid pattern including a plurality of grid locations) in order to compile an output presentation thereof, thereby removing beamforming characteristics from the resulting output presentation.
Deconvolution methods and systems for the mapping of acoustic sources from phased microphone arrays
NASA Technical Reports Server (NTRS)
Brooks, Thomas F. (Inventor); Humphreys, Jr., William M. (Inventor)
2010-01-01
A method and system for mapping acoustic sources determined from a phased microphone array. A plurality of microphones are arranged in an optimized grid pattern including a plurality of grid locations thereof. A linear configuration of N equations and N unknowns can be formed by accounting for a reciprocal influence of one or more beamforming characteristics thereof at varying grid locations among the plurality of grid locations. A full-rank equation derived from the linear configuration of N equations and N unknowns can then be iteratively determined. A full-rank can be attained by the solution requirement of the positivity constraint equivalent to the physical assumption of statically independent noise sources at each N location. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with the phased microphone array in order to compile an output presentation thereof, thereby removing the beamforming characteristics from the resulting output presentation.
Pulsar statistics and their interpretations
NASA Technical Reports Server (NTRS)
Arnett, W. D.; Lerche, I.
1981-01-01
It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.
Evidence for foliar endophytic nitrogen fixation in a widely distributed subalpine conifer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moyes, Andrew B.; Kueppers, Lara M.; Pett-Ridge, Jennifer
Coniferous forest nitrogen (N) budgets indicate unknown sources of N. A consistent association between limber pine ( Pinus flexilis) and potential N 2-fixing acetic acid bacteria (AAB) indicates that native foliar endophytes may supply subalpine forests with N.
Evidence for foliar endophytic nitrogen fixation in a widely distributed subalpine conifer
Moyes, Andrew B.; Kueppers, Lara M.; Pett-Ridge, Jennifer; ...
2016-02-01
Coniferous forest nitrogen (N) budgets indicate unknown sources of N. A consistent association between limber pine ( Pinus flexilis) and potential N 2-fixing acetic acid bacteria (AAB) indicates that native foliar endophytes may supply subalpine forests with N.
NASA Astrophysics Data System (ADS)
Gutschwager, Berndt; Hollandt, Jörg
2017-01-01
We present a novel method of nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPA) in a wide optical spectral range by reading radiance temperatures and by applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and the calculation of radiance temperatures, it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and it only requires sufficient temporal stability of this distribution during the measurement process. In contrast to this method, an initially presented method described the calculation of NUC correction with the reading of monitored radiance values. Both methods are based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these sequent images in relation to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance temperature distribution and a thermal imager of a predefined nonuniform FPA responsivity is presented.
Lu, Feng; Matsushita, Yasuyuki; Sato, Imari; Okabe, Takahiro; Sato, Yoichi
2015-10-01
We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel under unknown varying directional illumination. We show that for general isotropic materials and uniformly distributed light directions, the geodesic distance between intensity profiles is linearly related to the angular difference of their corresponding surface normals, and that the intensity distribution of the intensity profile reveals reflectance properties. Based on these observations, we develop two methods for surface normal estimation; one for a general setting that uses only the recorded intensity profiles, the other for the case where a BRDF database is available while the exact BRDF of the target scene is still unknown. Quantitative and qualitative evaluations are conducted using both synthetic and real-world scenes, which show the state-of-the-art accuracy of smaller than 10 degree without using reference data and 5 degree with reference data for all 100 materials in MERL database.
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
Gamma-ray momentum reconstruction from Compton electron trajectories by filtered back-projection
Haefner, A.; Gunter, D.; Plimley, B.; ...
2014-11-03
Gamma-ray imaging utilizing Compton scattering has traditionally relied on measuring coincident gamma-ray interactions to map directional information of the source distribution. This coincidence requirement makes it an inherently inefficient process. We present an approach to gamma-ray reconstruction from Compton scattering that requires only a single electron tracking detector, thus removing the coincidence requirement. From the Compton scattered electron momentum distribution, our algorithm analytically computes the incident photon's correlated direction and energy distributions. Because this method maps the source energy and location, it is useful in applications, where prior information about the source distribution is unknown. We demonstrate this method withmore » electron tracks measured in a scientific Si charge coupled device. While this method was demonstrated with electron tracks in a Si-based detector, it is applicable to any detector that can measure electron direction and energy, or equivalently the electron momentum. For example, it can increase the sensitivity to obtain energy and direction in gas-based systems that suffer from limited efficiency.« less
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali
2015-04-01
The fast growing urbanization, industrialization and military developments increase the risk towards the human environment and ecology. This is realized in several past mortality incidents, for instance, Chernobyl nuclear explosion (Ukraine), Bhopal gas leak (India), Fukushima-Daichi radionuclide release (Japan), etc. To reduce the threat and exposure to the hazardous contaminants, a fast and preliminary identification of unknown releases is required by the responsible authorities for the emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases following the sensor reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by the atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus, affect the resolution, stability and uniqueness of the retrieved source. An additional well known issue is the numerical artifact arisen at the measurement locations due to the strong concentration gradient and dissipative nature of the concentration. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within the Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. 
This is based on adjoint representation of the source-receptor relationship and utilization of a weight function which exhibits a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide an optimal data resolution and model resolution to the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against the random measurement errors and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended for the identification of the point type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with the several diffusion experiments, like, Idaho low wind diffusion experiment (1974), IIT Delhi tracer experiment (1991), European Tracer Experiment (1994), Fusion Field Trials (2007), etc. In case of point release experiments, the source parameters are mostly retrieved close to the true source parameters with least error. Primarily, the proposed technique overcomes two major difficulties incurred in the source reconstruction: (i) The initialization of the source parameters as required by the optimization based techniques. The converged solution depends on their initialization. (ii) The statistical knowledge about the measurement and background errors as required by the Bayesian inference based techniques. These are hypothetically assumed in case of no prior knowledge.
Stand conditions associated with truffle abundance in western hemlock/Douglas-fir forests
Malcolm North; Joshua Greenberg
1998-01-01
Truffles are a staple food source for many forest small mammals yet the vegetation or soil conditions associated with truffle abundance are unknown. We examined the spatial distribution of forest structures, organic layer depth, root density, and two of the most common western North American truffles (Elaphomyces granulatus and Rhizopogon...
Radioactivity Registered With a Small Number of Events
NASA Astrophysics Data System (ADS)
Zlokazov, Victor; Utyonkov, Vladimir
2018-02-01
The synthesis of superheavy elements asks for the analysis of low statistics experimental data presumably obeying an unknown exponential distribution and to take the decision whether they originate from one source or have admixtures. Here we analyze predictions following from non-parametrical methods, employing only such fundamental sample properties as the sample mean, the median and the mode.
Cyber-Physical Trade-Offs in Distributed Detection Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.
2010-01-01
We consider a network of sensors that measure the scalar intensity due to the background or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background or due to sensor errors or both. The detection problem is infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors, which are then combined at the fusion center, for example using the majority rule. With increased communication and computation costs,more » we show that a more complex fusion algorithm based on measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios and a minimum packing number for the state-space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable for two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.« less
Weak signal transmission in complex networks and its application in detecting connectivity.
Liang, Xiaoming; Liu, Zonghua; Li, Baowen
2009-10-01
We present a network model of coupled oscillators to study how a weak signal is transmitted in complex networks. Through both theoretical analysis and numerical simulations, we find that the response of other nodes to the weak signal decays exponentially with their topological distance to the signal source and the coupling strength between two neighboring nodes can be figured out by the responses. This finding can be conveniently used to detect the topology of unknown network, such as the degree distribution, clustering coefficient and community structure, etc., by repeatedly choosing different nodes as the signal source. Through four typical networks, i.e., the regular one dimensional, small world, random, and scale-free networks, we show that the features of network can be approximately given by investigating many fewer nodes than the network size, thus our approach to detect the topology of unknown network may be efficient in practical situations with large network size.
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises of a mixture of nuclide. The gamma dose rate measurements do not provide a direct information on the source term composition. However, physical properties of respective nuclide (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from known reactor inventory. The proposed method is based on linear inverse model where the observation vector y arise as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that nuclide ratios of the release is known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using multivariate truncated Gaussian distribution. Following Bayesian approach, we estimate all parameters of the model from the data so that y, M, and known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on simulated 6 hour power plant release where 3 nuclide are released and 2 nuclide ratios are approximately known. The comparison with method with unknown nuclide ratios will be given to prove the usefulness of the proposed approach. 
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
2013-03-06
lithium - ion battery ,” Journal of Power Sources, vol. 195, no. 9, pp. 2961 – 2968, 2010. [10] L. Cai and R. White, “An efficient electrochemical-thermal...13] D. R. Pendergast, E. P. DeMauro, M. Fletcher, E. Stimson, and J. C. Mollendorf, “A rechargeable lithium - ion battery module for underwater use...Journal of Power Sources, vol. 196, no. 2, pp. 793–800, 2011. [14] D. H. Jeon and S. M. Baek, “Thermal modeling of cylindrical lithium ion battery during
Green's function of radial inhomogeneous spheres excited by internal sources.
Zouros, Grigorios P; Kokkorakis, Gerassimos C
2011-01-01
Green's function in the interior of penetrable bodies with inhomogeneous compressibility by sources placed inside them is evaluated through a Schwinger-Lippmann volume integral equation. In the case of a radial inhomogeneous sphere, the radial part of the unknown Green's function can be expanded in a double Dini's series, which allows analytical evaluation of the involved cumbersome integrals. The simple case treated here can be extended to more difficult situations involving inhomogeneous density as well as to the corresponding electromagnetic or elastic problem. Finally, numerical results are given for various inhomogeneous compressibility distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, P.A.
1993-03-01
The global geochemical cycle for an element tracks its path from its various sources to its sinks via processes of weathering and transportation. The cycle may then be quantified in a necessarily approximate manner. The geochemical cycle (thus quantified) reveals constraints (known and unknown) on an element's behavior imposed by the various processes which act on it. In the context of a global geochemical cycle, a continent becomes essentially a source term. If, however, an element's behavior is examined in a local or regional context, sources and their related sinks may be identified. This suggests that small-scale geochemical cycles maymore » be superimposed on global geochemical cycles. Definition of such sub-cycles may clarify the distribution of an element in the earth's near-surface environment. In Florida, phosphate minerals of the Hawthorn Group act as a widely distributed source of uranium. Uranium is transported by surface- and ground-waters. Florida is the site of extensive wetlands and peatlands. The organic matter associated with these deposits adsorbs uranium and may act as a local sink depending on its hydrogeologic setting. This work examines the role of organic matter in the distribution of uranium in the surface and shallow subsurface environments of central and north Florida.« less
7. Photocopy of painting (Source unknown, Date unknown) EXTERIOR SOUTH ...
7. Photocopy of painting (Source unknown, Date unknown) EXTERIOR SOUTH FRONT VIEW OF MISSION AND CONVENTO AFTER 1913 - Mission San Francisco Solano de Sonoma, First & Spain Streets, Sonoma, Sonoma County, CA
Nishiura, Hiroshi
2007-05-11
The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus, likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of incubation period, and an understanding of the underlying disease mechanisms.
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
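The marginalization approach described in this abstract can be miniaturized. The following sketch is a toy one-dimensional stand-in, not the paper's matched-field formulation: the sensor positions, the 1/r amplitude model, the noise level, and the prior bounds are all invented, and a plain Metropolis sampler takes the place of the efficient Markov-chain Monte Carlo methods the paper uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for acoustic localization: received amplitude at three
# hypothetical sensors falls off as 1/r from a source at unknown range.
sensors = np.array([0.0, 50.0, 100.0])
true_range = 420.0
data = 1.0 / np.abs(true_range - sensors) + rng.normal(0, 1e-4, 3)

def log_posterior(r):
    """Log posterior for range r: uniform prior times Gaussian likelihood."""
    if not (150.0 < r < 800.0):
        return -np.inf
    pred = 1.0 / np.abs(r - sensors)
    return -0.5 * np.sum((data - pred) ** 2) / 1e-8

# Metropolis sampling of the range posterior; the histogram of the chain
# plays the role of the paper's marginal probability distributions.
chain, r = [], 300.0
logp = log_posterior(r)
for _ in range(20000):
    prop = r + rng.normal(0, 5.0)
    lp = log_posterior(prop)
    if np.log(rng.uniform()) < lp - logp:
        r, logp = prop, lp
    chain.append(r)
posterior_mean = np.mean(chain[5000:])   # discard burn-in
print(f"posterior mean range ~ {posterior_mean:.0f} m")
```

The chain samples themselves give the quantitative uncertainty (e.g. a credible interval on range) that the abstract highlights as a benefit of marginalization over point estimation.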
Kylin, Henrik; Svensson, Teresia; Jensen, Sören; Strachan, William M J; Franich, Robert; Bouwman, Hindrik
2017-10-01
The production and use of pentachlorophenol (PCP) was recently prohibited/restricted by the Stockholm Convention on persistent organic pollutants (POPs), but environmental data are few and of varying quality. We here present the first extensive dataset of the continent-wide (Eurasia and Canada) occurrence of PCP and its methylation product pentachloroanisole (PCA) in the environment, specifically in pine needles. The highest concentrations of PCP were found close to expected point sources, while PCA chiefly shows a northern and/or coastal distribution not correlating with PCP distribution. Although long-range transport and environmental methylation of PCP or formation from other precursors cannot be excluded, the distribution patterns suggest that such processes may not be the only source of PCA to remote regions and unknown sources should be sought. We suggest that natural sources, e.g., chlorination of organic matter in Boreal forest soils enhanced by chloride deposition from marine sources, should be investigated as a possible partial explanation of the observed distributions. The results show that neither PCA nor total PCP (ΣPCP = PCP + PCA) should be used to approximate the concentrations of PCP; PCP and PCA must be determined and quantified separately to understand their occurrence and fate in the environment. The background work shows that the accumulation of airborne POPs in plants is a complex process. The variations in life cycles and physiological adaptations have to be taken into account when using plants to evaluate the concentrations of POPs in remote areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
14. Photocopy of photograph (source unknown) photographer unknown pre-1885 NORTH ...
14. Photocopy of photograph (source unknown) photographer unknown pre-1885 NORTH SIDE AND WEST FRONT (NOTE ABSENCE OF DORMER ON GAMBREL ROOF OF ELL) (Illustration #6 of Data Report included in Field Records) - Narbonne House, 71 Essex Street, Salem, Essex County, MA
44. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
44. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown FIRST FLOOR PLAN - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
45. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
45. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown SECOND FLOOR PLAN - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
51. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
51. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown EXTERIOR, ELEVATION DETAILS - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
46. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
46. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown NORTH ELEVATION - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
47. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
47. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown WEST ELEVATION - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
Electromagnetic spectrum management system
Seastrand, Douglas R.
2017-01-31
A system for transmitting a wireless countermeasure signal to disrupt third-party communications is disclosed that includes an antenna configured to receive wireless signals and transmit wireless countermeasure signals such that the wireless countermeasure signals are responsive to the received wireless signals. A receiver processes the received wireless signals to create processed received signal data, while a spectrum control module subtracts known source signal data from the processed received signal data to generate unknown source signal data. The unknown source signal data is based on unknown wireless signals, such as enemy signals. A transmitter is configured to process the unknown source signal data to create countermeasure signals and transmit a wireless countermeasure signal over the first antenna or a second antenna to thereby interfere with the unknown wireless signals.
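The known-source subtraction step in this abstract can be illustrated with a toy signal. Everything below is hypothetical (the 50 Hz and 120 Hz tones, the sampling rate, the noise level); the sketch only shows the idea of removing known-source spectral content from received-signal data to leave unknown-source signal data.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 1000.0, 1024          # hypothetical sample rate and record length
t = np.arange(n) / fs

# Hypothetical received signal: a known 50 Hz transmitter plus an
# unknown 120 Hz emitter (the signal the system must isolate).
known = np.sin(2 * np.pi * 50 * t)
unknown = 0.5 * np.sin(2 * np.pi * 120 * t)
received = known + unknown + 0.01 * rng.normal(size=n)

# Subtract the known-source spectrum from the received spectrum; what
# remains is the unknown-source signal data.
residual_spec = np.fft.rfft(received) - np.fft.rfft(known)
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(np.abs(residual_spec))]
print(f"dominant unknown-source frequency ~ {peak:.0f} Hz")
```

A countermeasure transmitter could then be steered at the residual's dominant band; the real system would of course need synchronization and channel estimation that this sketch ignores.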
49. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
49. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown SECTION THROUGH BUILDING, LOOKING EAST - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
48. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
48. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown SECTION THROUGH BUILDING, LOOKING NORTH - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
19. Photocopy of measured drawing (source unknown) 6 March 1945, ...
19. Photocopy of measured drawing (source unknown) 6 March 1945, delineator unknown PROPOSED ADAPTIVE REUSE AS CLEMSON COLLEGE FACULTY CLUB, SECOND FLOOR PLAN - Woodburn, Woodburn Road, U.S. Route 76 vicinity, Pendleton, Anderson County, SC
18. Photocopy of measured drawing (source unknown) 6 March 1945, ...
18. Photocopy of measured drawing (source unknown) 6 March 1945, delineator unknown PROPOSED ADAPTIVE REUSE AS CLEMSON COLLEGE FACULTY CLUB, FIRST FLOOR PLAN - Woodburn, Woodburn Road, U.S. Route 76 vicinity, Pendleton, Anderson County, SC
17. Photocopy of measured drawing (source unknown) 6 March 1945, ...
17. Photocopy of measured drawing (source unknown) 6 March 1945, delineator unknown PROPOSED ADAPTIVE REUSE AS CLEMSON COLLEGE FACULTY CLUB, BASEMENT PLAN - Woodburn, Woodburn Road, U.S. Route 76 vicinity, Pendleton, Anderson County, SC
20. Photocopy of measured drawing (source unknown) 6 March 1945, ...
20. Photocopy of measured drawing (source unknown) 6 March 1945, delineator unknown PROPOSED ADAPTIVE REUSE AS CLEMSON COLLEGE FACULTY CLUB, ATTIC PLAN - Woodburn, Woodburn Road, U.S. Route 76 vicinity, Pendleton, Anderson County, SC
21. Photocopy of measured drawing (source unknown) 6 March 1945, ...
21. Photocopy of measured drawing (source unknown) 6 March 1945, delineator unknown PROPOSED ADAPTIVE REUSE AS CLEMSON COLLEGE FACULTY CLUB, SITE PLAN - Woodburn, Woodburn Road, U.S. Route 76 vicinity, Pendleton, Anderson County, SC
Contaminant source identification using semi-supervised machine learning
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir V.; Alexandrov, Boian S.; O'Malley, Daniel
2018-05-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models, which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
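The NMF core of the NMFk approach can be sketched compactly. The example below is a minimal illustration with synthetic data: the two "source" signatures and the mixing ratios are invented, plain Lee-Seung multiplicative updates stand in for the method's optimization, and the semi-supervised clustering over candidate ranks k that gives NMFk its name (and identifies the unknown number of sources) is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic setup: 2 hypothetical groundwater end-members ("sources")
# with fixed geochemical signatures, observed as 30 mixtures.
signatures = np.array([[5.0, 0.1, 2.0, 0.3],
                       [0.2, 4.0, 0.5, 3.0]])     # sources x constituents
ratios = rng.dirichlet([1, 1], size=30)           # unknown mixing ratios
X = ratios @ signatures + 0.01 * rng.random((30, 4))

# Plain NMF via multiplicative updates: X ~ W @ H with W, H >= 0, where
# rows of H recover source signatures and rows of W the mixing ratios.
k = 2
W = rng.random((30, k)) + 0.1
H = rng.random((k, 4)) + 0.1
for _ in range(2000):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error ~ {err:.3f}")
```

NMFk would repeat this factorization for k = 1, 2, 3, ... with many random restarts and cluster the resulting H matrices; a rank whose solutions cluster tightly signals the true number of sources.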
Contaminant source identification using semi-supervised machine learning
Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan
2017-11-08
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models, which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
SYNTHESIS OF NOVEL ALL-DIELECTRIC GRATING FILTERS USING GENETIC ALGORITHMS
NASA Technical Reports Server (NTRS)
Zuffada, Cinzia; Cwik, Tom; Ditchman, Christopher
1997-01-01
We are concerned with the design of inhomogeneous, all-dielectric (lossless) periodic structures which act as filters. Dielectric filters made as stacks of inhomogeneous gratings and layers of materials are being used in optical technology, but are not common at microwave frequencies. The problem is then finding the periodic cell's geometric configuration and permittivity values which correspond to a specified reflectivity/transmittivity response as a function of frequency/illumination angle. This type of design can be thought of as an inverse-source problem, since it entails finding a distribution of sources which produce fields (or quantities derived from them) of given characteristics. Electromagnetic sources (electric and magnetic current densities) in a volume are related to the outside fields by a well-known linear integral equation. Additionally, the sources are related to the fields inside the volume by a constitutive equation, involving the material properties. Then, the relationship linking the fields outside the source region to those inside is non-linear, in terms of material properties such as permittivity, permeability and conductivity. The solution of the non-linear inverse problem is cast here as a combination of two linear steps, by explicitly introducing the electromagnetic sources in the computational volume as a set of unknowns in addition to the material unknowns. This allows us to solve for material parameters and related electric fields in the source volume which are consistent with Maxwell's equations. Solutions are obtained iteratively by decoupling the two steps. First, we invert for the permittivity only in the minimization of a cost function and second, given the materials, we find the corresponding electric fields through direct solution of the integral equation in the source volume. The sources thus computed are used to generate the far fields and the synthesized filter response.
The cost function is obtained by calculating the deviation between the synthesized value of reflectivity/transmittivity and the desired one. Solution geometries for the periodic cell are sought as gratings (ensembles of columns of different heights and widths), or combinations of homogeneous layers of different dielectric materials and gratings. Hence the explicit unknowns of the inversion step are the material permittivities and the relative boundaries separating homogeneous parcels of the periodic cell.
Location error uncertainties - an advanced using of probabilistic inverse theory
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2016-04-01
Locating the sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of an earthquake focus is relatively simple, quantitative estimation of the location accuracy is a truly challenging task even when a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task when the statistics of observational and/or modeling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on an analysis of 120 seismic events from the Rudna copper mine in southwestern Poland, we illustrate an approach based on the Shannon entropy of the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries information on the uncertainties of the solution found.
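The entropy meta-characteristic in this abstract is easy to demonstrate on a toy posterior. The sketch assumes a gridded one-dimensional Gaussian a posteriori distribution, which is illustrative only; the point is simply that a broader posterior (larger location uncertainty) yields a higher Shannon entropy.

```python
import numpy as np

# Toy 1-D location posterior on a grid; the entropy of the discretized
# a posteriori density summarizes location uncertainty.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

def entropy(sigma):
    """Shannon (differential) entropy in nats of a gridded Gaussian posterior."""
    p = np.exp(-0.5 * (x / sigma) ** 2)
    p /= p.sum() * dx                     # normalize as a density on the grid
    p_safe = np.where(p > 0, p, 1.0)      # avoid log(0) where p vanishes
    return -np.sum(p * np.log(p_safe)) * dx

# A broader posterior (larger location error) carries higher entropy.
print(entropy(0.5), entropy(2.0))
```

For a Gaussian the analytic value is 0.5*ln(2*pi*e*sigma^2), so the grid computation can be sanity-checked directly; real location posteriors are of course non-Gaussian, which is exactly why the entropy is computed numerically from samples.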
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, in particular the estimation of unknown model parameters so that the underlying dynamics of a physical system can be modeled precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched-for parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the model parameters best fitted to the observed data must be found.
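The ABC idea behind this algorithm can be shown in miniature. The sketch below uses a crude invented forward model (a 1/(1+d^2) decay, not an atmospheric dispersion model), invented sensor positions and source parameters, and plain rejection ABC for a single generation; the sequential shrinking of the tolerance that defines S-ABC is noted in comments but not implemented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical forward model: concentration at each sensor decays with
# distance from a release point x0, scaled by release rate q.
sensors = np.linspace(0.0, 10.0, 6)

def forward(x0, q):
    return q / (1.0 + (sensors - x0) ** 2)

# Synthetic observations from a "true" source at x0 = 4, q = 2.
observed = forward(4.0, 2.0) + rng.normal(0, 0.02, sensors.size)

# Rejection ABC, one generation: draw (x0, q) from uniform priors and
# keep draws whose simulated data fall within tolerance eps of the
# observations. S-ABC would repeat this over generations with eps
# decreasing, reusing accepted particles as the next proposal.
n, eps = 200000, 0.15
x0 = rng.uniform(0, 10, n)
q = rng.uniform(0, 5, n)
sims = q[:, None] / (1.0 + (sensors[None, :] - x0[:, None]) ** 2)
keep = np.linalg.norm(sims - observed, axis=1) < eps
x0_est, q_est = x0[keep].mean(), q[keep].mean()
print(f"posterior means: x0 ~ {x0_est:.2f}, q ~ {q_est:.2f}")
```

The accepted draws approximate the joint posterior over source location and release rate, which is exactly the kind of output (distributions, not point estimates) the abstract describes.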
52. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
52. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown DETAILS OF MAIN FLOOR ELEVATOR LOBBY - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
50. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
50. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown ENTRANCE AND TYPICAL BAY ON FLOWER STREET - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
53. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print ...
53. Photocopy of drawing (Source unknown, 1928) Rapid Blue Print Co., Los Angeles, CA, Photographer, Date unknown DETAILS OF CORRIDORS ON SECOND - TWELFTH FLOORS - Richfield Oil Building, 555 South Flower Street, Los Angeles, Los Angeles County, CA
Recovering an unknown source in a fractional diffusion problem
NASA Astrophysics Data System (ADS)
Rundell, William; Zhang, Zhidong
2018-09-01
A standard inverse problem is to determine a source which is supported in an unknown domain D from external boundary measurements. Here we consider the case of a time-independent situation where the source is equal to unity in an unknown subdomain D of a larger given domain Ω and the boundary of D has a star-like shape, i.e.
Cloern, James E.; Cole, Brian E.; Caffrey, J.M.
1996-01-01
In this report, we focus on selection of an “optimum” station configuration for the channel of San Francisco Bay for vertical profiling of water quality. Our analysis is based on the monthly cruises conducted by the USGS under the auspices of the Regional Monitoring Program for Trace Substances (Caffrey et al. 1994; SFEI 1994). The underlying rationale for undertaking the analysis is that the distribution of trace substances is structured, at least in part, by the same forces acting on water quality parameters. This must be true to some extent, as trace substance concentrations are partially dependent on water quality characteristics such as salinity. On the other hand, the quantitative importance of these parameters in accounting for overall variability in individual trace substances is unknown. Furthermore, trace substances have their own unique sources, and these sources may dominate their distribution.
Real-time strategy game training: emergence of a cognitive flexibility trait.
Glass, Brian D; Maddox, W Todd; Love, Bradley C
2013-01-01
Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function.
Real-Time Strategy Game Training: Emergence of a Cognitive Flexibility Trait
Glass, Brian D.; Maddox, W. Todd; Love, Bradley C.
2013-01-01
Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function. PMID:23950921
NASA Astrophysics Data System (ADS)
Zhuo-Dan, Zhu; Shang-Hong, Zhao; Chen, Dong; Ying, Sun
2018-07-01
In this paper, a phase-encoded measurement-device-independent quantum key distribution (MDI-QKD) protocol without a shared reference frame is presented, which can generate secure keys between two parties while the quantum channel or interferometer introduces an unknown and slowly time-varying phase. The corresponding secret key rate and single-photon bit error rate are analysed with a single-photon source (SPS) and with a weak coherent source (WCS), respectively, taking finite-key analysis into account. The numerical simulations show that the modified phase-encoded MDI-QKD protocol has apparent superiority in both maximal secure transmission distance and key generation rate, while possessing improved robustness and practical security in the high-speed case. Moreover, rejecting the frame-calibrating part intrinsically reduces the consumption of resources as well as the potential security flaws of practical MDI-QKD systems.
The Chuar Petroleum System, Arizona and Utah
Lillis, Paul G.
2016-01-01
The Neoproterozoic Chuar Group consists of marine mudstone, sandstone and dolomitic strata divided into the Galeros and Kwagunt Formations, and is exposed only in the eastern Grand Canyon, Arizona. Research by the U.S. Geological Survey (USGS) in the late 1980s identified strata within the group to be possible petroleum source rocks, and in particular the Walcott Member of the Kwagunt Formation. Industry interest in a Chuar oil play led to several exploratory wells drilled in the 1990s in southern Utah and northern Arizona to test the overlying Cambrian Tapeats Sandstone reservoir, and confirm the existence of the Chuar in subcrop. USGS geochemical analyses of Tapeats oil shows in two wells have been tentatively correlated to Chuar bitumen extracts. Distribution of the Chuar in the subsurface is poorly constrained with only five well penetrations, but recently published gravity/aeromagnetic interpretations provide further insight into the Chuar subcrop distribution. The Chuar petroleum system was reexamined as part of the USGS Paradox Basin resource assessment in 2011. A map was constructed to delineate the Chuar petroleum system that encompasses the projected Chuar source rock distribution and all oil shows in the Tapeats Sandstone, assuming that the Chuar is the most likely source for such oil shows. Two hypothetical plays were recognized but not assessed: (1) a conventional play with a Chuar source and Tapeats reservoir, and (2) an unconventional play with a Chuar source and reservoir. The conventional play has been discouraging because most surface structures have been tested by drilling with minimal petroleum shows, and there is some evidence that petroleum may have been flushed by CO2 from Tertiary volcanism. The unconventional play is untested and remains promising even though the subcrop distribution of source facies within the Chuar Group is largely unknown.
A systematic review and meta-analysis on the incubation period of Campylobacteriosis.
Awofisayo-Okuyelu, A; Hall, I; Adak, G; Hawker, J I; Abbott, S; McCarthy, N
2017-08-01
Accurate knowledge of pathogen incubation period is essential to inform public health policies and implement interventions that contribute to the reduction of burden of disease. The incubation period distribution of campylobacteriosis is currently unknown, with several sources reporting different times. Variation in the distribution could be expected due to host, transmission vehicle, and organism characteristics; however, the extent of this variation and its influencing factors are unclear. The authors have undertaken a systematic review of published literature of outbreak studies with well-defined point source exposures and human experimental studies to estimate the distribution of incubation period and also identify and explain the variation in the distribution between studies. We tested for heterogeneity using the I² statistic and Kolmogorov-Smirnov tests, regressed incubation period against possible explanatory factors, and used hierarchical clustering analysis to define subgroups of studies without evidence of heterogeneity. The mean incubation period of subgroups ranged from 2·5 to 4·3 days. We observed variation in the distribution of incubation period between studies that was not due to chance. A significant association between the mean incubation period and age distribution was observed, with outbreaks involving only children reporting an incubation period 1·29 days longer when compared with outbreaks involving other age groups.
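The I² heterogeneity test mentioned in this abstract is a short computation from per-study summaries. The means and standard errors below are hypothetical stand-ins, not values from the review.

```python
import numpy as np

# Hypothetical per-study mean incubation periods (days) and standard
# errors, standing in for the outbreak subgroups in the review.
means = np.array([2.5, 3.1, 2.8, 4.3, 3.6])
se = np.array([0.2, 0.3, 0.25, 0.4, 0.3])

# Cochran's Q and the I^2 statistic: the share of total variation
# across studies attributable to heterogeneity rather than chance.
w = 1.0 / se**2                                # inverse-variance weights
pooled = np.sum(w * means) / np.sum(w)         # fixed-effect pooled mean
Q = np.sum(w * (means - pooled) ** 2)          # Cochran's Q
df = len(means) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0            # percent heterogeneity
print(f"Q = {Q:.1f}, I^2 = {I2:.0f}%")
```

An I² well above ~50% (as these invented numbers produce) is the kind of evidence that motivated the review's subgroup clustering rather than pooling all studies together.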
Developing Particle Emission Inventories Using Remote Sensing (PEIRS)
NASA Technical Reports Server (NTRS)
Tang, Chia-Hsi; Coull, Brent A.; Schwartz, Joel; Lyapustin, Alexei I.; Di, Qian; Koutrakis, Petros
2016-01-01
Information regarding the magnitude and distribution of PM2.5 emissions is crucial in establishing effective PM regulations and assessing the associated risk to human health and the ecosystem. At present, emission data is obtained from measured or estimated emission factors of various source types. Collecting such information for every known source is costly and time consuming. For this reason, emission inventories are reported periodically and unknown or smaller sources are often omitted or aggregated at large spatial scale. To address these limitations, we have developed and evaluated a novel method that uses remote sensing data to construct spatially-resolved emission inventories for PM2.5. This approach enables us to account for all sources within a fixed area, which renders source classification unnecessary. We applied this method to predict emissions in the northeast United States during the period of 2002-2013 using high-resolution 1 km × 1 km Aerosol Optical Depth (AOD). Emission estimates moderately agreed with the EPA National Emission Inventory (R² = 0.66~0.71, CV = 17.7~20%). Predicted emissions are found to correlate with land use parameters suggesting that our method can capture emissions from land use-related sources. In addition, we distinguished small-scale intra-urban variation in emissions reflecting distribution of metropolitan sources. In essence, this study demonstrates the great potential of remote sensing data to predict particle source emissions cost-effectively.
Developing Particle Emission Inventories Using Remote Sensing (PEIRS)
Tang, Chia-Hsi; Coull, Brent A.; Schwartz, Joel; Lyapustin, Alexei I.; Di, Qian; Koutrakis, Petros
2018-01-01
Information regarding the magnitude and distribution of PM2.5 emissions is crucial in establishing effective PM regulations and assessing the associated risk to human health and the ecosystem. At present, emission data is obtained from measured or estimated emission factors of various source types. Collecting such information for every known source is costly and time consuming. For this reason, emission inventories are reported periodically and unknown or smaller sources are often omitted or aggregated at large spatial scale. To address these limitations, we have developed and evaluated a novel method that uses remote sensing data to construct spatially-resolved emission inventories for PM2.5. This approach enables us to account for all sources within a fixed area, which renders source classification unnecessary. We applied this method to predict emissions in the northeast United States during the period of 2002–2013 using high-resolution 1 km × 1 km Aerosol Optical Depth (AOD). Emission estimates moderately agreed with the EPA National Emission Inventory (R² = 0.66~0.71, CV = 17.7~20%). Predicted emissions are found to correlate with land use parameters suggesting that our method can capture emissions from land use-related sources. In addition, we distinguished small-scale intra-urban variation in emissions reflecting distribution of metropolitan sources. In essence, this study demonstrates the great potential of remote sensing data to predict particle source emissions cost-effectively. PMID:27653469
Fried, Alan; Lee, Yin-Nan; Frost, Greg; ...
2002-02-27
Here, formaldehyde measurements from two independent instruments are compared with photochemical box model calculations. The measurements were made on the NOAA P-3 aircraft as part of the 1997 North Atlantic Regional Experiment (NARE 1997). After examining the possible reasons for the model-measurement discrepancy, we conclude that there are probably one or more additional unknown sources of CH2O in the North Atlantic troposphere.
A blind HI search for galaxies in the northern Zone of Avoidance
NASA Astrophysics Data System (ADS)
Rivers, Andrew James
Searches for galaxies in the nearby and distant universe have long focused in the direction of the Galactic poles, or perpendicular to the plane of the Milky Way. Dust concentrated in the Milky Way's disk absorbs and scatters light and therefore precludes easy optical detection of extragalactic sources in this "Zone of Avoidance" (ZOA). The Dwingeloo Obscured Galaxies Survey (DOGS) was a 21-cm blind survey for galaxies hidden in the northern ZOA. Dust is transparent at radio wavelengths and therefore the survey is not biased against detection of galaxies near the Galactic plane. The DOGS project was designed to reveal hidden dynamically important nearby galaxies and to help "fill in the blanks" in the local large-scale structure. During the survey and subsequent follow-up observations, 43 galaxies were detected; 28 of these were previously unknown. Obscuration by dust could effectively hide a massive member of the Local Group. This survey rules out the existence of a hidden gas-rich dynamically important source. The possibility of gas-poor elliptical galaxies and low-mass dwarfs remains; the low velocity of one detected dwarf irregular galaxy relative to the Milky Way indicates possible membership in the Local Group. Other nearby galaxies detected by DOGS were linked to the IC 342/Maffei group and to the nearby galaxy NGC 6946. Of the five galaxies in the IC 342/Maffei group, three were unknown at the time of the survey. Derived group properties indicate the group consists of two separate physical groups which appear close together in the sky. The five sources near NGC 6946 support the identification of a new nearby group associated with this large spiral galaxy. The distribution of massive spiral galaxies compared to low-mass dwarf galaxies may be used to test theories of structure formation. In a universe dominated by Cold Dark Matter (CDM), dwarf galaxies are more evenly distributed and are a more accurate tracer of the mass distribution.
Open universe models predict approximately equal clustering properties of dwarf and spiral galaxies. A statistical analysis of the DOGS sample argues against the CDM model; no smoothly distributed population of stunted dwarf galaxies is seen.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
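The inflow-uncertainty idea above can be sketched as a likelihood weighting of ensemble members. This is a minimal illustration with made-up numbers, not the Aeolus or dispersion machinery itself: each Monte Carlo member is weighted by a Gaussian likelihood of the observed wind, and a posterior mean of a source parameter follows.

```python
import math

# Sketch (hypothetical numbers): weight Monte Carlo inflow members by a
# Gaussian likelihood of the observed wind speed, then form a posterior
# mean over each member's implied source distance.

obs_wind = 5.0          # observed wind speed, m/s
sigma = 1.0             # assumed observation error, m/s
members = [             # (simulated wind m/s, implied source distance m)
    (3.8, 120.0), (4.9, 150.0), (5.1, 155.0), (6.5, 210.0),
]

weights = [math.exp(-0.5 * ((w - obs_wind) / sigma) ** 2) for w, _ in members]
total = sum(weights)
posterior_mean = sum(wt * d for wt, (_, d) in zip(weights, members)) / total
print(round(posterior_mean, 1))
```

Members whose simulated winds match the observations dominate the estimate, which is the essence of propagating inflow uncertainty into source location.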
A distributed transmit beamforming synchronization strategy for multi-element radar systems
NASA Astrophysics Data System (ADS)
Xiao, Manlin; Li, Xingwen; Xu, Jikang
2017-02-01
Distributed transmit beamforming has recently been discussed as an energy-efficient technique in wireless communication systems. Common to the various techniques is that the destination node transmits a beacon signal or feedback to help source nodes synchronize their signals. However, this approach is not appropriate for a radar system, since the destination is a non-cooperative target at an unknown location. In this paper, we propose a novel synchronization strategy for a distributed multi-element beamforming radar system. Source nodes estimate parameters of beacon signals transmitted from the others to obtain their local synchronization information. The channel information of the phase propagation delay is conveyed to the nodes via the reflected beacon signals as well. Next, each node generates appropriate parameters to form a beamforming signal at the target. The transmit beamforming signals of all nodes combine coherently at the target, compensating for the different propagation delays. We analyse the influence of the local oscillator accuracy and the parameter estimation errors on the performance of the proposed synchronization scheme. The results of numerical simulations illustrate that this synchronization scheme is effective in enabling transmit beamforming in a distributed multi-element radar system.
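The payoff of synchronization can be shown with a simple phasor sum: N phase-aligned unit carriers add to power N² at the target, while uncoordinated phases average only about N. A small numerical sketch (not the paper's scheme):

```python
import cmath
import math
import random

# Sketch: coherent vs. incoherent combining of N unit-amplitude carriers.
random.seed(3)
N = 16

# All nodes phase-aligned at the target: amplitude N, power N^2.
coherent = abs(sum(cmath.exp(1j * 0.0) for _ in range(N))) ** 2

# Uncoordinated nodes: random phases, expected power ~N.
random_phase = abs(sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                       for _ in range(N))) ** 2

print(coherent)                      # N^2 = 256
print(random_phase < coherent)
```

This N² versus N gap is why the beacon-based phase estimation described above is worth the overhead.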
Visualization of Sources in the Universe
NASA Astrophysics Data System (ADS)
Kafatos, M.; Cebral, J. R.
1993-12-01
We have begun to develop a series of visualization tools of importance to the display of astronomical data and have applied these to the visualization of cosmological sources in the recently formed Institute for Computational Sciences and Informatics at GMU. One can use a three-dimensional perspective plot of the density surface for three-dimensional data, in which case the iso-level contours are three-dimensional surfaces. Sophisticated rendering algorithms combined with multiple-source lighting allow us to look carefully at such density contours and to see fine structure on their surfaces. Stereoscopic and transparent rendering can give an even more sophisticated view, with multi-layered surfaces providing information at different levels. We have applied these methods to examining density surfaces of 3-D data such as 100 clusters of galaxies and 2500 galaxies in the CfA redshift survey. The plots presented are based on three variables: right ascension, declination, and redshift. We have also obtained density structures in 2-D for the distribution of gamma-ray bursts (where distances are unknown) and the distribution of a variety of sources such as clusters of galaxies. Our techniques allow correlations to be assessed visually.
2015-09-30
experiment was conducted in Broad Sound of Massachusetts Bay using the AUV Unicorn, a 147 dB omnidirectional Lubell source, and an open-ended steel pipe... steel pipe target (Figure C) was dropped at an approximate local coordinate position of (x,y)=(170,155). The location was estimated using ship...position when the target was dropped, but was only accurate within 10-15 m. The orientation of the target was unknown. Figure C: Open-ended steel
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects of both the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
Consistent description of kinetic equation with triangle anomaly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu Shi; Gao Jianhua; Wang Qun
2011-05-01
We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2015-06-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a truly challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
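The entropy meta-characteristic described above is straightforward to compute once the a posteriori distribution is discretized. A minimal sketch (the probability vectors are illustrative, not mine data):

```python
import math

# Sketch: Shannon entropy of a discretized a posteriori location
# distribution as a scalar measure of solution uncertainty. A sharply
# peaked posterior has low entropy; a uniform one reaches log2(n) bits.

def shannon_entropy(p):
    """Entropy in bits of a normalized probability vector."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0.0)

peaked = [0.90, 0.05, 0.03, 0.02]       # well-constrained location
diffuse = [0.25, 0.25, 0.25, 0.25]      # poorly constrained location
print(shannon_entropy(peaked) < shannon_entropy(diffuse))
```

Comparing entropies across events gives a relative ranking of location uncertainty even when the error statistics themselves are unknown.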
An evolutive real-time source inversion based on a linear inverse formulation
NASA Astrophysics Data System (ADS)
Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.
2016-12-01
Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches regarding earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip-rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which lead to an unwanted non-linear relationship between parameters and seismograms for our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later on as attributes from the slip-rate inversion we perform.
Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration-weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similarly close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70–0.75.
The boundaries of the interval with the most probable correlation values are 0.6–0.9 for a decay time of 240 h and 0.5–0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs under optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
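The accuracy measure used above, Spearman's rank-order correlation between the known and reconstructed source fields, follows directly from rank differences. A minimal sketch (assuming no tied values; the fields below are illustrative):

```python
# Sketch: Spearman's rank-order correlation between a known virtual source
# field and its reconstruction (the accuracy measure of the study above).
# Uses the simple formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)), valid without ties.

def rank(values):
    """1-based ranks of values (no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, i in enumerate(order):
        r[i] = position + 1
    return r

def spearman(x, y):
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

true_field = [0.0, 1.0, 3.0, 2.0, 5.0]        # known virtual source strengths
reconstructed = [0.1, 0.9, 2.5, 2.6, 4.0]     # TSM reconstruction
print(round(spearman(true_field, reconstructed), 2))  # → 0.9
```

A value near the 0.70–0.75 means quoted above would indicate reconstruction quality typical of the optimum conditions.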
Time-correlated neutron analysis of a multiplying HEU source
NASA Astrophysics Data System (ADS)
Miller, E. C.; Kalter, J. M.; Lavelle, C. M.; Watson, S. M.; Kinlaw, M. T.; Chichester, D. L.; Noonan, W. A.
2015-06-01
The ability to quickly identify and characterize special nuclear material remains a national security challenge. In counter-proliferation applications, identifying the neutron multiplication of a sample can be a good indication of the level of threat. Currently, neutron multiplicity measurements are performed with moderated 3He proportional counters. These systems rely on the detection of thermalized neutrons, a process which obscures both energy and time information from the source. Fast neutron detectors, such as liquid scintillators, have the ability to detect events on nanosecond time scales, providing more information on the temporal structure of the arriving signal, and provide an alternative method for extracting information from the source. To explore this possibility, a series of measurements was performed on the Idaho National Laboratory's MARVEL assembly, a configurable HEU source. The source assembly was measured in a variety of different HEU configurations and with different reflectors, covering a range of neutron multiplications from 2 to 8. The data were collected with liquid scintillator detectors and digitized for offline analysis. A gap-based approach was used to identify the bursts of detected neutrons associated with the same fission chain. Using this approach, we are able to study various statistical properties of individual fission chains. One of these properties is the distribution of neutron arrival times within a given burst. We have observed two interesting empirical trends. First, this distribution exhibits a weak, but definite, dependence on source multiplication. Second, there are distinctive differences in the distribution depending on the presence and type of reflector. Both of these phenomena might prove useful when assessing an unknown source. The physical origins of these phenomena can be illuminated with the help of MCNPX-PoliMi simulations.
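The gap-based grouping idea above can be sketched in a few lines: sorted arrival times are split into bursts wherever the gap to the next event exceeds a threshold. The times and threshold here are illustrative, not from the MARVEL measurements:

```python
# Sketch: gap-based identification of neutron bursts. Events separated by
# more than max_gap are assigned to different fission chains.

def split_bursts(times, max_gap):
    """Split a sorted list of arrival times into bursts at large gaps."""
    bursts, current = [], [times[0]]
    for t_prev, t in zip(times, times[1:]):
        if t - t_prev > max_gap:
            bursts.append(current)
            current = []
        current.append(t)
    bursts.append(current)
    return bursts

arrivals = [0.0, 0.02, 0.05, 1.00, 1.01, 5.00]  # illustrative times, sorted
bursts = split_bursts(arrivals, max_gap=0.5)
print(len(bursts))  # → 3
```

Statistics such as the within-burst arrival-time distribution discussed above would then be accumulated over the resulting bursts.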
Source memory for action in young and older adults: self vs. close or unknown others.
Rosa, Nicole M; Gutchess, Angela H
2011-09-01
The present study examines source memory for actions (e.g., placing items in a suitcase). For both young and older adult participants, source memory for actions performed by the self was better than memory for actions performed by either a known (close) or unknown other. In addition, neither young nor older adults were more likely to confuse self with close others than with unknown others. Results suggest an advantage in source memory for actions performed by the self compared to others, possibly associated with sensorimotor cues that are relatively preserved in aging.
Learning from Heterogeneous Data Sources: An Application in Spatial Proteomics
Breckels, Lisa M.; Holden, Sean B.; Wojnar, David; Mulvey, Claire M.; Christoforou, Andy; Groen, Arnoud; Trotter, Matthew W. B.; Kohlbacher, Oliver; Lilley, Kathryn S.; Gatto, Laurent
2016-01-01
Sub-cellular localisation of proteins is an essential post-translational regulatory mechanism that can be assayed using high-throughput mass spectrometry (MS). These MS-based spatial proteomics experiments enable us to pinpoint the sub-cellular distribution of thousands of proteins in a specific system under controlled conditions. Recent advances in high-throughput MS methods have yielded a plethora of experimental spatial proteomics data for the cell biology community. Yet, there are many third-party data sources, such as immunofluorescence microscopy or protein annotations and sequences, which represent a rich and vast source of complementary information. We present a unique transfer learning classification framework that utilises a nearest-neighbour or support vector machine system, to integrate heterogeneous data sources to considerably improve on the quantity and quality of sub-cellular protein assignment. We demonstrate the utility of our algorithms through evaluation of five experimental datasets, from four different species in conjunction with four different auxiliary data sources to classify proteins to tens of sub-cellular compartments with high generalisation accuracy. We further apply the method to an experiment on pluripotent mouse embryonic stem cells to classify a set of previously unknown proteins, and validate our findings against a recent high resolution map of the mouse stem cell proteome. The methodology is distributed as part of the open-source Bioconductor pRoloc suite for spatial proteomics data analysis. PMID:27175778
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen
2017-04-01
Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. This information is useful in many areas, such as nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed on the basis of the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse-height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed GMDH- and DT-based codes use data for training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the neutron pulse-height distributions simulated by MCNPX-ESUT for each energy spectrum are used as the output and input data, respectively. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has high accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for these fast neutron sources are in excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than that obtained from the DT. The results of the present study also compare well with a previously published approach based on the logsig and tansig transfer functions.
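The study above learns a mapping from simulated pulse-height distributions (input) to energy spectra (output). As a deliberately minimal stand-in for the GMDH and DT models, this sketch uses a 1-nearest-neighbour lookup over tiny synthetic (response, spectrum) pairs; the real codes, bin counts, and training data differ:

```python
# Sketch: spectrum unfolding as supervised learning, with 1-nearest-
# neighbour as a stand-in model. Training pairs map a simulated detector
# response to the spectrum that produced it (all values synthetic).

def euclidean2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def unfold_nn(response, training):
    """Return the spectrum whose simulated response is closest."""
    return min(training, key=lambda pair: euclidean2(pair[0], response))[1]

# Synthetic 3-bin responses paired with 2-bin "spectra"
training = [
    ([1.0, 0.5, 0.1], [1.0, 0.0]),   # soft-spectrum source
    ([0.2, 0.6, 1.0], [0.0, 1.0]),   # hard-spectrum source
]
measured = [0.9, 0.55, 0.15]
print(unfold_nn(measured, training))
```

The appeal noted in the abstract carries over: no ill-conditioned response matrix is inverted; the model simply learns the response-to-spectrum mapping from simulated examples.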
Cosmological Distance Scale to Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Azzam, W. J.; Linder, E. V.; Petrosian, V.
1993-05-01
The source counts, or the so-called log N–log S relations, are the primary data that constrain the spatial distribution of sources with unknown distances, such as gamma-ray bursts. In order to test galactic, halo, and cosmological models for gamma-ray bursts, we compare theoretical characteristics of the log N–log S relations to those obtained from data gathered by the BATSE instrument on board the Compton Observatory (GRO) and other instruments. We use a new and statistically correct method, which takes proper account of the variable nature of the triggering threshold, to analyze the data. Constraints on models obtained by this comparison will be presented. This work is supported by NASA grants NAGW 2290, NAG5 2036, and NAG5 1578.
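The discriminating power of log N–log S rests on a standard result: sources of fixed luminosity distributed homogeneously in Euclidean space give cumulative counts N(>S) ∝ S^(-3/2), so departures from a -3/2 slope signal a bounded (e.g. galactic) or cosmological population. A quick numerical check of the homogeneous case, not the BATSE analysis itself:

```python
import math
import random

# Sketch: verify N(>S) ~ S^(-3/2) for unit-luminosity sources distributed
# uniformly in a sphere, by measuring the log-log slope of the counts.

random.seed(1)
n = 20000
fluxes = []
for _ in range(n):
    r = random.random() ** (1.0 / 3.0)   # radius uniform in volume
    fluxes.append(1.0 / r ** 2)          # S = L / r^2 with L = 1

def n_greater(s):
    return sum(1 for f in fluxes if f > s)

# Slope of log N(>S) vs log S between S = 2 and S = 8
slope = (math.log(n_greater(8.0)) - math.log(n_greater(2.0))) / \
        (math.log(8.0) - math.log(2.0))
print(round(slope, 2))  # close to -1.5
```

A measured flattening relative to this slope at faint fluxes is the kind of signature the model comparison above is designed to quantify, once the variable triggering threshold is accounted for.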
25. Photocopy of photograph (Source unknown, c. 1923-1925) EXTERIOR, CLOSE-UP ...
25. Photocopy of photograph (Source unknown, c. 1923-1925) EXTERIOR, CLOSE-UP OF SOUTH FRONT OF MISSION AFTER RESTORATION, C. 1923-1925 - Mission San Francisco Solano de Sonoma, First & Spain Streets, Sonoma, Sonoma County, CA
44. Reinforcement construction to Pleasant Dam. Photographer unknown, 1935. Source: ...
44. Reinforcement construction to Pleasant Dam. Photographer unknown, 1935. Source: Huber Collection, University of California, Berkeley, Water Resources Library. - Waddell Dam, On Agua Fria River, 35 miles northwest of Phoenix, Phoenix, Maricopa County, AZ
A reconfigurable computing platform for plume tracking with mobile sensor networks
NASA Astrophysics Data System (ADS)
Kim, Byung Hwa; D'Souza, Colin; Voyles, Richard M.; Hesch, Joel; Roumeliotis, Stergios I.
2006-05-01
Much work has been undertaken recently toward the development of low-power, high-performance sensor networks. There are many static remote sensing applications for which this is appropriate. The focus of this development effort is applications that require higher performance computation, but still involve severe constraints on power and other resources. Toward that end, we are developing a reconfigurable computing platform for miniature robotic and human-deployed sensor systems composed of several mobile nodes. The system provides static and dynamic reconfigurability for both software and hardware by the combination of CPU (central processing unit) and FPGA (field-programmable gate array) allowing on-the-fly reprogrammability. Static reconfigurability of the hardware manifests itself in the form of a "morphing bus" architecture that permits the modular connection of various sensors with no bus interface logic. Dynamic hardware reconfigurability provides for the reallocation of hardware resources at run-time as the mobile, resource-constrained nodes encounter unknown environmental conditions that render various sensors ineffective. This computing platform will be described in the context of work on chemical/biological/radiological plume tracking using a distributed team of mobile sensors. The objective for a dispersed team of ground and/or aerial autonomous vehicles (or hand-carried sensors) is to acquire measurements of the concentration of the chemical agent from optimal locations and estimate its source and spread. This requires appropriate distribution, coordination and communication within the team members across a potentially unknown environment. The key problem is to determine the parameters of the distribution of the harmful agent so as to use these values for determining its source and predicting its spread. 
The accuracy and convergence rate of this estimation process depend not only on the number and accuracy of the sensor measurements but also on their spatial distribution over time (the sampling strategy). For the safety of a human-deployed distribution of sensors, optimized trajectories to minimize human exposure are also of importance. The systems described in this paper are currently being developed. Parts of the system are already in existence and some results from these are described.
Airborne methane remote measurements reveal heavy-tail flux distribution in Four Corners region
Thorpe, Andrew K.; Thompson, David R.; Hulley, Glynn; Kort, Eric Adam; Vance, Nick; Borchardt, Jakob; Krings, Thomas; Gerilowski, Konstantin; Sweeney, Colm; Conley, Stephen; Bue, Brian D.; Aubrey, Andrew D.; Hook, Simon; Green, Robert O.
2016-01-01
Methane (CH4) impacts climate as the second strongest anthropogenic greenhouse gas and air quality by influencing tropospheric ozone levels. Space-based observations have identified the Four Corners region in the Southwest United States as an area of large CH4 enhancements. We conducted an airborne campaign in Four Corners during April 2015 with the next-generation Airborne Visible/Infrared Imaging Spectrometer (near-infrared) and Hyperspectral Thermal Emission Spectrometer (thermal infrared) imaging spectrometers to better understand the source of methane by measuring methane plumes at 1- to 3-m spatial resolution. Our analysis detected more than 250 individual methane plumes from fossil fuel harvesting, processing, and distributing infrastructures, spanning an emission range from the detection limit ∼ 2 kg/h to 5 kg/h through ∼ 5,000 kg/h. Observed sources include gas processing facilities, storage tanks, pipeline leaks, and well pads, as well as a coal mine venting shaft. Overall, plume enhancements and inferred fluxes follow a lognormal distribution, with the top 10% emitters contributing 49 to 66% to the inferred total point source flux of 0.23 Tg/y to 0.39 Tg/y. With the observed confirmation of a lognormal emission distribution, this airborne observing strategy and its ability to locate previously unknown point sources in real time provides an efficient and effective method to identify and mitigate major emissions contributors over a wide geographic area. With improved instrumentation, this capability scales to spaceborne applications [Thompson DR, et al. (2016) Geophys Res Lett 43(12):6571–6578]. Further illustration of this potential is demonstrated with two detected, confirmed, and repaired pipeline leaks during the campaign. PMID:27528660
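The heavy-tail finding above, that the top 10% of emitters contribute 49 to 66% of the total flux, is characteristic of a lognormal population. A hedged numerical illustration (the sigma below is arbitrary, not fitted to the Four Corners data):

```python
import random

# Sketch: share of total emissions from the top decile of a lognormal
# emitter population. Parameters are illustrative only.

random.seed(42)
emissions = sorted(random.lognormvariate(0.0, 1.5) for _ in range(10000))
top_decile = emissions[int(0.9 * len(emissions)):]
share = sum(top_decile) / sum(emissions)
print(round(share, 2))  # far above the 0.10 a uniform population would give
```

This concentration of flux in a few large sources is what makes the real-time point-source detection strategy described above an efficient mitigation tool.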
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.
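The data-adaptive clustering induced by a Dirichlet process prior can be illustrated through the equivalent Chinese restaurant process: each new site joins an existing abundance cluster with probability proportional to its size, or opens a new cluster with probability proportional to the concentration parameter alpha. A minimal sketch (not the article's removal-sampling model):

```python
import random

# Sketch: cluster sizes induced by a Dirichlet process prior, sampled via
# the Chinese restaurant process. The number of clusters is not fixed in
# advance; it grows data-adaptively with alpha and the number of sites.

def crp_partition(n_sites, alpha, rng):
    clusters = []                      # current cluster sizes
    for i in range(n_sites):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, size in enumerate(clusters):
            acc += size
            if r < acc:
                clusters[k] += 1       # join an existing cluster
                break
        else:
            clusters.append(1)         # open a new cluster
    return clusters

rng = random.Random(0)
sizes = crp_partition(50, alpha=1.0, rng=rng)
print(sum(sizes), len(sizes))          # 50 sites, data-adaptive cluster count
```

In the abundance setting, each cluster would carry its own latent abundance value, avoiding the commitment to a single parametric heterogeneity distribution.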
NASA Astrophysics Data System (ADS)
Beucler, E.; Haugmard, M.; Mocquet, A.
2016-12-01
The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for moderate events for instance, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous samplings, using Markov chains, generate models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure among others) are computed and explicitly documented. This method manages to decrease the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of a too-constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
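The Markov-chain sampling idea can be sketched with a toy one-parameter version: a random-walk Metropolis chain over the origin time, with a Gaussian misfit to synthetic arrival times computed from fixed 1-D travel times. The real method samples structure and all source parameters jointly; every number below is illustrative:

```python
import math
import random

# Sketch: random-walk Metropolis sampling of an earthquake origin time.
random.seed(7)
travel = [2.0, 3.5, 5.0]                 # fixed 1-D travel times, s
observed = [12.1, 13.4, 15.0]            # synthetic arrivals, true t0 ~ 10 s
sigma = 0.1                              # assumed pick uncertainty, s

def log_post(t0):
    """Gaussian log-misfit between predicted and observed arrivals."""
    return -0.5 * sum(((t0 + tr - ob) / sigma) ** 2
                      for tr, ob in zip(travel, observed))

t0, samples = 0.0, []
lp = log_post(t0)
for _ in range(20000):
    cand = t0 + random.gauss(0.0, 0.2)   # random-walk proposal
    lp_cand = log_post(cand)
    if math.log(random.random()) < lp_cand - lp:
        t0, lp = cand, lp_cand           # Metropolis accept
    samples.append(t0)

posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(round(posterior_mean, 1))          # near the true origin time of 10 s
```

The retained samples approximate the posterior, so covariances between parameters (here trivially the variance of one) come directly from the chain, as in the documented a posteriori covariances above.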
NASA Astrophysics Data System (ADS)
Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.
2017-11-01
We consider the methodology of numerical scheme development for the two-dimensional vortex method. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. We model the velocity on the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise-constant or piecewise-linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have different computational costs. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of vortex sheet intensity computation and improve the quality of the velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
12. Photocopy of lithograph (source unknown) The Armor Lithograph Company, ...
12. Photocopy of lithograph (source unknown) The Armor Lithograph Company, Ltd., Pittsburgh, Pennsylvania, ca. 1888 COURTHOUSE AND JAIL, FROM THE WEST - Allegheny County Courthouse & Jail, 436 Grant Street (Courthouse), 420 Ross Street (Jail), Pittsburgh, Allegheny County, PA
NASA Astrophysics Data System (ADS)
Jensen, B. J. L.; Mackay, H.; Pyne-O'Donnell, S.; Plunkett, G.; Hughes, P. D. M.; Froese, D. G.; Booth, R.
2014-12-01
Cryptotephras (tephra not visible to the naked eye) form the foundation of the tephrostratigraphic frameworks used in Europe to date and correlate widely distributed geologic, paleoenvironmental and archaeological records. Pyne-O'Donnell et al. (2012) established the potential for developing a similar crypto-tephrostratigraphy across eastern North America by identifying multiple tephra, including the White River Ash (east; WRAe), St. Helens We and East Lake, in a peat core located in Newfoundland. Following on from this work, several ongoing projects have examined additional peat cores from Michigan, New York State, Maine, Nova Scotia and Newfoundland to build a tephrostratigraphic framework for this region. Using the precedent set by recent research by Jensen et al. (in press) that correlated the Alaskan WRAe to the European cryptotephra AD860B, unknown tephras identified in this work were not necessarily assumed to be from "expected" source areas (e.g. the Cascades). Here we present several examples of the preservation of tephra layers with an intercontinental distribution (i.e. WRAe and Ksudach 1), from relatively small-magnitude events (i.e. St. Helens layer T, Mono Crater), and the first example of a Mexican ash in the NE (Volcan Ceboruco, Jala pumice). There are several implications of the identification of these units. These far-travelled ashes: (1) highlight the need to consider "ultra" distal source volcanoes for unknown cryptotephra deposits; (2) present an opportunity for physical volcanologists to examine why some eruptions have an exceptional distribution of ash that is not necessarily controlled by the magnitude of the event; (3) complicate the idea of using tephrostratigraphic frameworks to understand the frequency of eruptions towards aiding hazard planning and prediction (e.g. Swindles et al., 2011); and (4) show that there is a real potential to link tropical and mid- to high-latitude paleoenvironmental records. Jensen et al. 
(in press) Transatlantic correlation of the Alaskan White River Ash. Geology. Pyne-O'Donnell et al. (2012). High-precision ultra-distal Holocene tephrochronology in North America. Quaternary Science Reviews, 52, 6-11. Swindles et al. (2011). A 7000 yr perspective on volcanic ash clouds affecting northern Europe. Geology, 39, 887-890.
A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks
NASA Astrophysics Data System (ADS)
Bode, F.; Loschko, M.; Nowak, W.
2014-12-01
Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our objectives are to maximize the probability of detecting all contaminations and the early-warning time, and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. With that, early-warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for a least-cost upgrade) to also cover the medium, tolerable and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources, but the early-warning time suffers. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. 
To avoid numerical dispersion during the transport simulations we use the particle-tracking random walk method.
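A minimal sketch of the particle-tracking random walk (illustrative one-dimensional transport with invented velocity and dispersion values): each particle is advected deterministically and given a Gaussian jump whose variance matches the dispersion coefficient, so the plume spreads without any grid-based numerical dispersion.

```python
import random

def track_particles(n, steps, dt, v, D, rng):
    """Advect each particle by v*dt, then add a diffusive jump ~ N(0, 2*D*dt);
    dispersion comes from the random walk, not from grid truncation error."""
    xs = [0.0] * n
    sigma = (2.0 * D * dt) ** 0.5
    for _ in range(steps):
        xs = [x + v * dt + rng.gauss(0.0, sigma) for x in xs]
    return xs

rng = random.Random(42)
xs = track_particles(n=2000, steps=50, dt=0.1, v=1.0, D=0.5, rng=rng)
mean_x = sum(xs) / len(xs)                        # plume centre ~ v * t = 5.0
var_x = sum((x - mean_x) ** 2 for x in xs) / len(xs)  # plume spread ~ 2 * D * t
```

Counting the particles that reach a hypothetical monitoring well then gives the detection probability and early-warning time used as optimization objectives.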
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with estimation of the source term in the conventional linear inverse problem y = Mx, where the vector of observations y is related to the unknown source term x through the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices of the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
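When R and B are fixed rather than inferred, the objective above has the closed-form minimizer x* = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y; the sketch below uses invented dimensions and a random toy SRS matrix (not the ETEX data or the variational Bayes iteration):

```python
import numpy as np

def map_source_term(M, y, R, B):
    """Minimizer of (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x (fixed R, B)."""
    Ri = np.linalg.inv(R)
    A = M.T @ Ri @ M + np.linalg.inv(B)
    return np.linalg.solve(A, M.T @ Ri @ y)

rng = np.random.default_rng(0)
M = rng.random((20, 4))            # toy SRS matrix: 20 receptors, 4 source terms
x_true = np.array([1.0, 0.0, 2.0, 0.5])
y = M @ x_true + 0.01 * rng.standard_normal(20)
R = 0.01 ** 2 * np.eye(20)         # diagonal measurement covariance
B = 10.0 * np.eye(4)               # broad diagonal prior on the source term
x_hat = map_source_term(M, y, R, B)
```

The variational Bayes scheme of the abstract iterates this kind of update while also re-estimating the diagonals of R and B from the data.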
17. Photocopy of a photograph, source and date unknown GENERAL ...
17. Photocopy of a photograph, source and date unknown GENERAL VIEW OF FRONT FACADE OF MT. CLARE STATION; PASSENGER CAR SHOP IN REAR - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD
Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs
NASA Astrophysics Data System (ADS)
Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won
2014-09-01
The development of synoptic sky surveys has led to a massive amount of data for which the resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes in the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. 
Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.
Iliev, Filip L.; Stanev, Valentin G.; Vesselinov, Velimir V.
2018-01-01
Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. PMID:29518126
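A plain NMF with Lee-Seung multiplicative updates (instantaneous mixing only, without the delay, speed and location estimation that this method adds) illustrates the underlying factorization on synthetic data:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - W H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# two nonnegative source signals mixed (instantaneously) at four sensors
t = np.linspace(0.0, 1.0, 200)
S = np.vstack([np.abs(np.sin(8 * t)), np.exp(-3 * t)])          # 2 x 200 sources
A = np.array([[1.0, 0.2], [0.5, 0.9], [0.3, 1.0], [0.8, 0.4]])  # 4 x 2 mixing
X = A @ S
W, H = nmf(X, k=2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the delay-aware setting of the abstract, the model additionally shifts each row of H per sensor, and the unknown number of sources k is itself selected from the data.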
SymPS: BRDF Symmetry Guided Photometric Stereo for Shape and Light Source Estimation.
Lu, Feng; Chen, Xiaowu; Sato, Imari; Sato, Yoichi
2018-01-01
We propose uncalibrated photometric stereo methods that address the problem of unknown isotropic reflectance. At the core of our methods is the notion of "constrained half-vector symmetry" for general isotropic BRDFs. We show that such symmetry can be observed in various real-world materials, and that it leads to new techniques for shape and light source estimation. Based on the 1D and 2D representations of the symmetry, we propose two methods for surface normal estimation; one focuses on accurate elevation-angle recovery for surface normals when the light sources only cover the visible hemisphere, and the other on comprehensive surface normal optimization in the case that the light sources are also non-uniformly distributed. The proposed robust light source estimation method also plays an essential role in letting our methods work in an uncalibrated manner with good accuracy. Quantitative evaluations are conducted with both synthetic and real-world scenes, which produce state-of-the-art accuracy for all of the non-Lambertian materials in the MERL database and the real-world datasets.
A Genealogical Look at Shared Ancestry on the X Chromosome.
Buffalo, Vince; Mount, Stephen M; Coop, Graham
2016-09-01
Close relatives can share large segments of their genome identical by descent (IBD) that can be identified in genome-wide polymorphism data sets. There are a range of methods to use these IBD segments to identify relatives and estimate their relationship. These methods have focused on sharing on the autosomes, as they provide a rich source of information about genealogical relationships. We hope to learn additional information about recent ancestry through shared IBD segments on the X chromosome, but currently lack the theoretical framework to use this information fully. Here, we fill this gap by developing probability distributions for the number and length of X chromosome segments shared IBD between an individual and an ancestor k generations back, as well as between half- and full-cousin relationships. Due to the inheritance pattern of the X and the fact that X homologous recombination occurs only in females (outside of the pseudoautosomal regions), the number of females along a genealogical lineage is a key quantity for understanding the number and length of the IBD segments shared among relatives. When inferring relationships among individuals, the number of female ancestors along a genealogical lineage will often be unknown. Therefore, our IBD segment length and number distributions marginalize over this unknown number of recombinational meioses through a distribution of recombinational meioses we derive. By using Bayes' theorem to invert these distributions, we can estimate the number of female ancestors between two relatives, giving us details about the genealogical relations between individuals not possible with autosomal data alone. Copyright © 2016 by the Genetics Society of America.
EPA Unmix 6.0 Fundamentals & User Guide
Unmix seeks to solve the general mixture problem where the data are assumed to be a linear combination of an unknown number of sources of unknown composition, which contribute an unknown amount to each sample.
Impact of dose calibrators quality control programme in Argentina
NASA Astrophysics Data System (ADS)
Furnari, J. C.; de Cabrejas, M. L.; del C. Rotta, M.; Iglicki, F. A.; Milá, M. I.; Magnavacca, C.; Dima, J. C.; Rodríguez Pasqués, R. H.
1992-02-01
The national Quality Control (QC) programme for radionuclide calibrators started 12 years ago. Accuracy and the implementation of a QC programme were evaluated over all these years at 95 nuclear medicine laboratories where dose calibrators were in use. During all that time, the Metrology Group of CNEA has distributed 137Cs sealed sources to check stability and has been performing periodic "checking rounds" and postal surveys using unknown samples (external quality control). An account of the results of both methods is presented. At present, more than 65% of the dose calibrators measure activities with an error of less than 10%.
Investigating the origin of ultrahigh-energy cosmic rays with CRPropa
NASA Astrophysics Data System (ADS)
Bouchachi, Dallel; Attallah, Reda
2016-07-01
Ultrahigh-energy cosmic rays are the most energetic subatomic particles ever observed in nature. Yet, their sources and acceleration mechanisms are still unknown. To better understand the origin of these particles, we carried out extensive numerical simulations of their propagation in extragalactic space. We used the public CRPropa code, which considers all relevant particle interactions and magnetic deflections. We examined the energy spectrum, the mass composition, and the distribution of arrival directions under different scenarios. Such a study makes it possible, in particular, to properly interpret the data of modern experiments like the Pierre Auger Observatory and the Telescope Array.
Full statistical mode reconstruction of a light field via a photon-number-resolved measurement
NASA Astrophysics Data System (ADS)
Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.
2017-05-01
We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprising up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.
Aluminum and Manganese Distributions in the Solomon Sea: Results from the 2012 PANDORA Cruise
NASA Astrophysics Data System (ADS)
Michael, S. M.; Resing, J. A.; Jeandel, C.; Lacan, F.
2016-02-01
Much is still unknown about the sources of trace nutrients to the Equatorial Undercurrent (EUC), which ultimately contribute to high-nutrient regions in the Eastern Tropical Pacific. One region that is possibly a source of trace nutrients to the EUC is the Solomon Sea, located east of Papua New Guinea. A study during the summer of 2012, PANDORA, was conducted on board the R/V l'Atalante to determine currents and the geochemical makeup within the basin. Water samples were analyzed for aluminum and manganese using Flow Injection Analysis (FIA). At many stations, aluminum distributions exhibit a sub-surface minimum, located at approximately the same depth as a salinity maximum. Additionally, aluminum is enriched along coastal areas, particularly in the outflow of the Vitiaz Strait, which is consistent with the findings of Slemons et al. (2010). These regions of high aluminum are also likely regions of iron enrichment. Manganese distributions in the Solomon Sea are similar to data collected north of the region by Slemons et al. (2010), and show a scavenged distribution with local inputs in the surface and concentrations decreasing at depth. This region has strong western boundary currents, and input from coastal margins, two large rivers, island mining sites, and hydrothermal activity, making it an important study site to determine how trace nutrients are transported to the open ocean.
NASA Astrophysics Data System (ADS)
Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael
2013-05-01
This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and can therefore improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14 and 15]. This paper describes our context assessment software development, and the demonstration of context assessment of non-traditional data against an intelligence, surveillance and reconnaissance fusion product based upon an IED POIs workflow.
6. Photographic copy of photograph. No date. Photographer unknown. (Source: ...
6. Photographic copy of photograph. No date. Photographer unknown. (Source: SCIP office, Coolidge, AZ) CHINA WASH FLUME UNDER CONSTRUCTION - San Carlos Irrigation Project, China Wash Flume, Main (Florence-Case Grande) Canal at Station 137+00, T4S, R10E, S14, Coolidge, Pinal County, AZ
Development of a European Ensemble System for Seasonal Prediction: Application to crop yield
NASA Astrophysics Data System (ADS)
Terres, J. M.; Cantelaube, P.
2003-04-01
Western European agriculture is highly intensive, and the weather is the main source of uncertainty for crop yield assessment and crop management. In the current system, at the time when a crop yield forecast is issued, the weather conditions leading up to harvest time are unknown and are therefore a major source of uncertainty. The use of seasonal weather forecasts would bring additional information for the remaining crop season and offers valuable benefits for improving the management of agricultural markets and environmentally sustainable farm practices. An innovative method for supplying seasonal forecast information to crop simulation models has been developed within the framework of the EU-funded research project DEMETER. It consists of running a crop model on each individual member of the seasonal hindcasts to derive a probability distribution of crop yield. Preliminary results for the cumulative probability function of wheat yield provide information on both the yield anomaly and the reliability of the forecast. Based on the spread of the probability distribution, the end-user can directly quantify the benefits and risks of taking weather-sensitive decisions.
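The ensemble-to-probability step can be sketched as an empirical CDF over per-member crop-model outputs (all yield numbers below are invented, not DEMETER hindcast results):

```python
import random

def yield_cdf(member_yields):
    """Empirical CDF of yield from one crop-model run per ensemble member."""
    ys = sorted(member_yields)
    n = len(ys)
    return lambda y: sum(1 for v in ys if v <= y) / n

rng = random.Random(7)
# hypothetical wheat yields (t/ha), one per seasonal-hindcast member
members = [rng.gauss(6.0, 0.8) for _ in range(51)]
cdf = yield_cdf(members)
p_low = cdf(5.0)                       # probability of a low-yield season
spread = max(members) - min(members)   # wide spread -> less reliable forecast
```

Reading the CDF at a threshold yield gives the risk figure a decision maker would weigh, while the spread of the distribution indicates how much the forecast can be trusted.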
Developing particle emission inventories using remote sensing (PEIRS).
Tang, Chia-Hsi; Coull, Brent A; Schwartz, Joel; Lyapustin, Alexei I; Di, Qian; Koutrakis, Petros
2017-01-01
Information regarding the magnitude and distribution of PM2.5 emissions is crucial in establishing effective PM regulations and assessing the associated risk to human health and the ecosystem. At present, emission data is obtained from measured or estimated emission factors of various source types. Collecting such information for every known source is costly and time-consuming. For this reason, emission inventories are reported periodically, and unknown or smaller sources are often omitted or aggregated at large spatial scale. To address these limitations, we have developed and evaluated a novel method that uses remote sensing data to construct spatially resolved emission inventories for PM2.5. This approach enables us to account for all sources within a fixed area, which renders source classification unnecessary. We applied this method to predict emissions in the northeastern United States during the period 2002-2013 using high-resolution 1 km × 1 km aerosol optical depth (AOD). Emission estimates moderately agreed with the EPA National Emission Inventory (R² = 0.66-0.71, CV = 17.7-20%). Predicted emissions are found to correlate with land use parameters, suggesting that our method can capture emissions from land-use-related sources. In addition, we distinguished small-scale intra-urban variation in emissions reflecting the distribution of metropolitan sources. In essence, this study demonstrates the great potential of remote sensing data to predict particle source emissions cost-effectively. We present a novel method, particle emission inventories using remote sensing (PEIRS), using remote sensing data to construct spatially resolved PM2.5 emission inventories. Both primary emissions and secondary formations are captured and predicted at a high spatial resolution of 1 km × 1 km. Using PEIRS, large and comprehensive data sets can be generated cost-effectively and can inform the development of air quality regulations.
Masaki, Yukiko; Shimizu, Yoichi; Yoshioka, Takeshi; Tanaka, Yukari; Nishijima, Ken-Ichi; Zhao, Songji; Higashino, Kenichi; Sakamoto, Shingo; Numata, Yoshito; Yamaguchi, Yoshitaka; Tamaki, Nagara; Kuge, Yuji
2015-11-19
¹⁸F-fluoromisonidazole (FMISO) has been widely used as a hypoxia imaging probe for diagnostic positron emission tomography (PET). FMISO is believed to accumulate in hypoxic cells via covalent binding with macromolecules after reduction of its nitro group. However, its detailed accumulation mechanism remains unknown. Therefore, we investigated the chemical forms of FMISO and their distributions in tumours using imaging mass spectrometry (IMS), which visualises the spatial distribution of chemical compositions based on molecular masses in tissue sections. Our radiochemical analysis revealed that most of the radioactivity in tumours existed as low-molecular-weight compounds with unknown chemical formulas, contrary to conventional views, suggesting that the radioactivity distribution primarily reflected that of these unknown substances. The IMS analysis indicated that FMISO and its reductive metabolites were nonspecifically distributed in the tumour, in patterns not corresponding to the radioactivity distribution. Our IMS search found an unknown low-molecular-weight metabolite whose distribution pattern corresponded to that of both the radioactivity and the hypoxia marker pimonidazole. This metabolite was identified as the glutathione conjugate of amino-FMISO. We showed that the glutathione conjugate of amino-FMISO is involved in FMISO accumulation in hypoxic tumour tissues, in addition to the conventional mechanism of FMISO covalently binding to macromolecules.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage of this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e., different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution.
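The learning-mode/detection-mode cycle described above can be sketched minimally as follows. This toy assumes a Poisson fit to the learned background counts rather than the paper's nonparametric sampling distribution, and the class name, counts, and alarm threshold are illustrative only.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function P(X = k) for rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

class BackgroundMonitor:
    """Toy two-mode detector: learn background, then flag anomalies."""
    def __init__(self):
        self.counts = []                      # past background counts per interval

    def learn(self, count):                   # 'learning mode'
        self.counts.append(count)

    def p_value(self, observed):              # 'detection mode'
        """P(X >= observed) under a Poisson fit to the learned background."""
        lam = sum(self.counts) / len(self.counts)
        return 1.0 - sum(poisson_pmf(k, lam) for k in range(observed))

mon = BackgroundMonitor()
for c in [3, 4, 2, 5, 3, 4, 3, 4]:            # typical background intervals
    mon.learn(c)

p = mon.p_value(12)                           # 12 counts in one new interval
alarm = p < 1e-3                              # illustrative decision threshold
```

With a learned mean of 3.5 counts per interval, 12 counts is far out in the tail and trips the alarm, while a measurement of 4 counts would not.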
Sabin, Guilherme P; Lozano, Valeria A; Rocha, Werickson F C; Romão, Wanderson; Ortiz, Rafael S; Poppi, Ronei J
2013-11-01
Near-infrared chemical imaging was applied to characterize formulations in sildenafil citrate tablets from six different sources. Five formulations were provided by the Brazilian Federal Police and correspond to several trademarks of prohibited marketing, and one was an authentic sample of Viagra. In the first step of the study, multivariate curve resolution was chosen to estimate the distribution map of concentration of the active ingredient in tablets of different sources, where the chemical composition of the excipient constituents was not truly known. In such cases, it is very difficult to establish an appropriate calibration technique such that only the information of sildenafil is considered independently of the excipients. This determination was possible only by exploiting the second-order advantage, whereby the analyte can be quantified in the presence of unknown interferences. In a second step, the normalized histograms of images of the active ingredient were grouped according to their similarities by hierarchical cluster analysis. Finally, it was possible to recognize the patterns of the distribution maps of concentration of sildenafil citrate, distinguishing the true formulation of Viagra. This concept can be used to improve the knowledge of industrial products and processes, as well as for the characterization of counterfeit drugs. Copyright © 2013. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Sutherland, Michael Stephen
2010-12-01
The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength; its behavior on galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and the magnetic pitch angle is less than -8°.
These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.
the-wizz: clustering redshift estimation for everyone
NASA Astrophysics Data System (ADS)
Morrison, C. B.; Hildebrandt, H.; Schmidt, S. J.; Baldry, I. K.; Bilicki, M.; Choi, A.; Erben, T.; Schneider, P.
2017-05-01
We present the-wizz, open-source, user-friendly software for estimating the redshift distributions of photometric galaxies with unknown redshifts by spatially cross-correlating them against a reference sample with known redshifts. The main benefit of the-wizz is in separating the angular pair finding and correlation estimation from the computation of the output clustering redshifts, allowing anyone to create a clustering redshift for their sample without the intervention of an 'expert'. It allows the end user of a given survey to select any subsample of photometric galaxies with unknown redshifts, match this sample's catalogue indices into a value-added data file and produce a clustering redshift estimation for this sample in a fraction of the time it would take to run all the angular correlations needed to produce a clustering redshift. We show results with this software using photometric data from the Kilo-Degree Survey (KiDS) and spectroscopic redshifts from the Galaxy and Mass Assembly survey and the Sloan Digital Sky Survey. The results we present for KiDS are consistent with the redshift distributions used in a recent cosmic shear analysis from the survey. We also present results using a hybrid machine learning-clustering redshift analysis that enables the estimation of clustering redshifts for individual galaxies. the-wizz can be downloaded at http://github.com/morriscb/The-wiZZ/.
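The underlying idea, that raw cross-pair counts against reference slices of known redshift trace the unknown sample's redshift distribution, can be sketched on synthetic 2D positions. This toy omits everything that makes the-wizz a real estimator (random-catalogue normalization, pair weighting, survey masks); the layout and numbers are invented for illustration.

```python
import random
random.seed(1)

def scatter(center, n, s):
    """n points uniformly scattered within +/- s of a 2D center."""
    return [(center[0] + random.uniform(-s, s),
             center[1] + random.uniform(-s, s)) for _ in range(n)]

# Three redshift slices, each traced by one "structure" at a distinct sky position.
centers = {0: (0.2, 0.2), 1: (0.5, 0.5), 2: (0.8, 0.8)}
refs = {z: scatter(c, 200, 0.05) for z, c in centers.items()}

# The photometric sample with unknown redshifts actually lives in slice 1.
unknown = scatter(centers[1], 300, 0.05)

def pairs_within(sample_a, sample_b, r):
    """Count cross pairs closer than angular radius r."""
    r2 = r * r
    return sum(1 for (xa, ya) in sample_a for (xb, yb) in sample_b
               if (xa - xb) ** 2 + (ya - yb) ** 2 < r2)

# Cross-pair counts per slice act as an (unnormalized) redshift distribution.
nz = {z: pairs_within(unknown, refs[z], 0.05) for z in refs}
best = max(nz, key=nz.get)
```

The unknown sample only produces excess pairs against the reference slice it shares structure with, so the pair-count "n(z)" peaks at slice 1.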
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution for drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined from the aquifer boundary conditions and the continuity requirements. The drawdown estimated by the present approach agrees well with a finite element solution developed using the Mathematica function NDSolve. Compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and it therefore takes much less computing time to obtain the required results in engineering applications.
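The superposition at the heart of the approach, drawdown as a sum of Theis responses from source points with individual volumetric rates, can be sketched as below. The parameter values are illustrative, and the paper's actual method additionally solves for the unknown rates from the boundary and interface-continuity conditions, which this sketch does not do.

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=60):
    """Theis well function W(u) via its convergent series (small/moderate u):
    W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^{k+1} u^k / (k * k!)."""
    total = -EULER_GAMMA - math.log(u)
    sign, u_pow, fact = 1.0, 1.0, 1.0
    for k in range(1, terms + 1):
        u_pow *= u
        fact *= k
        total += sign * u_pow / (fact * k)
        sign = -sign
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s(r, t) of one source pumping at rate Q (confined aquifer)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

def superposed_drawdown(sources, T, S, x, y, t):
    """Sum the Theis responses of several (rate, xw, yw) source points."""
    s = 0.0
    for Q, xw, yw in sources:
        r = math.hypot(x - xw, y - yw)
        s += theis_drawdown(Q, T, S, r, t)
    return s

T, S = 1e-3, 1e-4                       # transmissivity (m^2/s), storativity
# A pumping well plus one hypothetical boundary source point.
sources = [(1e-2, 0.0, 0.0), (5e-3, 50.0, 0.0)]
s_total = superposed_drawdown(sources, T, S, 30.0, 0.0, t=3600.0)
```

Because the governing equation is linear, the combined drawdown is exactly the sum of the individual Theis terms, which is what lets the method express each zone's solution as a series over source points.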
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Vesselinov, Velimir V.; Stanev, Valentin
The ShiftNMFk1.2 code, or as we call it, GreenNMFk, represents a hybrid algorithm combining unsupervised adaptive machine learning and the Green's function inverse method. GreenNMFk allows efficient and high-performance de-mixing and feature extraction of a multitude of nonnegative signals that change their shape while propagating through a medium. The signals are mixed and recorded by a network of uncorrelated sensors. The code couples Non-negative Matrix Factorization (NMF) with the inverse-analysis Green's functions method. GreenNMFk synergistically performs decomposition of the recorded mixtures, finds the number of unknown sources, and uses the Green's function of the governing partial differential equation to identify the unknown sources and their characteristics. GreenNMFk can be applied directly to any problem governed by a known parabolic partial differential equation where mixtures of an unknown number of sources are measured at multiple locations. The full GreenNMFk method is the subject of LANL U.S. Patent application S133364.000 (August 2017). The ShiftNMFk 1.2 version here is a toy version of this method that can work with a limited number of unknown sources (four or fewer).
Inference of relativistic electron spectra from measurements of inverse Compton radiation
NASA Astrophysics Data System (ADS)
Craig, I. J. D.; Brown, J. C.
1980-07-01
The inference of relativistic electron spectra from spectral measurement of inverse Compton radiation is discussed for the case where the background photon spectrum is a Planck function. The problem is formulated in terms of an integral transform that relates the measured spectrum to the unknown electron distribution. A general inversion formula is used to provide a quantitative assessment of the information content of the spectral data. It is shown that the observations must generally be augmented by additional information if anything other than a rudimentary two or three parameter model of the source function is to be derived. It is also pointed out that since a similar equation governs the continuum spectra emitted by a distribution of black-body radiators, the analysis is relevant to the problem of stellar population synthesis from galactic spectra.
Mei, Jie; Ren, Wei; Li, Bing; Ma, Guangfu
2015-09-01
In this paper, we consider the distributed containment control problem for multiagent systems with unknown nonlinear dynamics. More specifically, we focus on multiple second-order nonlinear systems and networked Lagrangian systems. We first study the distributed containment control problem for multiple second-order nonlinear systems with multiple dynamic leaders in the presence of unknown nonlinearities and external disturbances under a general directed graph that characterizes the interaction among the leaders and the followers. A distributed adaptive control algorithm with an adaptive gain design based on the approximation capability of neural networks is proposed. We present a necessary and sufficient condition on the directed graph such that the containment error can be made as small as desired. As a byproduct, the leaderless consensus problem is solved with asymptotic convergence. Because relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements, we then propose a distributed containment control algorithm without using neighbors' velocity information. A two-step Lyapunov-based method is used to study the convergence of the closed-loop system. Next, we apply the ideas to deal with the containment control problem for networked unknown Lagrangian systems under a general directed graph. All the proposed algorithms are distributed and can be implemented using only local measurements in the absence of communication. Finally, simulation examples are provided to show the effectiveness of the proposed control algorithms.
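A stripped-down illustration of containment: two double-integrator followers are driven into the convex hull spanned by two stationary leaders using only relative position and velocity terms with respect to neighbors. This linear toy deliberately omits the paper's unknown nonlinearities, disturbances, and neural-network adaptation; the graph and gains are invented for the sketch.

```python
# 1D containment: followers are double integrators; each uses relative
# position and velocity feedback w.r.t. its neighbors (one leader each,
# plus the other follower). Leaders are stationary at 0 and 1.
leaders = [0.0, 1.0]
x = [5.0, -3.0]          # follower positions, started outside the hull
v = [0.0, 0.0]           # follower velocities
dt = 0.02

for _ in range(20000):   # explicit Euler integration
    u0 = -((x[0] - leaders[0]) + (x[0] - x[1])) - (v[0] + (v[0] - v[1]))
    u1 = -((x[1] - leaders[1]) + (x[1] - x[0])) - (v[1] + (v[1] - v[0]))
    x = [x[0] + dt * v[0], x[1] + dt * v[1]]
    v = [v[0] + dt * u0, v[1] + dt * u1]

in_hull = all(min(leaders) - 1e-3 <= xi <= max(leaders) + 1e-3 for xi in x)
```

Setting the control inputs to zero shows the equilibrium directly: 2x0 = x1 and 2x1 - x0 = 1, i.e. the followers settle at 1/3 and 2/3, inside the leaders' convex hull [0, 1].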
Benschop, Jackie; Biggs, Patrick J.; Marshall, Jonathan C.; Hayman, David T.S.; Carter, Philip E.; Midwinter, Anne C.; Mather, Alison E.; French, Nigel P.
2017-01-01
During 1998–2012, an extended outbreak of Salmonella enterica serovar Typhimurium definitive type 160 (DT160) affected >3,000 humans and killed wild birds in New Zealand. However, the relationship between DT160 within these 2 host groups and the origin of the outbreak are unknown. Whole-genome sequencing was used to compare 109 Salmonella Typhimurium DT160 isolates from sources throughout New Zealand. We provide evidence that DT160 was introduced into New Zealand around 1997 and rapidly propagated throughout the country, becoming more genetically diverse over time. The genetic heterogeneity was evenly distributed across multiple predicted functional protein groups, and we found no evidence of host group differentiation between isolates collected from human, poultry, bovid, and wild bird sources, indicating ongoing transmission between these host groups. Our findings demonstrate how a comparative genomic approach can be used to gain insight into outbreaks, disease transmission, and the evolution of a multihost pathogen after a probable point-source introduction. PMID:28516864
A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu
2007-01-01
Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…
NASA Astrophysics Data System (ADS)
Cui, Guozeng; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Zhang, Zhengqiang
2018-05-01
In this paper, the problem of prescribed performance distributed output consensus for higher-order non-affine nonlinear multi-agent systems with unknown dead-zone input is investigated. Fuzzy logic systems are utilised to identify the unknown nonlinearities. By introducing prescribed performance, the transient and steady-state performance of the synchronisation errors is guaranteed. Based on Lyapunov stability theory and the dynamic surface control technique, a new distributed consensus algorithm for non-affine nonlinear multi-agent systems is proposed, which ensures cooperative uniform ultimate boundedness of all signals in the closed-loop systems and enables the output of each follower to synchronise with the leader within a predefined bounded error. Finally, simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
Domain-Invariant Partial-Least-Squares Regression.
Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne
2018-05-11
Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate by comparison to benchmark problems and benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
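The simplified shield treatment under discussion, exponential attenuation scaled by a buildup factor that lumps in scattered photons while ignoring their energy and direction shift, reduces to a one-line kernel. The attenuation coefficient and buildup value below are illustrative placeholders, not data from the paper.

```python
import math

def shielded_intensity(I0, mu, thickness, buildup=1.0):
    """Point-kernel shield treatment: I = B * I0 * exp(-mu * d).
    The buildup factor B accounts for scattered photons but, as the
    abstract notes, ignores their energy/direction redistribution."""
    return I0 * buildup * math.exp(-mu * thickness)

I0 = 1e6                                               # source photons
uncollided = shielded_intensity(I0, 0.8, 5.0)          # attenuation only
with_buildup = shielded_intensity(I0, 0.8, 5.0, buildup=2.7)
```

The buildup factor simply inflates the uncollided beam; the composite method the paper benchmarks against instead tracks where in energy and angle the scattered photons actually emerge.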
Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA
2012-04-10
A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source.
NASA Astrophysics Data System (ADS)
Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.
2017-12-01
Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connections) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated from hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of the contributions of different water sources to a given site, along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" sourcing (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results.
The isotope-based HB mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are otherwise challenging to observe. The method could allow water managers to document spatiotemporal variation in PWSS flow patterns, which is critical for interrogating the distribution system to inform operational decision making or disaster response, optimize water supply, and monitor and enforce water rights.
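The deterministic special case of such isotope mixing, two end-member sources and a single tracer, is a one-line mass balance. The delta values below are hypothetical; the paper's hierarchical Bayesian model generalizes this to many sources, priors from production volumes, and full uncertainty estimates.

```python
def two_source_fractions(d_mix, d_a, d_b):
    """Fractions of sources A and B in a mixture from one isotope ratio
    (delta notation), assuming conservative mixing of two end-members."""
    f_a = (d_mix - d_b) / (d_a - d_b)
    return f_a, 1.0 - f_a

# Hypothetical delta-18O values: source A (e.g. mountain water) at -16,
# source B (e.g. valley well water) at -11, tap sample at -14.
f_a, f_b = two_source_fractions(d_mix=-14.0, d_a=-16.0, d_b=-11.0)
```

Here the tap sample sits 60% of the way toward source A's signature, so the mass balance attributes 60% of the water to A and 40% to B.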
Zheng, Na; Wang, Qichao; Liang, Zhongzhu; Zheng, Dongmei
2008-07-01
Wuli River, Cishan River, and Lianshan River are three freshwater rivers flowing through Huludao City, in a region of northeast China strongly affected by industrialization. Contamination assessment had never been conducted there in a comprehensive way. For the first time, the contamination of three rivers impacted by different sources in the same city was compared. This work investigated the distribution and sources of Hg, Pb, Cd, Zn and Cu in the surface sediments of the Wuli, Cishan, and Lianshan Rivers, and assessed heavy metal toxicity risk with two different sets of Sediment Quality Guideline (SQG) indices (effect range low/effect range median values, ERL/ERM; and threshold effect level/probable effect level, TEL/PEL). Furthermore, this study used a toxic unit approach to compare and gauge the individual and combined metal contamination for Hg, Pb, Cd, Zn and Cu. Results showed that Hg contamination in the sediments of Wuli River originated from historical sediment contamination by the chlor-alkali industry, whereas Pb, Cd, Zn and Cu contamination was mainly derived from atmospheric deposition and unknown small pollution sources. Heavy metal contamination of Cishan River sediments was mainly derived from the Huludao Zinc Plant, while atmospheric deposition, sewage wastewater and unknown small pollution sources were primary for Lianshan River. The potential acute toxicity in the sediment of Wuli River may be primarily due to Hg contamination. Hg is the major toxicity contributor, accounting for 53.3-93.2% and 7.9-54.9% of the total toxicity in Wuli River and Lianshan River, respectively, followed by Cd. In Cishan River, however, Cd is the major sediment toxicity contributor, accounting for 63.2-66.9% of the total toxicity.
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of number of major contributing sources, estimated source profiles, and contributions.
However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of the health effects parameters the parameter uncertainty in estimated source contributions, which the previous studies had ignored. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
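A bare-bones receptor-model decomposition can be sketched with nonnegative matrix factorization on synthetic data: ambient concentrations X are factored into daily source contributions W and source profiles H. The sketch uses classic Lee-Seung multiplicative updates on made-up numbers; the paper's Bayesian approach additionally treats the number of sources and the identifiability conditions as uncertain, which this does not attempt.

```python
import random
random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(X, k, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates: X (n x m) ~ W (n x k) @ H (k x m),
    keeping all entries nonnegative throughout."""
    n, m = len(X), len(X[0])
    W = [[random.random() for _ in range(k)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, X), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(X, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

# Synthetic ambient data: 2 hidden sources, 3 measured species, 6 days.
F = [[0.8, 0.1, 0.1], [0.1, 0.2, 0.7]]                 # true source profiles
G = [[2, 1], [1, 3], [4, 1], [2, 2], [1, 1], [3, 2]]   # true daily contributions
X = matmul(G, F)

W, H = nmf(X, k=2)
R = matmul(W, H)
err = sum((X[i][j] - R[i][j]) ** 2 for i in range(6) for j in range(3))
```

On this exactly rank-2 data the reconstruction error shrinks to near zero, while the factors stay nonnegative, which is the physical constraint that makes W and H interpretable as contributions and profiles.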
Rendezvous with connectivity preservation for multi-robot systems with an unknown leader
NASA Astrophysics Data System (ADS)
Dong, Yi
2018-02-01
This paper studies the leader-following rendezvous problem with connectivity preservation for multi-agent systems composed of uncertain multi-robot systems subject to external disturbances and an unknown leader, both of which are generated by a so-called exosystem with parametric uncertainty. By combining internal model design, potential function technique and adaptive control, two distributed control strategies are proposed to maintain the connectivity of the communication network, to achieve the asymptotic tracking of all the followers to the output of the unknown leader system, as well as to reject unknown external disturbances. It is also worth to mention that the uncertain parameters in the multi-robot systems and exosystem are further allowed to belong to unknown and unbounded sets when applying the second fully distributed control law containing a dynamic gain inspired by high-gain adaptive control or self-tuning regulator.
Evidence-Based Reptile Housing and Nutrition.
Oonincx, Dennis; van Leeuwen, Jeroen
2017-09-01
The provision of a good light source is important for reptiles. For instance, ultraviolet light is used in social interactions and used for vitamin D synthesis. With respect to housing, most reptilians are best kept pairwise or individually. Environmental enrichment can be effective but depends on the form and the species to which it is applied. Temperature gradients around preferred body temperatures allow accurate thermoregulation, which is essential for reptiles. Natural distributions indicate suitable ambient temperatures, but microclimatic conditions are at least as important. Because the nutrient requirements of reptiles are largely unknown, facilitating self-selection from various dietary items is preferable. Copyright © 2017 Elsevier Inc. All rights reserved.
The underlying philosophy of Unmix is to let the data speak for itself. Unmix seeks to solve the general mixture problem where the data are assumed to be a linear combination of an unknown number of sources of unknown composition, which contribute an unknown amount to each sample...
Optical flashes from internal pairs formed in gamma-ray burst afterglows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panaitescu, A.
2015-06-09
We develop a numerical formalism for calculating the distribution with energy of the (internal) pairs formed in a relativistic source from unscattered MeV–TeV photons. For gamma-ray burst (GRB) afterglows, this formalism is more suitable if the relativistic reverse shock that energizes the ejecta is the source of the GeV photons. The number of pairs formed is set by the source GeV output (calculated from the Fermi-LAT fluence), the unknown source Lorentz factor, and the unmeasured peak energy of the LAT spectral component. We show synchrotron and inverse-Compton light curves expected from pairs formed in the shocked medium and identify some criteria for testing a pair origin of GRB optical counterparts. Pairs formed in bright LAT afterglows with a Lorentz factor in the few hundreds may produce bright optical counterparts (R < 10) lasting for up to one hundred seconds. However, the number of internal pairs formed from unscattered seed photons decreases very strongly with the source Lorentz factor, thus bright GRB optical counterparts cannot arise from internal pairs if the afterglow Lorentz factor is above several hundreds.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and approximable by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by gradient descent subject to higher-order statistical constraints. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
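Excess kurtosis is the higher-order statistic that such separation methods use as a contrast: independent non-Gaussian sources have nonzero kurtosis, and mixing pulls it toward the Gaussian value of zero, so un-mixing can proceed by pushing it back out. The sketch below only estimates kurtosis on synthetic data; the paper's full post-nonlinear gradient-descent algorithm is not reproduced.

```python
import random
random.seed(42)

def excess_kurtosis(xs):
    """Sample excess kurtosis: m4 / m2^2 - 3 (zero for a Gaussian)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0

uniform = [random.uniform(-1, 1) for _ in range(20000)]   # sub-Gaussian source
gauss = [random.gauss(0, 1) for _ in range(20000)]        # Gaussian reference
mixture = [u + g for u, g in zip(uniform, gauss)]         # mixing -> kurtosis shrinks toward 0
```

A uniform source has excess kurtosis near -1.2, a Gaussian near 0, and the sum lands in between, which is exactly the effect a kurtosis-based separation criterion inverts.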
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model that explains allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian, which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely available open-source continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves an important purpose as an unrestricted platform for comparing the different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons, and source localization is necessary in order to determine the source's activity. A previous study showed that, by using six detectors, three on each of two parallel faces of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one, and a maximum distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one, and a maximum distance of 3.66 cm, for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
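The iterative flavour of such localization can be illustrated with a much-simplified 1-D analogue: two opposed detectors see inverse-square count rates attenuated by the box contents, and the source depth is narrowed iteratively until the predicted count ratio matches the measured one. The geometry, the attenuation coefficient, and the replacement of the physical calibrating source by a purely numerical iteration are all simplifying assumptions:

```python
import math

L, mu = 46.0, 0.05   # assumed box length (cm) and attenuation coefficient (1/cm)

def ratio(x):
    """Count ratio of two opposed detectors for a source at depth x
    (inverse-square law plus exponential attenuation)."""
    r1, r2 = x, L - x
    return (r2**2 / r1**2) * math.exp(mu * (r2 - r1))

target = ratio(17.3)          # "measured" ratio from the unknown source

# Iteratively narrow the source position until the ratio matches.
lo, hi = 0.1, L - 0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if ratio(mid) > target:   # ratio decreases monotonically with depth
        lo = mid
    else:
        hi = mid
x_est = 0.5 * (lo + hi)       # recovered source depth (cm)
```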
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives from being computed, and errors of linearization are avoided; the Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
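The error-propagation application can be sketched directly: draw measurement variates, push them through the nonlinear function, and take sample moments, with no derivatives at all. The measurement model and the function below are arbitrary examples chosen for illustration:

```python
import math
import random

random.seed(0)

# Hypothetical independent Gaussian measurements: (mean, sigma) pairs.
priors = [(2.0, 0.1), (0.5, 0.05)]

def transform(x):
    """An arbitrary nonlinear function of the measurements."""
    return x[0]**2 * math.sin(x[1])

# Monte Carlo propagation: sample, transform, take moments --
# no linearization and no derivatives required.
samples = [transform([random.gauss(m, s) for m, s in priors])
           for _ in range(200_000)]
n = len(samples)
mean = sum(samples) / n                              # expectation of the function
var = sum((s - mean)**2 for s in samples) / (n - 1)  # its variance
```

For this smooth function the result agrees closely with first-order (delta-method) propagation, but the Monte Carlo estimate remains valid when the linearization would not.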
Identifying known unknowns using the US EPA's CompTox Chemistry Dashboard.
McEachran, Andrew D; Sobus, Jon R; Williams, Antony J
2017-03-01
Chemical features observed using high-resolution mass spectrometry can be tentatively identified using online chemical reference databases by searching molecular formulae and monoisotopic masses and then rank-ordering of the hits using appropriate relevance criteria. The most likely candidate "known unknowns," which are those chemicals unknown to an investigator but contained within a reference database or literature source, rise to the top of a chemical list when rank-ordered by the number of associated data sources. The U.S. EPA's CompTox Chemistry Dashboard is a curated and freely available resource for chemistry and computational toxicology research, containing more than 720,000 chemicals of relevance to environmental health science. In this research, the performance of the Dashboard for identifying known unknowns was evaluated against that of the online ChemSpider database, one of the primary resources used by mass spectrometrists, using multiple previously studied datasets reported in the peer-reviewed literature totaling 162 chemicals. These chemicals were examined using both applications via molecular formula and monoisotopic mass searches followed by rank-ordering of candidate compounds by associated references or data sources. A greater percentage of chemicals ranked in the top position when using the Dashboard, indicating an advantage of this application over ChemSpider for identifying known unknowns using data source ranking. Additional approaches are being developed for inclusion into a non-targeted analysis workflow as part of the CompTox Chemistry Dashboard. This work shows the potential for use of the Dashboard in exposure assessment and risk decision-making through significant improvements in non-targeted chemical identification. Graphical abstract: Identifying known unknowns in the US EPA's CompTox Chemistry Dashboard from molecular formula and monoisotopic mass inputs.
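The data-source ranking step reduces to sorting formula hits by their source counts. The candidate names and counts below are invented placeholders, not real Dashboard values:

```python
# Hypothetical hits for one molecular-formula search, mapped to the number
# of associated data sources (invented counts, not real Dashboard values).
candidates = {
    "bisphenol A": 96,
    "obscure isomer 1": 4,
    "obscure isomer 2": 11,
}

# Rank-order the hits: the most-referenced "known unknown" rises to the top.
ranked = sorted(candidates, key=candidates.get, reverse=True)
```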
The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.
Chandraker, Manmohan
2016-07-01
Psychophysical studies show motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of imaging setup.
Hawkley, Gavin
2014-12-01
Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near-field dilution and reduces the far-field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus within the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near-field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near-field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and of operational conditions is discussed as a balance between operational feedback and the reporting of public dose.
NASA Astrophysics Data System (ADS)
Abdulhay, Ibrahim Shakib
1995-01-01
The thermoluminescent dosimeter (TLD) response (integrated light output per unit exposure) of a high-Z material increases more rapidly than that of lower-Z materials with decreasing photon energy, and with increasing energy above the pair-production threshold. The ratio of the responses obtained when two TLD materials are simultaneously exposed to gamma or x-rays can therefore be used to obtain information about the incident photon energies. In addition, the responses are affected by the presence of the material surrounding the dosimeters. Two TLDs, LiF and CaSO₄, with respective effective atomic numbers of 8.2 and 15.3, were chosen to be sandwiched between different absorber materials (Al, Cu, and Pb) and irradiated at selected distances from gamma radiation sources. The photon energies used in this investigation were 60 keV, 142 keV, 662 keV, 1.25 MeV, and 6.129 MeV. Fit equations for the responses of the dosimeters at the different energies were obtained and used to evaluate the energy distributions of unknown ionizing radiation fields. In addition, the electron mass attenuation coefficient β used in Burlin and Burlin-Horowitz cavity theory was modified to produce better agreement with experimental data at low and at high photon energies.
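The two-dosimeter idea amounts to inverting a ratio-versus-energy calibration curve. The calibration points below are invented placeholders, not the fitted values from this work:

```python
import math

# Hypothetical calibration: (photon energy in keV, CaSO4/LiF response ratio).
# The ratio falls with increasing energy; these numbers are illustrative only.
cal = [(60, 9.5), (142, 4.2), (662, 1.3), (1250, 1.1)]

def estimate_energy(ratio):
    """Invert the calibration by log-log interpolation between points."""
    pts = sorted(cal, key=lambda p: p[1])          # sort by ratio, ascending
    for (e1, r1), (e2, r2) in zip(pts, pts[1:]):
        if r1 <= ratio <= r2:
            t = math.log(ratio / r1) / math.log(r2 / r1)
            return e1 * (e2 / e1) ** t             # interpolated energy (keV)
    raise ValueError("ratio outside calibration range")
```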
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements when identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. In the accompanying figures, contours of the expected information gain show that the optimal observing location corresponds to the maximum value, and the posterior marginal probability densities of the unknown parameters (true values marked by vertical lines) are concentrated more tightly for the designed location than for seven randomly chosen locations, confirming that the unknown parameters are estimated better with the designed location.
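The design criterion can be sketched on a toy 1-D problem: a discrete prior over source locations, a cheap analytic "transport model" standing in for the sparse-grid surrogate, and a nested Monte Carlo estimate of the expected relative entropy at each candidate well. All modelling choices below are illustrative assumptions:

```python
import math
import random

random.seed(1)

# Toy problem: unknown source location theta on a 1-D grid, uniform prior;
# a sample at well d measures exp(-|d - theta|) plus Gaussian noise.
# The analytic signal stands in for the transport-model surrogate.
thetas = list(range(10))
sigma = 0.1

def signal(d, th):
    return math.exp(-abs(d - th))

def expected_info_gain(d, n=1000):
    """Monte Carlo estimate of the expected relative entropy (prior -> posterior)."""
    prior = 1.0 / len(thetas)
    total = 0.0
    for _ in range(n):
        th_true = random.choice(thetas)
        y = signal(d, th_true) + random.gauss(0.0, sigma)
        liks = [math.exp(-(y - signal(d, th))**2 / (2 * sigma**2)) for th in thetas]
        z = sum(liks)
        total += sum((l / z) * math.log((l / z) / prior) for l in liks if l > 0)
    return total / n

# The optimal sampling well maximizes the expected information gain.
best = max(thetas, key=expected_info_gain)
```

Once `best` is fixed, the real workflow would run MCMC at that location to estimate the source parameters.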
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubbemd, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher-order statistics.
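The kurtosis criterion at the heart of such methods can be sketched for the linear part alone (the polynomial de-distortion stage is omitted). The sources, mixing matrix, and the fixed-point form of the kurtosis update below are illustrative choices, not the paper's exact gradient-descent scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical independent sources with nonzero kurtosis: uniform
# (sub-Gaussian) and Laplacian (super-Gaussian), mixed by an unknown matrix.
n = 50_000
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[0.9, 0.4], [0.3, 0.8]])
X = A @ S

# Whiten the mixtures, then drive a unit vector w to an extremum of the
# kurtosis of w @ Z (the higher-order-statistics criterion).
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc          # whitened data

w = np.array([1.0, 0.3])
w /= np.linalg.norm(w)
for _ in range(100):
    y = w @ Z
    w = (Z * y**3).mean(axis=1) - 3 * w        # stationarity condition of kurtosis
    w /= np.linalg.norm(w)

y = w @ Z   # recovered component, matching one source up to sign and scale
```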
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed optimization model, based on the almost-parameter-free harmony search algorithm, can give satisfactory estimates even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations are considered.
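A minimal harmony search can be sketched against a toy misfit function standing in for the transport simulation model. The "true" source parameters, well positions, and algorithm constants below are all invented, and, unlike the paper's almost-parameter-free variant, the parameters hmcr/par/bw here are simply set by hand:

```python
import random

random.seed(3)

# Toy source identification: hypothetical true source (location 40.0,
# strength 7.0) and a simple analytic response observed at three wells.
true_p = (40.0, 7.0)
wells = [10.0, 50.0, 90.0]

def response(p, xw):
    loc, q = p
    return q / (1.0 + abs(xw - loc))

obs = [response(true_p, w) for w in wells]

def misfit(p):
    return sum((response(p, w) - o) ** 2 for w, o in zip(wells, obs))

# Minimal harmony search: each new harmony is built by memory consideration,
# pitch adjustment, or random selection, and replaces the worst if better.
bounds = [(0.0, 100.0), (0.0, 20.0)]
hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(20)]
hmcr, par, bw = 0.9, 0.3, 0.5
for _ in range(5000):
    new = []
    for j, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:                      # memory consideration
            v = random.choice(hm)[j]
            if random.random() < par:                   # pitch adjustment
                v = min(hi, max(lo, v + random.uniform(-bw, bw)))
        else:                                           # random selection
            v = random.uniform(lo, hi)
        new.append(v)
    worst = max(hm, key=misfit)
    if misfit(new) < misfit(worst):
        hm[hm.index(worst)] = new

best = min(hm, key=misfit)   # close to the true (location, strength)
```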
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process at each sensor. The code is highly customizable and can be used efficiently for fast macro-analysis of data. It is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
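The non-negative de-mixing at the core of such a code can be sketched with the classic multiplicative-update matrix factorization (a stand-in for illustration, not this code's actual algorithm, which also estimates the number of sources and their delays):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sensor data: 5 sensors record non-negative mixtures of 2
# non-negative source signals, V = W_true @ H_true.
H_true = np.abs(rng.normal(size=(2, 200)))     # source signals
W_true = np.abs(rng.normal(size=(5, 2)))       # mixing weights
V = W_true @ H_true

# Lee-Seung multiplicative updates for non-negative matrix factorization:
# both factors stay non-negative at every step by construction.
k = 2
W = np.abs(rng.normal(size=(5, k))) + 0.1
H = np.abs(rng.normal(size=(k, 200))) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)   # relative reconstruction error
```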
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando
This paper deals with two-person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown to one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount-optimal pair of strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Englbrecht, F; Lindner, F; Bin, J
2016-06-15
Purpose: To measure and simulate well-defined electron spectra using a linear accelerator and a permanent-magnetic wide-angle spectrometer to test the performance of a novel reconstruction algorithm for retrieval of unknown electron sources, in view of application to diagnostics of laser-driven particle acceleration. Methods: Six electron energies (6, 9, 12, 15, 18 and 21 MeV, 40cm × 40cm field size) delivered by a Siemens Oncor linear accelerator were recorded using a permanent-magnetic wide-angle electron spectrometer (150mT) with a one-dimensional slit (0.2mm × 5cm). Two-dimensional maps representing beam energy and entrance position along the slit were measured using different scintillating screens, read by an online CMOS detector of high resolution (0.048mm × 0.048mm pixels) and large field of view (5cm × 10cm). Measured energy-slit position maps were compared to forward FLUKA simulations of electron transport through the spectrometer, starting from IAEA phase-spaces of the accelerator. The latter were validated against measured depth-dose and lateral profiles in water. Agreement of forward simulation and measurement was quantified in terms of position and shape of the signal distribution on the detector. Results: Measured depth-dose distributions and lateral profiles in the water phantom showed good agreement with forward simulations of IAEA phase-spaces, thus supporting usage of this simulation source in the study. Measured energy-slit position maps and those obtained by forward Monte-Carlo simulations showed satisfactory agreement in shape and position. Conclusion: Well-defined electron beams of known energy and shape will provide an ideal scenario to study the performance of a novel reconstruction algorithm using measured and simulated signal.
Future work will increase the stability and convergence of the reconstruction algorithm for unknown electron sources, towards final application to the electrons which drive the interaction of TW-class laser pulses with nanometer-thin target foils to accelerate protons and ions to multi-MeV kinetic energy. Cluster of Excellence of the German Research Foundation (DFG) "Munich-Centre for Advanced Photonics".
Astrophysical signatures of leptonium
NASA Astrophysics Data System (ADS)
Ellis, Simon C.; Bland-Hawthorn, Joss
2018-01-01
More than 10⁴³ positrons annihilate every second in the centre of our Galaxy yet, despite four decades of observations, their origin is still unknown. Many candidates have been proposed, such as supernovae and low mass X-ray binaries. However, these models are difficult to reconcile with the distribution of positrons, which are highly concentrated in the Galactic bulge, and therefore require specific propagation of the positrons through the interstellar medium. Alternative sources include dark matter decay, or the supermassive black hole, both of which would have a naturally high bulge-to-disc ratio. The chief difficulty in reconciling models with the observations is the intrinsically poor angular resolution of gamma-ray observations, which cannot resolve point sources. Essentially all of the positrons annihilate via the formation of positronium. This gives rise to the possibility of observing recombination lines of positronium emitted before the atom annihilates. These emission lines would be in the UV and the NIR, giving an increase in angular resolution of a factor of 10⁴ compared to gamma ray observations, and allowing the discrimination between point sources and truly diffuse emission. Analogously to the formation of positronium, it is possible to form atoms of true muonium and true tauonium. Since muons and tauons are intrinsically unstable, the formation of such leptonium atoms will be localised to their places of origin. Thus observations of true muonium or true tauonium can provide another way to distinguish between truly diffuse sources such as dark matter decay, and an unresolved distribution of point sources. Contribution to the Topical Issue "Low Energy Positron and Electron Interactions", edited by James Sullivan, Ron White, Michael Bromley, Ilya Fabrikant and David Cassidy.
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Sim, Won-Jin; Kim, Hee-Young; Choi, Sung-Deuk; Kwon, Jung-Hwan; Oh, Jeong-Eun
2013-03-15
We investigated 33 pharmaceuticals and personal care products (PPCPs), with emphasis on anthelmintics and their metabolites, in human sanitary waste treatment plants (HTPs), sewage treatment plants (STPs), hospital wastewater treatment plants (HWTPs), livestock wastewater treatment plants (LWTPs), river water and seawater. The PPCPs showed occurrence patterns characteristic of the wastewater source. The LWTPs and HTPs showed higher levels of anthelmintics (up to 3000 times higher in influents) than the other wastewater treatment plants, indicating that livestock wastewater and human sanitary waste are among the principal sources of anthelmintics. Among the anthelmintics, fenbendazole and its metabolites were relatively high in the LWTPs, while human anthelmintics such as albendazole and flubendazole were most dominant in the HTPs, STPs and HWTPs. The occurrence pattern of fenbendazole's metabolites in water differed from that reported in pharmacokinetic studies, suggesting transformation mechanisms other than metabolism in animal bodies, occurring by processes as yet unknown. The river water and seawater are generally affected by the point sources, but the distribution patterns in some receiving waters differ slightly from those of the effluent, indicating the influence of non-point sources. Copyright © 2013 Elsevier B.V. All rights reserved.
41. Upstream end of emergency spillway excavation. Photographer unknown, 1929. ...
41. Upstream end of emergency spillway excavation. Photographer unknown, 1929. Source: Arizona Department of Water Resources (ADWR). - Waddell Dam, On Agua Fria River, 35 miles northwest of Phoenix, Phoenix, Maricopa County, AZ
Modeling field-scale cosolvent flooding for DNAPL source zone remediation
NASA Astrophysics Data System (ADS)
Liang, Hailian; Falta, Ronald W.
2008-02-01
A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture, with up to 95% ethanol. The numerical model, UTCHEM accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. At the first level, a simple uncalibrated layered model is developed. This model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to get good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by characteristics of the spatial heterogeneity of porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.
On the use of variability time-scales as an early classifier of radio transients and variables
NASA Astrophysics Data System (ADS)
Pietka, M.; Staley, T. D.; Pretorius, M. L.; Fender, R. P.
2017-11-01
We have shown previously that a broad correlation between the peak radio luminosity and the variability time-scales, approximately L ∝ τ⁵, exists for variable synchrotron emitting sources and that different classes of astrophysical sources occupy different regions of luminosity and time-scale space. Based on those results, we investigate whether the most basic information available for a newly discovered radio variable or transient - their rise and/or decline rate - can be used to set initial constraints on the class of events from which they originate. We have analysed a sample of ≈800 synchrotron flares, selected from light curves of ≈90 sources observed at 5-8 GHz, representing a wide range of astrophysical phenomena, from flare stars to supermassive black holes. Selection of outbursts from the noisy radio light curves has been done automatically in order to ensure reproducibility of results. The distribution of rise/decline rates for the selected flares is modelled as a Gaussian probability distribution for each class of object, and further convolved with the estimated areal density of that class in order to correct for the strong bias in our sample. We show in this way that comparing the measured variability time-scale of a radio transient/variable of unknown origin can provide an early, albeit approximate, classification of the object, and could form part of a suite of measurements used to provide early categorization of such events. Finally, we also discuss the effect scintillating sources will have on our ability to classify events based on their variability time-scales.
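The classification step can be sketched as class-conditional Gaussians in log rise rate, weighted by assumed areal densities. Every number below is a placeholder, not a fitted value from the paper:

```python
import math

# Hypothetical per-class models: (mean, sd) of a Gaussian in log10(rise rate),
# plus an assumed areal density used as the prior weight. Invented numbers.
classes = {
    "flare star": (0.5, 0.4, 200.0),
    "XRB":        (-0.5, 0.5, 5.0),
    "AGN":        (-1.5, 0.6, 50.0),
}

def classify(log_rate):
    """Posterior class probabilities for a transient's measured rise rate."""
    post = {}
    for name, (mu, sd, dens) in classes.items():
        lik = math.exp(-(log_rate - mu)**2 / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))
        post[name] = dens * lik      # class model weighted by areal density
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}

probs = classify(0.4)   # a fast riser favours the flare-star class here
```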
A new statistical method for design and analyses of component tolerance
NASA Astrophysics Data System (ADS)
Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam
2017-03-01
Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet if the statistical distribution of the given variable is unknown, a new statistical method must be employed to design the tolerance. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances. We use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, a more extensive tolerance for each component can be utilized while maintaining the same target performance.
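A sketch of the tolerance-design step, assuming the generalized lambda distribution in the Ramberg-Schmeiser quantile form with illustrative parameters (in practice λ1..λ4 would be estimated from data, e.g. by the percentile method):

```python
# Generalized lambda distribution, RS quantile form:
#   Q(u) = l1 + (u**l3 - (1 - u)**l4) / l2
# Parameter values below are illustrative, not estimates from data.
l1, l2, l3, l4 = 10.0, 2.0, 0.15, 0.15

def Q(u):
    return l1 + (u**l3 - (1 - u)**l4) / l2

# Natural tolerance limits from the fitted distribution: the central 99.73%
# interval, the analogue of +/- 3 sigma for a normal distribution.
lower, upper = Q(0.00135), Q(0.99865)
```

Because the limits come from the fitted quantile function rather than a normal assumption, they remain valid for skewed or heavy-tailed component data.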
Mudalige, Thilak K; Qu, Haiou; Linder, Sean W
2015-11-13
Engineered nanoparticles are available in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system depend on particle size, so the determination of size and size distribution is essential for full characterization. Number-based average size and size distribution are major parameters in the full characterization of a nanoparticle; in the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field-flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention time was observed and used for the characterization of unknown samples. The particle size results from unknown samples were compared to results from traditional size analysis by transmission electron microscopy and found to deviate by less than 5% in size over the size range from 7 to 30 nm. Published by Elsevier B.V.
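The reported linear size-retention relationship amounts to a least-squares calibration. The standards below are invented pairs (diameter in nm, retention time in min), not measured values:

```python
# Hypothetical AF4 calibration standards: (diameter nm, retention time min).
standards = [(7.0, 9.1), (15.0, 12.6), (22.0, 15.8), (30.0, 19.4)]

# Ordinary least-squares fit of diameter against retention time.
n = len(standards)
sx = sum(t for _, t in standards)
sy = sum(d for d, _ in standards)
sxx = sum(t * t for _, t in standards)
sxy = sum(d * t for d, t in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def size_from_retention(t):
    """Predict particle diameter (nm) from measured retention time (min)."""
    return intercept + slope * t
```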
Hua, Changchun; Zhang, Liuliu; Guan, Xinping
2017-01-01
This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with unknown nonlinear dead-zone under a directed graph topology. Adaptive neural networks are used to approximate the unknown nonlinear functions, and a new inequality is used to deal with the completely unknown dead-zone input. We then design the controllers based on the backstepping method and the dynamic surface control technique. Using Lyapunov stability theory, it is strictly proved that the resulting closed-loop system is stable in probability in the sense of semiglobally uniform ultimate boundedness and that the tracking errors between the leader and the followers converge to a small residual set. Finally, two simulation examples are presented to show the effectiveness and the advantages of the proposed techniques.
Research of mine water source identification based on LIF technology
NASA Astrophysics Data System (ADS)
Zhou, Mengran; Yan, Pengcheng
2016-09-01
Traditional chemical methods for mine water source identification are time-consuming, so we propose a rapid identification system for mine water inrush sources based on laser-induced fluorescence (LIF). We analyze the basic principle of LIF technology, describe the hardware composition of the LIF system, and select the related modules. Fluorescence spectra were obtained from coal mine water samples in the LIF system. Traditional water source identification relies mainly on the concentrations of ions representative of the water, but these concentrations are difficult to recover from fluorescence spectra. This paper therefore proposes a simple and practical method for rapid identification from the fluorescence spectrum: measure the spectral distance between unknown water samples and standard samples, then assign the unknown sample to a category by cluster analysis. Source identification of unknown samples verified the reliability of the LIF system and addresses the current lack of real-time, online monitoring of water inrush in coal mines, which is of great significance for safe coal mine production.
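The proposed identification step, assigning an unknown spectrum to the nearest standard by spectral distance, can be sketched as follows; the three water classes and the Gaussian-shaped spectra are hypothetical stand-ins for measured LIF spectra.

```python
import numpy as np

def classify_spectrum(unknown, standards):
    """Assign an unknown fluorescence spectrum to the nearest standard class
    by Euclidean distance between intensity-normalized spectra."""
    u = unknown / np.linalg.norm(unknown)
    best_label, best_dist = None, np.inf
    for label, spectrum in standards.items():
        s = spectrum / np.linalg.norm(spectrum)
        dist = np.linalg.norm(u - s)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist

# Hypothetical standard spectra for three mine-water classes (Gaussian peaks
# over a 400-600 nm wavelength grid stand in for measured LIF spectra)
wl = np.linspace(400.0, 600.0, 101)
standards = {
    "sandstone water": np.exp(-((wl - 450.0) / 20.0) ** 2),
    "limestone water": np.exp(-((wl - 500.0) / 25.0) ** 2),
    "goaf water": np.exp(-((wl - 550.0) / 15.0) ** 2),
}

unknown = np.exp(-((wl - 498.0) / 24.0) ** 2)   # resembles the limestone class
label, dist = classify_spectrum(unknown, standards)
```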
Prostate cancer and industrial pollution: risk around a putative focus in a multi-source scenario.
Ramis, Rebeca; Diggle, Peter; Cambra, Koldo; López-Abente, Gonzalo
2011-04-01
Prostate cancer is the second most common type of cancer among men, but its aetiology is still largely unknown. Different studies have proposed several risk factors, such as ethnic origin, age, genetic factors, hormonal factors, diet and insulin-like growth factor, but the spatial distribution of the disease suggests that other environmental factors are involved. This paper studies the spatial distribution of prostate cancer mortality in an industrialized area, using distances from each of a number of industrial facilities as indirect measures of exposure to industrial pollution. We studied the Gran Bilbao area (Spain), with a population of 791,519 inhabitants distributed across 657 census tracts. There were 20 industrial facilities within the area, 8 of them on the central axis of the region. We analysed prostate cancer mortality during the period 1996-2003; there were 883 deaths, giving a crude rate of 14 per 100,000 inhabitants. We extended the standard Poisson regression model by including a multiplicative non-linear function to model the effect of distance from an industrial facility. The function's shape combined an elevated risk close to the source with a neutral effect at large distances. We also included socio-demographic covariates in the model to control for potential confounding, and we aggregated the industrial facilities by sector: metal, mineral, chemical and other activities. Results for the metal industries showed a significantly elevated risk, by a factor of approximately 1.4 in the immediate vicinity, decaying with distance to a value of 1.08 at 12 km. The remaining sectors did not show a statistically significant excess of risk at the source.
Notwithstanding the limitations of this kind of study, we found evidence of association between the spatial distribution of prostate cancer mortality aggregated by census tracts and proximity to metal industrial facilities located within the area, after adjusting for socio-demographic characteristics at municipality level. Copyright © 2010 Elsevier Ltd. All rights reserved.
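A minimal sketch of such an extended model follows: census-tract deaths are Poisson with mean equal to the expected count times a multiplicative distance-risk function RR(d) = 1 + α·exp(-d/β), fitted by maximum likelihood. All numbers (tract distances, expected counts, the true α and β) are simulated for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 657                               # census tracts, as in the study
d = rng.uniform(0.0, 15.0, n)         # tract-to-facility distance (km), simulated
expected = rng.uniform(0.5, 3.0, n)   # expected deaths from reference rates, simulated

def rel_risk(d, alpha, beta):
    """Multiplicative risk: elevated near the source, tending to 1 far away."""
    return 1.0 + alpha * np.exp(-d / beta)

# Simulated counts generated with a 'true' near-source excess
y = rng.poisson(expected * rel_risk(d, 0.4, 4.0))

def neg_loglik(theta):
    alpha, beta = theta
    mu = expected * rel_risk(d, alpha, beta)
    return -(y * np.log(mu) - mu).sum()   # Poisson log-likelihood (constant dropped)

fit = minimize(neg_loglik, x0=[0.1, 2.0], bounds=[(0.0, 5.0), (0.5, 20.0)])
alpha_hat, beta_hat = fit.x
```

The fitted RR(0) = 1 + α̂ plays the role of the near-source risk factor (≈1.4 in the study), decaying toward 1 with distance.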
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high-resolution imaging deep into tissues. Unfortunately, deep-tissue multi-photon microscopy images are in general noisy, since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least-squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, that the true but unknown pixel values follow a probability mass function over a finite set of non-negative values, and that, because the observed data also take finitely many values owing to the low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
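A simplified version of the per-pixel MAP step (assuming the prior pmf is already known, rather than estimated by constrained least squares as in the paper) can be sketched as:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Hypothetical finite set of admissible true pixel values and their pmf
values = np.array([2, 5, 9, 14])
pmf = np.array([0.4, 0.3, 0.2, 0.1])

# Simulated low-count acquisition: true pixels corrupted by Poisson noise
true_px = rng.choice(values, size=5000, p=pmf)
observed = rng.poisson(true_px)

def map_estimate(k, values, pmf):
    """MAP pixel value: argmax over the finite set of P(k | x) * p(x),
    with Poisson likelihood P(k | x)."""
    posterior = poisson.pmf(k, values) * pmf
    return values[np.argmax(posterior)]

denoised = np.array([map_estimate(k, values, pmf) for k in observed])
```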
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world, and its contamination has become a serious health and environmental problem. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of a groundwater pollution source is a major step in groundwater pollution remediation: complete knowledge of the source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is identified completely when its characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem, and it becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source became active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known, so an ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find it: different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data.
Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and can predict the source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the optimization model's decision variables by one, and hence its complexity. The results show that the proposed linked ANN-Optimization model predicts the source parameters accurately for error-free data. The model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors; the mean values predicted by the model were quite close to the exact values, while the standard deviation of the predicted values increased with the level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
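The optimization half of such an approach can be illustrated with a toy one-dimensional analytic transport model: simulated well concentrations are matched to observations by minimizing a least-squares objective over the source location and mass. The transport solution, parameter values, and noise level are all assumptions made for this sketch, not the authors' setup (which also infers the release period and lag time via the ANN).

```python
import numpy as np
from scipy.optimize import minimize

D, v = 1.0, 0.5   # assumed known dispersion (m^2/d) and seepage velocity (m/d)

def concentration(x, t, x0, mass):
    """1-D instantaneous point-source advection-dispersion solution."""
    return mass / np.sqrt(4.0 * np.pi * D * t) * np.exp(
        -(x - x0 - v * t) ** 2 / (4.0 * D * t))

# Synthetic breakthrough data at a well 20 m downstream; true source:
# location x0 = 0, mass = 10 (arbitrary units), with 2% multiplicative noise
times = np.arange(20.0, 60.0, 2.0)
rng = np.random.default_rng(3)
obs = concentration(20.0, times, 0.0, 10.0) * (
    1.0 + 0.02 * rng.standard_normal(times.size))

def objective(theta):
    x0, mass = theta
    return ((concentration(20.0, times, x0, mass) - obs) ** 2).sum()

fit = minimize(objective, x0=[5.0, 5.0], method="Nelder-Mead")
x0_hat, mass_hat = fit.x
```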
A multiwave range test for obstacle reconstructions with unknown physical properties
NASA Astrophysics Data System (ADS)
Potthast, Roland; Schulz, Jochen
2007-08-01
We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A `range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533-547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We will further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhauser, Basel, 1986, pp. 93-102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Gottingen, 1999]. In particular, we propose a new version of the Kirsch-Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Xiao-Chuan; Liu, Ruo-Yu; Wang, Xiang-Yu, E-mail: xywang@nju.edu.cn
The nearly isotropic distribution of teraelectronvolt to petaelectronvolt neutrinos recently detected by the IceCube Collaboration suggests that they come from sources at a distance beyond our Galaxy, but how far away they are is largely unknown because of a lack of any associations with known sources. In this paper, we propose that the cumulative TeV gamma-ray emission accompanying the production of neutrinos can be used to constrain the distance of these neutrino sources, since the opacity of TeV gamma rays due to absorption by the extragalactic background light depends on the distance these TeV gamma rays have traveled. As the diffuse extragalactic TeV background measured by Fermi is much weaker than the expected cumulative flux associated with IceCube neutrinos, the majority of IceCube neutrinos, if their sources are transparent to TeV gamma rays, must come from distances larger than the horizon of TeV gamma rays. We find that above 80% of the IceCube neutrinos should come from sources at redshift z > 0.5. Thus, the chance of finding nearby sources correlated with IceCube neutrinos would be small. We also find that, to explain the flux of neutrinos under the TeV gamma-ray emission constraint, the redshift evolution of neutrino source density must be at least as fast as the cosmic star formation rate.
Åström, Johan; Pettersson, Thomas J R; Reischer, Georg H; Norberg, Tommy; Hermansson, Malte
2015-02-03
Several assays for the detection of host-specific genetic markers of the order Bacteroidales have been developed and used for microbial source tracking (MST) in environmental waters. It is recognized that the source-sensitivity and source-specificity are unknown and variable when introducing these assays in new geographic regions, which reduces their reliability and use. A Bayesian approach was developed to incorporate expert judgments with regional assay sensitivity and specificity assessments in a utility evaluation of a human and a ruminant-specific qPCR assay for MST in a drinking water source. Water samples from Lake Rådasjön were analyzed for E. coli, intestinal enterococci and somatic coliphages through cultivation and for human (BacH) and ruminant-specific (BacR) markers through qPCR assays. Expert judgments were collected regarding the probability of human and ruminant fecal contamination based on fecal indicator organism data and subjective information. Using Bayes formula, the conditional probability of a true human or ruminant fecal contamination given the presence of BacH or BacR was determined stochastically from expert judgments and regional qPCR assay performance, using Beta distributions to represent uncertainties. A web-based computational tool was developed for the procedure, which provides a measure of confidence to findings of host-specific markers and demonstrates the information value from these assays.
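The core Bayes-formula update, with Beta distributions expressing uncertainty in the expert prior and in the regional assay sensitivity and specificity, can be sketched by Monte Carlo; the Beta parameters below are illustrative, not the Lake Rådasjön values.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000   # Monte Carlo draws

# Expert-judged prior probability of true human fecal contamination (illustrative)
prior = rng.beta(4, 6, N)      # centred near 0.4

# Regional qPCR assay performance with Beta-distributed uncertainty (illustrative)
sens = rng.beta(18, 2, N)      # source sensitivity, ~0.90
spec = rng.beta(19, 1, N)      # source specificity, ~0.95

# Bayes' formula: P(true contamination | marker detected), draw by draw
posterior = sens * prior / (sens * prior + (1.0 - spec) * (1.0 - prior))

mean_post = posterior.mean()
ci = np.percentile(posterior, [2.5, 97.5])
```

Propagating the Beta draws through Bayes' formula yields a full posterior distribution, so the confidence attached to a marker detection is reported as an interval rather than a point value.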
Accelerating fissile material detection with a neutron source
Rowland, Mark S.; Snyderman, Neal J.
2018-01-30
A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly to count neutrons from the unknown source and detect excess grouped neutrons to identify fission in the unknown source. The system includes a Poisson neutron generator for in-beam interrogation of a possible fissile neutron source and a DC power supply that exhibits electrical ripple on the order of less than one part per million. Certain voltage multiplier circuits, such as Cockcroft-Walton voltage multipliers, are used to enhance the effectiveness of series resistor-inductor circuit components to reduce the ripple associated with traditional AC-rectified, high-voltage DC power supplies.
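The underlying discrimination principle, an excess of grouped neutrons relative to a Poisson source, is often summarized by the Feynman variance-to-mean statistic. A sketch follows, with a compound-Poisson stand-in for fission-chain multiplicity rather than a physics-grade model:

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y statistic: excess of variance over mean in fixed-gate counts.
    Y ~ 0 for a Poisson (uncorrelated) source; Y > 0 when neutrons arrive in
    correlated groups, as produced by fission chains."""
    c = np.asarray(counts, dtype=float)
    return c.var() / c.mean() - 1.0

rng = np.random.default_rng(5)

# Non-fissile background: independent arrivals give Poisson gate counts
background = rng.poisson(4.0, 100_000)

# Fissile-like source: a compound-Poisson stand-in for fission multiplicity --
# 'chains' fission chains per gate, each contributing Poisson(4) detections
chains = rng.poisson(1.0, 100_000)
fissile = rng.poisson(4.0 * chains)
```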
Branched GDGT distributions in lakes from Mexico and Central America
NASA Astrophysics Data System (ADS)
Lei, A.; Werne, J. P.; Correa-Metrio, A.; Pérez, L.; Caballero, M.
2017-12-01
The potential to use bacterially derived branched glycerol dialkyl glycerol tetraethers (brGDGTs) to reconstruct mean annual air temperatures from soils sparked significant interest in the terrestrial paleoclimate community, where a high-fidelity paleotemperature proxy is desperately needed. While the source of brGDGTs remains unknown (though potentially attributable to the highly diverse phylum Acidobacteria), much evidence points to the potential for these bacteria to live not only in the terrestrial environment but also in lake water and sediments. Though the application of brGDGTs to lacustrine reconstructions is promising, initial applications of the soil-based MBT/CBT proxy to lacustrine sediments typically yielded temperatures lower than is reasonable, likely due to contributions from lacustrine bacterial brGDGTs. Here, we present data from a suite of >100 lakes in Mexico and Central America, producing a regional core-top calibration different from those developed in other regions. Results indicate a significant role for regional differences in controlling brGDGT distributions, likely due to different brGDGT-producing microbial communities thriving under varying environmental conditions. Rigorous development of brGDGT-based proxies will improve our understanding of the source and applicability of these biomarkers and increase confidence in the accuracy of paleotemperature reconstructions for the numerous lacustrine records in the region.
A systematic review of waterborne disease burden methodologies from developed countries.
Murphy, H M; Pintar, K D M; McBean, E A; Thomas, M K
2014-12-01
The true incidence of endemic acute gastrointestinal illness (AGI) attributable to drinking water in Canada is unknown. Using a systematic review framework, we evaluated the literature to identify methods used to attribute AGI to drinking water. Several strategies have been suggested or applied to quantify AGI attributable to drinking water at a national level, varying from simple point estimates, to quantitative microbial risk assessment, to Monte Carlo simulations that rely on assumptions and epidemiological data from the literature. Using two methods proposed by researchers in the USA, this paper compares the current approaches and key assumptions. Knowledge gaps are identified to inform future waterborne disease attribution estimates. To improve future estimates, robust epidemiological studies are needed that quantify the health risks associated with small private water systems and groundwater systems and the influence of distribution system intrusions on risk. Quantification of the occurrence of enteric pathogens in water supplies, particularly groundwater, is also needed. In addition, there are unanswered questions regarding the susceptibility of vulnerable sub-populations to these pathogens and the influence of extreme weather events (precipitation) on AGI-related health risks. National centralized data are needed to quantify the proportions of the population served by different water sources, by treatment level, source water quality, and condition of the distribution system infrastructure.
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of its type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., for plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
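The CV(e) dependence described here is easy to reproduce numerically: a log-normal with a small coefficient of variation is nearly symmetric, while a large one is visibly right-skewed. A sketch with illustrative sodium- and AST-like values:

```python
import numpy as np

def lognormal_from_cv(mean, cv, size, rng):
    """Draw a log-normal sample with a given mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size)

def skewness(x):
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

rng = np.random.default_rng(6)

# Small CV(e), sodium-like (~1%): log-normal nearly indistinguishable from Gaussian
sodium = lognormal_from_cv(140.0, 0.01, 200_000, rng)

# Large CV(e), AST-like (~30%): clearly right-skewed
ast = lognormal_from_cv(25.0, 0.30, 200_000, rng)
```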
Global biogeography of human infectious diseases.
Murray, Kris A; Preston, Nicholas; Allen, Toph; Zambrana-Torrelio, Carlos; Hosseini, Parviez R; Daszak, Peter
2015-10-13
The distributions of most infectious agents causing disease in humans are poorly resolved or unknown. However, poorly known and unknown agents contribute to the global burden of disease and will underlie many future disease risks. Existing patterns of infectious disease co-occurrence could thus play a critical role in resolving or anticipating current and future disease threats. We analyzed the global occurrence patterns of 187 human infectious diseases across 225 countries and seven epidemiological classes (human-specific, zoonotic, vector-borne, non-vector-borne, bacterial, viral, and parasitic) to show that human infectious diseases exhibit distinct spatial grouping patterns at a global scale. We demonstrate, using outbreaks of Ebola virus as a test case, that this spatial structuring provides an untapped source of prior information that could be used to tighten the focus of a range of health-related research and management activities at early stages or in data-poor settings, including disease surveillance, outbreak responses, or optimizing pathogen discovery. In examining the correlates of these spatial patterns, among a range of geographic, epidemiological, environmental, and social factors, mammalian biodiversity was the strongest predictor of infectious disease co-occurrence overall and for six of the seven disease classes examined, giving rise to a striking congruence between global pathogeographic and "Wallacean" zoogeographic patterns. This clear biogeographic signal suggests that infectious disease assemblages remain fundamentally constrained in their distributions by ecological barriers to dispersal or establishment, despite the homogenizing forces of globalization. Pathogeography thus provides an overarching context in which other factors promoting infectious disease emergence and spread are set.
How to Decide? Multi-Objective Early-Warning Monitoring Networks for Water Suppliers
NASA Astrophysics Data System (ADS)
Bode, Felix; Loschko, Matthias; Nowak, Wolfgang
2015-04-01
Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources, which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is impossible for unknown risk sources, but smart optimization concepts can help to find promising low-cost monitoring network designs. In this work we develop a concept for planning monitoring networks using multi-objective optimization. Our objectives are to maximize the probability of detecting all contaminations, to maximize the early-warning time before detected contaminations reach the drinking water well, and to minimize the installation and operating costs of the monitoring network. Using multi-objective optimization, we avoid having to weight these objectives into a single objective function. The objectives clearly compete, and it is impossible to know their mutual trade-offs beforehand: each catchment differs in many respects, and it is hardly possible to transfer knowledge between geological formations and risk inventories. To make our optimization results more specific to the type of risk inventory in different catchments, we prioritize all known risk sources. Because the required data are lacking, quantitative risk ranking is impossible; instead, we use a qualitative risk ranking to prioritize the known risk sources for monitoring. Additionally, we allow for the existence of unknown risk sources that are totally uncertain in location and in their inherent risk. Since these can neither be located nor ranked, we represent them by a virtual line of risk sources surrounding the production well.
We classify risk sources into four categories: severe, medium and tolerable for known risk sources, plus an extra category for the unknown ones. Early-warning time and detection probability thus become individual objectives for each risk class. Decision makers can then identify monitoring networks valid for controlling the top risk sources and evaluate the capabilities (or search for least-cost upgrades) to also cover medium, tolerable and unknown risk sources. Monitoring networks that are valid for the remaining risk also cover all other risk sources, but only with a relatively poor early-warning time. The data provided to the optimization algorithm are calculated in a preprocessing step by a flow and transport model, which simulates which potential contaminant plumes from the risk sources would be detectable where and when by all possible candidate positions for monitoring wells. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations, which include uncertainty in the ambient flow direction of the groundwater, uncertainty in the conductivity field, and different scenarios for the pumping rates of the production wells. To avoid numerical dispersion during the transport simulations, we use particle-tracking random walk methods.
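The multi-objective selection step can be sketched as a Pareto (non-dominated) filter over candidate network designs scored on the three objectives; the candidate designs below are randomly generated placeholders.

```python
import random

def pareto_front(designs):
    """Non-dominated monitoring-network designs under three objectives:
    maximize detection probability, maximize early-warning time, minimize cost."""
    def dominates(a, b):
        # a dominates b: at least as good in every objective, not identical
        return (a["detect"] >= b["detect"] and a["warn"] >= b["warn"]
                and a["cost"] <= b["cost"] and a != b)
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

random.seed(7)
# Placeholder candidate designs with random objective scores
designs = [{"detect": round(random.random(), 3),            # detection probability
            "warn": round(random.uniform(0.0, 30.0), 1),    # early-warning time, days
            "cost": round(random.uniform(10.0, 100.0), 1)}  # cost, arbitrary units
           for _ in range(200)]

front = pareto_front(designs)
```

The decision maker then chooses from the front according to the catchment's risk priorities, rather than from a single weighted score.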
Challenges/issues of NIS used in particle accelerator facilities
NASA Astrophysics Data System (ADS)
Faircloth, Dan
2013-09-01
High-current, high-duty-cycle negative ion sources are an essential component of many high-power particle accelerators. This talk gives an overview of the state-of-the-art sources used around the world. Volume, surface and charge-exchange negative ion production processes are detailed. Cesiated magnetron and Penning surface plasma sources are discussed along with surface converter sources. Multicusp volume sources with filament and LaB6 cathodes are described before moving on to RF inductively coupled volume sources with internal and external antennas. The major challenges facing accelerator facilities are detailed; beam current, source lifetime and reliability are the most pressing. The pros and cons of each source technology are discussed along with their development programs, as are the uncertainties and unknowns common to these sources. The dynamics of cesium surface coverage and the causes of source variability are still unknown. Minimizing beam emittance is essential to maximizing the transport of high-current beams, and space-charge effects are very important. The basic physics of negative ion production is still not well understood; theoretical and experimental programs continue to improve this, but there are still many mysteries to be solved.
Farris, Zach J; Morelli, Toni Lyn; Sefczek, Timothy; Wright, Patricia C
2011-01-01
The aye-aye is considered the most widely distributed lemur in Madagascar; however, the effect of forest quality on aye-aye abundance is unknown. We compared aye-aye presence across degraded and non-degraded forest at Ranomafana National Park, Madagascar. We used secondary signs (feeding sites, high activity sites) as indirect cues of aye-aye presence and Canarium trees as an indicator of resource availability. All 3 measured variables indicated higher aye-aye abundance within non-degraded forest; however, the differences across forest type were not significant. Both degraded and non-degraded forests showed a positive correlation between feeding sites and high activity sites. We found that Canarium, an important aye-aye food source, was rare and had limited dispersal, particularly across degraded forest. This preliminary study provides baseline data for aye-aye activity and resource utilization across degraded and non-degraded forests. Copyright © 2011 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Zhao, S.
2014-12-01
Levels of microplastics (MPs) in China are completely unknown. Here, suspended MPs were characterized quantitatively and qualitatively for the Yangtze Estuary and East China Sea. MPs were extracted via a flotation method, then counted and categorized according to shape and size under a dissecting microscope. The MP densities were 4137.3±2461.5 n/m³ in the estuarine waters and 0.167±0.138 n/m³ in the sea waters, and plastic abundances varied strongly within the estuary. The higher density along the C transect corroborated that rivers are important sources of MPs to the marine environment. MPs of 0.5-5 mm constituted more than 90% of the total plastics; particles >5 mm were observed up to a maximum size of 12.46 mm. The most frequent plastics were fibres, followed by granules and films; plastic spherules occurred only sparsely. Transparent and coloured plastics comprised the majority of particle colours. This study provides clues for understanding MP fate and potential sources.
Li, Tao; Sun, Guihua; Ma, Shengzhong; Liang, Kai; Yang, Chupeng; Li, Bo; Luo, Weidong
2016-11-15
The concentration, spatial distribution, composition and sources of polycyclic aromatic hydrocarbons (PAHs) were investigated based on measurements of 16 PAH compounds in surface sediments of the western Taiwan Strait. Total PAH concentrations ranged from 2.41 to 218.54 ng g⁻¹. Cluster analysis identified three site clusters, representing the northern, central and southern regions. Sedimentary PAHs mainly originated from a mixture of pyrolytic and petrogenic sources in the north, from pyrolytic sources in the central region, and from petrogenic sources in the south. An end-member mixing model was applied to the PAH compound data to estimate mixing proportions for unknown end-members (i.e., extreme-value sample points) proposed by principal component analysis (PCA). The results showed that the analyzed samples can be expressed as mixtures of three end-members, and that the mixing of the end-members was strongly related to the transport pathway controlled by two currents that alternately prevail in the Taiwan Strait during different seasons. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Singh, Randhir; Das, Nilima; Kumar, Jitendra
2017-06-01
An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. In order to avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the derivation of the unknown constant, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are found that converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: (i) the nonlinear Poisson-Boltzmann equation, (ii) the distribution of heat sources in the human head, and (iii) a second-kind Lane-Emden equation.
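As a numerical cross-check on such series solutions (not an implementation of the VIM scheme itself), the classical index-n Lane-Emden initial value problem can be integrated directly, starting from its series expansion near the singular origin; for n = 1 the exact solution sin(x)/x is available for comparison.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(n, x_end=5.0):
    """Integrate y'' + (2/x) y' + y^n = 0, y(0) = 1, y'(0) = 0.
    The singular origin is handled by starting from the series
    y = 1 - x^2/6 + ..., y' = -x/3 + ... at a small x0."""
    def rhs(x, s):
        y, dy = s
        # sign/abs guards y < 0 for non-integer n
        return [dy, -2.0 * dy / x - np.sign(y) * np.abs(y) ** n]
    x0 = 1e-6
    y0 = [1.0 - x0**2 / 6.0, -x0 / 3.0]
    sol = solve_ivp(rhs, [x0, x_end], y0, rtol=1e-9, atol=1e-12,
                    dense_output=True)
    return sol.sol

# For n = 1 the exact solution is sin(x)/x
y = lane_emden(1)
numeric = y(2.0)[0]
exact = np.sin(2.0) / 2.0
```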
NASA Technical Reports Server (NTRS)
Cook, R. K.
1969-01-01
The propagation of sound waves at infrasonic frequencies (oscillation periods of 1.0-1000 seconds) in the atmosphere is being studied by a network of seven stations separated geographically by distances of the order of thousands of kilometers. The stations measure the following characteristics of infrasonic waves: (1) the amplitude and waveform of the incident sound pressure, (2) the direction of propagation of the wave, (3) the horizontal phase velocity, and (4) the distribution of sound wave energy at various frequencies of oscillation. Infrasonic sources identified and studied include the aurora borealis, tornadoes, volcanoes, gravity waves on the oceans, earthquakes, and atmospheric instability waves caused by winds at the tropopause. Waves of unknown origin seem to radiate from several geographical locations, including one in Argentina.
Maslehaty, Homajoun; Petridis, Athanassios K; Barth, Harald; Doukas, Alexandros; Mehdorn, Hubertus Maximilian
2011-01-01
Spontaneous subarachnoid hemorrhage (SAH) without evidence of a bleeding source on the first digital subtraction angiogram (DSA) - also called SAH of unknown origin - is observed in up to 27% of all cases. Depending on the bleeding pattern on CT scanning, SAH can be differentiated into perimesencephalic (PM-SAH) and non-perimesencephalic SAH (NON-PM-SAH). The aim of our study was to investigate the effectiveness of magnetic resonance imaging (MRI) for detecting a bleeding source in SAH of unknown origin. We retrospectively reviewed 1,226 patients with spontaneous SAH treated between January 1991 and December 2008 in our department. DSA was performed in 1,068 patients, with negative results in 179 patients. Forty-seven patients were categorized as having PM-SAH and 132 patients as having NON-PM-SAH. MRI of the brain and the craniocervical region was performed within 72 h after diagnosis of SAH and demonstrated no bleeding sources in any of the PM-SAH and NON-PM-SAH patients (100% negative). In our experience, MRI did not provide any additional benefit for detecting a bleeding source after SAH with a negative angiogram, and the costs of the examination exceeded its clinical value. Despite our results, MRI should be considered on a case-by-case basis, because rare bleeding sources are periodically diagnosed in cases of NON-PM-SAH.
Choi, Yun Ho; Yoo, Sung Jin
2017-03-28
A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.
Disturbance Source Separation in Shear Flows Using Blind Source Separation Methods
NASA Astrophysics Data System (ADS)
Gluzman, Igal; Cohen, Jacob; Oshman, Yaakov
2017-11-01
A novel approach is presented for identifying disturbance sources in wall-bounded shear flows. The method can prove useful for active control of boundary layer transition from laminar to turbulent flow. The underlying idea is to consider the flow state, as measured in sensors, to be a mixture of sources, and to use Blind Source Separation (BSS) techniques to recover the separate sources and their unknown mixing process. We present a BSS method based on the Degenerate Unmixing Estimation Technique. This method can be used to identify any (a priori unknown) number of sources by using the data acquired by only two sensors. The power of the new method is demonstrated via numerical and experimental proofs of concept. Wind tunnel experiments involving boundary layer flow over a flat plate were carried out, in which two hot-wire anemometers were used to separate disturbances generated by disturbance generators such as a single dielectric barrier discharge plasma actuator and a loudspeaker.
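The two-sensor separation idea above can be illustrated with a heavily simplified, amplitude-only stand-in for DUET (the actual technique clusters amplitude-ratio and delay estimates over short-time Fourier transform bins; here the sources, gains, and the single-FFT shortcut are all invented for the demo). When the sources occupy disjoint frequency bins, the per-bin ratio of the two sensor spectra reveals each source's unknown mixing gain.

```python
import numpy as np

t = np.arange(2048) / 2048.0
s1 = np.sin(2 * np.pi * 50 * t)           # source 1: 50 Hz tone
s2 = np.sin(2 * np.pi * 120 * t)          # source 2: 120 Hz tone
a1, a2 = 0.3, 2.0                         # "unknown" mixing gains (ground truth)
x1 = s1 + s2                              # sensor 1 measurement
x2 = a1 * s1 + a2 * s2                    # sensor 2 measurement

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
mask = np.abs(X1) > 0.05 * np.abs(X1).max()    # keep energetic bins only
ratios = np.abs(X2[mask]) / np.abs(X1[mask])   # per-bin amplitude ratio

# Two-cluster split of the ratio values: a crude stand-in for DUET's
# 2-D amplitude/delay histogram clustering.
thr = ratios.mean()
est = sorted([ratios[ratios <= thr].mean(), ratios[ratios > thr].mean()])
print(est)  # recovers the mixing gains, ≈ [0.3, 2.0]
```

Real flow disturbances overlap in frequency, which is why the published method works in the time-frequency plane and uses delay as a second clustering coordinate.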
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). The Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
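The posterior-update idea can be sketched in a toy setting: a single scalar parameter (standing in for, say, a correlation length) gets a prior, and repeated noisy measurements of an observable that depends on it update the prior to a posterior evaluated on a grid. The observable model g(θ) = θ², the prior, and all numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = 1.5
# 50 noisy measurements of an observable g(theta) = theta^2 (toy model)
obs = theta_true ** 2 + 0.1 * rng.standard_normal(50)

grid = np.linspace(0.1, 3.0, 600)
log_prior = -0.5 * ((grid - 1.0) / 1.0) ** 2     # N(1, 1) prior on theta
log_like = np.array([-0.5 * np.sum(((obs - g * g) / 0.1) ** 2) for g in grid])
log_post = log_prior + log_like

post = np.exp(log_post - log_post.max())          # unnormalized posterior
post /= post.sum() * (grid[1] - grid[0])          # normalize on the grid

theta_map = grid[np.argmax(post)]
print(theta_map)  # concentrates near the true value 1.5
```

In the paper's setting the forward map is a stochastic transport solve rather than a closed-form g(θ), so each likelihood evaluation is itself a simulation; the update logic is the same.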
Multiscale modeling of lithium ion batteries: thermal aspects
Zausch, Jochen
2015-01-01
Summary The thermal behavior of lithium ion batteries has a huge impact on their lifetime and the initiation of degradation processes. The development of hot spots or large local overpotentials leading, e.g., to lithium metal deposition depends on material properties as well as on the nano- and microstructure of the electrodes. In recent years a theoretical structure has emerged that opens the possibility of establishing a systematic modeling strategy from the atomistic to the continuum scale, capturing and coupling the relevant phenomena on each scale. We outline the building blocks for such a systematic approach and discuss in detail a rigorous approach for the continuum scale based on rational thermodynamics and homogenization theories. Our focus is on the development of a systematic, thermodynamically consistent theory for thermal phenomena in batteries at the microstructure scale and at the cell scale. We discuss the importance of carefully defining the continuum fields for being able to compare seemingly different phenomenological theories and for obtaining rules to determine unknown parameters of the theory from experiments or lower-scale theories. The resulting continuum models for the microscopic and the cell scale are numerically solved in full 3D resolution. The complex, highly localized distributions of heat sources in the microstructure of a battery and the problems of mapping these localized sources onto an averaged porous electrode model are discussed by comparing detailed 3D microstructure-resolved simulations of the heat distribution with the results of the upscaled porous electrode model. It is shown that not all heat sources that exist on the microstructure scale are represented in the averaged theory, due to subtle cancellation effects between interface and bulk heat sources. Nevertheless, we find that in special cases the averaged thermal behavior can be captured very well by porous electrode theory. PMID:25977870
Fully probabilistic earthquake source inversion on teleseismic scales
NASA Astrophysics Data System (ADS)
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. 
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
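The decorrelation misfit used above, D = 1 - CC with CC the maximum of the normalized cross-correlation between observed and modelled waveforms, can be sketched directly. The synthetic pulse, shift, and noise level below are invented for illustration.

```python
import numpy as np

def decorrelation(obs, syn):
    """D = 1 - max normalized cross-correlation between two waveforms."""
    obs = (obs - obs.mean()) / obs.std()
    syn = (syn - syn.mean()) / syn.std()
    cc = np.correlate(obs, syn, mode="full") / len(obs)
    return 1.0 - cc.max()

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 500)
syn = np.exp(-((t - 0.5) / 0.05) ** 2)                        # modelled pulse
obs = np.roll(syn, 7) + 0.05 * rng.standard_normal(t.size)    # shifted + noisy "observation"

D = decorrelation(obs, syn)
print(D)  # small (well below 1): the waveforms match up to shift and noise
```

Because the correlation is maximized over lag, D is insensitive to a pure time shift, which is one reason it behaves more robustly than sample-by-sample ℓp misfits when the physical model mispredicts arrival times.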
A Bayesian framework for infrasound location
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.
2010-04-01
We develop a framework for locating infrasound events using back-azimuths and infrasonic arrival times from multiple arrays. The Bayesian infrasonic source location (BISL) method developed here estimates event location and associated credibility regions. BISL accounts for the unknown source-to-array path or phase by treating the infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on the path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back-azimuths and arrival times for estimating well-constrained event locations. BISL is an extension of methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
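A stripped-down version of this idea is a grid search in which each candidate location is scored by Gaussian likelihoods on back-azimuth residuals and on traveltime residuals, with a fixed group velocity standing in for the random-velocity treatment. The array geometry, source position, and error scales below are made up for the demo.

```python
import numpy as np

arrays = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])  # array positions, km
src = np.array([60.0, 30.0])                                  # true source, km
v_group = 0.30                                                # assumed group velocity, km/s

d = np.linalg.norm(arrays - src, axis=1)
baz_obs = np.arctan2(src[1] - arrays[:, 1], src[0] - arrays[:, 0])  # "observed" back-azimuths
t_obs = d / v_group                                                  # "observed" traveltimes

xs = np.linspace(0, 120, 121)
ys = np.linspace(0, 120, 121)
best, best_pt = -np.inf, None
for x in xs:
    for y in ys:
        dd = np.linalg.norm(arrays - [x, y], axis=1)
        baz = np.arctan2(y - arrays[:, 1], x - arrays[:, 0])
        dbaz = np.angle(np.exp(1j * (baz - baz_obs)))             # wrapped residual
        ll = -0.5 * np.sum((dbaz / 0.05) ** 2)                    # 0.05 rad back-azimuth error
        ll += -0.5 * np.sum(((dd / v_group - t_obs) / 5.0) ** 2)  # 5 s traveltime error
        if ll > best:
            best, best_pt = ll, (x, y)
print(best_pt)  # the grid point at the true source location
```

In BISL proper the grid of log-likelihoods is normalized into a posterior, from which credibility regions are drawn rather than a single best point.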
Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency
Abu Bakr, Muhammad; Lee, Sukhan
2017-01-01
The paradigm of multisensor data fusion has evolved from a centralized architecture to a decentralized or distributed architecture along with advances in sensor and communication technologies. Distributed state estimation and data fusion have been widely explored in diverse fields of engineering and control due to their superior performance over centralized approaches in terms of flexibility, robustness to failure, and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges to overcome: namely, dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date, with a specific focus on handling unknown correlation and data inconsistency. We aim to provide readers with a unifying view of individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions for future research are highlighted. PMID:29077035
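One standard tool for the unknown-correlation problem surveyed above is covariance intersection (CI): it fuses two estimates without knowing their cross-covariance, at the cost of a deliberately conservative fused covariance. The two estimates below are invented; the fusion rule is the textbook one, P⁻¹ = ωP₁⁻¹ + (1-ω)P₂⁻¹ with ω chosen to minimize, e.g., the trace of P.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse (x1, P1) and (x2, P2) with CI weight w in (0, 1)."""
    P1inv, P2inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1inv + (1 - w) * P2inv)
    x = P @ (w * P1inv @ x1 + (1 - w) * P2inv @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])   # estimate A (good in dim 0)
x2, P2 = np.array([1.2, 0.3]), np.diag([4.0, 1.0])   # estimate B (good in dim 1)

# Pick w by a coarse scan minimizing the trace of the fused covariance.
ws = np.linspace(0.01, 0.99, 99)
w_best = min(ws, key=lambda w: np.trace(covariance_intersection(x1, P1, x2, P2, w)[1]))
x, P = covariance_intersection(x1, P1, x2, P2, w_best)
print(w_best, np.trace(P))  # symmetric setup, so w ≈ 0.5
```

Unlike the naive Kalman-style fusion, CI never claims more accuracy than either input justifies, which is exactly the consistency property needed when cross-correlations are unknown.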
Gouagna, Louis-Clément; Poueme, Rodrigue S; Dabiré, Kounbobr Roch; Ouédraogo, Jean-Bosco; Fontenille, Didier; Simard, Frédéric
2010-12-01
Sugar feeding by male mosquitoes is critical for their success in mating competition. However, the facets of sugar source finding under natural conditions remain unknown. Here, evidence obtained in Western Burkina Faso indicated that the distribution of An. gambiae s.s. (M and S molecular forms) males across different peri-domestic habitats is dependent on the availability of potential sugar sources from which they obtain more favorable sites for feeding or resting. Among field-collected anophelines, a higher proportion of specimens containing fructose were found on flowering Mangifera indica (Anacardiaceae), Delonix regia (Fabaceae), Thevetia neriifolia (Apocynaceae), Senna siamea, and Cassia sieberiana (both Fabaceae) compared to that recorded on other nearby plants, suggesting that some plants are favored for use as a sugar source over others. Y-tube olfactometer assays with newly-emerged An. gambiae s.s. exposed to odors from individual plants and some combinations thereof showed that males use odor cues to guide their preference. The number of sugar-positive males was variable in a no-choice cage assay, consistent with the olfactory response patterns towards corresponding odor stimuli. These experiments provide the first evidence both in field and laboratory conditions for previously unstudied interactions between males of An. gambiae and natural sugar sources. © 2010 The Society for Vector Ecology.
Evidence from the Pacific troposphere for large global sources of oxygenated organic compounds
NASA Astrophysics Data System (ADS)
Singh, H.; Chen, Y.; Staudt, A.; Jacob, D.; Blake, D.; Heikes, B.; Snow, J.
2001-04-01
The presence of oxygenated organic compounds in the troposphere strongly influences key atmospheric processes. Such oxygenated species are, for example, carriers of reactive nitrogen and are easily photolysed, producing free radicals (and thereby influencing the oxidizing capacity and the ozone-forming potential of the atmosphere); they may also contribute significantly to the organic component of aerosols. But knowledge of the distribution and sources of oxygenated organic compounds, especially in the Southern Hemisphere, is limited. Here we characterize the tropospheric composition of oxygenated organic species, using data from a recent airborne survey conducted over the tropical Pacific Ocean (30°N to 30°S). Measurements of a dozen oxygenated chemicals (carbonyls, alcohols, organic nitrates, organic pernitrates and peroxides), along with several C2-C8 hydrocarbons, reveal that abundances of oxygenated species are extremely high: collectively, oxygenated species are nearly five times more abundant than non-methane hydrocarbons in the Southern Hemisphere. Current atmospheric models are unable to correctly simulate these findings, so large, diffuse, and hitherto unknown sources of oxygenated organic compounds must exist. Although the origin of these sources is still unclear, we suggest that oxygenated species could be formed via the oxidation of hydrocarbons in the atmosphere, the photochemical degradation of organic matter in the oceans, and direct emissions from terrestrial vegetation.
Distributed robust adaptive control of high order nonlinear multi agent systems.
Hashemi, Mahnaz; Shahgholian, Ghazanfar
2018-03-01
In this paper, a robust adaptive neural-network-based controller is presented for multi-agent high-order nonlinear systems with unknown nonlinear functions, unknown control gains and unknown actuator failures. First, a neural network (NN) is used to approximate the nonlinear uncertainty terms arising in the controller design procedure for the followers. Then, a novel distributed robust adaptive controller is developed by combining the backstepping method and the dynamic surface control (DSC) approach. The proposed controllers are distributed in the sense that the designed controller for each follower agent only requires relative state information between itself and its neighbors. By using Young's inequality, only a few parameters need to be tuned regardless of the number of NN nodes. Accordingly, the curse of dimensionality and the explosion-of-complexity problem are counteracted simultaneously. New adaptive laws are designed by choosing appropriate Lyapunov-Krasovskii functionals. The proposed approach guarantees the boundedness of all closed-loop signals as well as the convergence of the distributed tracking errors to a small neighborhood of the origin. Simulation results indicate that the proposed controller is effective and robust. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
DEVELOPMENT AND EVALUATION OF PM 2.5 SOURCE APPORTIONMENT METHODOLOGIES
The receptor model called Positive Matrix Factorization (PMF) has been extensively used to apportion sources of ambient fine particulate matter (PM2.5), but the accuracy of source apportionment results currently remains unknown. In addition, air quality forecast model...
NASA Astrophysics Data System (ADS)
Hales, Christopher A.; Chiles Con Pol Collaboration
2014-04-01
We recently started a 1000 hour campaign to observe 0.2 square degrees of the COSMOS field in full polarization continuum at 1.4 GHz with the Jansky VLA, as part of a joint program with the spectral line COSMOS HI Large Extragalactic Survey (CHILES). When complete, we expect our CHILES Continuum Polarization (CHILES Con Pol) survey to reach an unprecedented SKA-era sensitivity of 0.7 uJy per 4 arcsecond FWHM beam. Here we present the key goals of CHILES Con Pol, which are to (i) produce a source catalog of legacy value to the astronomical community, (ii) measure differential source counts in total intensity, linear polarization, and circular polarization in order to constrain the redshift and luminosity distributions of source populations, (iii) perform a novel weak lensing study using radio polarization as an indicator of intrinsic alignment to better study dark energy and dark matter, and (iv) probe the unknown origin of cosmic magnetism by measuring the strength and structure of intergalactic magnetic fields in the filaments of large scale structure. The CHILES Con Pol source catalog will be a useful resource for upcoming wide-field surveys by acting as a training set for machine learning algorithms, which can then be used to identify and classify radio sources in regions lacking deep multiwavelength coverage.
Visualization of NO2 emission sources using temporal and spatial pattern analysis in Asia
NASA Astrophysics Data System (ADS)
Schütt, A. M. N.; Kuhlmann, G.; Zhu, Y.; Lipkowitsch, I.; Wenig, M.
2016-12-01
Nitrogen dioxide (NO2) is an indicator of population density and level of development, but the contributions of the different emission sources to the overall concentrations remain mostly unknown. In order to allocate fractions of OMI NO2 to emission types, we investigate several temporal cycles and regional patterns. Our analysis is based on daily maps of tropospheric NO2 vertical column densities (VCDs) from the Ozone Monitoring Instrument (OMI). The data set is mapped to a high-resolution grid by a histopolation algorithm. This algorithm is based on a continuous parabolic spline, producing more realistic smooth distributions while reproducing the measured OMI values when integrating over ground pixel areas. In the resulting sequence of zoomed-in maps, we analyze weekly and annual cycles for cities, countryside, and highways in China, Japan, and the Republic of Korea, look for patterns and trends, and compare the derived results to emission sources in Central Europe and North America. Because heating increases in winter compared to summer and there is more traffic during the week than on Sundays, we can separate traffic, heating, and power plants and visualize maps of the different sources. We will also look into the influence of emission control measures during big events such as the Olympic Games 2008 and the World Expo 2010 as a possibility to confirm our classification of NO2 emission sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
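A toy version of this framework: a known regional total is disaggregated to grid cells using dasymetric weights (e.g., derived from land cover), and the resulting cell-level count distributions drive a Monte Carlo simulation of a population-dependent output. The weights, totals, and per-capita model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
county_total = 10000
weights = np.array([0.5, 0.3, 0.15, 0.05])   # dasymetric weights per cell (sum to 1)
expected = county_total * weights             # expected cell populations

n_runs = 2000
outputs = np.empty(n_runs)
for i in range(n_runs):
    cells = rng.poisson(expected)             # sample uncertain cell counts
    outputs[i] = 1.3 * cells.sum()            # toy per-capita demand model
print(outputs.mean())  # ≈ 1.3 * county_total = 13000
```

The dasymetric step supplies the cell-level distributions (here Poisson with dasymetrically allocated means) that the Monte Carlo loop needs but that no census product provides directly at that resolution.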
Poznanski, R R
2010-09-01
A reaction-diffusion model is presented to encapsulate calcium-induced calcium release (CICR) as a potential mechanism for the somatofugal bias of dendritic calcium movement in starburst amacrine cells. The calcium dynamics involve a simple calcium extrusion (pump) mechanism and a buffering mechanism of calcium-binding proteins homogeneously distributed over the plasma membrane of the endoplasmic reticulum within starburst amacrine cells. The system of reaction-diffusion equations in the excess-buffer (or low calcium concentration) approximation is reformulated as a nonlinear Volterra integral equation, which is solved analytically via a regular perturbation series expansion in response to calcium feedback from continuously and uniformly distributed calcium sources. Calculation of luminal calcium diffusion in the absence of buffering enables a wave to travel the 120 μm from the soma to the distal tips of a starburst amacrine cell dendrite in 100 msec, yet in the presence of discretely distributed calcium-binding proteins it is unknown whether the propagating calcium wave-front in the somatofugal direction is further impeded by endogenous buffers. If so, this would indicate CICR to be an unlikely mechanism of retinal direction selectivity in starburst amacrine cells.
High-energy neutrinos from FR0 radio galaxies?
NASA Astrophysics Data System (ADS)
Tavecchio, F.; Righi, C.; Capetti, A.; Grandi, P.; Ghisellini, G.
2018-04-01
The sources responsible for the emission of the high-energy (≳100 TeV) neutrinos detected by IceCube are still unknown. Among the possible candidates, active galactic nuclei with relativistic jets are often examined, since the outflowing plasma seems to offer the ideal environment to accelerate the required parent high-energy cosmic rays. The non-detection of single point sources, or, almost equivalently, the absence in the IceCube events of multiplets originating from the same sky position, constrains the cosmic density and the neutrino output of these sources, pointing to a numerous population of faint sources. Here we explore the possibility that FR0 radio galaxies, the population of compact sources recently identified in large radio and optical surveys and representing the bulk of the radio-loud AGN population, are suitable candidates for neutrino emission. Modelling the spectral energy distribution of an FR0 radio galaxy recently associated with a γ-ray source detected by the Large Area Telescope onboard Fermi, we derive the physical parameters of its jet, in particular the power it carries. We consider the possible mechanisms of neutrino production, concluding that pγ reactions in the jet between protons and ambient radiation are too inefficient to sustain the required output. We propose an alternative scenario, in which protons, accelerated in the jet, escape from it and diffuse into the host galaxy, producing neutrinos as a result of pp scattering with the interstellar gas, in strict analogy with the processes taking place in star-forming galaxies.
A selection of giant radio sources from NVSS
Proctor, D. D.
2016-06-01
Results of the application of pattern-recognition techniques to the problem of identifying giant radio sources (GRSs) from the data in the NVSS catalog are presented, and issues affecting the process are explored. Decision-tree pattern-recognition software was applied to training-set source pairs developed from known NVSS large-angular-size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRSs for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20′ and a minimum component area of 1.87 square arcmin at the 1.4 mJy level. The importance of comparing the resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 ± 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher-ranked sources were visually screened, and a table of over 1600 candidates, including morphological annotation, is presented. These systems include doubles and triples, wide-angle tail and narrow-angle tail, S- or Z-shaped systems, and core-jets and resolved cores. In conclusion, while some resolved-lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.
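The 97.8 ± 1.5% figure quoted above, the probability of correctly ranking a random (GRS, non-GRS) pair, is exactly the area under the ROC curve (AUC) of the classifier's scores. A minimal AUC computation on made-up scores:

```python
import numpy as np

def pair_ranking_probability(pos_scores, neg_scores):
    """P(random positive outranks random negative); ties count half."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# illustrative classifier scores for 4 known GRSs and 3 non-GRS pairs
auc = pair_ranking_probability([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2])
print(auc)  # 11 of 12 pairs ranked correctly → 11/12 ≈ 0.917
```

This all-pairs formula is O(n·m); for large score sets the same quantity is computed from ranks (the Mann-Whitney U statistic).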
2004-06-23
[Table fragment: rows for cholera (Vibrio cholerae), Salmonella Typhimurium, and typhoid fever (Salmonella Typhi), each with the source listed as unknown.] Cholera (Vibrio cholerae) occurs in many of the developing countries of Africa and Asia and is disseminated by contamination of food or drink. Further information is available in the CDC cholera fact sheet and the Health Canada Material Safety Data Sheet - Infectious Substances for Vibrio cholerae, found online at [http://www.hc-sc.gc.ca
Methane storage capacity of the early martian cryosphere
NASA Astrophysics Data System (ADS)
Lasue, Jeremie; Quesnel, Yoann; Langlais, Benoit; Chassefière, Eric
2015-11-01
Methane is a key molecule for understanding the habitability of Mars due to its possible biological origin and short atmospheric lifetime. Recent methane detections on Mars show a large variability that is probably due to relatively localized source and sink processes that are as yet unknown. In this study, we determine how much methane could have been abiotically produced by early Mars serpentinization processes that could also explain the observed martian remanent magnetic field. Under the assumption of a cold early Mars environment, a cryosphere could trap such methane at depth in stable form as clathrates. The extent and spatial distribution of these methane reservoirs have been calculated with respect to the magnetization distribution and other factors. We calculate that the maximum storage capacity of such a clathrate cryosphere is about 2.1 × 10^19 to 2.2 × 10^20 moles of CH4, which can explain sporadic releases of methane of the size observed on the surface of the planet during the past decade (∼1.2 × 10^9 moles). This amount of trapped methane is sufficient for similar-sized releases to have happened yearly throughout the history of the planet. While the stability of such reservoirs depends on many factors that are poorly constrained, it is possible that they have remained trapped at depth until the present day. Due to the possible implications of methane detection for life and its influence on the atmospheric and climate processes of the planet, confirming the sporadic release of methane on Mars and the global distribution of its sources is one of the major goals of current and upcoming space missions to Mars.
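The reservoir-versus-release claim above is easy to check by arithmetic: how many releases of the observed size could the estimated clathrate reservoir supply, compared with one release per year over Mars's roughly 4.5-billion-year history (the age is general knowledge, not from the abstract)?

```python
# Numbers taken from the abstract above
reservoir_low, reservoir_high = 2.1e19, 2.2e20   # mol CH4 stored in clathrates
release = 1.2e9                                  # mol CH4 per observed sporadic release
mars_age_years = 4.5e9                           # approximate age of Mars (assumption)

n_events_low = reservoir_low / release           # ≈ 1.8e10 releases
n_events_high = reservoir_high / release         # ≈ 1.8e11 releases
events_per_year = n_events_low / mars_age_years  # even the low-end reservoir supports
print(events_per_year)                           # several such releases per year
```

Even the minimum estimate exceeds one release per year over the planet's entire history by a comfortable margin, which is the basis of the "yearly releases" statement.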
THE LOW-FREQUENCY RADIO CATALOG OF FLAT-SPECTRUM SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massaro, F.; Giroletti, M.; D'Abrusco, R.
A well known property of the γ-ray sources detected by Cos-B in the 1970s, by the Compton Gamma-Ray Observatory in the 1990s, and recently by the Fermi observations is the presence of radio counterparts, particularly for those associated with extragalactic objects. This observational evidence is the basis of the radio-γ-ray connection established for the class of active galactic nuclei known as blazars. In particular, the main spectral property of the radio counterparts associated with γ-ray blazars is that they show a flat spectrum in the GHz frequency range. Our recent analysis dedicated to searching for blazar-like candidates as potential counterparts for the unidentified γ-ray sources allowed us to extend the radio-γ-ray connection into the MHz regime. We also showed that blazars maintain flat radio spectra below 1 GHz. Thus, on the basis of these new results, we assembled a low-frequency radio catalog of flat-spectrum sources built by combining the radio observations of the Westerbork Northern Sky Survey and of the Westerbork in the Southern Hemisphere catalog with those of the NRAO VLA Sky Survey (NVSS). This could be used in the future to search for new, unknown blazar-like counterparts of γ-ray sources. First, we found NVSS counterparts of Westerbork Synthesis Radio Telescope radio sources, and then we selected flat-spectrum radio sources according to a new spectral criterion, specifically defined for radio observations performed below 1 GHz. We also describe the main properties of the catalog, which lists 28,358 radio sources, and their logN-logS distributions. Finally, a comparison with the Green Bank 6 cm radio source catalog was performed to investigate the spectral shape of the low-frequency flat-spectrum radio sources at higher frequencies.
NASA Astrophysics Data System (ADS)
Singh, H.; Chen, Y.; Tabazadeh, A.; Fukui, Y.; Bey, I.; Yantosca, R.; Jacob, D.; Arnold, F.; Wohlfrom, K.; Atlas, E.; Flocke, F.; Blake, D.; Blake, N.; Heikes, B.; Snow, J.; Talbot, R.; Gregory, G.; Sachse, G.; Vay, S.; Kondo, Y.
2000-02-01
A large number of oxygenated organic chemicals (peroxyacyl nitrates, alkyl nitrates, acetone, formaldehyde, methanol, methylhydroperoxide, acetic acid and formic acid) were measured during the 1997 Subsonic Assessment (SASS) Ozone and Nitrogen Oxide Experiment (SONEX) airborne field campaign over the Atlantic. In this paper, we present a first picture of the distribution of these oxygenated organic chemicals (Ox-organic) in the troposphere and the lower stratosphere, and assess their source and sink relationships. In both the troposphere and the lower stratosphere, the total atmospheric abundance of these oxygenated species (ΣOx-organic) nearly equals that of total nonmethane hydrocarbons (ΣNMHC), which have been traditionally measured. A sizable fraction of the reactive nitrogen (10-30%) is present in its oxygenated organic form. The organic reactive nitrogen fraction is dominated by peroxyacetyl nitrate (PAN), with alkyl nitrates and peroxypropionyl nitrate (PPN) accounting for <5% of total NOy. Comparison of observations with the predictions of the Harvard three-dimensional global model suggests that in many key areas (e.g., formaldehyde and peroxides) substantial differences between measurements and theory are present and must be resolved. In the case of CH3OH, there appears to be a large mismatch between atmospheric concentrations and estimated sources, indicating the presence of major unknown removal processes. Instrument intercomparisons as well as disagreements between observations and model predictions are used to identify needed improvements in key areas. The atmospheric chemistry and sources of this group of chemicals is poorly understood even though their fate is intricately linked with upper tropospheric NOx and HOx cycles.
Quantum teleportation over 143 kilometres using active feed-forward.
Ma, Xiao-Song; Herbst, Thomas; Scheidl, Thomas; Wang, Daqing; Kropatschek, Sebastian; Naylor, William; Wittmann, Bernhard; Mech, Alexandra; Kofler, Johannes; Anisimova, Elena; Makarov, Vadim; Jennewein, Thomas; Ursin, Rupert; Zeilinger, Anton
2012-09-13
The quantum internet is predicted to be the next-generation information processing platform, promising secure communication and an exponential speed-up in distributed computation. The distribution of single qubits over large distances via quantum teleportation is a key ingredient for realizing such a global platform. By using quantum teleportation, unknown quantum states can be transferred over arbitrary distances to a party whose location is unknown. Since the first experimental demonstrations of quantum teleportation of independent external qubits, an internal qubit and squeezed states, researchers have progressively extended the communication distance. Usually this occurs without active feed-forward of the classical Bell-state measurement result, which is an essential ingredient in future applications such as communication between quantum computers. The benchmark for a global quantum internet is quantum teleportation of independent qubits over a free-space link whose attenuation corresponds to the path between a satellite and a ground station. Here we report such an experiment, using active feed-forward in real time. The experiment uses two free-space optical links, quantum and classical, over 143 kilometres between the two Canary Islands of La Palma and Tenerife. To achieve this, we combine advanced techniques involving a frequency-uncorrelated polarization-entangled photon pair source, ultra-low-noise single-photon detectors and entanglement-assisted clock synchronization. The average teleported state fidelity is well beyond the classical limit of two-thirds. Furthermore, we confirm the quality of the quantum teleportation procedure without feed-forward by complete quantum process tomography. Our experiment verifies the maturity and applicability of such technologies in real-world scenarios, in particular for future satellite-based quantum teleportation.
Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders
2010-06-01
Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determining an upper bound on the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle, using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME based priors improve the CIs when applied to four quite different simulated and two real world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
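The effect of a tighter prior on the credibility interval can be sketched numerically. Assuming Beta-family priors for the unknown error rate (Beta(1,1) is the uniform prior; Beta(2,20) stands in here for an informative, ME-style prior; both are illustrative choices, not the study's actual priors), the posterior after k errors in n holdout tests is Beta(a+k, b+n-k), and the CI follows from posterior quantiles:

```python
def beta_pdf(p, a, b):
    # Unnormalized Beta(a, b) density; the normalization cancels below.
    return p ** (a - 1) * (1 - p) ** (b - 1)

def credibility_interval(k, n, a=1.0, b=1.0, level=0.95, grid=100_000):
    # Posterior for the unknown error rate after k errors in n holdout
    # tests with a Beta(a, b) prior; a = b = 1 is the uniform prior.
    xs = [(i + 0.5) / grid for i in range(grid)]
    w = [beta_pdf(x, a + k, b + n - k) for x in xs]
    total = sum(w)
    tail = (1 - level) / 2
    cdf, lo, hi = 0.0, 0.0, 1.0
    for x, wi in zip(xs, w):
        prev = cdf
        cdf += wi / total
        if prev < tail <= cdf:
            lo = x
        if prev < 1 - tail <= cdf:
            hi = x
    return lo, hi

wide = credibility_interval(k=3, n=40)               # uniform prior
narrow = credibility_interval(k=3, n=40, a=2, b=20)  # informative prior
```

With the informative prior the interval narrows even though the holdout result is identical, which is the behaviour the study exploits.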
Cho, Ching-Yi; Lai, Chou-Cheng; Lee, Ming-Luen; Hsu, Chien-Lun; Chen, Chun-Jen; Chang, Lo-Yi; Lo, Chiao-Wei; Chiang, Sheng-Fong; Wu, Keh-Gong
2017-02-01
Fever of unknown origin (FUO) was first described in 1961 as fever >38.3°C for at least 3 weeks with no apparent source after 1 week of investigations in the hospital. Infectious disease comprises the majority of cases (40-60%). There is no related research on FUO in children in Taiwan. The aim of this study is to determine the etiologies of FUO in children in Taiwan and to evaluate the relationship between the diagnosis and patients' demographic and laboratory data. Children under 18 years old with fever >38.3°C for >2 weeks without an apparent source after preliminary investigations at Taipei Veterans General Hospital during 2002-2012 were included. Fever duration, symptoms and signs, laboratory examinations, and final diagnosis were recorded. The distribution of etiologies with respect to age, fever duration, laboratory examinations, and associated symptoms and signs was analyzed. A total of 126 children were enrolled; 60 were girls and 66 were boys. The mean age was 6.7 years. Infection accounted for 27.0% of cases, followed by undiagnosed cases (23.8%), miscellaneous etiologies (19.8%), malignancies (16.6%), and autoimmune disorders (12.7%). Epstein-Barr virus (EBV) and cytomegalovirus (CMV) were the most commonly found pathogens for infectious disease, and Kawasaki disease (KD) was the top cause among miscellaneous diagnoses. Infectious disease remains the most common etiology. Careful history taking and physical examination are most crucial for making the diagnosis. Conservative treatment may be enough for most children with FUO, except for those suffering from malignancies. Copyright © 2015. Published by Elsevier B.V.
On the joint bimodality of temperature and moisture near stratocumulus cloud tops
NASA Technical Reports Server (NTRS)
Randall, D. A.
1983-01-01
The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low-order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm-dry population and a cool-moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. If these two internal covariances vanish, the system of equations can be solved analytically.
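The moment bookkeeping behind these models can be illustrated in one dimension (a simplified analogue; the paper works with two moist-conservative variables, so its counts of equations and unknowns differ). A two-population normal mixture has five unknowns (w, m1, s1, m2, s2), and its first three central moments follow in closed form:

```python
def mixture_moments(w, m1, s1, m2, s2):
    # First three central moments of the mixture w*N(m1, s1^2) + (1-w)*N(m2, s2^2).
    mean = w * m1 + (1 - w) * m2
    d1, d2 = m1 - mean, m2 - mean
    # Per-component central moments about the overall mean, for y ~ N(0, s^2):
    #   E[(y + d)^2] = s^2 + d^2,   E[(y + d)^3] = d^3 + 3*d*s^2
    var = w * (s1 ** 2 + d1 ** 2) + (1 - w) * (s2 ** 2 + d2 ** 2)
    third = w * (d1 ** 3 + 3 * d1 * s1 ** 2) + (1 - w) * (d2 ** 3 + 3 * d2 * s2 ** 2)
    return mean, var, third
```

Matching mean and variance alone gives two equations in five unknowns, an underdetermined system; adding third moments supplies the extra constraints, mirroring the paper's counting argument.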
Dynamic granularity of imaging systems
Geissel, Matthias; Smith, Ian C.; Shores, Jonathon E.; ...
2015-11-04
Imaging systems that include a specific source, imaging concept, geometry, and detector have unique properties such as signal-to-noise ratio, dynamic range, spatial resolution, distortions, and contrast. Some of these properties are inherently connected, particularly dynamic range and spatial resolution. It must be emphasized that spatial resolution is not a single number but must be seen in the context of dynamic range and consequently is better described by a function or distribution. We introduce the “dynamic granularity” G_dyn as a standardized, objective relation between a detector’s spatial resolution (granularity) and dynamic range for complex imaging systems in a given environment, rather than the widely found characterization of detectors such as cameras or films by themselves. We found that this relation can partly be explained through consideration of the signal’s photon statistics, background noise, and detector sensitivity, but a comprehensive description including some unpredictable data such as dust, damage, or an unknown spectral distribution will ultimately have to be based on measurements. Measured dynamic granularities can be objectively used to assess the limits of an imaging system’s performance, including all contributing noise sources, and to qualify the influence of alternative components within an imaging system. Our article explains the construction criteria to formulate a dynamic granularity and compares measured dynamic granularities for different detectors used in the X-ray backlighting scheme employed at Sandia’s Z-Backlighter facility.
Size distribution, characteristics and sources of heavy metals in haze episode in Beijing.
Duan, Jingchun; Tan, Jihua; Hao, Jiming; Chai, Fahe
2014-01-01
Size-segregated samples were collected during highly polluted winter haze days in 2006 in Beijing, China. Twenty-nine elements and 9 water-soluble ions were determined. The heavy metals Zn, Pb, Mn, Cu, As, Cr, Ni, V and Cd were studied in depth considering their toxic effects on human beings. Among these heavy metals, the levels of Mn, As and Cd exceeded the reference values of the National Ambient Air Quality Standard (GB3095-2012) and the guidelines of the World Health Organization. The estimated high percentage of atmospheric heavy metals residing in PM2.5 indicates that controlling PM2.5 is an effective way to control atmospheric heavy metals. Pb, Cd, and Zn occur mostly in the accumulation mode; V, Mn and Cu exist mostly in both the coarse and accumulation modes; and Ni and Cr exist in all three modes. Considering the health effect, the breakthrough rates of atmospheric heavy metals into pulmonary alveoli are: Pb (62.1%) > As (58.1%) > Cd (57.9%) > Zn (57.7%) > Cu (55.8%) > Ni (53.5%) > Cr (52.2%) > Mn (49.2%) > V (43.5%). The positive matrix factorization method was applied for source apportionment of the studied heavy metals, combined with marker elements and ions such as K, As and SO4(2-); four factors (dust, vehicle, aged and transported, unknown) are identified, and their size-resolved contributions to atmospheric heavy metals are discussed.
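The factorization idea behind positive matrix factorization can be sketched with plain nonnegative matrix factorization using Lee-Seung multiplicative updates (actual PMF additionally weights each matrix element by its measurement uncertainty, which is omitted here, and the matrices used are hypothetical): a concentration matrix X (samples x species) is decomposed into source contributions G and source profiles F:

```python
import random

def matmul(A, B):
    # Plain list-of-lists matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def nmf(X, k=2, iters=500, eps=1e-9):
    # Lee-Seung multiplicative updates for X ~ G @ F with G, F >= 0.
    random.seed(0)  # deterministic initialization for reproducibility
    n, m = len(X), len(X[0])
    G = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
    F = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        Ft = [list(col) for col in zip(*F)]
        num, den = matmul(X, Ft), matmul(matmul(G, F), Ft)
        G = [[G[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
        Gt = [list(col) for col in zip(*G)]
        num, den = matmul(Gt, X), matmul(matmul(Gt, G), F)
        F = [[F[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
    return G, F
```

The multiplicative form keeps every entry nonnegative, which is the physical constraint that makes the factors interpretable as source contributions and profiles.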
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration: the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, can be attributed to deviations from the assumed noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization does not directly extend to the case of a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, for the Student's t-noise model, with the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
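The downweighting mechanism at the heart of t-distribution-based estimation can be sketched with a toy location estimate (a stand-in for the full ECM calibration solver, which the paper couples with Levenberg-Marquardt): the E-step assigns each residual a weight (nu + 1)/(nu + r^2/sigma^2), so outliers contribute little to the M-step update:

```python
def robust_mean(data, nu=3.0, iters=50):
    # EM-style iteratively reweighted location estimate under Student's
    # t noise: each residual gets weight (nu + 1) / (nu + r^2 / sigma^2),
    # so outliers are downweighted instead of dragging the estimate.
    mu = sum(data) / len(data)
    sigma2 = sum((x - mu) ** 2 for x in data) / len(data)
    for _ in range(iters):
        w = [(nu + 1) / (nu + (x - mu) ** 2 / sigma2) for x in data]  # E-step
        mu = sum(wi * x for wi, x in zip(w, data)) / sum(w)           # M-step
        sigma2 = sum(wi * (x - mu) ** 2 for wi, x in zip(w, data)) / len(data)
    return mu

clean = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
with_outlier = clean + [100.0]  # one corrupted, interference-like sample
```

A least-squares (Gaussian) estimate of `with_outlier` is pulled to about 22.9, while the t-based estimate stays near 10, the analogue of preserving weak-source flux against outliers.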
NASA Astrophysics Data System (ADS)
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro
2016-07-01
This communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions, and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and position of the field samples are then determined by optimizing the singular value behaviour of the relevant operator.
2015-01-01
Targeted environmental monitoring reveals contamination by known chemicals, but may exclude potentially pervasive but unknown compounds. Marine mammals are sentinels of persistent and bioaccumulative contaminants due to their longevity and high trophic position. Using nontargeted analysis, we constructed a mass spectral library of 327 persistent and bioaccumulative compounds identified in blubber from two ecotypes of common bottlenose dolphins (Tursiops truncatus) sampled in the Southern California Bight. This library of halogenated organic compounds (HOCs) consisted of 180 anthropogenic contaminants, 41 natural products, 4 with mixed sources, 8 with unknown sources, and 94 with partial structural characterization and unknown sources. The abundance of compounds whose structures could not be fully elucidated highlights the prevalence of undiscovered HOCs accumulating in marine food webs. Eighty-six percent of the identified compounds are not currently monitored, including 133 known anthropogenic chemicals. Compounds related to dichlorodiphenyltrichloroethane (DDT) were the most abundant. Natural products were, in some cases, detected at abundances similar to anthropogenic compounds. The profile of naturally occurring HOCs differed between ecotypes, suggesting more abundant offshore sources of these compounds. This nontargeted analytical framework provided a comprehensive list of HOCs that may be characteristic of the region, and its application within monitoring surveys may suggest new chemicals for evaluation. PMID:25526519
Universality of optimal measurements
NASA Astrophysics Data System (ADS)
Tarrach, Rolf; Vidal, Guifré
1999-11-01
We present optimal and minimal measurements on identical copies of an unknown state of a quantum bit when the quality of measuring strategies is quantified with the gain of information (the Kullback, or mutual, information of probability distributions). We also show that the maximal gain of information occurs, among isotropic priors, when the state is known to be pure. Universality of optimal measurements follows from our results: using the fidelity or the gain of information, two different figures of merit, leads to exactly the same conclusions for isotropic distributions. We finally investigate the optimal capacity of N copies of an unknown state as a quantum channel of information.
Optimal minimal measurements of mixed states
NASA Astrophysics Data System (ADS)
Vidal, G.; Latorre, J. I.; Pascual, P.; Tarrach, R.
1999-07-01
The optimal and minimal measuring strategy is obtained for a two-state system prepared in a mixed state with a probability given by any isotropic a priori distribution. We explicitly construct the specific optimal and minimal generalized measurements, which turn out to be independent of the a priori probability distribution, obtaining the best guesses for the unknown state as well as a closed expression for the maximal mean-average fidelity. We do this for up to three copies of the unknown state in a way that leads to the generalization to any number of copies, which we then present and prove.
Soil Organic Carbon Transport in Headwater Tributaries of the Amazon River Traced by Branched GDGTs
NASA Astrophysics Data System (ADS)
Kirkels, F.; Peterse, F.; Ponton, C.; Feakins, S. J.; West, A. J.
2016-12-01
Transfer of soil organic carbon from land to sea by rivers plays a key role in the global carbon cycle by enabling long-term storage upon deposition in the marine environment, and generates archives of paleoinformation. Specific soil bacterial membrane lipids (branched glycerol dialkyl glycerol tetraethers, brGDGTs) can trace soil inputs to a river. BrGDGT distributions relate to soil pH and mean annual air temperature and can be inferred by a novel calibration [1]. In the Amazon Fan, down-core changes in brGDGTs have been used for paleoclimate reconstructions [2]. However, the effects of fluvial sourcing and transport on brGDGT signals in sedimentary deposits are largely unknown. In this study, we investigated the implications of upstream dynamics and hydrological variability (wet/dry season) on brGDGT distributions carried by the Madre de Dios River (Peru), a tributary of the upper Amazon River. The Madre de Dios basin covers a 4.5 km elevation gradient draining the eastern flank of the Andes to the Amazonian floodplains [3], along which we examined organic and mineral soils, and river suspended particulate matter (SPM). BrGDGT signals of SPM indicate sourcing of soils within the catchment, with concentrations increasing downstream indicating accumulation of this biomarker. River depth profiles demonstrated uniform brGDGT distributions and concentrations, suggesting no preferential transport and that brGDGTs are well-mixed in the river. These findings add to prior studies on brGDGTs in the downstream Amazon River [4, 5]. Our study highlights the importance of the upstream drainage basin to constrain the source of brGDGTs in rivers, to better interpret climate reconstructions with this proxy. [1] De Jonge et al. (2014) Geochim Cosmochim Act 141, 97-112 [2] Bendle et al. (2010) Geochem Geoph Geosy 11 [3] Ponton et al. (2014) Geophys. Res. Lett 41, 6420-6427. [4] Kim et al. (2012) Geochim Cosmochim Act 90, 163-180. [5] Zell et al. (2013) Front Microbio 4, 228.
Publications - GMC 249 | Alaska Division of Geological & Geophysical Surveys
DGGS GMC 249 Publication Details. Title: Source rock geochemical and visual kerogen data from cuttings. Reference: Unknown, 1995, Source rock geochemical and visual kerogen data from cuttings (2,520-8,837') of the
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating only as many backward adjoint equations as there are available measurement stations. This results in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
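The final correlation-maximization step can be sketched as follows, with hypothetical station footprints (unit-release concentrations at each sensor) standing in for the adjoint-computed source-receptor functions; all names and numbers below are illustrative, not from the paper:

```python
def correlation(a, b):
    # Pearson correlation of two equal-length sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def identify_source(observed, footprints):
    # Pick the candidate source whose unit-emission pattern correlates
    # best with the observations; the emitted quantity then follows from
    # a least-squares scaling of that pattern.
    best = max(footprints, key=lambda s: correlation(observed, footprints[s]))
    f = footprints[best]
    q = sum(o * p for o, p in zip(observed, f)) / sum(p * p for p in f)
    return best, q

# Hypothetical unit-release concentrations at 4 sensors for 2 candidate sources
footprints = {
    "site_A": [1.0, 0.2, 0.1, 0.05],
    "site_B": [0.1, 0.9, 0.3, 0.1],
}
observed = [0.5, 4.5, 1.5, 0.5]  # matches 5x the site_B pattern
```

Because the correlation is scale-invariant, the location is identified first and the emission rate is recovered afterwards, mirroring the structure of the paper's inversion.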
Computer program determines exact two-sided tolerance limits for normal distributions
NASA Technical Reports Server (NTRS)
Friedman, H. A.; Webb, S. R.
1968-01-01
The computer program determines by numerical integration the exact statistical two-sided tolerance limits, such that the proportion of the population between the limits is at least a specified value. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
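The program computes the exact limits by numerical integration; a commonly used closed-form stand-in is Howe's approximation, sketched here with a Wilson-Hilferty chi-square quantile (an approximation, not the program's exact method):

```python
from statistics import NormalDist

def tolerance_factor(n, coverage=0.90, confidence=0.95):
    # Howe's approximation to the two-sided normal tolerance factor
    # for unknown mean and variance.
    nd = NormalDist()
    z_p = nd.inv_cdf((1 + coverage) / 2)
    df = n - 1
    # Wilson-Hilferty approximation to the lower chi-square quantile
    z_a = nd.inv_cdf(1 - confidence)
    chi2 = df * (1 - 2 / (9 * df) + z_a * (2 / (9 * df)) ** 0.5) ** 3
    return z_p * (df * (1 + 1 / n) / chi2) ** 0.5

def tolerance_limits(sample, coverage=0.90, confidence=0.95):
    # Two-sided limits: mean +/- k * s for a sample from a normal population
    n = len(sample)
    mean = sum(sample) / n
    s = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
    k = tolerance_factor(n, coverage, confidence)
    return mean - k * s, mean + k * s
```

For n = 10, 90% coverage and 95% confidence this gives k close to 2.84, near the tabulated exact value of 2.839; the factor exceeds the known-parameter z-value of 1.645 precisely because the mean and variance are estimated.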
Assessing global carbon burial during Oceanic Anoxic Event 2, Cenomanian-Turonian boundary event
NASA Astrophysics Data System (ADS)
Owens, J. D.; Lyons, T. W.; Lowery, C. M.
2017-12-01
Reconstructing the areal extent and total amount of organic carbon burial during ancient events remains elusive even for the best documented oceanic anoxic event (OAE) in Earth history, the Cenomanian-Turonian boundary event (~93.9 Ma), or OAE 2. Reports from 150 OAE 2 localities provide a wide global distribution. However, despite the large number of sections, the majority are found within the proto-Atlantic and Tethyan oceans and interior seaways. Considering these gaps in spatial coverage, the pervasive increase in organic carbon (OC) burial during OAE 2 that drove carbon isotope values more positive (average of 4‰) can provide additional insight. These isotope data allow us to estimate the total global burial of OC, even for unstudied portions of the global ocean. Thus, we can solve for any 'missing' OC sinks by comparing our estimates from a forward carbon-isotope box model with the known, mapped distribution of OC for OAE 2 sediments. Using the known OC distribution and reasonably extrapolating to the surrounding regions of analogous depositional conditions accounts for only 13% of the total seafloor, mostly in marginal marine settings. This small geographic area accounts for more OC burial than the entire modern ocean, but significantly less than the amount necessary to produce the observed isotope record. Using modern and OAE 2 average OC rates, we extrapolate further to appropriate depositional settings in the unknown portions of seafloor, mostly deep abyssal plains. This addition significantly increases the predicted amount buried but still does not account for total burial. Additional sources, including hydrocarbon migration, lacustrine, and coal, also cannot account for the missing OC. This difference points to unknown portions of the open ocean with high TOC contents or exceptionally high TOC in productive marginal marine regions, which are underestimated in our extrapolations.
This difference might be explained by highly productive margins within the Pacific.
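The link from a positive carbon-isotope excursion to increased organic carbon burial rests on a steady-state isotope mass balance; the numbers below (mantle-like input of -5 permil, organic fractionation of 29 permil) are illustrative textbook values, not the paper's box-model parameters:

```python
def organic_burial_fraction(delta_carb, delta_input=-5.0, epsilon=29.0):
    # Steady-state carbon-isotope mass balance:
    #   delta_input = f_org * delta_org + (1 - f_org) * delta_carb,
    # with delta_org = delta_carb - epsilon, which rearranges to
    #   f_org = (delta_carb - delta_input) / epsilon
    return (delta_carb - delta_input) / epsilon

f_background = organic_burial_fraction(delta_carb=2.0)  # pre-event carbonate value
f_excursion = organic_burial_fraction(delta_carb=6.0)   # after a +4 permil excursion
```

A +4 permil shift in carbonate values thus implies a substantially larger fraction of carbon leaving the ocean as organic matter, which is the quantity the forward box model converts into a global burial estimate.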
Distributed Synchronization Control of Multiagent Systems With Unknown Nonlinearities.
Su, Shize; Lin, Zongli; Garcia, Alfredo
2016-01-01
This paper revisits the distributed adaptive control problem for synchronization of multiagent systems where the dynamics of the agents are nonlinear, nonidentical, unknown, and subject to external disturbances. Two communication topologies, represented, respectively, by a fixed strongly-connected directed graph and by a switching connected undirected graph, are considered. Under both of these communication topologies, we use distributed neural networks to approximate the uncertain dynamics. Decentralized adaptive control protocols are then constructed to solve the cooperative tracker problem, the problem of synchronization of all follower agents to a leader agent. In particular, we show that, under the proposed decentralized control protocols, the synchronization errors are ultimately bounded, and their ultimate bounds can be reduced arbitrarily by choosing the control parameter appropriately. Simulation study verifies the effectiveness of our proposed protocols.
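The tracker structure (without the neural-network approximation of the unknown nonlinearities, which is beyond a short sketch) can be illustrated by a minimal discrete-time leader-following scheme on a chain graph; this is a hypothetical linear simplification of the paper's continuous-time adaptive protocols:

```python
def leader_following(n_followers=4, steps=300, gain=0.3, leader=1.0):
    # Discrete-time leader-following consensus on a chain graph:
    # follower 0 observes the leader; follower i observes follower i-1.
    x = [0.0] * n_followers
    for _ in range(steps):
        new = list(x)
        for i in range(n_followers):
            ref = leader if i == 0 else x[i - 1]
            new[i] = x[i] + gain * (ref - x[i])  # local error feedback
        x = new
    return x
```

Each agent acts only on locally available neighbor information, yet all states converge to the leader's value, the synchronization property that the paper's adaptive protocols preserve under unknown nonlinear dynamics.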
NASA Astrophysics Data System (ADS)
Strong, S. B.; Strikwerda, T.; Lario, D.; Raouafi, N.; Decker, R.
2010-12-01
The main components of interplanetary dust are created through destruction, erosion, and collision of asteroids and comets (e.g. Mann et al. 2006). Solar radiation forces distribute these interplanetary dust particles throughout the solar system. The percent contribution of these source particulates to the net interplanetary dust distribution can reveal information about solar nebula conditions, within which these objects are formed. In the absence of observational data (e.g. Helios, Pioneer), specifically at distances less than 0.3 AU, the precise dust distributions remain unknown and limited to 1 AU extrapolative models (e.g. Mann et al. 2003). We have developed a model suitable for the investigation of scattered dust and electron irradiance incident on a sensor for distances inward of 1 AU. The model utilizes the Grün et al. (1985) and Mann et al. (2004) dust distribution theory combined with Mie theory and Thomson electron scattering to determine the magnitude of solar irradiance scattered towards an optical sensor as a function of helio-ecliptic latitude and longitude. MESSENGER star tracker observations (launch to 2010) of the ambient celestial background combined with Helios data (Lienert et al. 1982) reveal trends in support of the model predictions. This analysis further emphasizes the need to characterize the inner solar system dust environment in anticipation of near-Solar missions.
Modeling the Extremely Lightweight Zerodur Mirror (ELZM) Thermal Soak Test
NASA Technical Reports Server (NTRS)
Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip
2017-01-01
Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.
Modeling the Extremely Lightweight Zerodur Mirror (ELZM) thermal soak test
NASA Astrophysics Data System (ADS)
Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip
2017-09-01
Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krčo, Marko; Goldsmith, Paul F., E-mail: marko@astro.cornell.edu
2016-05-01
We present a geometry-independent method for determining the shapes of radial volume density profiles of astronomical objects whose geometries are unknown, based on a single column density map. Such profiles are often critical to understanding the physics and chemistry of molecular cloud cores, in which star formation takes place. The method presented here does not assume any geometry for the object being studied, thus removing a significant source of bias. Instead, it exploits contour self-similarity in column density maps, which appears to be common in data for astronomical objects. Our method may be applied to many types of astronomical objects and observable quantities so long as they satisfy a limited set of conditions, which we describe in detail. We derive the method analytically, test it numerically, and illustrate its utility using 2MASS-derived dust extinction in molecular cloud cores. While not having made an extensive comparison of different density profiles, we find that the overall radial density distribution within molecular cloud cores is adequately described by an attenuated power law.
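To see how an attenuated power law relates to an observable column density map, one can project it numerically; the spherical geometry below is an assumption used only to build this test case, since the paper's method itself avoids assuming any geometry:

```python
def volume_density(r, n0=1.0, r0=1.0, alpha=2.0):
    # Attenuated power law: flat core for r << r0, power-law falloff beyond.
    return n0 / (1.0 + (r / r0) ** alpha)

def column_density(b, r_max=50.0, steps=20000):
    # Line-of-sight integral at impact parameter b through a spherically
    # symmetric cloud (the sphere is assumed only to generate this test
    # map; the paper's contour-self-similarity method needs no geometry).
    dz = r_max / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        r = (b * b + z * z) ** 0.5
        total += volume_density(r) * dz
    return 2.0 * total  # symmetric about the plane of the sky
```

The resulting column density falls off monotonically with impact parameter, producing the nested, self-similar contours that the inversion method exploits.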
DOE Office of Scientific and Technical Information (OSTI.GOV)
Field, Scott E.; Hesthaven, Jan S.; Lau, Stephen R.
In the context of metric perturbation theory for nonspinning black holes, extreme mass ratio binary systems are described by distributionally forced master wave equations. Numerical solution of a master wave equation as an initial boundary value problem requires initial data. However, because the correct initial data for generic-orbit systems are unknown, specification of trivial initial data is a common choice, despite being inconsistent and resulting in a solution which is initially discontinuous in time. As is well known, this choice leads to a burst of junk radiation which eventually propagates off the computational domain. We observe another potential consequence of trivial initial data: development of a persistent spurious solution, here referred to as the Jost junk solution, which contaminates the physical solution for long times. This work studies the influence of both types of junk on metric perturbations, waveforms, and self-force measurements, and it demonstrates that smooth modified source terms mollify the Jost solution and reduce junk radiation. Our concluding section discusses the applicability of these observations to other numerical schemes and techniques used to solve distributionally forced master wave equations.
A Nontriggered Burst Supplement to the BATSE Gamma-Ray Burst Catalogs
NASA Technical Reports Server (NTRS)
Kommers, Jefferson M.; Lewin, Walter H. G.; Kouveliotou, Chryssa; vanParadijs, Jan; Pendleton, Geoffrey N.; Meegan, Charles A.; Fishman, Gerald J.
2001-01-01
The Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory detects gamma-ray bursts (GRBs) with a real-time burst detection (or "trigger") system running onboard the spacecraft. Under some circumstances, however, a GRB may not activate the on-board burst trigger. For example, the burst may be too faint to exceed the on-board detection threshold, or it may occur while the on-board burst trigger is disabled for technical reasons. This paper describes a catalog of 873 "nontriggered" GRBs that were detected in a search of the archival continuous data from BATSE recorded between 1991 December 9.0 and 1997 December 17.0. For each burst, the catalog gives an estimated source direction, duration, peak flux, and fluence. Similar data are presented for 50 additional bursts of unknown origin that were detected in the 25-50 keV range; these events may represent the low-energy "tail" of the GRB spectral distribution. This catalog increases the number of GRBs detected with BATSE by 48% during the time period covered by the search.
A Non-Triggered Burst Supplement to the BATSE Gamma-Ray Burst Catalogs
NASA Technical Reports Server (NTRS)
Kommers, J.; Lewin, W. H.; Kouveliotou, C.; vanParadijs, J.; Pendleton, G. N.; Meegan, C. A.; Fishman, G. J.
1998-01-01
The Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory detects gamma-ray bursts (GRBs) with a real-time burst detection (or "trigger") system running onboard the spacecraft. Under some circumstances, however, a GRB may not activate the onboard burst trigger. For example, the burst may be too faint to exceed the onboard detection threshold, or it may occur while the onboard burst trigger is disabled for technical reasons. This paper is a catalog of such "non-triggered" GRBs that were detected in a search of the archival continuous data from BATSE. It lists 873 non-triggered bursts that were recorded between 1991 December 9.0 and 1997 December 17.0. For each burst, the catalog gives an estimated source direction, duration, peak flux, and fluence. Similar data are presented for 50 additional bursts of unknown origin that were detected in the 25-50 keV range; these events may represent the low-energy "tail" of the GRB spectral distribution. This catalog increases the number of GRBs detected with BATSE by 48% during the time period covered by the search.
Interaction of wave with a body submerged below an ice sheet with multiple arbitrarily spaced cracks
NASA Astrophysics Data System (ADS)
Li, Z. F.; Wu, G. X.; Ji, C. Y.
2018-05-01
The problem of wave interaction with a body submerged below an ice sheet with multiple arbitrarily spaced cracks is considered, based on the linearized velocity potential theory together with the boundary element method. The ice sheet is modeled as a thin elastic plate with uniform properties, and zero bending moment and shear force conditions are enforced at the cracks. The Green function satisfying all the boundary conditions including those at the cracks, apart from that on the body surface, is derived and expressed in an explicit integral form. The boundary integral equation for the velocity potential is constructed with an unknown source distribution over the body surface only. The wave/crack interaction problem without the body is first solved directly, without the need for a source distribution. Convergence and comparison studies are undertaken to show the accuracy and reliability of the solution procedure. Detailed numerical results for the hydrodynamic coefficients and wave exciting forces are provided for a body submerged below double cracks and an array of cracks. Some unique features are observed, and their mechanisms are analyzed.
Westgate, J.A.; Hamilton, T.D.; Gorton, M.P.
1983-01-01
Old Crow tephra is the first extensive Pleistocene tephra unit to be documented in the northwestern part of North America. It has a calc-alkaline dacitic composition with abundant pyroxene, plagioclase, and FeTi oxides, and minor hornblende, biotite, apatite, and zircon. Thin, clear, bubble-wall fragments are the dominant type of glass shard. This tephra can be recognized by its glass and phenocryst compositions, as determined by X-ray fluorescence, microprobe, and instrumental neutron activation techniques. It has an age between the limits of 60,000 and 120,000 yr, set by ¹⁴C and fission-track measurements, respectively. Old Crow tephra has been recognized in the Koyukuk Basin and Fairbanks region of Alaska, and in the Old Crow Lowlands of the northern Yukon Territory, some 600 km to the east-northeast. The source vent is unknown, but these occurrences, considered in relation to the distant locations of potential Quaternary volcanic sources, demonstrate the widespread distribution of this tephra and underscore its importance as a regional stratigraphic marker. © 1983.
Estimating Coastal Digital Elevation Model (DEM) Uncertainty
NASA Astrophysics Data System (ADS)
Amante, C.; Mesick, S.
2017-12-01
Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
Source apportionment of VOCs in the Los Angeles area using positive matrix factorization
NASA Astrophysics Data System (ADS)
Brown, Steven G.; Frankel, Anna; Hafner, Hilary R.
Eight 3-h speciated hydrocarbon measurements were collected daily by the South Coast Air Quality Management District (SCAQMD) as part of the Photochemical Assessment Monitoring Stations (PAMS) program during the summers of 2001-03 at two sites in the Los Angeles air basin, Azusa and Hawthorne. Over 30 hydrocarbons from over 500 samples at Azusa and 600 samples at Hawthorne were subsequently analyzed using the multivariate receptor model positive matrix factorization (PMF). At Azusa and Hawthorne, five and six factors were identified, respectively, with a good comparison between predicted and measured mass. At Azusa, evaporative emissions (a median of 31% of the total mass), motor vehicle exhaust (22%), liquid/unburned gasoline (27%), coatings (17%), and biogenic emissions (3%) factors were identified. Factors identified at Hawthorne were evaporative emissions (a median of 34% of the total mass), motor vehicle exhaust (24%), industrial process losses (15%), natural gas (13%), liquid/unburned gasoline (13%), and biogenic emissions (1%). Together, the median contribution from mobile source-related factors (exhaust, evaporative emissions, and liquid/unburned gasoline) was 80% and 71% at Azusa and Hawthorne, respectively, similar to previous source apportionment results using the chemical mass balance (CMB) model. There is a difference in the distribution among mobile source factors compared to the CMB work, with an increase in the contribution from evaporative emissions, though the cause (changes in emissions or differences between models) is unknown.
NASA Astrophysics Data System (ADS)
Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.
2015-08-01
A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady state conditions, starting from measurements of the concentration of the pollutants. The approach is based on the minimization of a fitness function employing a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test case domain (1000 m × 1000 m × 50 m) and experimental data, such as data from the Prairie Grass field experiments, in which about 600 receptors were located along five concentric semicircle arcs, and from the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
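A minimal sketch of the retrieval idea, assuming a toy isotropic exponential kernel in place of the full Gaussian dispersion model and entirely hypothetical domain, receptor, and GA settings (population size, mutation scales, and the hidden source are illustrative, not the paper's): a genetic-algorithm loop evolves candidate (x, y, rate) triples to minimize the misfit between modeled and measured concentrations.

```python
import math
import random

random.seed(1)

def plume(q, sx, sy, rx, ry):
    """Toy stand-in for a Gaussian dispersion kernel: concentration at
    receptor (rx, ry) from a source of rate q located at (sx, sy)."""
    d2 = (rx - sx) ** 2 + (ry - sy) ** 2
    return q * math.exp(-d2 / 5000.0)

# Synthetic "measurements" produced by a hidden source (hypothetical values).
TRUE = (40.0, 60.0, 3.0)                      # x, y, emission rate
receptors = [(20.0 * i, 20.0 * j) for i in range(5) for j in range(5)]
measured = [plume(TRUE[2], TRUE[0], TRUE[1], rx, ry) for rx, ry in receptors]

def fitness(ind):
    """Sum of squared residuals between modeled and measured concentrations."""
    x, y, q = ind
    return sum((plume(q, x, y, rx, ry) - m) ** 2
               for (rx, ry), m in zip(receptors, measured))

def mutate(ind, scale):
    """Gaussian mutation of each gene with its own scale."""
    return tuple(g + random.gauss(0.0, s) for g, s in zip(ind, scale))

# Plain elitist GA: keep the best candidates, refill with mutated copies.
pop = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0.1, 10))
       for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]
    pop = elite + [mutate(random.choice(elite), (2.0, 2.0, 0.2))
                   for _ in range(50)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

The recovered triple should land near the hidden source; extending the individual to several (x, y, q) triples is the natural way to search for multiple simultaneous sources, as the paper does.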
Star-disk interaction in Herbig Ae/Be stars
NASA Astrophysics Data System (ADS)
Speights, Christa Marie
2012-09-01
The question of the accretion mechanism in certain types of stars is important. Classical T Tauri stars (CTTS) accrete magnetospherically, and Herbig Ae/Be stars (higher-mass analogs to CTTS) are thought to also accrete magnetospherically, but the source of a kG magnetic field is unknown, since these stars have radiative interiors. For magnetospheric accretion, an equation has been derived (Hartmann, 2001) which relates the truncation radius, stellar radius, stellar mass, mass accretion rate, and magnetic field strength. Currently the magnetic field of Herbig stars is known only to lie somewhere between 0.1 kG and 10 kG. One goal of this research is to further constrain the magnetic field. To do so, I use the magnetospheric accretion equation. For CTTS, all of the variables in the equation can be measured, so I gather these data from the literature, test the equation, and find that it is consistent. I then apply the equation to Herbig Ae stars and find that the error introduced by using random inclinations is too large to lower the current upper limit of the magnetic field range. If Herbig Ae stars are higher-mass analogs to CTTS, they should have a similar magnetic field distribution. I compare the calculated Herbig Ae magnetic field distribution to several typical magnetic field distributions using the Kolmogorov-Smirnov test, and find that the data distribution does not match any of the distributions used. This means that Herbig Ae stars do not have well-ordered kG fields like CTTS.
Yang, Xunan; Yu, Liuqian; Chen, Zefang; Xu, Meiying
2016-01-01
Traditional risk assessment and source apportionment of sediments based on bulk polycyclic aromatic hydrocarbons (PAHs) can introduce biases due to unknown aging effects in various sediments. We used a mild solvent (hydroxypropyl-β-cyclodextrin) to extract the bioavailable fraction of PAHs (a-PAHs) from sediment samples collected in the Pearl River, southern China. We investigated the potential application of this technique for ecological risk assessments and source apportionment. We found that the distribution of PAHs was associated with human activities, that the a-PAHs accounted for a wide range (4.7%–21.2%) of total PAHs (t-PAHs), and that high-risk sites were associated with lower t-PAHs but higher a-PAHs. The correlation between a-PAHs and the sediment toxicity assessed using tubificid worms (r = −0.654, P = 0.021) was greater than that from the t-PAH-based risk assessment (r = −0.230, P = 0.472). Moreover, the insignificant correlation between a-PAH content and the mPEC-Q of low molecular weight PAHs implied a potential bias in t-PAH-based risk assessment. The source apportionment from mildly extracted fractions was consistent across different indicators and was in accordance with typical pollution sources. Our results suggest that mild extraction-based approaches reduce the potential error from aging effects because the mildly extracted PAHs provide a more direct indicator of bioavailability and fresher fractions in sediments. PMID:26976450
2014-07-01
In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 27-31 July 2014: autonomy for responding to unexpected events in strategy simulations, where knowledge engineering is often impractical due to high environment variance or unknown events.
1. Drop Structure on the Arizona Crosscut Canal. Photographer unknown, ...
1. Drop Structure on the Arizona Crosscut Canal. Photographer unknown, no date. Note that caption is incorrect: in relation to Camelback Mountain (rear), this can only be the Old Crosscut. Source: reprinted from the 13th Annual Report of the U.S. Geological Survey, 1893. - Old Crosscut Canal, North Side of Salt River, Phoenix, Maricopa County, AZ
The characteristics and impact of source of infection on sepsis-related ICU outcomes.
Jeganathan, Niranjan; Yau, Stephen; Ahuja, Neha; Otu, Dara; Stein, Brian; Fogg, Louis; Balk, Robert
2017-10-01
Source of infection is an independent predictor of sepsis-related mortality. To date, studies have failed to evaluate differences in septic patients based on the source of infection. This was a retrospective study of all patients with sepsis admitted to the ICU of a university hospital within a 12-month period. Sepsis due to an intravascular device or multiple sources had the highest rates of positive blood cultures and microbiology, whereas lung and abdominal sepsis had the lowest. The observed hospital mortality was highest for sepsis due to multiple sources and unknown cause, and lowest when due to abdominal, genitourinary (GU), or skin/soft tissue sources. Patients with sepsis due to the lungs, unknown, and multiple sources had the highest rates of multi-organ failure, whereas those with sepsis due to GU and skin/soft tissue sources had the lowest rates. Those with multisource sepsis had a significantly higher median ICU length of stay and hospital cost. There are significant differences in patient characteristics, microbiology positivity, organs affected, mortality, length of stay, and cost based on the source of sepsis. These differences should be considered in future studies to be able to deliver personalized care. Copyright © 2017 Elsevier Inc. All rights reserved.
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
THE CONTRIBUTION OF FERMI -2LAC BLAZARS TO DIFFUSE TEV–PEV NEUTRINO FLUX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
2017-01-20
The recent discovery of a diffuse cosmic neutrino flux extending up to PeV energies raises the question of which astrophysical sources generate this signal. Blazars are one class of extragalactic sources which may produce such high-energy neutrinos. We present a likelihood analysis searching for cumulative neutrino emission from blazars in the 2nd Fermi-LAT AGN catalog (2LAC) using the 2009-12 IceCube neutrino data set, which was optimized for the detection of individual sources. In contrast to those in previous searches with IceCube, the populations investigated contain up to hundreds of sources, the largest one being the entire blazar sample in the 2LAC catalog. No significant excess is observed, and upper limits for the cumulative flux from these populations are obtained. These constrain the maximum contribution of 2LAC blazars to the observed astrophysical neutrino flux to 27% or less between around 10 TeV and 2 PeV, assuming the equipartition of flavors on Earth and a single power-law spectrum with a spectral index of −2.5. We can also exclude that 2LAC blazars (and their subpopulations) emit more than 50% of the observed neutrinos up to a spectral index as hard as −2.2 in the same energy range. Our result accounts for the fact that the neutrino source count distribution is unknown, and it does not assume strict proportionality of the neutrino flux to the measured 2LAC γ-ray signal for each source. Additionally, we constrain recent models for neutrino emission by blazars.
Dupuis, Julian R; Guerrero, Felix D; Skoda, Steven R; Phillips, Pamela L; Welch, John B; Schlater, Jack L; Azeredo-Espin, Ana Maria L; Pérez de León, Adalberto A; Geib, Scott M
2018-05-19
New World screwworm (NWS), Cochliomyia hominivorax (Coquerel 1858) (Diptera: Calliphoridae), is a myiasis-causing fly that can be a serious threat to the health of livestock, wildlife, and humans. Its progressive eradication from the southern United States, Mexico, and Central America from the 1950s to 2000s is an excellent example of successful pest management using sterile insect technique (SIT). In late 2016, autochthonous NWS were detected in the Florida Keys, representing this species' first invasion in the United States in >30 yr. Rapid use of quarantine and SIT was successful in eliminating the infestation by early 2017; however, the geographic source of this infestation remains unknown. Here, we use amplicon sequencing to generate mitochondrial and nuclear sequence data representing all confirmed cases of NWS from this infestation, and compare these sequences to preexisting data sets sampling the native distribution of NWS. We ask two questions regarding the FL Keys outbreak. First, is this infestation the result of a single invasion from one source, or multiple invasions from different sources? And second, what is the geographic origin of this invasion? We found virtually no sequence variation between specimens collected from the FL Keys outbreak, which is consistent with a single source of introduction. However, we also found very little geographic resolution in any of the data sets, which precludes identification of the source of this outbreak. Our lack of success in answering our second question speaks to the need for finer-scale genetic or genomic assessments of NWS population structure, which would facilitate source determination of potential future outbreaks.
17. Photographic copy of photograph. Location unknown but assumed to ...
17. Photographic copy of photograph. Location unknown but assumed to be upper end of canal. Features no longer extant. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. MAIN (TITLED FLORENCE) CANAL, WASTEWAY, SLUICEWAY, & BRIDGE, 1/26/25. - San Carlos Irrigation Project, Marin Canal, Amhurst-Hayden Dam to Picacho Reservoir, Coolidge, Pinal County, AZ
Distributed subterranean exploration and mapping with teams of UAVs
NASA Astrophysics Data System (ADS)
Rogers, John G.; Sherrill, Ryan E.; Schang, Arthur; Meadows, Shava L.; Cox, Eric P.; Byrne, Brendan; Baran, David G.; Curtis, J. Willard; Brink, Kevin M.
2017-05-01
Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2017-01-01
Providers of payloads carried aboard the International Space Station must conduct analyses to demonstrate that any planned gaseous venting events generate no more than a certain level of material that may interfere with optical measurements from other experiments or payloads located nearby. This requirement is expressed in terms of a maximum column number density (CND). Depending on the level of rarefaction, such venting may be characterized by effusion for low flow rates, or by a sonic distribution at higher levels. Since the relative locations of other sensitive payloads are often unknown because they may refer to future projects, this requirement becomes a search for the maximum CND along any path. In another application, certain astronomical observations make use of CND to estimate light attenuation from a distant star through gaseous plumes, such as the Fermi Bubbles emanating from the vicinity of the black hole at the center of our Milky Way galaxy, in order to infer the amount of material being expelled via those plumes. This paper presents analytical CND expressions developed for general straight paths based upon a free molecule point source model for steady effusive flow and for a distribution fitted to model flows from a sonic orifice. Among other things, in this Mach number range it is demonstrated that the maximum CND from a distant location occurs along the path parallel to the source plane that intersects the plume axis. For effusive flows this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 43.
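The factor-of-two result for effusive flow can be checked numerically. The sketch below assumes the standard free-molecule cosine-law point source, with density falling as cos(θ)/r² (source strength set to 1; quadrature step counts and integration cutoffs are illustrative), and compares the CND along the axial ray from a point at distance d with the CND along the straight path through that same point parallel to the source plane.

```python
import math

def n(r, theta):
    """Free-molecule effusive point source: density ~ cos(theta)/r**2,
    with theta measured from the plume axis and unit source strength."""
    return math.cos(theta) / r ** 2

def cnd(path, a, b, steps=200000):
    """Column number density along a parametrized path, midpoint rule.
    `path(s)` returns (r, theta, |d path/ds|) at parameter value s."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        s = a + (i + 0.5) * h
        r, theta, ds = path(s)
        total += n(r, theta) * ds * h
    return total

d = 1.0  # distance from the source to the point where both paths cross the axis

# Path 1: along the plume axis from distance d out to (effectively) infinity.
axial = cnd(lambda s: (s, 0.0, 1.0), d, 1e4)

# Path 2: straight line through the same axis point, parallel to the source plane.
def parallel(x):
    r = math.hypot(d, x)
    return r, math.acos(d / r), 1.0

par = cnd(parallel, -1e3, 1e3)

print(par / axial)  # should approach 2 as the cutoffs grow
```

Analytically, the parallel path gives ∫ d/(d²+x²)^(3/2) dx = 2/d while the axial ray gives 1/d, reproducing the paper's factor of exactly two for the effusive case.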
The third catalog of active galactic nuclei detected by the Fermi large area telescope
Ackermann, M.; Ajello, M.; Atwood, W. B.; ...
2015-08-25
We present the third catalog of active galactic nuclei (AGNs) detected by the Fermi-LAT (3LAC). It is based on the third Fermi-LAT catalog (3FGL) of sources detected between 100 MeV and 300 GeV with a Test Statistic greater than 25, between 2008 August 4 and 2012 July 31. The 3LAC includes 1591 AGNs located at high Galactic latitudes (|b| > 10°), a 71% increase over the second catalog based on 2 years of data. There are 28 duplicate associations, thus 1563 of the 2192 high-latitude gamma-ray sources of the 3FGL catalog are AGNs. Most of them (98%) are blazars. About half of the newly detected blazars are of unknown type, i.e., they lack spectroscopic information of sufficient quality to determine the strength of their emission lines. Based on their gamma-ray spectral properties, these sources are evenly split between flat-spectrum radio quasars (FSRQs) and BL Lacs. The most abundant detected BL Lacs are of the high-synchrotron-peaked (HSP) type. About 50% of the BL Lacs have no measured redshift. A few new rare outliers (HSP-FSRQs and high-luminosity HSP BL Lacs) are reported. The general properties of the 3LAC sample confirm previous findings from earlier catalogs. The fraction of 3LAC blazars in the total population of blazars listed in BZCAT remains non-negligible even at the faint ends of the BZCAT-blazar radio, optical, and X-ray flux distributions, which hints that even the faintest known blazars could eventually shine in gamma-rays at LAT-detection levels. Furthermore, the energy-flux distributions of the different blazar populations are in good agreement with extrapolation from earlier catalogs.
Unveiling the magnetic structure of VHE SNRs/PWNe with XIPE, the x-ray imaging-polarimetry explorer
NASA Astrophysics Data System (ADS)
de Ona Wilhelmi, E.; Vink, J.; Bykov, A.; Zanin, R.; Bucciantini, N.; Amato, E.; Bandiera, R.; Olmi, B.; Uvarov, Yu.; XIPE Science Working Group
2017-01-01
The dynamics, energetics, and evolution of pulsar wind nebulae (PWNe) and supernova remnants (SNRs) are strongly affected by their magnetic field strength and distribution. They are usually strong, extended sources of non-thermal X-ray radiation, producing intrinsically polarised radiation. The energetic wind around pulsars produces a highly magnetised, structured flow, often displaying a jet and a torus and different features (e.g., wisps, knots). This magnetically dominated wind evolves as it moves away from the pulsar magnetosphere to the surrounding large-scale nebula, becoming kinetically dominated. Basic aspects such as how this conversion is produced, how the jets and torus are formed, and the level of turbulence in the nebula are still unknown. Likewise, the processes ruling the acceleration of particles in shell-like SNRs up to 10^15 eV, including the amplification of the magnetic field, are not yet clear. Imaging polarimetry in this regard is crucial to localise the regions of shock acceleration and to measure the strength and the orientation of the magnetic field at these emission sites. X-ray polarimetry with the X-ray Imaging Polarimetry Explorer (XIPE) will allow an understanding of the magnetic field structure and intensity in different regions of SNRs and PWNe, helping to unveil long-standing questions such as the acceleration of cosmic rays in SNRs and the magnetic-to-kinetic energy transfer. SNRs and PWNe also represent the largest population of Galactic very-high-energy gamma-ray sources; therefore the study of their magnetic distribution with XIPE will provide fundamental ingredients for the investigation of those sources at very high energies. We discuss the physics case related to SNRs and PWNe and the expectations for XIPE observations of some of the most prominent SNRs and PWNe.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and Exemptions (Distribution of Source Material Rule). The Distribution of Source Material Rule amended the NRC's...
Manzano, Carlos A; Marvin, Chris; Muir, Derek; Harner, Tom; Martin, Jonathan; Zhang, Yifeng
2017-05-16
The aromatic fractions of snow, lake sediment, and air samples collected during 2011-2014 in the Athabasca oil sands region were analyzed using two-dimensional gas chromatography following a nontargeted approach. Commonly monitored aromatics (parent and alkylated-polycyclic aromatic hydrocarbons and dibenzothiophenes) were excluded from the analysis, focusing mainly on other heterocyclic aromatics. The unknowns detected were classified into isomeric groups and tentatively identified using mass spectral libraries. Relative concentrations of heterocyclic aromatics were estimated and were found to decrease with distance from a reference site near the center of the developments and with increasing depth of sediments. The same heterocyclic aromatics identified in snow, lake sediments, and air were observed in extracts of delayed petroleum coke, with similar distributions. This suggests that petroleum coke particles are a potential source of heterocyclic aromatics to the local environment, but other oil sands sources must also be considered. Although the signals of these heterocyclic aromatics diminished with distance, some were detected at large distances (>100 km) in snow and surface lake sediments, suggesting that the impact of industry can extend >50 km. The list of heterocyclic aromatics and the mass spectral library generated in this study can be used for future source apportionment studies.
(U) An Analytic Examination of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-02
Ongoing efforts to validate a Richtmyer-Meshkov instability (RMI) based ejecta source model [1, 2, 3] in LANL ASC codes use ejecta areal masses derived from piezoelectric sensor data [4, 5, 6]. However, the standard technique for inferring masses from sensor voltages implicitly assumes instantaneous ejecta creation [7], which is not a feature of the RMI source model. To investigate the impact of this discrepancy, we define separate “areal mass functions” (AMFs) at the source and sensor in terms of typically unknown distribution functions for the ejecta particles, and derive an analytic relationship between them. Then, for the case of single-shock ejection into vacuum, we use the AMFs to compare the analytic (or “true”) accumulated mass at the sensor with the value that would be inferred from piezoelectric voltage measurements. We confirm the inferred mass is correct when creation is instantaneous, and furthermore prove that when creation is not instantaneous, the inferred values will always overestimate the true mass. Finally, we derive an upper bound for the error imposed on a perfect system by the assumption of instantaneous ejecta creation. When applied to shots in the published literature, this bound is frequently less than several percent. Errors exceeding 15% may require velocities or timescales at odds with experimental observations.
Prospects for the Detection of Fast Radio Bursts with the Murchison Widefield Array
NASA Astrophysics Data System (ADS)
Trott, Cathryn M.; Tingay, Steven J.; Wayth, Randall B.
2013-10-01
Fast radio bursts (FRBs) are short timescale (<1 s) astrophysical radio signals, presumed to be a signature of cataclysmic events of extragalactic origin. The discovery of six high-redshift events at ~1400 MHz from the Parkes radio telescope suggests that FRBs may occur at a high rate across the sky. The Murchison Widefield Array (MWA) operates at low radio frequencies (80-300 MHz) and is expected to detect FRBs due to its large collecting area (~2500 m²) and wide field-of-view (FOV, ~1000 deg² at ν = 200 MHz). We compute the expected number of FRB detections for the MWA assuming a source population consistent with the reported detections. Our formalism properly accounts for the frequency-dependence of the antenna primary beam, the MWA system temperature, and the unknown spectral index of the source population, for three modes of FRB detection: coherent; incoherent; and fast imaging. We find that the MWA's sensitivity and large FOV combine to provide the expectation of multiple detectable events per week in all modes, potentially making it an excellent high time resolution science instrument. Deviations of the expected number of detections from actual results will provide a strong constraint on the assumptions made for the underlying source population and intervening plasma distribution.
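The kind of rate scaling involved can be sketched as follows, assuming a Euclidean source population (N(>S) ∝ S^(-3/2)) and purely illustrative reference numbers for the all-sky rate, field of view, and MWA threshold; the paper's actual formalism additionally integrates over the frequency-dependent primary beam and system temperature.

```python
WHOLE_SKY_DEG2 = 41253.0

def mwa_rate(allsky_rate, s_ref_jy, fov_mwa_deg2, s_mwa_jy,
             nu_ref=1400.0, nu_mwa=200.0, alpha=-1.0):
    """Expected MWA detections per day, by scaling an all-sky daily rate
    measured above flux s_ref_jy at frequency nu_ref. A source with flux S
    at nu_ref has S * (nu_mwa/nu_ref)**alpha at nu_mwa, so the MWA threshold
    referred back to nu_ref is s_mwa_jy * (nu_ref/nu_mwa)**alpha; a Euclidean
    population then gives the familiar N(>S) ~ S**-1.5 count scaling."""
    s_eff = s_mwa_jy * (nu_ref / nu_mwa) ** alpha
    return (allsky_rate * (fov_mwa_deg2 / WHOLE_SKY_DEG2)
            * (s_eff / s_ref_jy) ** -1.5)

# Illustrative inputs only (not the paper's values): 1e4 bursts/sky/day
# above 1 Jy at 1400 MHz, a 1000 deg^2 MWA field of view, and a 30 Jy
# effective MWA detection threshold at 200 MHz.
rates = {a: mwa_rate(1.0e4, 1.0, 1000.0, 30.0, alpha=a)
         for a in (-2.0, -1.0, 0.0)}
for a in sorted(rates):
    print(a, rates[a])
```

The scan over the spectral index α shows why it dominates the uncertainty: steeper (more negative) spectra make bursts brighter at 200 MHz, sharply raising the predicted detection rate.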
Properties of Dust Obscured Galaxies in the Nep-Deep Field
NASA Astrophysics Data System (ADS)
Oi, Nagisa; Matsuhara, Hideo; Pearson, Chris; Buat, Veronique; Burgarella, Denis; Malkan, Matt; Miyaji, Takamitsu; AKARI-NEP Team
2017-03-01
We selected 47 DOGs at z∼1.5 using optical R (or r^{'}), AKARI 18 μm, and 24 μm colors in the AKARI North Ecliptic Pole (NEP) Deep survey field. Using the colors among 3, 4, 7, and 9 μm, we classified them into three groups: bump DOGs (23 sources), power-law DOGs (16 sources), and unknown DOGs (8 sources). We built spectral energy distributions (SEDs) with optical to far-infrared photometric data and investigated their properties using an SED-fitting method. We found that indicators of AGN activity, such as the AGN contribution to the infrared luminosity and the Chandra detection rate, differ significantly between bump and power-law DOGs, while stellar properties such as stellar mass and star-formation rate are similar. The specific star-formation rate range of power-law DOGs is slightly higher than that of bump DOGs, with wide overlap. Herschel/PACS detection rates are almost the same for bump and power-law DOGs, whereas SPIRE detection rates differ substantially between the two groups. These results might be explained by differences in dust temperature: both groups of DOGs host hot and/or warm dust (~50 K), and many bump DOGs also contain cooler dust (≤30 K).
NASA Astrophysics Data System (ADS)
Kurudirek, M.; Medhat, M. E.
2014-07-01
An alternative approach is used to measure normalized mass attenuation coefficients (μ/ρ) of materials with unknown thickness and density. The adopted procedure is based on the simultaneous emission of Kα and Kβ X-ray lines as well as gamma peaks from radioactive sources in transmission geometry. 109Cd and 60Co radioactive sources were used for the investigation. It has been observed that using simultaneous X- and/or gamma rays of different energies allows accurate determination of relative mass attenuation coefficients by eliminating the dependence of μ/ρ on the thickness and density of the material.
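The thickness- and density-independence described above follows directly from the Beer-Lambert law: the unknown areal density ρt cancels when attenuation at two energies is ratioed. A minimal sketch, with invented intensities rather than data from the study:

```python
import math

def relative_mass_attenuation(I0_a, I_a, I0_b, I_b):
    """Ratio (mu/rho)_a / (mu/rho)_b from transmitted intensities at two
    photon energies through the same sample.

    Beer-Lambert: I = I0 * exp(-(mu/rho) * rho * t), so
    ln(I0 / I) = (mu/rho) * rho * t, and taking the ratio at two
    energies cancels the unknown areal density rho * t.
    """
    return math.log(I0_a / I_a) / math.log(I0_b / I_b)

# Made-up example: a sample with rho*t = 2 g/cm^2 and mass attenuation
# coefficients 0.3 and 0.2 cm^2/g at the two line energies.
rho_t = 2.0
I0 = 1000.0
I_a = I0 * math.exp(-0.3 * rho_t)
I_b = I0 * math.exp(-0.2 * rho_t)
print(round(relative_mass_attenuation(I0, I_a, I0, I_b), 6))  # 1.5
```

The returned value 0.3/0.2 = 1.5 is recovered without ever knowing ρ or t, which is the point of the simultaneous-line technique.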
De Rybel, Bert; Adibi, Milad; Breda, Alice S; Wendrich, Jos R; Smit, Margot E; Novák, Ondřej; Yamaguchi, Nobutoshi; Yoshida, Saiko; Van Isterdael, Gert; Palovaara, Joakim; Nijsse, Bart; Boekschoten, Mark V; Hooiveld, Guido; Beeckman, Tom; Wagner, Doris; Ljung, Karin; Fleck, Christian; Weijers, Dolf
2014-08-08
Coordination of cell division and pattern formation is central to tissue and organ development, particularly in plants where walls prevent cell migration. Auxin and cytokinin are both critical for division and patterning, but it is unknown how these hormones converge upon tissue development. We identify a genetic network that reinforces an early embryonic bias in auxin distribution to create a local, nonresponding cytokinin source within the root vascular tissue. Experimental and theoretical evidence shows that these cells act as a tissue organizer by positioning the domain of oriented cell divisions. We further demonstrate that the auxin-cytokinin interaction acts as a spatial incoherent feed-forward loop, which is essential to generate distinct hormonal response zones, thus establishing a stable pattern within a growing vascular tissue. Copyright © 2014, American Association for the Advancement of Science.
Spatial Distribution of Small Water Body Types across Indiana Ecoregions
Due to their large numbers and biogeochemical activity, small water bodies (SWB), such as ponds and wetlands, can have substantial cumulative effects on hydrologic, biogeochemical, and biological processes; yet the spatial distributions of various SWB types are often unknown. Usi...
41. Photocopy of progress photograph ca. 1974, photographer unknown. Original ...
41. Photocopy of progress photograph ca. 1974, photographer unknown. Original photograph Property of United States Air Force, 21" Space Command. This is the source for views 41 to 47. CAPE COD AIR STATION PAVE PAWS FACILITY - SHOWING BUILDING "RED IRON" STEEL STRUCTURE NEARING COMPLETION. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
Murphy, H M; Thomas, M K; Medeiros, D T; McFADYEN, S; Pintar, K D M
2016-05-01
The estimated burden of endemic acute gastrointestinal illness (AGI) annually in Canada is 20.5 million cases. Approximately 4 million of these cases are domestically acquired and foodborne, yet the proportion of waterborne cases is unknown. A number of randomized controlled trials have been completed to estimate the influence of tap water from municipal drinking water plants on the burden of AGI. In Canada, 83% of the population (28 521 761 people) consumes tap water from municipal drinking water plants serving >1000 people. The drinking water-related AGI burden associated with the consumption of water from these systems in Canada is unknown. The objective of this research was to estimate the number of AGI cases attributable to consumption of drinking water from large municipal water supplies in Canada, using data from four household drinking water intervention trials. Canadian municipal water treatment systems were ranked into four categories based on source water type and quality, population size served, and treatment capability and barriers. The water treatment plants studied in the four household drinking water intervention trials were also ranked according to the aforementioned criteria, and the Canadian treatment plants were then scored against these criteria to develop four AGI risk groups. The proportion of illnesses attributed to distribution system events vs. source water quality/treatment failures was also estimated, to inform the focus of future intervention efforts. It is estimated that 334 966 cases (90% probability interval 183 006-501 026) of AGI per year are associated with the consumption of tap water from municipal systems that serve >1000 people in Canada. This study provides a framework for estimating the burden of waterborne illness at a national level and identifying existing knowledge gaps for future research and surveillance efforts, in Canada and abroad.
Using an epiphytic moss to identify previously unknown sources of atmospheric cadmium pollution
Geoffrey H. Donovan; Sarah E. Jovan; Demetrios Gatziolis; Igor Burstyn; Yvonne L. Michael; Michael C. Amacher; Vicente J. Monleon
2016-01-01
Urban networks of air-quality monitors are often too widely spaced to identify sources of air pollutants, especially if they do not disperse far from emission sources. The objectives of this study were to test the use of moss bio-indicators to develop a fine-scale map of atmospherically-derived cadmium and to identify the sources of cadmium in a complex urban setting....
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicated that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
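The role of the scaling parameter in the iterative scheme can be illustrated on a toy weighted least-squares problem. The sketch below emulates the network sums centrally and is only an illustration of the convergence mechanism under a simple Richardson iteration; it does not reproduce the authors' preconditioning, communication protocol, or finite-step variant.

```python
import numpy as np

# Synthetic global problem: stacked sub-system measurements y = A x + noise,
# with weight matrix W, for a shared 3-parameter vector x.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.standard_normal(6)
W = np.eye(6)

# Global WLS solves the normal equations (A^T W A) x = A^T W y.  A
# distributed scheme can run a scaled Richardson iteration
#     x <- x + alpha * (b - H x),
# where each sub-system holds its own share of H = A^T W A and
# b = A^T W y and the sums are formed by neighborhood communication;
# here the network sum is emulated in one place.
H = A.T @ W @ A
b = A.T @ W @ y
alpha = 1.0 / np.linalg.eigvalsh(H).max()  # scaling parameter for convergence
x = np.zeros(3)
for _ in range(20000):
    x = x + alpha * (b - H @ x)

print(np.allclose(x, np.linalg.solve(H, b)))  # True
```

Any step size 0 < alpha < 2/λ_max(H) makes the iteration contract; tuning alpha (and preconditioning H) controls the convergence rate, which is what the paper optimizes.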
Clustering redshift distributions for the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Helsby, Jennifer
Accurate determination of photometric redshifts and their errors is critical for large scale structure and weak lensing studies for constraining cosmology from deep, wide imaging surveys. Current photometric redshift methods suffer from bias and scatter due to incomplete training sets. Exploiting the clustering between a sample of galaxies for which we have spectroscopic redshifts and a sample of galaxies for which the redshifts are unknown can allow us to reconstruct the true redshift distribution of the unknown sample. Here we use this method in both simulations and early data from the Dark Energy Survey (DES) to determine the true redshift distributions of galaxies in photometric redshift bins. We find that cross-correlating with the spectroscopic samples currently used for training provides a useful test of photometric redshifts and provides reliable estimates of the true redshift distribution in a photometric redshift bin. We discuss the use of the cross-correlation method in validating template- or learning-based approaches to redshift estimation and its future use in Stage IV surveys.
NASA Astrophysics Data System (ADS)
Cassidy, J.; Zheng, Z.; Xu, Y.; Betz, V.; Lilge, L.
2017-04-01
Background: The majority of de novo cancers are diagnosed in low- and middle-income countries, which often lack the resources to provide adequate therapeutic options. Noninvasive or minimally invasive therapies such as photodynamic therapy (PDT) or photothermal therapies could become part of the overall treatment options in these countries. However, widespread acceptance is hindered by the current empirical training of surgeons in these optical techniques and a lack of easily usable treatment-optimizing tools. Methods: Based on the image-processing program ITK-SNAP and the publicly available FullMonte light propagation software, a workflow is proposed that allows for personalized PDT treatment planning: starting from contoured clinical CT or MRI images, it proceeds through generation of 3D tetrahedral models in silico, execution of the Monte Carlo simulation, and presentation of the 3D fluence rate (Φ, mW cm-2) distribution, from which a treatment plan optimizing photon source placement is developed. Results: Allowing 1-2 days for the installation of the required programs, novices can generate their first fluence (H, J cm-2) or Φ distribution in a matter of hours; with some training this is reduced to tens of minutes. Executing the photon simulation calculations is rapid and is not the performance-limiting step. The largest sources of error are uncertainties in the contouring and unknown tissue optical properties. Conclusions: The presented FullMonte simulation is the fastest tetrahedral-based photon propagation program and provides the basis for PDT treatment planning processes, enabling a faster proliferation of low-cost, minimally invasive personalized cancer therapies.
Microseismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-07-01
At the heart of microseismic event measurements is the task of estimating the locations of microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source-locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic, picking-free process that utilizes the full wavefield. However, FWI of microseismic events faces considerable nonlinearity due to the unknown source locations (space) and functions (time). We developed a source-function-independent FWI of microseismic events to invert for the source image, source function, and velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function, and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet, and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time, and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
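The convolution trick for cancelling the unknown source function can be sketched in a few lines. The residual construction below is an illustrative toy (the function name and synthetic traces are invented): it shows why convolving each synthetic trace with an observed reference trace, and vice versa, removes the wavelet and ignition time shared within each data set.

```python
import numpy as np

def source_independent_residual(d_obs, d_syn, ref=0):
    """Convolved-wavefield misfit that cancels unknown source wavelets.

    Each synthetic trace is convolved with an observed reference trace
    and vice versa.  If observed and synthetic data share the same
    Green's functions, convolution commutativity makes the two terms
    equal for ANY pair of source wavelets, so the residual vanishes.
    """
    return np.array([np.convolve(d_syn[i], d_obs[ref])
                     - np.convolve(d_obs[i], d_syn[ref])
                     for i in range(len(d_obs))])

# Demo: same Green's functions g, different (unknown) wavelets w1, w2.
rng = np.random.default_rng(0)
g = rng.standard_normal((3, 50))
w1, w2 = rng.standard_normal(10), rng.standard_normal(10)
d_obs = np.array([np.convolve(w1, gi) for gi in g])
d_syn = np.array([np.convolve(w2, gi) for gi in g])
print(np.abs(source_independent_residual(d_obs, d_syn)).max() < 1e-8)  # True
```

In an actual FWI the gradient of this residual with respect to the model is what the adjoint-state machinery supplies; the toy only demonstrates the wavelet cancellation.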
CLOTHES AS A SOURCE OF PARTICLES CONTRIBUTING TO THE "PERSONAL CLOUD"
Previous studies such as EPA's PTEAM Study have documented increased personal exposures to particles compared to either indoor or outdoor concentrations--a finding that bas been characterized as a "personal cloud." The sources of the personal cloud are unknown, but co...
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Lu, Laura
2008-01-01
This article provides the theory and application of the 2-stage maximum likelihood (ML) procedure for structural equation modeling (SEM) with missing data. The validity of this procedure does not require the assumption of a normally distributed population. When the population is normally distributed and all missing data are missing at random…
Fajardo, Geroncio C.; Posid, Joseph; Papagiotas, Stephen; Lowe, Luis
2015-01-01
There have been periodic electronic news media reports of potential bioterrorism-related incidents involving unknown substances (often referred to as “white powder”) since the 2001 intentional dissemination of Bacillus anthracis through the US Postal System. This study reviewed the number of unknown “white powder” incidents reported online by the electronic news media and compared them with unknown “white powder” incidents reported to the US Centers for Disease Control and Prevention (CDC) and the US Federal Bureau of Investigation (FBI) during a two-year period from June 1, 2009 to May 31, 2011. Results identified 297 electronic news media reports, 538 CDC reports, and 384 FBI reports of unknown “white powder.” This study showed that each of the three sources captured different unknown “white powder” incidents. However, the authors could not determine the public health implications of this discordance. PMID:25420771
2016-05-31
UMKC-YIP-TR-2016, May 2016, Technical Report: Prompt Neutron Spectrometry for Identification of SNM in Unknown Shielding. University of Missouri - Kansas City. Abbreviations: MSND: Micro-structured Neutron Detector; HRM: Handheld Radiation Monitor; PHS: Pulse Height Spectrum; ANI: Active Neutron Interrogation. Distribution Statement A.
Distributions of the Kullback-Leibler divergence with applications.
Belov, Dmitry I; Armstrong, Ronald D
2011-05-01
The Kullback-Leibler divergence (KLD) is a widely used method for measuring the fit of two distributions. In general, the distribution of the KLD is unknown. Under reasonable assumptions, common in psychometrics, the distribution of the KLD is shown to be asymptotically distributed as a scaled (non-central) chi-square with one degree of freedom or a scaled (doubly non-central) F. Applications of the KLD for detecting heterogeneous response data are discussed with particular emphasis on test security. © The British Psychological Society.
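For discrete distributions the KLD itself is straightforward to compute; a minimal sketch follows (the chi-square and F results quoted above concern the sampling distribution of such estimates, which this toy does not reproduce):

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    given as sequences of probabilities (natural log, i.e. nats).
    Terms with p_i = 0 contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(round(kld(p, q), 4))  # 0.0253
print(kld(p, p))            # 0.0
```

Note that D(p||q) is asymmetric and is zero exactly when the two distributions agree, which is why it serves as a fit measure between an observed response pattern and a reference model.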
Discovering Peripheral Arterial Disease Cases from Radiology Notes Using Natural Language Processing
Savova, Guergana K.; Fan, Jin; Ye, Zi; Murphy, Sean P.; Zheng, Jiaping; Chute, Christopher G.; Kullo, Iftikhar J.
2010-01-01
As part of the Electronic Medical Records and Genomics Network, we applied, extended and evaluated an open source clinical Natural Language Processing system, Mayo’s Clinical Text Analysis and Knowledge Extraction System, for the discovery of peripheral arterial disease cases from radiology reports. The manually created gold standard consisted of 223 positive, 19 negative, 63 probable and 150 unknown cases. Overall accuracy agreement between the system and the gold standard was 0.93 as compared to a named entity recognition baseline of 0.46. Sensitivity for the positive, probable and unknown cases was 0.93–0.96, and for the negative cases was 0.72. Specificity and negative predictive value for all categories were in the 90’s. The positive predictive value for the positive and unknown categories was in the high 90’s, for the negative category was 0.84, and for the probable category was 0.63. We outline the main sources of errors and suggest improvements. PMID:21347073
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
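Step (1), the trace-by-trace deconvolution, can be sketched as a water-level-regularized spectral division. The regularization choice and the synthetic Gaussian wavelets below are illustrative assumptions, not the authors' implementation; the sketch only shows how the unknown excitation time drops out.

```python
import numpy as np

def deconvolve_trace(master, trace, eps=1e-3):
    """Deconvolution interferometry step: spectral division of `trace`
    by `master` with water-level regularization.  The source wavelet
    and (unknown) excitation time common to both traces cancel,
    leaving a pulse at their relative delay."""
    M, T = np.fft.rfft(master), np.fft.rfft(trace)
    power = np.abs(M) ** 2
    water = eps * power.max()
    return np.fft.irfft(T * np.conj(M) / np.maximum(power, water),
                        n=len(trace))

# Two copies of the same wavelet, arriving at samples 50 and 65: the
# deconvolved (virtual) trace peaks at the 15-sample relative delay,
# whatever the absolute excitation time was.
n = np.arange(256)
wavelet = lambda t0: np.exp(-0.5 * ((n - t0) / 4.0) ** 2)
virtual = deconvolve_trace(wavelet(50), wavelet(65))
print(int(np.argmax(virtual)))  # 15
```

Because the squared wavelet spectrum is divided out rather than retained (as in cross-correlation), the virtual gather is both wavelet-free and sharper, which is the resolution gain the abstract reports.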
8 CFR 341.2 - Examination upon application.
Code of Federal Regulations, 2010 CFR
2010-01-01
... claimed is precluded by reason of death, refusal to testify, unknown whereabouts, advanced age, mental or physical incapacity, or severe illness or infirmity, another witness or witnesses shall be produced. A... as the relationship between the claimant and the citizen source or sources; the citizenship of the...
8 CFR 341.2 - Examination upon application.
Code of Federal Regulations, 2011 CFR
2011-01-01
... claimed is precluded by reason of death, refusal to testify, unknown whereabouts, advanced age, mental or physical incapacity, or severe illness or infirmity, another witness or witnesses shall be produced. A... as the relationship between the claimant and the citizen source or sources; the citizenship of the...
Revealing sources and chemical identity of iron ligands across the California Current System
NASA Astrophysics Data System (ADS)
Boiteau, R.; Repeta, D.; Fitzsimmons, J. N.; Parker, C.; Twining, B. S.; Baines, S.
2016-02-01
The California Current System is one of the most productive regions of the ocean, fueled by the upwelling of nutrient rich water. Differences in the supply of micronutrient iron to surface waters along the coast lead to a mosaic of iron-replete and iron-limited conditions across the region, affecting primary production and community composition. Most of the iron in this region is supplied by upwelling of iron from the benthic boundary layer that is complexed by strong organic ligands. However, the source, identity, and bioavailability of these ligands are unknown. Here, we used novel hyphenated chromatography mass spectrometry approaches to structurally characterize organic ligands across the region. With these methods, iron ligands are detected with liquid chromatography coupled to inductively coupled plasma mass spectrometry (LC-ICPMS), and then their mass and fragmentation spectra are determined by high resolution electrospray ionization mass spectrometry (LC-ESIMS). Iron isotopic exchange was used to compare the relative binding strengths of different ligands. Our survey revealed a broad range of ligands from multiple sources. Benthic boundary layers and anoxic sediments were sources of structurally amorphous weak ligands, likely organic degradation products, as well as siderophores, strong iron binding molecules that facilitate iron acquisition. In the euphotic zone, marine microbes and zooplankton grazing produced a wide distribution of other compounds that included known and novel siderophores. This work demonstrates that the chemical nature of ligands from different sources varies substantially and has important implications for iron biogeochemical cycling and availability to members of the microbial community.
3-D Deep Penetration Neutron Imaging of Thick Absorbing and Diffusive Objects Using Transport Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragusa, Jean; Bangerth, Wolfgang
2011-08-01
A current area of research interest in national security is to effectively and efficiently determine the contents of the many shipping containers that enter ports in the United States. This interest comes as a result of the 9/11 Commission Act passed by Congress in 2007 that requires 100% of inbound cargo to be scanned by 2012. It appears that this requirement will be achieved by 2012, but as of February 2009 eighty percent of the 11.5 million inbound cargo containers were being scanned. The systems used today in all major U.S. ports to determine the presence of radioactive material within cargo containers are Radiation Portal Monitors (RPMs). These devices generally exist in the form of a gate or series of gates that the containers can be driven through and scanned. The monitors are effective for determining the presence of radiation, but offer little more information about the particular source. This simple pass-fail system leads to many false alarms, as many everyday items emit radiation, including smoke detectors due to the americium-241 source contained inside; bananas, milk, cocoa powder, and lean beef due to trace amounts of potassium-40; and fire brick and kitty litter due to their high clay content, which often contains traces of uranium and thorium. In addition, if an illuminating source is imposed on the boundary of the container, the contents of the container may become activated. These materials include steel, aluminum, and many agricultural products. Current portal monitors also have not proven to be very effective at identifying natural or highly enriched uranium (HEU). In fact, the best available Advanced Spectroscopic Portal monitors (ASP) are only capable of identifying bare HEU 70-88% of the time, and masked HEU and depleted uranium (DU) only 53 percent of the time. Therefore, a better algorithm that uses more information, collected from better detectors, about the specific material distribution within the container is desired.
The work reported here explores the inverse problem of optical tomography applied to heterogeneous domains. The neutral particle transport equation was used as the forward model for how neutral particles stream through and interact within these heterogeneous domains. A constrained optimization technique that uses Newton's method served as the basis of the inverse problem. Optical tomography aims at reconstructing the material properties using (a) illuminating sources and (b) detector readings. However, accurate simulations for radiation transport require that the particle (gamma and/or neutron) energy be appropriately discretized in the multigroup approximation. This, in turn, yields optical tomography problems where the number of unknowns grows (1) about quadratically with respect to the number of energy groups, G (notably to reconstruct the scattering matrix), and (2) linearly with respect to the number of unknown material regions. As pointed out, a promising approach could rely on algorithms to appropriately select a material type per material zone rather than G^2 values. This approach, though promising, still requires further investigation: (a) when switching from cross-section value unknowns to material type indices (discrete integer unknowns), integer programming techniques are needed since derivative information is no longer available; and (b) the issue of selecting the initial material zoning remains. The work reported here proposes an approach to solve the latter item, whereby a material zoning is proposed using one-group or few-group transport approximations. The capabilities and limitations of the presented method were explored; they are briefly summarized next and later described in fuller detail in the Appendices.
The major factors that influenced the ability of the optimization method to reconstruct the cross sections of these domains included the locations of the sources used to illuminate the domains, the number of separate experiments used in the reconstruction, the locations where measurements were collected, the optical thickness of the domain, the amount of signal noise and signal bias applied to the measurements, and the initial guess for the cross-section distribution. All of these factors were explored for problems with and without scattering. Increasing the number of source and measurement locations and experiments generally was more successful at reconstructing optically thicker domains while producing less error in the image. The maximum optical thickness that could be reconstructed with this method was ten mean free paths for a pure absorber and two mean free paths for scattering problems. Applying signal noise and signal bias to the measured fluxes produced more error in the reconstructed image. Generally, Newton's method was more successful at reconstructing domains from an initial guess for the cross sections that was greater in magnitude than their true values than from an initial guess that was lower in magnitude.
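The mechanics of the Newton-based reconstruction can be illustrated on the simplest possible case: a single homogeneous zone probed by one transmission measurement. This toy (function name, starting guess, and numbers are invented for illustration) shows only the Newton step on the measurement residual, not the full multigroup, multi-zone constrained optimization described above.

```python
import math

def reconstruct_sigma(measured_flux, phi0=1.0, length=1.0,
                      sigma0=0.5, iters=25):
    """Toy one-zone inverse problem: recover a total cross section from
    a single transmission measurement phi = phi0 * exp(-sigma * L),
    using Newton's method on the residual r(sigma) = phi(sigma) - phi_meas.
    """
    sigma = sigma0
    for _ in range(iters):
        phi = phi0 * math.exp(-sigma * length)
        r = phi - measured_flux
        dr_dsigma = -length * phi  # analytic Jacobian of the residual
        sigma -= r / dr_dsigma     # Newton update
    return sigma

# "Measure" transmission through a slab with sigma = 2.0, then invert:
print(round(reconstruct_sigma(math.exp(-2.0)), 6))  # 2.0
```

In the full problem the residual is a vector over all detectors and experiments, the Jacobian comes from adjoint transport solves, and the dependence of the initial-guess quality noted above corresponds to choosing `sigma0` here.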
CF NEUTRON TIME OF FLIGHT TRANSMISSION FOR MATERIAL IDENTIFICATION FOR WEAPONS TRAINERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihalczo, John T; Valentine, Timothy E; Blakeman, Edward D
2011-01-01
The neutron transmission, elastic scattering, and non-elastic reactions can be used to distinguish various isotopes. Neutron transmission as a function of energy can be used in some cases to identify materials in unknown objects. A time-tagged californium source that provides a fission spectrum of neutrons is a useful source for neutron time-of-flight (TOF) transmission measurements. Many nuclear weapons trainer units for a particular weapons system (no fissile material, but of the same weight and center of gravity) in shipping containers were returned to the National Nuclear Security Administration Y-12 National Security Complex in the mid 1990s. Nuclear Materials Identification System (NMIS) measurements with a time-tagged californium neutron source were used to verify that these trainers did not contain fissile material. In these blind tests, the time distributions of neutrons through the containers were measured as a function of position to locate the approximate center of the trainer in the container. Measurements were also performed with an empty container. TOF template matching measurements were then performed at this location for a large number of units. In these measurements, the californium source was located on one end of the container and a proton recoil scintillator was located on the other end. The variations in the TOF transmission for times corresponding to 1 to 5 MeV were significantly larger than statistical. Further examination of the time distribution or the energy dependence revealed that these variations corresponded to the variations in the neutron cross section of aluminum averaged over the energy resolution of the californium TOF measurement with a flight path of about 90 cm. Measurements using different thicknesses of aluminum were also performed with the source and detector separated by the same distance as for the trainer measurements.
These comparison measurements confirmed that the material in the trainers was aluminum, and the total thickness of aluminum through the trainers was determined. This is an example of how californium transmission TOF measurements can be used to identify materials.
The first catalog of active galactic nuclei detected by the Fermi Large Area Telescope
Abdo, A. A.; Ackermann, M.; Ajello, M.; ...
2010-04-29
Here, we present the first catalog of active galactic nuclei (AGNs) detected by the Large Area Telescope (LAT), corresponding to 11 months of data collected in scientific operation mode. The First LAT AGN Catalog (1LAC) includes 671 γ-ray sources located at high Galactic latitudes (|b|>10°) that are detected with a test statistic greater than 25 and associated statistically with AGNs. Some LAT sources are associated with multiple AGNs, and consequently, the catalog includes 709 AGNs, comprising 300 BL Lacertae objects, 296 flat-spectrum radio quasars, 41 AGNs of other types, and 72 AGNs of unknown type. We also classify the blazars based on their spectral energy distributions as archival radio, optical, and X-ray data permit. In addition to the formal 1LAC sample, we provide AGN associations for 51 low-latitude LAT sources and AGN "affiliations" (unquantified counterpart candidates) for 104 high-latitude LAT sources without AGN associations. The overlap of the 1LAC with existing γ-ray AGN catalogs (LBAS, EGRET, AGILE, Swift, INTEGRAL, TeVCat) is briefly discussed. Various properties, such as γ-ray fluxes and photon power-law spectral indices, redshifts, γ-ray luminosities, variability, and archival radio luminosities, and their correlations are presented and discussed for the different blazar classes. Lastly, we compare the 1LAC results with predictions regarding the γ-ray AGN populations, and we comment on the power of the sample to address the question of the blazar sequence.
Landmeyer, James E.; Campbell, Bruce G.
2010-01-01
McBee is a small town of about 700 people located in Chesterfield County, South Carolina, in the Sandhills region of the upper Coastal Plain. The halogenated organic compounds ethylene dibromide (EDB) and dibromochloropropane (DBCP) have been detected in several public and domestic supply and irrigation wells since 2002 at concentrations above their U.S. Environmental Protection Agency Maximum Contaminant Levels of 0.05 and 0.2 microgram per liter (µg/L), respectively. The source(s) and release histories of EDB and DBCP to local groundwater are unknown, but they are believed to be related to the historical use of these compounds between the 1940s and their ban in the late 1970s as fumigants to control nematode damage in peach orchards. However, gasoline and jet-fuel supplies also contained EDB and are an alternative source of contamination to groundwater. The detection of EDB and DBCP in water wells has raised health concerns because groundwater is the sole source of water supply in the McBee area. In April 2010, a forensic, geochemical-based investigation was initiated by the U.S. Geological Survey in cooperation with the Alligator Rural Water & Sewer Company to provide additional data regarding EDB and DBCP in local groundwater. The investigation includes an assessment of the use, release, and disposal history of EDB and DBCP in the area, the distribution of EDB and DBCP concentrations in the unsaturated zone, and their transport and fate in groundwater.
Fish early life stages are highly sensitive to exposure to persistent bioaccumulative toxicants (PBTs). The factors that contribute to this are unknown, but may include the distribution of PBTs to sensitive tissues during critical stages of development. Multiphoton laser scannin...
Spatial probability models of fire in the desert grasslands of the southwestern USA
USDA-ARS?s Scientific Manuscript database
Fire is an important driver of ecological processes in semiarid environments; however, the role of fire in desert grasslands of the Southwestern US is controversial and the regional fire distribution is largely unknown. We characterized the spatial distribution of fire in the desert grassland region...
Milman, Boris L
2005-01-01
A library consisting of 3766 MS(n) spectra of 1743 compounds, including 3126 MS2 spectra acquired mainly on ion trap (IT) and triple-quadrupole (QqQ) instruments, was compiled from numerous collections/sources. The ionization techniques were mainly electrospray ionization, along with atmospheric pressure chemical ionization and chemical ionization. The library was tested for performance in the identification of unknowns; in this context, this work is believed to be the largest of all known tests of product-ion mass spectral libraries. The MS2 spectra of the same compounds from different collections were in turn divided into spectra of 'unknown' and reference compounds. For each particular compound, library searches were performed, resulting in selection by taking into account the best matches for each spectral collection/source. Within each collection/source, replicate MS2 spectra differed in the collision energy used. Overall, there were up to 950 search results giving the best match factors and their ranks in the corresponding hit lists. In general, the correct answers were obtained in the 1st rank in up to 60% of the search results when retrieved with (on average) 2.2 'unknown' and 6.2 reference replicates per compound. With two or more replicates of both 'unknown' and reference spectra (the average numbers of replicates were 4.0 and 7.8, respectively), the fraction of correct answers in the 1st rank increased to 77%. This value is close to the performance of established electron ionization mass spectral libraries (up to 79%) found by other workers. The hypothesis that MS2 spectra better match reference spectra acquired on the same type of tandem mass spectrometer (IT or QqQ) was neither strongly proved nor rejected here. The present work shows that MS2 spectral libraries containing sufficiently many different entries for each compound are efficient for the identification of unknowns and suitable for use with different tandem mass spectrometers.
2005 John Wiley & Sons, Ltd.
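As an illustration of how such library searches rank candidates, a common choice of match factor is the normalized dot product between peak lists. The sketch below is a generic version of that idea, not necessarily the specific metric used in this study:

```python
import math

def match_factor(spec_a, spec_b):
    """Dot-product match factor on a 0-999 scale between two spectra,
    each given as a dict mapping m/z to intensity."""
    mzs = sorted(set(spec_a) | set(spec_b))
    a = [spec_a.get(mz, 0.0) for mz in mzs]
    b = [spec_b.get(mz, 0.0) for mz in mzs]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 999.0 * dot / (na * nb) if na and nb else 0.0
```

An 'unknown' spectrum is scored against every reference entry, and the hit list is ranked by descending match factor; the "1st rank" statistics above count how often the correct compound tops that list.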
Swift J181723.1-164300 is likely a new bursting neutron star low-mass X-ray binary
NASA Astrophysics Data System (ADS)
Parikh, Aastha; Wijnands, Rudy; Degenaar, Nathalie; Altamirano, Diego
2017-08-01
On 28 July 2017 Swift/BAT triggered (#00765081) on an event corresponding to a previously unknown source (Barthelmy et al. 2017, GCN #21369, #21385). Its properties suggested it was likely a Galactic source and not a gamma-ray burst.
ERIC Educational Resources Information Center
Soil Conservation Service (USDA), Washington, DC.
Nonpoint source pollution is both a relatively recent concern and a complex phenomenon with many unknowns. Knowing the extent to which agricultural sources contribute to the total pollutant load, the extent to which various control practices decrease this load, and the effect of reducing the pollutants delivered to a water body are basic to the…
The chemistry of poisons in amphibian skin.
Daly, J W
1995-01-01
Poisons are common in nature, where they often serve the organism in chemical defense. Such poisons either are produced de novo or are sequestered from dietary sources or symbiotic organisms. Among vertebrates, amphibians are notable for the wide range of noxious agents that are contained in granular skin glands. These compounds include amines, peptides, proteins, steroids, and both water-soluble and lipid-soluble alkaloids. With the exception of the alkaloids, most seem to be produced de novo by the amphibian. The skin of amphibians contains many structural classes of alkaloids previously unknown in nature. These include the batrachotoxins, which have recently been discovered to also occur in skin and feathers of a bird, the histrionicotoxins, the gephyrotoxins, the decahydroquinolines, the pumiliotoxins and homopumiliotoxins, epibatidine, and the samandarines. Some amphibian skin alkaloids are clearly sequestered from the diet, which consists mainly of small arthropods. These include pyrrolizidine and indolizidine alkaloids from ants, tricyclic coccinellines from beetles, and pyrrolizidine oximes, presumably from millipedes. The sources of other alkaloids in amphibian skin, including the batrachotoxins, the decahydroquinolines, the histrionicotoxins, the pumiliotoxins, and epibatidine, are unknown. While it is possible that these are produced de novo or by symbiotic microorganisms, it appears more likely that they are sequestered by the amphibians from as yet unknown dietary sources. PMID:7816854
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Wright, T. J.
2006-12-01
We developed a method of geodetic data inversion for the slip distribution on a fault with an unknown dip angle. When the fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining the slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault; in this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to balance, in a natural way, the reciprocal requirements of model resolution and estimation error. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of the observed data to the smoothness constraints is objectively determined. In this study, by also using ABIC to determine the optimal dip angle, we resolved the non-linearity of the inverse problem.
We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously reported.
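The dip-search strategy can be illustrated with a deliberately simplified toy model: for each trial dip angle, a smoothed least-squares slip inversion is scored with an ABIC-like criterion minimized over the smoothness weight, and the dip with the lowest score is selected. Everything below (the kernel, noise level, grid, and the simplified ABIC expression) is an assumption for illustration, not the elastic dislocation Green's functions of the actual study:

```python
import numpy as np

rng = np.random.default_rng(0)

def greens(dip_deg, n_sta=25, n_patch=10):
    """Toy forward operator: surface response to slip on a planar fault
    whose patch positions depend on the dip angle (illustrative kernel)."""
    dip = np.radians(dip_deg)
    x = np.linspace(-5.0, 5.0, n_sta)          # station coordinates
    s = np.linspace(0.5, 5.0, n_patch)         # down-dip patch distances
    xi, zi = s * np.cos(dip), s * np.sin(dip)  # patch positions
    return 1.0 / ((x[:, None] - xi[None, :])**2 + zi[None, :]**2 + 1.0)

def abic(G, d, L, alphas=np.logspace(-3, 2, 30)):
    """Minimum of a simplified ABIC-like score over the smoothness weight."""
    N, M = G.shape
    best = np.inf
    for a2 in alphas:
        A = G.T @ G + a2 * (L.T @ L)
        m = np.linalg.solve(A, G.T @ d)        # regularized least squares
        s = np.sum((d - G @ m)**2) + a2 * np.sum((L @ m)**2)
        score = N * np.log(s) + np.log(np.linalg.det(A)) - M * np.log(a2)
        best = min(best, score)
    return best

# synthetic data from a "true" dip of 40 degrees with a smooth slip profile
n_patch = 10
L = np.diff(np.eye(n_patch), axis=0)           # first-difference smoother
true_slip = np.exp(-0.5 * ((np.arange(n_patch) - 4.5) / 2.0)**2)
d = greens(40.0) @ true_slip + 0.001 * rng.standard_normal(25)

dips = np.arange(20.0, 61.0, 5.0)
best_dip = min(dips, key=lambda dip: abic(greens(dip), d, L))
```

The outer grid search over dip replaces the nine-parameter non-linear fit; the inner loop is the objective hyperparameter selection the abstract attributes to ABIC.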
Leptospirosis in Mexico: Epidemiology and Potential Distribution of Human Cases
Sánchez-Montes, Sokani; Espinosa-Martínez, Deborah V.; Ríos-Muñoz, César A.; Berzunza-Cruz, Miriam; Becker, Ingeborg
2015-01-01
Background: Leptospirosis is widespread in Mexico, yet the potential distribution and risk of the disease remain unknown. Methodology/Principal Findings: We analysed morbidity and mortality according to age and gender based on three sources of data reported by the Ministry of Health and the National Institute of Geography and Statistics of Mexico, for the decade 2000–2010. A total of 1,547 cases were reported in 27 states, the majority of which were registered during the rainy season, and the most affected age group was 25–44 years old. Although leptospirosis has been reported as an occupational disease of males, analysis of morbidity in Mexico showed no male preference. A total of 198 deaths were registered in 21 states, mainly in urban settings. Mortality was higher in males (61.1%) than in females (38.9%), and the case fatality ratio was also higher in males. The overall case fatality ratio in Mexico was elevated (12.8%) compared to other countries. We additionally determined the potential disease distribution by examining the spatial epidemiology combined with spatial modeling using ecological niche modeling techniques. We identified regions where leptospirosis could be present and created a potential distribution map using bioclimatic variables derived from temperature and precipitation. Our data show that the distribution of the cases was more related to temperature (75%) than to precipitation variables. Ecological niche modeling showed predictive areas that were widely distributed in central and southern Mexico, excluding areas characterized by extreme climates. Conclusions/Significance: Epidemiological surveillance of leptospirosis is recommended in Mexico, since 55.7% of the country has environmental conditions fulfilling the criteria that favor the presence of the disease. PMID:26207827
NASA Technical Reports Server (NTRS)
Laster, Rachel M.
2004-01-01
Scientists in the Office of Life and Microgravity Sciences and Applications within the Microgravity Research Division oversee studies of important physical, chemical, and biological processes in the microgravity environment. Research is conducted in microgravity because of the beneficial results it yields for experiments. When research is done in normal gravity, scientists are limited to results that are affected by the gravity of Earth. Microgravity provides an environment where solid, liquid, and gas can be observed in a natural state of free fall and where many different variables are eliminated. One challenge that NASA faces is that space flight opportunities must be used effectively and efficiently in order to ensure that the most scientifically promising research is conducted. Different vibratory sources are continually active aboard the International Space Station (ISS). These include crew exercise, experiment setup, machinery startup (life support fans, pumps, freezer/compressor, centrifuge), thruster firings, and some unknown events. The Space Acceleration Measurement System (SAMS), which provides the hardware and is carefully positioned aboard the ISS, along with the Microgravity Environment Monitoring System (MEMS), which provides the software and is located at NASA Glenn, is used to detect these vibratory sources aboard the ISS and recognize them as disturbances. The various vibratory disturbances can sometimes be harmful to scientists' research projects. Some vibratory disturbances are recognized by the MEMS database and some are not. The unknown events that occur aboard the International Space Station are the ones of major concern. To better aid the research experiments, the unknown events are identified and verified as such. Features such as frequency, acceleration level, and the time and date of recognition of the new patterns are stored in an Excel database.
My task is to carefully synthesize the frequency and acceleration patterns of unknown events within the Excel database into a new file to determine whether or not certain information that is received is considered a real vibratory source. Once considered a vibratory source, further analysis is carried out. The resulting information is used to retrain the MEMS to recognize these as known patterns. These different vibratory disturbances are constantly monitored to observe whether, in any way, the disturbances affect the microgravity environment that research experiments are exposed to. If a disturbance has little or no effect on the experiments, then research is continued. However, if the disturbance is harmful to the experiment, scientists act accordingly by either minimizing the source or terminating the research, so that neither NASA's time nor money is wasted.
Micro-seismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic sources as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Moreover, conventional micro-seismic source location methods in many cases require manual picking of traveltime arrivals, which not only demands effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces considerable nonlinearity due to the unknown source locations (space) and functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, source function, and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function, and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet, and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time, and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
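The convolution idea can be checked with a toy example: if obs = g ⊛ w_true and syn = g ⊛ w_guess for any two wavelets w, then obs_i ⊛ syn_ref = syn_i ⊛ obs_ref whenever the Green's functions g are correct, so the residual is independent of both wavelets. A sketch under these toy 1-D assumptions (random traces standing in for real wavefields):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# toy Green's functions (earth responses) for a receiver and a reference trace
g_a, g_ref = rng.standard_normal(n), rng.standard_normal(n)
w_true = rng.standard_normal(16)   # unknown true source wavelet
w_guess = rng.standard_normal(16)  # arbitrary wavelet used in modeling

conv = lambda a, b: np.convolve(a, b)
obs_a, obs_ref = conv(g_a, w_true), conv(g_ref, w_true)
syn_a, syn_ref = conv(g_a, w_guess), conv(g_ref, w_guess)

# Convolved-wavefield residual: both terms equal g_a * g_ref * w_true * w_guess
# (convolution is commutative and associative), so the residual vanishes when
# the model responses are correct, regardless of either wavelet.
residual = conv(obs_a, syn_ref) - conv(syn_a, obs_ref)
```

In the actual FWI the residual is non-zero whenever the velocity model is wrong, and its adjoint drives the model update; the point of the toy is only that the unknown ignition time and wavelet drop out.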
Loss-tolerant measurement-device-independent quantum private queries
NASA Astrophysics Data System (ADS)
Zhao, Liang-Yuan; Yin, Zhen-Qiang; Chen, Wei; Qian, Yong-Jun; Zhang, Chun-Mei; Guo, Guang-Can; Han, Zheng-Fu
2017-01-01
Quantum private queries (QPQ) is an important cryptographic protocol aiming to protect both the user's and the database's privacy when the database is queried privately. Recently, a variety of practical QPQ protocols based on quantum key distribution (QKD) have been proposed. However, for QKD-based QPQ the user's imperfect detectors can be subjected to detector-side-channel attacks launched by the dishonest owner of the database. Here, we present a simple example that shows how the detector-blinding attack can completely compromise the security of QKD-based QPQ. To remove all known and unknown detector side channels, we propose a solution of measurement-device-independent QPQ (MDI-QPQ) with single-photon sources. The security of the proposed protocol has been analyzed under some typical attacks. Moreover, we prove that its security is completely loss independent. The results show that practical QPQ will retain the same degree of privacy as before even with seriously uncharacterized detectors.
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.
1990-01-01
Delay in the spin-up of precipitation early in numerical atmospheric forecasts is a deficiency correctable by diabatic initialization combined with diabatic forcing. For either to be effective requires some knowledge of the magnitude and vertical placement of the latent heating fields. Until recently the best source of cloud and rain water data was the remotely sensed vertically integrated precipitation rate or liquid water content; the vertical placement of the condensation remains unknown. Some information about the vertical distribution of the heating rates and precipitating liquid water and ice can be obtained from retrieval techniques that use a physical model of precipitating clouds to refine and improve the interpretation of the remotely sensed data. A description of this procedure and an examination of its 3-D liquid water products, along with improved modeling methods that enhance or speed up storm development, are discussed.
The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters.
Suaria, Giuseppe; Avio, Carlo G; Mineo, Annabella; Lattin, Gwendolyn L; Magaldi, Marcello G; Belmonte, Genuario; Moore, Charles J; Regoli, Francesco; Aliani, Stefano
2016-11-23
The Mediterranean Sea has recently been proposed as one of the most impacted regions of the world with regard to microplastics; however, the polymeric composition of these floating particles is still largely unknown. Here we present the results of a large-scale survey of neustonic micro- and meso-plastics floating in Mediterranean waters, providing the first extensive characterization of their chemical identity as well as detailed information on their abundance and geographical distribution. All particles >700 μm collected in our samples were identified through FT-IR analysis (n = 4050 particles), shedding light for the first time on the polymeric diversity of this emerging pollutant. Sixteen different classes of synthetic materials were identified. Low-density polymers such as polyethylene and polypropylene were the most abundant compounds, followed by polyamides, plastic-based paints, polyvinyl chloride, polystyrene and polyvinyl alcohol. Less frequent polymers included polyethylene terephthalate, polyisoprene, poly(vinyl stearate), ethylene-vinyl acetate, polyepoxide, paraffin wax and polycaprolactone, a biodegradable polyester reported for the first time floating in off-shore waters. Geographical differences in sample composition were also observed, demonstrating sub-basin scale heterogeneity in plastics distribution and likely reflecting a complex interplay between pollution sources, sinks and residence times of different polymers at sea.
"Almost Darks": HI Mapping and Optical Analysis
NASA Astrophysics Data System (ADS)
Singer, Quinton; Ball, Catie; Cannon, John M.; Leisman, Luke; Haynes, Martha P.; Adams, Elizabeth A.; Bernal Neira, David; Giovanelli, Riccardo; Hallenbeck, Gregory L.; Janesh, William; Janowiecki, Steven; Jozsa, Gyula; Rhode, Katherine L.; Salzer, John Joseph
2017-01-01
We present VLA HI imaging of the "Almost Dark" galaxies AGC 227982, AGC 268363, and AGC 219533. Selected from the ALFALFA survey, "Almost Dark" galaxies have significant HI reservoirs but lack an obvious stellar counterpart in survey-depth ground-based optical imaging. These three HI-rich objects harbor some of the most extreme levels of suppressed star formation amongst the isolated sources in the ALFALFA catalog. Our new multi-configuration, high angular (~20") and spectral (1.7 km/s) resolution HI observations produce spatially resolved column density and velocity distribution moment maps. We compare these images to Sloan Digital Sky Survey (SDSS) optical images. By localizing the HI gas, we identify previously unknown optical components (offset from the ALFALFA pointing center) for AGC 227982 and AGC 268363, and confirm the association with a very low surface brightness stellar counterpart for AGC 219533. Baryonic masses are derived from VLA flux integral values and ALFALFA distance estimates, giving answers consistent with those derived from ALFALFA fluxes. All three sources appear to have fairly regular HI morphologies and show evidence of ordered rotation. Support for this work was provided by NSF grant 1211683 to JMC at Macalester College.
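The flux-to-mass step mentioned above follows the standard optically thin HI relation M_HI = 2.356 × 10⁵ D² S_int, with D in Mpc and the integrated flux S_int in Jy km/s. A minimal sketch (the example numbers are illustrative, not the measured fluxes of these sources):

```python
def hi_mass_msun(flux_jy_kms, distance_mpc):
    """HI mass in solar masses from the standard optically thin relation
    M_HI = 2.356e5 * D_Mpc**2 * S_int(Jy km/s)."""
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

# e.g. a hypothetical source at 100 Mpc with an integrated flux of 1 Jy km/s
m_hi = hi_mass_msun(1.0, 100.0)  # ~2.4e9 solar masses
```

Because the mass scales with D², the consistency check quoted above (VLA versus ALFALFA flux integrals at the same ALFALFA distance) isolates the flux measurement itself.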
Parameters optimization for the energy management system of hybrid electric vehicle
NASA Astrophysics Data System (ADS)
Tseng, Chyuan-Yow; Hung, Yi-Hsuan; Tsai, Chien-Hsiung; Huang, Yu-Jen
2007-12-01
Hybrid electric vehicles (HEVs) have been widely studied recently due to their high potential for reducing fuel consumption, exhaust emissions, and noise. Because it comprises two power sources, an HEV requires an energy management system (EMS) to distribute power optimally between the sources for various driving conditions. The ITRI in Taiwan has developed an HEV consisting of a 2.2L internal combustion engine (ICE), an 18 kW motor/generator (M/G), a 288 V battery pack, and a continuously variable transmission (CVT). The task of the present study is to design an energy management strategy for the EMS of this HEV. Due to the nonlinear nature of the system and the fact that its model is unknown, a simplex-method-based energy management strategy is proposed. The simplex method is an optimization strategy generally used to find optimal parameters for un-modeled systems. The way to apply the simplex method to the design of the EMS is presented. The feasibility of the proposed method was verified by performing numerical simulations on FTP75 drive cycles.
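The simplex (Nelder-Mead) search the authors describe works directly on objective evaluations, with no model or gradient. A minimal sketch, with a hypothetical two-parameter fuel-consumption surrogate standing in for the real EMS simulation (the parameter names and the quadratic surrogate are assumptions for illustration):

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex search for un-modeled objectives."""
    n = len(x0)
    pts = [np.asarray(x0, float)]
    for i in range(n):                    # initial simplex: x0 plus unit steps
        p = np.asarray(x0, float).copy(); p[i] += step; pts.append(p)
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    for _ in range(iters):
        pts.sort(key=f)                   # best point first, worst last
        centroid = sum(pts[:-1]) / n
        xr = centroid + alpha * (centroid - pts[-1])   # reflect worst point
        if f(xr) < f(pts[0]):
            xe = centroid + gamma * (xr - centroid)    # try expanding further
            pts[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(pts[-2]):
            pts[-1] = xr
        else:
            xc = centroid + rho * (pts[-1] - centroid)  # contract
            if f(xc) < f(pts[-1]):
                pts[-1] = xc
            else:                                       # shrink toward best
                pts = [pts[0] + sigma * (p - pts[0]) for p in pts]
    return min(pts, key=f)

# hypothetical surrogate: fuel use vs. (power-split ratio, CVT gain),
# with a known optimum at (0.6, 2.0) purely for testing the search
fuel = lambda x: (x[0] - 0.6)**2 + 0.5 * (x[1] - 2.0)**2 + 4.0
best = nelder_mead(fuel, [0.0, 0.0])
```

In the paper's setting each evaluation of `f` would be a drive-cycle simulation of the EMS, which is exactly why a derivative-free method is attractive.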
Liu, Qiaoxia; Zhou, Binbin; Wang, Xinliang; Ke, Yanxiong; Jin, Yu; Yin, Lihui; Liang, Xinmiao
2012-12-01
A search library of benzylisoquinoline alkaloids was established based on the preparation of alkaloid fractions from Rhizoma coptidis, Cortex phellodendri, and Rhizoma corydalis. In this work, two alkaloid fractions from each herbal medicine were first prepared based on selective separation on the "click" binaphthyl column. These alkaloid fractions were then analyzed on a C18 column by liquid chromatography coupled with tandem mass spectrometry. Many structure-related compounds were included in these alkaloid fractions, which led to easy separation and good MS response in further work. A search library of 52 benzylisoquinoline alkaloids was thus established, which included eight aporphine, 19 tetrahydroprotoberberine, two protopine, two benzyltetrahydroisoquinoline, and 21 protoberberine alkaloids. The information in the search library contained compound names, structures, retention times, accurate masses, fragmentation pathways of benzylisoquinoline alkaloids, and their sources among the three herbal medicines. Using such a library, alkaloids, especially trace and unknown components in an herbal medicine, could be accurately and quickly identified. In addition, the distribution of benzylisoquinoline alkaloids in the herbal medicines could also be summarized by searching the source samples in the library. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The emission function of ground-based light sources: State of the art and research challenges
NASA Astrophysics Data System (ADS)
Solano Lamphar, Héctor Antonio
2018-05-01
To understand the night sky radiance generated by the light emissions of urbanised areas, researchers are currently proposing various theoretical approaches. The distribution of the radiant intensity as a function of the zenith angle is one of the least understood properties in skyglow modelling. This is due to the collective effects of the artificial radiation emitted from ground-based light sources. The emission function is a key property in characterising the sky brightness under arbitrary conditions; it is therefore required by modellers, environmental engineers, urban planners, light pollution researchers, and experimentalists who study the diffuse light of the night sky. As a matter of course, the emission function considers the public lighting system, which is in fact the main generator of skyglow. Still, other classes of light-emitting devices are gaining importance owing to their overuse and the urban sprawl of recent years. This paper addresses the importance of the emission function in modelling skyglow and the factors involved in its characterization. On this subject, the author's intention is to organise, integrate, and evaluate previously published research in order to state the progress of current research toward clarifying this topic.
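One frequently cited idealization of such an emission function is the Garstang form, which combines a Lambertian ground-reflection term with a direct upward-emission term that grows with zenith angle. The sketch below uses this form with purely illustrative parameter values and should not be read as the model advocated in this paper:

```python
import math

def garstang_emission(zenith_rad, F=0.15, G=0.15):
    """Garstang-type city emission function: 2G(1-F)cos(z) + 0.554*F*z**4.
    F is the fraction of light emitted directly upward and G a ground
    reflectance factor; both values here are illustrative, not fitted."""
    return 2.0 * G * (1.0 - F) * math.cos(zenith_rad) + 0.554 * F * zenith_rad**4
```

The point of such a parameterization is exactly the issue raised above: the near-horizon behavior (large zenith angles), which dominates skyglow at distant observing sites, hinges on the poorly known direct-upward fraction F.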
Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting
Husen, Mohd Nizam; Lee, Sukhan
2016-01-01
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to channel disturbances varying in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics here represent the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required for any given number of calibration locations under a certain level of random spatiotemporal disturbance. Experimental results show that the proposed method not only provides a 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown to be finer by 40%, with an execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis. PMID:27845711
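The classification step described above can be sketched as a maximum-likelihood match of a sensed RSS vector against per-location RSS statistics. The calibration data, location names, and the Gaussian model below are all hypothetical illustrations of the idea, not the paper's actual statistics:

```python
import math

# Hypothetical calibration data: location -> per-AP (mean, std) RSS in dBm,
# playing the role of the "invariant RSS statistics" (reference pattern classes).
reference = {
    "lobby":    [(-45.0, 3.0), (-70.0, 4.0), (-60.0, 5.0)],
    "office":   [(-65.0, 4.0), (-50.0, 3.0), (-75.0, 5.0)],
    "corridor": [(-58.0, 5.0), (-62.0, 4.0), (-52.0, 3.0)],
}

def log_likelihood(rss, stats):
    """Gaussian log-likelihood of an RSS vector under one reference class."""
    ll = 0.0
    for x, (mu, sd) in zip(rss, stats):
        ll += -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
    return ll

def fingerprint(rss):
    """Pick the reference class that maximally supports the sensed RSS."""
    return max(reference, key=lambda loc: log_likelihood(rss, reference[loc]))
```

Keeping a distribution (mean and spread) per access point, rather than a single calibration value, is what lets the classifier absorb the spatiotemporal RSS fluctuations the abstract emphasizes.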
Turning Noise into Signal: Utilizing Impressed Pipeline Currents for EM Exploration
NASA Astrophysics Data System (ADS)
Lindau, Tobias; Becken, Michael
2017-04-01
Impressed Current Cathodic Protection (ICCP) systems are extensively used to protect central Europe's dense network of oil, gas and water pipelines against electrochemical corrosion. While ICCP systems usually provide protection by injecting a DC current into the pipeline, mandatory pipeline integrity surveys demand periodic switching of the current. Consequently, the resulting time-varying pipe currents induce secondary electric and magnetic fields in the surrounding earth. While these fields are usually considered unwanted cultural noise in electromagnetic exploration, this work aims to utilize the fields generated by the ICCP system to determine the electrical resistivity of the subsurface. The fundamental period of the switching cycles typically amounts to 15 seconds in Germany and thereby roughly corresponds to the periods used in controlled-source EM (CSEM) applications. For detailed studies we chose an approximately 30 km long pipeline segment near Herford, Germany as a test site. The segment is located close to the southern margin of the Lower Saxony Basin (LSB) and is part of a larger gas pipeline composed of multiple segments. The current injected into the pipeline segment originates from a rectified 50 Hz AC signal which is periodically switched on and off. In contrast to the dipole sources usually used in CSEM surveys, the current distribution along the pipeline is unknown and expected to be non-uniform due to coating defects that cause current to leak into the surrounding soil. However, an accurate current distribution is needed to model the fields generated by the pipeline source. We measured the magnetic fields at several locations above the pipeline and used the Biot-Savart law to estimate the current's decay function.
The resulting frequency-dependent current distribution shows a decay of the current away from the injection point as well as a frequency-dependent phase shift that increases with distance from the injection point. Electric field data were recorded at 45 stations located in an area of about 60 square kilometers in the vicinity of the pipeline. Additionally, the injected source current was recorded directly at the injection point. Transfer functions between the local electric fields and the injected source current are estimated for frequencies ranging from 0.03 Hz to 15 Hz using robust time-series processing techniques. The resulting transfer functions are inverted for a 3D conductivity model of the subsurface using an elaborate pipeline model. We interpret the model with regard to the local geologic setting, demonstrating the method's capability to image the subsurface.
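The current-estimation step rests on the magnetic field of a long straight conductor. A minimal sketch of that inversion follows; the long-wire approximation and the numbers are illustrative assumptions, not the survey's actual processing:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def line_current_from_b(b_tesla, distance_m):
    """Invert B = mu0 * I / (2 * pi * r), the field of a long straight
    current filament, to estimate the local pipe current from a magnetic
    measurement at lateral distance r. The approximation assumes r is much
    smaller than the length scale over which the pipe current decays."""
    return 2 * math.pi * distance_m * b_tesla / MU0

# Illustrative check: a 1 A pipe current measured 2 m from the pipe gives
# B = mu0 / (4 * pi) T = 1e-7 T; inverting that field recovers 1 A.
print(line_current_from_b(1e-7, 2.0))  # -> 1.0
```

Repeating such point estimates at several positions along the pipeline yields samples of the current's decay away from the injection point.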
78 FR 56685 - SourceGas Distribution LLC; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-13
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP13-540-000] SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (Source... areas across the Nebraska-Colorado border within which SourceGas may, without further commission...
Schütz, Mathias; Waschke, Jens; Marckmann, Georg; Steger, Florian
2017-05-01
During the reign of National Socialism (NS) anatomical institutes regularly received bodies of executed prisoners in steadily increasing numbers. After 1939, the execution site at Stadelheim prison in Munich supplied not only Munich anatomy but also the institutes in Erlangen, Innsbruck and Würzburg. Due to the disappearance of the Munich body journals, the exact dimension and procedure of body procurement from Stadelheim remained unknown for 70 years. After consultation of a wide range of sources, including rediscovered fragments of the body journals, it is now possible to give an almost comprehensive account of the developments. This article deals with the attempts at recovering information on body procurement from Stadelheim prison during the NS period, which already indicated the significance of Munich anatomy in organizing the distribution of bodies. Thereafter, it addresses the number and distinct groups of Stadelheim prisoners, executed and delivered to the four anatomical institutes, the differences in the handling of their bodies, and the extent to which in particular Munich anatomy profited from the massive increase in executions. Finally, it unveils the role of the Munich Anatomical Institute in distributing those bodies among the anatomies during the Second World War, making it not only the main beneficiary but also the interim center of this process. Copyright © 2017 Elsevier GmbH. All rights reserved.
Microplastics in the Antarctic marine system: An emerging area of research.
Waller, Catherine L; Griffiths, Huw J; Waluda, Claire M; Thorpe, Sally E; Loaiza, Iván; Moreno, Bernabé; Pacherres, Cesar O; Hughes, Kevin A
2017-11-15
It was thought that the Southern Ocean was relatively free of microplastic contamination; however, recent studies and citizen science projects in the Southern Ocean have reported microplastics in deep-sea sediments and surface waters. Here we reviewed available information on microplastics (including macroplastics as a source of microplastics) in the Southern Ocean. We estimated primary microplastic concentrations from personal care products and laundry, and identified potential sources and routes of transmission into the region. Estimates showed the levels of microplastic pollution released into the region from ships and scientific research stations were likely to be negligible at the scale of the Southern Ocean, but may be significant on a local scale. This was demonstrated by the detection of the first microplastics in shallow benthic sediments close to a number of research stations on King George Island. Furthermore, our predictions of primary microplastic concentrations from local sources were five orders of magnitude lower than levels reported in published sampling surveys (assuming an even dispersal at the ocean surface). Sea surface transfer from lower latitudes may contribute, at an as yet unknown level, to Southern Ocean plastic concentrations. Acknowledging the lack of data describing microplastic origins, concentrations, distribution and impacts in the Southern Ocean, we highlight the urgent need for research, and call for routine, standardised monitoring in the Antarctic marine system. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Digging for the Truth: Photon Archeology with GLAST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stecker, F. W.
2007-07-12
Stecker, Malkan and Scully have shown how ongoing deep surveys of galaxy luminosity functions, spectral energy distributions and backwards-evolution models of star formation rates can be used to calculate the past history of intergalactic photon densities for energies from 0.03 eV to the Lyman limit at 13.6 eV and for redshifts out to 6 (called here the intergalactic background light, or IBL). From these calculations of the IBL at various redshifts, they predict the present and past optical depth of the universe to high-energy γ-rays owing to interactions with photons of the IBL and the 2.7 K CMB. We discuss here how this procedure can be reversed by looking for sharp cutoffs in the spectra of extragalactic γ-ray sources such as blazars at high redshifts in the multi-GeV energy range with GLAST (the Gamma-ray Large Area Space Telescope). By determining the cutoff energies of sources with known redshifts, we can refine our determination of the IBL photon densities in the past, i.e., the archeo-IBL, and therefore obtain a better measure of the past history of the total star formation rate. Conversely, observations of sharp high-energy cutoffs in the γ-ray spectra of sources at unknown redshifts can be used instead of spectral lines to give a measure of their redshifts.
Marine Web Portal as an Interface between Users and Marine Data and Information Sources
NASA Astrophysics Data System (ADS)
Palazov, A.; Stefanov, A.; Marinova, V.; Slabakova, V.
2012-04-01
Fundamental to the success of a marine data and information management system, and to effective support of marine and maritime economic activities, are the speed and ease with which users can identify, locate, access, exchange and use oceanographic and marine data and information. Many activities and bodies have been identified as marine data and information users, such as: science, government and local authorities, port authorities, shipping, marine industry, fishery and aquaculture, the tourist industry, environmental protection, coast protection, oil spill response, Search and Rescue, national security, civil protection, and the general public. On the other hand, diverse sources of real-time and historical marine data and information exist, and generally they are fragmented, distributed across different places and sometimes unknown to users. The marine web portal concept is to build a common web-based interface that gives users fast and easy access to all available marine data and information sources, both historical and real-time, such as marine databases, observing systems, forecasting systems and atlases. The service is regionally oriented to meet user needs. The main advantage of the portal is that it provides a general look "at a glance" at all available marine data and information and directs users to easily discover the data and information of interest. A personalization capability is planned, giving users the means to tailor the visualization to their personal needs.
ERIC Educational Resources Information Center
Pfaffman, Jay
2008-01-01
Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…
Actinomyces cardiffensis sp. nov. from Human Clinical Sources
Hall, Val; Collins, Matthew D.; Hutson, Roger; Falsen, Enevold; Duerden, Brian I.
2002-01-01
Eight strains of a previously undescribed catalase-negative Actinomyces-like bacterium were recovered from human clinical specimens. The morphological and biochemical characteristics of the isolates were consistent with their assignment to the genus Actinomyces, but they did not appear to correspond to any recognized species. 16S rRNA gene sequence analysis showed that the organisms represent a hitherto unknown species within the genus Actinomyces, related to, albeit distinct from, a group of species that includes Actinomyces turicensis and close relatives. Based on biochemical and molecular genetic evidence, it is proposed that the unknown isolates from human clinical sources be classified as a new species, Actinomyces cardiffensis sp. nov. The type strain of Actinomyces cardiffensis is CCUG 44997T. PMID:12202588
Recent human history governs global ant invasion dynamics
Cleo Bertelsmeier; Sébastien Ollier; Andrew Liebhold; Laurent Keller
2017-01-01
Human trade and travel are breaking down biogeographic barriers, resulting in shifts in the geographical distribution of organisms, yet it remains largely unknown whether different alien species generally follow similar spatiotemporal colonization patterns and how such patterns are driven by trends in global trade. Here, we analyse the global distribution of 241 alien...
Distributed Learning Enhances Relational Memory Consolidation
ERIC Educational Resources Information Center
Litman, Leib; Davachi, Lila
2008-01-01
It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of…
Fajardo, Geroncio C; Posid, Joseph; Papagiotas, Stephen; Lowe, Luis
2015-01-01
There have been periodic electronic news media reports of potential bioterrorism-related incidents involving unknown substances (often referred to as "white powder") since the 2001 intentional dissemination of Bacillus anthracis through the U.S. Postal System. This study reviewed the number of unknown "white powder" incidents reported online by the electronic news media and compared them with unknown "white powder" incidents reported to the U.S. Centers for Disease Control and Prevention (CDC) and the U.S. Federal Bureau of Investigation (FBI) during a 2-year period from June 1, 2009, to May 31, 2011. Results identified 297 electronic news media reports, 538 CDC reports, and 384 FBI reports of unknown "white powder." This study showed different unknown "white powder" incidents captured by each of the three sources. However, the authors could not determine the public health implications of this discordance. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Swift J1822.3-1606: A Probable New SGR in Ground Analysis of BAT Data
NASA Astrophysics Data System (ADS)
Cummings, J. R.; Burrows, D.; Campana, S.; Kennea, J. A.; Krimm, H. A.; Palmer, D. M.; Sakamoto, T.; Zan, S.
2011-07-01
At 12:47:47.1 UTC on 2011-07-14, Swift-BAT triggered (#457261) on a previously unknown source, Swift J1822.3-1606, at the same time as Fermi-GBM trigger #332340476. Only a subthreshold source was detected onboard. There were two subsequent rate increases of similar size, probably from the same source, at about T+26 sec and T+308 sec, the latter also causing a rate trigger (#457263) with no significant source found onboard.
Microbial source tracking (MST) assays have been mostly employed in temperate climates. However, their value as monitoring tools in tropical and subtropical regions is unknown since the geographic and temporal stability of the assays has not been extensively tested. The objective...
Chronic kidney disease of unknown etiology in Sri Lanka
2016-01-01
Introduction: In the last two decades, chronic kidney disease of unknown etiology (CKDu) has emerged as a significant contributor to the burden of chronic kidney disease (CKD) in rural Sri Lanka. It is characterized by the absence of identified causes for CKD. The prevalence of CKDu is 15.1-22.9% in some Sri Lankan districts, and previous research has found an association with farming occupations. Methods: A systematic literature review in Pubmed, Embase, Scopus, and Lilacs databases identified 46 eligible peer-reviewed articles and one conference abstract. Results: Geographical mapping indicates a relationship between CKDu and agricultural irrigation water sources. Health mapping studies, human biological studies, and environment-based studies have explored possible causative agents. Most studies focused on likely causative agents related to agricultural practices, geographical distribution based on the prevalence and incidence of CKDu, and contaminants identified in drinking water. Nonetheless, the link between agrochemicals or heavy metals and CKDu remains to be established. No definitive cause for CKDu has been identified. Discussion: Evidence to date suggests that the disease is related to one or more environmental agents; however, pinpointing a definite cause for CKDu is challenging. It is plausible that CKDu is multifactorial. No specific guidelines or recommendations exist for treatment of CKDu, and standard management protocols for CKD apply. Changes in agricultural practices, provision of safe drinking water, and occupational safety precautions are recommended by the World Health Organization. PMID:27399161
Chronic kidney disease of unknown etiology in Sri Lanka.
Rajapakse, Senaka; Shivanthan, Mitrakrishnan Chrishan; Selvarajah, Mathu
2016-07-01
In the last two decades, chronic kidney disease of unknown etiology (CKDu) has emerged as a significant contributor to the burden of chronic kidney disease (CKD) in rural Sri Lanka. It is characterized by the absence of identified causes for CKD. The prevalence of CKDu is 15.1-22.9% in some Sri Lankan districts, and previous research has found an association with farming occupations. A systematic literature review in Pubmed, Embase, Scopus, and Lilacs databases identified 46 eligible peer-reviewed articles and one conference abstract. Geographical mapping indicates a relationship between CKDu and agricultural irrigation water sources. Health mapping studies, human biological studies, and environment-based studies have explored possible causative agents. Most studies focused on likely causative agents related to agricultural practices, geographical distribution based on the prevalence and incidence of CKDu, and contaminants identified in drinking water. Nonetheless, the link between agrochemicals or heavy metals and CKDu remains to be established. No definitive cause for CKDu has been identified. Evidence to date suggests that the disease is related to one or more environmental agents; however, pinpointing a definite cause for CKDu is challenging. It is plausible that CKDu is multifactorial. No specific guidelines or recommendations exist for treatment of CKDu, and standard management protocols for CKD apply. Changes in agricultural practices, provision of safe drinking water, and occupational safety precautions are recommended by the World Health Organization.
Bartoń, Kamil A.; Scott, Beth E.; Travis, Justin M.J.
2014-01-01
Foraging in the marine environment presents particular challenges for air-breathing predators. Prey capture rates and the strategies that diving predators use to maximise prey encounter rates and foraging success are still largely unknown and difficult to observe. Moreover, with the growing awareness of potential climate change impacts and the increasing interest in the development of renewable energy sources, it is unknown how the foraging activity of diving predators such as seabirds will respond to the presence of underwater structures and to the potential corresponding changes in prey distributions. Motivated by this issue, we developed a theoretical model to gain a general understanding of how the foraging efficiency of diving predators may vary according to landscape structure and foraging strategy. Our theoretical model highlights that animal movements, intervals between prey captures and foraging efficiency are likely to depend critically on the distribution of the prey resource and the size and distribution of introduced underwater structures. For multiple prey loaders, changes in prey distribution affected the searching time necessary to catch a set amount of prey, which in turn affected foraging efficiency. The spatial aggregation of prey around small devices (∼ 9 × 9 m) created a valuable habitat for successful foraging, resulting in shorter intervals between prey captures and higher foraging efficiency. The presence of large devices (∼ 24 × 24 m), however, represented an obstacle to predator movement, thus increasing the intervals between prey captures. In contrast, for single prey loaders the introduction of spatial aggregation of the resources did not represent an advantage, suggesting that their foraging efficiency is more strongly affected by other factors, such as the time to find the first prey item, which was found to be shorter in the presence of large devices.
The development of this theoretical model represents a useful starting point to understand the energetic reasons for a range of potential predator responses to spatial heterogeneity and environmental uncertainties in terms of search behaviour and predator–prey interactions. We highlight future directions that integrated empirical and modelling studies should take to improve our ability to predict how diving predators will be impacted by the deployment of manmade structures in the marine environment. PMID:25250211
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for cases where the post-damage feature distribution is unknown a priori. The algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected, using maximum likelihood and Bayesian methods. We also applied an approximate method to reduce the computational load and memory requirements associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than those of the conventional method that assumes the post-damage feature distribution is known. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirements, but the maximum likelihood method provides an insightful heuristic approach.
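The idea of a sequential detector whose post-change distribution is estimated as data arrive can be sketched as follows. This is a generic CUSUM-style illustration assuming a Gaussian feature model with known variance and a sliding-window maximum-likelihood estimate of the post-change mean; it is a sketch of the concept, not the authors' exact algorithm:

```python
def cusum_unknown_post(data, mu0, sigma, threshold, window=20):
    """Sequential change detector: the pre-change feature distribution is
    N(mu0, sigma^2); the post-change mean is unknown and re-estimated by
    maximum likelihood over a sliding window as data are collected."""
    stat = 0.0
    for k, x in enumerate(data):
        lo = max(0, k - window + 1)
        mu1 = sum(data[lo:k + 1]) / (k + 1 - lo)  # ML estimate of changed mean
        # Log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2) at x
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        stat = max(0.0, stat + llr)  # CUSUM recursion
        if stat > threshold:
            return k  # first sample index at which damage is declared
    return None  # no change detected

# A mean shift at sample 30 is declared a few samples later; unchanged data
# never trigger, because the estimated mu1 matches mu0 and the LLR stays zero.
print(cusum_unknown_post([0.0] * 30 + [5.0] * 30, 0.0, 1.0, 10.0))
```

The few-sample lag after the true change point mirrors the paper's observation that estimating the post-damage distribution costs only a small extra detection delay relative to a detector that knows it in advance.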
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if they are non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations against CP, to determine whether MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric measures drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine whether they were consistent with a Gaussian distribution, and empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, by MCM with Gaussian-distributed data, or by MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of the 5th and 95th percentile values. Anthropometric data are not Gaussian distributed, and the MCM method is more accurate than adding or subtracting percentiles.
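The known adverse effect of Combining Percentiles can be illustrated directly: for independent Gaussian elements, variances add, so summing individual 95th-percentile values overstates the 95th percentile of the sum. The two seating-style elements below (means and standard deviations in mm) are hypothetical, and independence is assumed for simplicity:

```python
import math

Z95 = 1.645  # standard Gaussian 95th-percentile multiplier

def p95_cp(means, sds):
    """Combining Percentiles (CP): simply sum the individual
    95th-percentile values of each element."""
    return sum(m + Z95 * s for m, s in zip(means, sds))

def p95_sum_independent(means, sds):
    """95th percentile of the sum of independent Gaussian elements:
    variances add, so the combined sd is the root-sum-square."""
    return sum(means) + Z95 * math.sqrt(sum(s * s for s in sds))

# Hypothetical elements: e.g. sitting height and another seated dimension.
means, sds = [880.0, 240.0], [35.0, 12.0]
print(p95_cp(means, sds))               # CP estimate (too large)
print(p95_sum_independent(means, sds))  # correct value under independence
```

CP treats the elements as perfectly correlated (sds add linearly), which is why it systematically inflates upper percentiles; methods such as MCM instead fold in an estimate of the actual inter-element correlation.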
NASA Astrophysics Data System (ADS)
Takahashi, N.; Kodaira, S.; Yamashita, M.; Miura, S.; Sato, T.; No, T.; Tatsumi, Y.; Kaneda, Y.
2009-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has carried out seismic experiments using a multichannel reflection system and ocean bottom seismographs (OBSs) in the Izu-Ogasawara (Bonin)-Mariana (IBM) arc region since 2002 to understand the growth process of continental crust. The source was an airgun array with a total capacity of 12,000 cubic inches, and the OBSs serving as receivers were deployed at 5 km intervals for all seismic refraction experiments. As a result, we obtained crustal structures across the whole IBM arc at 50 km intervals and detected structural characteristics that record the crustal growth process. The IBM arc is a typical oceanic island arc, whose crustal growth started with the subduction of one oceanic crust beneath another. The arc crust has developed through repeated magmatic accretion from the subducting slab and backarc opening. Volcanism was active in the Eocene, Oligocene, Miocene and Quaternary (e.g., Taylor, 1992); however, the detailed locations of the past volcanic arcs have remained unknown, as has the role of crustal rifting in crustal growth. Our seismic structures show three rows of past volcanic arc crust in addition to the current arc: one on the rear-arc side and two on the forearc side. The first, already reported by Kodaira et al. (2008), lies north of 27°N in the rear-arc region. The second, which developed in the forearc region next to the recent volcanic front, extends along the whole Izu-Ogasawara arc with along-arc crustal variation; parts of it have crust thicker than that beneath the current volcanic front and no clear topographic high. The last, in the forearc, connects to the Ogasawara Ridge. However, the thickest crust is not always located beneath these volcanic arcs.
Initial rifting regions such as the northern end of the Mariana Trough and the Sumisu Rift have thicker crust than that beneath the recent volcanic front, whereas crustal thinning with a high-velocity lower crust was detected beneath more advanced rifted regions. This suggests that magmatic underplating plays a role in opening the crust. The magmatic underplating that accompanies initial rifting is thus an important issue in discussing crustal evolution.
Zhu, Xiaoyan; Shen, Wenqiang; Huang, Junyang; Zhang, Tianquan; Zhang, Xiaobo; Cui, Yuanjiang; Sang, Xianchun; Ling, Yinghua; Li, Yunfeng; Wang, Nan; Zhao, Fangmin; Zhang, Changwei; Yang, Zhenglin; He, Guanghua
2018-03-01
Sugars are the most abundant organic compounds produced by plants, and can be used to build carbon skeletons and generate energy. The sugar accumulation 1 (OsSAC1) gene encodes a protein of unknown function that exhibits four N-terminal transmembrane regions and two conserved domains of unknown function, DUF4220 and DUF594. OsSAC1 was found to be weakly and specifically expressed at the bottoms of young leaves and in developing leaf sheaths. Subcellular localization results showed that OsSAC1 co-localized with ER:mCherry and targeted the endoplasmic reticulum (ER). OsSAC1 has been found to affect sugar partitioning in rice (Oryza sativa). I2/KI starch staining, ultrastructure observations and starch content measurements indicated that more and larger starch granules accumulated in ossac1 source leaves than in wild-type (WT) source leaves. Additionally, higher sucrose and glucose concentrations accumulated in the ossac1 source leaves than in WT source leaves, whereas lower sucrose and glucose concentrations were observed in the ossac1 young leaves and developing leaf sheaths than in those of the WT. Much greater expression of OsAGPL1 and OsAGPS1 (responsible for starch synthesis) and significantly less expression of OscFBP1, OscFBP2, OsSPS1 and OsSPS11 (responsible for sucrose synthesis) and OsSWEET11, OsSWEET14 and OsSUT1 (responsible for sucrose loading) occurred in ossac1 source leaves than in WT source leaves. A greater amount of the rice plasmodesmatal negative regulator OsGSD1 was detected in ossac1 young leaves and developing leaf sheaths than in those of the WT. These results suggest that ER-targeted OsSAC1 may indirectly regulate sugar partitioning in carbon-demanding young leaves and developing leaf sheaths.
Altered Cortical Swallowing Processing in Patients with Functional Dysphagia: A Preliminary Study
Wollbrink, Andreas; Warnecke, Tobias; Winkels, Martin; Pantev, Christo; Dziewas, Rainer
2014-01-01
Objective: Current neuroimaging research on functional disturbances provides growing evidence for objective neuronal correlates of allegedly psychogenic symptoms, thereby shifting the disease concept from a psychological towards a neurobiological model. Functional dysphagia is such a rare condition, whose pathogenetic mechanism is largely unknown. In the absence of any organic reason for a patient's persistent swallowing complaints, sensorimotor processing abnormalities involving central neural pathways constitute a potential etiology. Methods: In this pilot study we measured cortical swallow-related activation in 5 patients diagnosed with functional dysphagia and a matched group of healthy subjects applying magnetoencephalography. Source localization of cortical activation was done with synthetic aperture magnetometry. To test for significant differences in cortical swallowing processing between groups, a non-parametric permutation test was afterwards performed on individual source localization maps. Results: Swallowing task performance was comparable between groups. In relation to control subjects, in whom activation was symmetrically distributed in rostro-medial parts of the sensorimotor cortices of both hemispheres, patients showed prominent activation of the right insula, dorsolateral prefrontal cortex and lateral premotor, motor as well as inferolateral parietal cortex. Furthermore, activation was markedly reduced in the left medial primary sensory cortex as well as right medial sensorimotor cortex and adjacent supplementary motor area (p<0.01). Conclusions: Functional dysphagia - a condition with assumed normal brain function - seems to be associated with distinctive changes of the swallow-related cortical activation pattern. Alterations may reflect exaggerated activation of a widely distributed vigilance, self-monitoring and salience rating network that interferes with down-stream deglutition sensorimotor control. PMID:24586948
(U) An Analytic Study of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-16
We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta "areal mass functions" at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic ("true") accumulated ejecta mass at the sensor and the measured ("inferred") value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
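The instantaneous-creation limit can be illustrated with a minimal kinematic sketch: if all ejecta leave the free surface at t = 0, the arrival time at a sensor at standoff Z0 inverts exactly to a velocity, so the accumulated mass-versus-velocity curve is recovered without error. The standoff and particle list below are illustrative assumptions, not the paper's test problems:

```python
# Ejecta created instantaneously at t = 0 at the free surface fly
# ballistically through vacuum to a sensor at standoff Z0, so a particle
# arriving at time t must have had velocity v = Z0 / t.
Z0 = 0.1  # assumed sensor standoff in meters

# Hypothetical source: (areal mass contribution, velocity in m/s) per particle.
particles = [(2.0e-6, 500.0), (1.0e-6, 1000.0), (0.5e-6, 2000.0)]

# Arrival times at the sensor; fastest particles arrive first.
arrivals = sorted((Z0 / v, m) for m, v in particles)

def inferred_mass_velocity(arrivals):
    """Invert arrival times under the instantaneous-creation assumption and
    accumulate mass versus inferred velocity, as the standard piezo analysis
    does with a voltage trace."""
    out, total = [], 0.0
    for t, m in arrivals:
        total += m
        out.append((Z0 / t, total))  # (inferred velocity, accumulated mass)
    return out

print(inferred_mass_velocity(arrivals))
```

Because creation really is a delta function in time here, the inferred velocities match the true particle velocities and the total accumulated mass is exact; spreading the creation times out would bias the same inversion toward overestimating mass, which is the error the paper quantifies.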
14 CFR 23.1310 - Power source capacity and distribution.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...
14 CFR 23.1310 - Power source capacity and distribution.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...
14 CFR 23.1310 - Power source capacity and distribution.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...
Using an APOS Framework to Understand Teachers' Responses to Questions on the Normal Distribution
ERIC Educational Resources Information Center
Bansilal, Sarah
2014-01-01
This study is an exploration of teachers' engagement with concepts embedded in the normal distribution. The participants were a group of 290 in-service teachers enrolled in a teacher development program. The research instrument was an assessment task that can be described as an "unknown percentage" problem, which required the application…
Abdallah, Chifaou; Maillard, Louis G; Rikir, Estelle; Jonas, Jacques; Thiriaux, Anne; Gavaret, Martine; Bartolomei, Fabrice; Colnat-Coulbois, Sophie; Vignal, Jean-Pierre; Koessler, Laurent
2017-01-01
We aimed to prospectively assess the anatomical concordance of electric source localizations of interictal discharges with the epileptogenic zone (EZ) estimated by stereo-electroencephalography (SEEG), according to different subgroups: the type of epilepsy, the presence of a structural MRI lesion, the aetiology and the depth of the EZ. In a prospective multicentric observational study, we enrolled 85 consecutive patients undergoing pre-surgical SEEG investigation for focal drug-resistant epilepsy. Electric source imaging (ESI) was performed before SEEG. Source localizations were obtained from dipolar and distributed source methods. Anatomical concordance between ESI and the EZ was defined according to 36 predefined sublobar regions. ESI was interpreted blinded to, and subsequently compared with, the SEEG-estimated EZ. 74 patients were finally analyzed. 38 patients had temporal and 36 extra-temporal lobe epilepsy. MRI was positive in 52. 41 patients had malformation of cortical development (MCD); 33 had another or an unknown aetiology. The EZ was medial in 27, lateral in 13, and medio-lateral in 34. In the overall cohort, ESI completely or partly localized the EZ in 85% of cases: full concordance in 13 cases and partial concordance in 50 cases. The rate of ESI full concordance with the EZ was significantly higher in (i) frontal lobe epilepsy (46%; p = 0.05), (ii) cases of negative MRI (36%; p = 0.01) and (iii) MCD (27%; p = 0.03). The rate of ESI full concordance with the EZ did not differ statistically according to the depth of the EZ. We prospectively demonstrated that ESI more accurately estimates the EZ in subgroups of patients who are often the most difficult cases in epilepsy surgery: frontal lobe epilepsy, negative MRI and the presence of MCD.
Unveiling slim accretion disc in AGN through X-ray and Infrared observations
NASA Astrophysics Data System (ADS)
Castelló-Mor, Núria; Kaspi, Shai; Netzer, Hagai; Du, Pu; Hu, Chen; Ho, Luis C.; Bai, Jin-Ming; Bian, Wei-Hao; Yuan, Ye-Fei; Wang, Jian-Min
2017-05-01
In this work, which is a continuation of Castelló-Mor et al., we present new X-ray and infrared (IR) data for a sample of active galactic nuclei (AGN) covering a wide range in Eddington ratio over a small luminosity range. In particular, we rigorously explore the dependence of the optical-to-X-ray spectral index αOX and the IR-to-optical spectral index on the dimensionless accretion rate, Ṁ = ṁ/η, where ṁ = LAGN/LEdd and η is the mass-to-radiation conversion efficiency, in low- and high-accretion rate sources. We find that the spectral energy distribution (SED) of the faster accreting sources is surprisingly similar to those from the comparison sample of sources with lower accretion rate. In particular: (I) The optical-to-UV AGN SED of slow and fast accreting AGN can be fitted with thin accretion disc (AD) models. (II) The value of αOX is very similar in slow and fast accreting systems up to a dimensionless accretion rate Ṁc ≈ 10. We only find a correlation between αOX and Ṁ for sources with Ṁ > Ṁc. In such cases, the faster accreting sources appear to have systematically larger αOX values. (III) We also find that the torus in the faster accreting systems seems to be less efficient in reprocessing the primary AGN radiation, having lower IR-to-optical spectral slopes. These findings, failing to recover the predicted differences between the SEDs of slim and thin ADs within the observed spectral window, suggest that additional physical processes or a very special geometry act to reduce the extreme-UV radiation in fast accreting AGN. This may be related to photon trapping, strong winds and perhaps other as yet unknown physical processes.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.
2017-12-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models, which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that decomposes the observed mixtures using the Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios, without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
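As a minimal illustration of the decomposition idea (an assumed sketch using plain Lee-Seung multiplicative updates, not the actual NMFk implementation or its semi-supervised clustering), nonnegative mixtures generated from two hidden sources can be factored back into source signatures and mixing weights:

```python
import numpy as np

# Sketch of NMF-based blind source separation in the spirit of NMFk
# (assumed illustration): recover nonnegative source signatures and
# mixing ratios from observed mixtures X ≈ W @ H.
rng = np.random.default_rng(0)
S = rng.random((2, 50))          # 2 hidden "source" geochemical signatures
A = rng.random((10, 2))          # unknown mixing ratios at 10 observation points
X = A @ S                        # observed mixtures

k = 2                            # number of sources (NMFk also estimates this)
W = rng.random((10, k)) + 0.1
H = rng.random((k, 50)) + 0.1
for _ in range(2000):            # Lee-Seung multiplicative updates
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
assert rel_err < 0.05            # mixtures are well explained by 2 sources
```

In NMFk the factorization is repeated over candidate values of k, and the clustering step selects the number of sources that yields robust, reproducible factors.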
Concordance measure and discriminatory accuracy in transformation cure models.
Zhang, Yilong; Shao, Yongzhao
2018-01-01
Many populations of early-stage cancer patients have non-negligible latent cure fractions that can be modeled using transformation cure models. However, there is a lack of statistical metrics to evaluate prognostic utility of biomarkers in this context due to the challenges associated with unknown cure status and heavy censoring. In this article, we develop general concordance measures as evaluation metrics for the discriminatory accuracy of transformation cure models including the so-called promotion time cure models and mixture cure models. We introduce explicit formulas for the consistent estimates of the concordance measures, and show that their asymptotically normal distributions do not depend on the unknown censoring distribution. The estimates work for both parametric and semiparametric transformation models as well as transformation cure models. Numerical feasibility of the estimates and their robustness to the censoring distributions are illustrated via simulation studies and demonstrated using a melanoma data set. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
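For readers unfamiliar with concordance measures, the following sketch computes a standard Harrell-type C-index for right-censored data (a simplified illustration only; the paper's measures for transformation cure models generalize this construction and handle the unknown cure status):

```python
import numpy as np

# Plain Harrell-type concordance for right-censored survival data
# (illustrative, not the authors' estimator).
def c_index(time, event, risk):
    """Fraction of usable pairs in which the higher-risk subject fails first."""
    conc = usable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is usable only if i is observed to fail before t_j
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / usable

time = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
event = np.array([1, 1, 0, 1, 0])            # 0 = censored
risk = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # perfectly anti-ranked with time
assert c_index(time, event, risk) == 1.0     # perfect discrimination
```

Censored subjects contribute only as the later member of a pair, which is why naive C-index estimates depend on the censoring distribution; removing that dependence is one contribution of the paper.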
Exposure to air pollution particles can be associated with increased human morbidity and mortality. The mechanism(s) of lung injury remains unknown. We tested the hypothesis that lung exposure to oil fly ash (an emission source air pollution particle) causes in vivo free radical ...
AGN classification for X-ray sources in the 105 month Swift/BAT survey
NASA Astrophysics Data System (ADS)
Masetti, N.; Bassani, L.; Palazzi, E.; Malizia, A.; Stephen, J. B.; Ubertini, P.
2018-03-01
We here provide classifications for 8 hard X-ray sources listed as 'unknown AGN' in the 105 month Swift/BAT all-sky survey catalogue (Oh et al. 2018, ApJS, 235, 4). The corresponding optical spectra were extracted from the 6dF Galaxy Survey (Jones et al. 2009, MNRAS, 399, 683).
Alfonse, Lauren E; Garrett, Amanda D; Lun, Desmond S; Duffy, Ken R; Grgicak, Catherine M
2018-01-01
DNA-based human identity testing is conducted by comparison of PCR-amplified polymorphic Short Tandem Repeat (STR) motifs from a known source with the STR profiles obtained from uncertain sources. Samples such as those found at crime scenes often result in signal that is a composite of incomplete STR profiles from an unknown number of unknown contributors, making interpretation an arduous task. To facilitate advancement in STR interpretation challenges we provide over 25,000 multiplex STR profiles produced from one to five known individuals at target levels ranging from one to 160 copies of DNA. The data, generated under 144 laboratory conditions, are classified by total copy number and contributor proportions. For the 70% of samples that were synthetically compromised, we report the level of DNA damage using quantitative and end-point PCR. In addition, we characterize the complexity of the signal by exploring the number of detected alleles in each profile. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Galy, V.; Oppo, D.; Dubois, N.; Arbuszewski, J. A.; Mohtadi, M.; Schefuss, E.; Rosenthal, Y.; Linsley, B. K.
2016-12-01
There is ample evidence suggesting that rainfall distribution across the Indo-Pacific Warm Pool (IPWP) - a key component of the global climate system - has substantially varied over the last deglaciation. Yet, the precise nature of these hydroclimate changes remains to be elucidated. In particular, the relative importance of variations in precipitation seasonality versus annual precipitation amount is essentially unknown. Here we use a set of surface sediments from the IPWP covering a wide range of modern hydroclimate conditions to evaluate how plant wax stable isotope composition records rainfall distribution in the area. We focus on long chain fatty acids, which are exclusively produced by vascular plants living on nearby land and delivered to the ocean by rivers. We relate the C (δ13C) and H (δD) isotope composition of long chain fatty acids preserved in surface sediments to modern precipitation distribution and stable isotope composition in their respective source area. We show that: 1) δ13C values reflect vegetation distribution (in particular the relative abundance of C3 and C4 plants) and are primarily recording precipitation seasonality (Dubois et al., 2014) and, 2) once corrected for plant fractionation effects, δD values reflect the amount-weighted average stable isotope composition of precipitation and are primarily recording annual precipitation amounts. We propose that combining the C and H isotope composition of long chain fatty acids thus allows independent reconstructions of precipitation seasonality and annual amounts in the IPWP. The practical implications for reconstructing past hydroclimate in the IPWP will be discussed.
Deducing Electron Properties from Hard X-Ray Observations
NASA Technical Reports Server (NTRS)
Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kasparova, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.;
2011-01-01
X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. This unprecedented data quality allows, for the first time, inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Additional attempts to measure hard X-ray polarization were not sufficient to put constraints on the degree of anisotropy of electrons, but point to the importance of obtaining good quality polarization data in the future.
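The convolution relationship at the heart of this review can be sketched numerically (an assumed toy kernel, not a real bremsstrahlung cross-section): discretizing the photon flux and electron spectrum on an energy grid turns the deconvolution into a regularized linear inversion.

```python
import numpy as np

# Toy deconvolution sketch (assumed numbers): the observed photon flux is
# modeled as F = K @ N, where N is the electron spectrum and K a kernel in
# which electrons of energy E contribute to all photon energies eps <= E.
# N is recovered by Tikhonov-regularized least squares, in the spirit of
# model-independent spectral inversion.
n = 40
E = np.linspace(10.0, 100.0, n)      # energy grid, keV
K = np.triu(np.ones((n, n)))         # electron j contributes to photons i <= j
N_true = E ** -3.0                   # assumed power-law electron spectrum
F = K @ N_true                       # noiseless "observed" photon flux

lam = 1e-8                           # Tikhonov damping (regularization)
N_hat = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ F)

rel_err = np.linalg.norm(N_hat - N_true) / np.linalg.norm(N_true)
assert rel_err < 1e-3                # near-exact recovery on clean data
```

With noisy data the damping parameter must be increased, trading resolution for stability; that trade-off is precisely why improved data quality (as from RHESSI) tightens the inferred electron distributions.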
Brunstein, Maia; Teremetz, Maxime; Hérault, Karine; Tourain, Christophe; Oheim, Martin
2014-01-01
Total internal reflection fluorescence microscopy (TIRFM) achieves subdiffraction axial sectioning by confining fluorophore excitation to a thin layer close to the cell/substrate boundary. However, it is often unknown how thin this light sheet actually is. Particularly in objective-type TIRFM, large deviations from the exponential intensity decay expected for pure evanescence have been reported. Nonevanescent excitation light diminishes the optical sectioning effect, reduces contrast, and renders TIRFM-image quantification uncertain. To identify the sources of this unwanted fluorescence excitation in deeper sample layers, we here combine azimuthal and polar beam scanning (spinning TIRF), atomic force microscopy, and wavefront analysis of beams passing through the objective periphery. Using a variety of intracellular fluorescent labels as well as negative staining experiments to measure cell-induced scattering, we find that azimuthal beam spinning produces TIRFM images that more accurately portray the real fluorophore distribution, but these images are still hampered by far-field excitation. Furthermore, although clearly measurable, cell-induced scattering is not the dominant source of far-field excitation light in objective-type TIRF, at least for most types of weakly scattering cells. It is the microscope illumination optical path that produces a large cell- and beam-angle invariant stray excitation that is insensitive to beam scanning. This instrument-induced glare is produced far from the sample plane, inside the microscope illumination optical path. We identify stray reflections and high-numerical aperture aberrations of the TIRF objective as one important source. This work is accompanied by a companion paper (Pt.2/2). PMID:24606927
NASA Astrophysics Data System (ADS)
Shishov, V. I.; Chashei, I. V.; Oreshko, V. V.; Logvinenko, S. V.; Tyul'bashev, S. A.; Subaev, I. A.; Svidskii, P. M.; Lapshin, V. B.; Dagkesamanskii, R. D.
2016-12-01
The design properties and technical characteristics of the upgraded Large Phased Array (LPA) are briefly described. The results of an annual cycle of observations of interplanetary scintillations of radio sources on the LPA with the new 96-beam BEAM 3 system are presented. Within a day, about 5000 radio sources displaying second-timescale fluctuations in their flux densities due to interplanetary scintillations were observed. At present, the parameters of many of these radio sources are unknown. Therefore, the number of sources with root-mean-square flux-density fluctuations greater than 0.2 Jy in a 3° × 3° area of sky was used to characterize the scintillation level. The observational data obtained during the period of the maximum of solar cycle 24 can be interpreted using a three-component model for the spatial structure of the solar wind, consisting of a stable global component, propagating disturbances, and corotating structures. The global component corresponds to the spherically symmetric structure of the distribution of the turbulent interplanetary plasma. Disturbances propagating from the Sun are observed against the background of the global structure. Propagating disturbances recorded at heliocentric distances of 0.4-1 AU and at all heliolatitudes reach the Earth's orbit one to two days after the scintillation enhancement. Enhancements of ionospheric scintillations are observed during night-time. Corotating disturbances have a recurrence period of 27 days. Disturbances of the ionosphere are observed as the coronal base of a corotating structure approaches the western edge of the solar limb.
NASA Technical Reports Server (NTRS)
Helgason, K.; Cappelluti, N.; Hasinger, G.; Kashlinsky, A.; Ricotti, M.
2014-01-01
A spatial clustering signal has been established in Spitzer/IRAC measurements of the unresolved cosmic near-infrared background (CIB) out to large angular scales, approx. 1 deg. This CIB signal, while significantly exceeding the contribution from the remaining known galaxies, was further found to be coherent at a highly statistically significant level with the unresolved soft cosmic X-ray background (CXB). This measurement probes the unresolved CXB to very faint source levels using deep near-IR source subtraction. We study contributions from extragalactic populations at low to intermediate redshifts to the measured positive cross-power signal of the CIB fluctuations with the CXB. We model the X-ray emission from active galactic nuclei (AGNs), normal galaxies, and hot gas residing in virialized structures, calculating their CXB contribution including their spatial coherence with all infrared emitting counterparts. We use a halo model framework to calculate the auto and cross-power spectra of the unresolved fluctuations based on the latest constraints of the halo occupation distribution and the biasing of AGNs, galaxies, and diffuse emission. At small angular scales (1), the 4.5 μm versus 0.5-2 keV coherence can be explained by shot noise from galaxies and AGNs. However, at large angular scales (approx. 10), we find that the net contribution from the modeled populations is only able to account for approx. 3% of the measured CIB×CXB cross-power. The discrepancy suggests that the CIB×CXB signal originates from the same unknown source population producing the CIB clustering signal out to approx. 1 deg.
Determining the Intensity of a Point-Like Source Observed on the Background of AN Extended Source
NASA Astrophysics Data System (ADS)
Kornienko, Y. V.; Skuratovskiy, S. I.
2014-12-01
The problem of determining the time dependence of the intensity of a point-like source in the presence of atmospheric blur is formulated and solved using the Bayesian statistical approach. The point-like source is assumed to be observed against the background of an extended source whose brightness is constant in time though unknown. An equation system for the optimal statistical estimation of the sequence of intensity values at the observation moments is obtained. The problem is particularly relevant for studying gravitational mirages, which appear when a quasar is observed through the gravitational field of a distant galaxy.
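A heavily simplified version of this estimation problem (assumed Gaussian blur and Gaussian noise, ignoring the paper's full Bayesian machinery) is linear in the unknown source intensity and background brightness, so both can be recovered jointly by least squares:

```python
import numpy as np

# Toy 1-D version (assumed setup): observation = I * PSF + B + noise,
# where I is the point-source intensity and B the constant (but unknown)
# brightness of the extended background.
rng = np.random.default_rng(1)
x = np.arange(-10, 11, dtype=float)
psf = np.exp(-x**2 / (2 * 2.0**2))        # assumed Gaussian atmospheric blur
psf /= psf.sum()

I_true, B_true = 50.0, 3.0
obs = I_true * psf + B_true + 0.01 * rng.standard_normal(x.size)

# Design matrix: one column for the blurred point source, one for background.
A = np.column_stack([psf, np.ones_like(psf)])
(I_hat, B_hat), *_ = np.linalg.lstsq(A, obs, rcond=None)

assert abs(I_hat - I_true) < 1.0
assert abs(B_hat - B_true) < 0.1
```

Estimating a whole sequence of intensity values, as in the paper, repeats this joint fit across observation epochs while sharing the constant background term; the Bayesian treatment additionally propagates prior information and noise statistics.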
Chandran, A; Mazumder, A
2015-12-01
The aims of this study were to investigate the temporal variation in Escherichia coli density and its sources at the drinking water intake of Comox Lake for a period of 3 years (2011-2013). Density of E. coli was assessed by the standard membrane filtration method. Source tracking of E. coli was done using the BOX-A1R-based rep-PCR DNA fingerprinting method. Over the years, the mean E. coli density ranged from nondetectable to 9·8 CFU 100 ml(-1). The density of E. coli in each of the years did not show any significant difference (P > 0·05); however, a comparatively higher density was observed during the fall. Wildlife (64·28%, 153/238) was identified as the major contributing source of E. coli, followed by human (18·06%, 43/238) and unknown sources (17·64%, 42/238). Although the sources varied by year and season, overall, the predominant contributing sources were black bear, human, unknown, elk, horse and gull. The findings of this investigation identified the multiple animal sources contributing faecal bacteria into the drinking water intake of Comox Lake and their varying temporal occurrence. The results of this study can reliably inform the authorities about the most vulnerable period (season) of faecal bacterial loading and the potential sources in the lake, for improving risk assessment and pollution mitigation. © 2015 The Society for Applied Microbiology.
Ambient Noise Interferometry and Surface Wave Array Tomography: Promises and Problems
NASA Astrophysics Data System (ADS)
van der Hilst, R. D.; Yao, H.; de Hoop, M. V.; Campman, X.; Solna, K.
2008-12-01
In the late 1990s most seismologists would have frowned at the possibility of doing high-resolution surface wave tomography with noise instead of with signal associated with ballistic source-receiver propagation. Some may still do, but surface wave tomography with Green's functions estimated through ambient noise interferometry ('sourceless tomography') has transformed from a curiosity into one of the (almost) standard tools for analysis of data from dense seismograph arrays. Indeed, spectacular applications of ambient noise surface wave tomography have recently been published. For example, application to data from arrays in SE Tibet revealed structures in the crust beneath the Tibetan plateau that could not be resolved by traditional tomography (Yao et al., GJI, 2006, 2008). While the approach is conceptually simple, in application the proverbial devil is in the detail. Full reconstruction of the Green's function requires that the wavefields used are diffusive and that ambient noise energy is evenly distributed in the spatial dimensions of interest. In the field, these conditions are not usually met, and (frequency-dependent) non-uniformity of the noise sources may lead to incomplete reconstruction of the Green's function. Furthermore, ambient noise distributions can be time-dependent, and seasonal variations have been documented. Naive use of empirical Green's functions may therefore produce (unknown) bias in the tomographic models. The degrading effect on EGFs of the directionality of the noise distribution poses particular challenges for applications beyond isotropic surface wave inversions, such as inversions for (azimuthal) anisotropy and attempts to use higher modes (or body waves). Incomplete Green's function reconstruction can (probably) not be prevented, but it may be possible to reduce the problem and - at least - understand the degree of incomplete reconstruction and prevent it from degrading the tomographic model.
We will present examples of Rayleigh wave inversions and discuss strategies to mitigate effects of incomplete Green's function reconstruction on tomographic images.
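The core idea of Green's function retrieval can be sketched in one dimension (an assumed toy with a single noise stream, deliberately ignoring the uniform-source requirement discussed above): cross-correlating two stations' recordings of the same ambient noise recovers the inter-station travel time.

```python
import numpy as np

# Minimal 1-D illustration (assumed toy, not a full EGF derivation):
# station B records the ambient wavefield 25 samples before station A,
# and the peak of the cross-correlation sits at that travel-time lag.
rng = np.random.default_rng(2)
true_delay = 25                      # inter-station travel time, samples
noise = rng.standard_normal(10000)   # ambient noise wavefield

rec_a = noise[:-true_delay]          # station A's record
rec_b = noise[true_delay:]           # station B hears the wavefield earlier

xcorr = np.correlate(rec_a, rec_b, mode="full")
lag = np.argmax(xcorr) - (len(rec_b) - 1)
assert lag == true_delay             # correlation peak = travel time
```

In the field the noise comes from many directions at once; only when sources are evenly distributed do the causal and acausal correlation lobes both converge to the true Green's function, which is why directional noise biases the travel-time estimate.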
Sex Distribution of Paper Mulberry (Broussonetia papyrifera) in the Pacific
Peñailillo, Johany; Olivares, Gabriela; Moncada, Ximena; Payacán, Claudia; Chang, Chi-Shan; Chung, Kuo-Fang; Matthews, Peter J.; Seelenfreund, Andrea; Seelenfreund, Daniela
2016-01-01
Background Paper mulberry (Broussonetia papyrifera (L.) L'Hér. ex Vent) is a dioecious tree native to East Asia and mainland Southeast-Asia, introduced prehistorically to Polynesia as a source of bark fiber by Austronesian-speaking voyagers. In Oceania, trees are coppiced and harvested for production of bark-cloth, so flowering is generally unknown. A survey of botanical records of paper mulberry revealed a distributional disjunction: the tree is apparently absent in Borneo and the Philippines. A subsequent study of chloroplast haplotypes linked paper mulberry of Remote Oceania directly to a population in southern Taiwan, distinct from known populations in mainland Southeast-Asia. Methodology We describe the optimization and use of a DNA marker designed to identify sex in paper mulberry. We used this marker to determine the sex distribution in selected localities across Asia, Near and Remote Oceania. We also characterized all samples using the ribosomal internal transcribed spacer sequence (ITS) in order to relate results to a previous survey of ITS diversity. Results In Near and Remote Oceania, contemporary paper mulberry plants are all female with the exception of Hawaii, where plants of both sexes are found. In its natural range in Asia, male and female plants are found, as expected. Male plants in Hawaii display an East Asian ITS genotype, consistent with modern introduction, while females in Remote Oceania share a distinctive variant. Conclusions Most paper mulberry plants now present in the Pacific appear to be descended from female clones introduced prehistorically. In Hawaii, the presence of male and female plants is thought to reflect a dual origin, one a prehistoric female introduction and the other a modern male introduction by Japanese/Chinese immigrants. If only female clones were dispersed from a source-region in Taiwan, this may explain the absence of botanical records and breeding populations in the Philippines and Borneo, and Remote Oceania. 
PMID:27529483
Terrestrial dissolved organic matter distribution in the North Sea.
Painter, Stuart C; Lapworth, Dan J; Woodward, E Malcolm S; Kroeger, Silke; Evans, Chris D; Mayor, Daniel J; Sanders, Richard J
2018-07-15
The flow of terrestrial carbon to rivers and inland waters is a major term in the global carbon cycle. The organic fraction of this flux may be buried, remineralized or ultimately stored in the deep ocean. The latter can only occur if terrestrial organic carbon can pass through the coastal and estuarine filter, a process of unknown efficiency. Here, data are presented on the spatial distribution of terrestrial fluorescent and chromophoric dissolved organic matter (FDOM and CDOM, respectively) throughout the North Sea, which receives organic matter from multiple distinct sources. We use FDOM and CDOM as proxies for terrestrial dissolved organic matter (tDOM) to test the hypothesis that tDOM is quantitatively transferred through the North Sea to the open North Atlantic Ocean. Excitation emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC) revealed a single terrestrial humic-like class of compounds whose distribution was restricted to the coastal margins and, via an inverse salinity relationship, to major riverine inputs. Two distinct sources of fluorescent humic-like material were observed associated with the combined outflows of the Rhine, Weser and Elbe rivers in the south-eastern North Sea and the Baltic Sea outflow to the eastern central North Sea. The flux of tDOM from the North Sea to the Atlantic Ocean appears insignificant, although tDOM export may occur through Norwegian coastal waters unsampled in our study. Our analysis suggests that the bulk of tDOM exported from the Northwest European and Scandinavian landmasses is buried or remineralized internally, with potential losses to the atmosphere. This interpretation implies that the residence time in estuarine and coastal systems exerts an important control over the fate of tDOM and needs to be considered when evaluating the role of terrestrial carbon losses in the global carbon cycle. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
Nakano, Shusuke; Yokoyama, Yuta; Aoyagi, Satoka; Himi, Naoyuki; Fletcher, John S; Lockyer, Nicholas P; Henderson, Alex; Vickerman, John C
2016-06-08
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) provides detailed chemical structure information and high spatial resolution images. Therefore, ToF-SIMS is useful for studying biological phenomena such as ischemia. In this study, in order to evaluate cerebral microinfarction, the distribution of biomolecules generated by ischemia was measured with ToF-SIMS. The ToF-SIMS data sets were analyzed by means of multivariate analysis to interpret complex samples containing unknown species and to obtain maps of biomolecules indicated by fragment ions from the target compounds. Using conventional ToF-SIMS (primary ion source: Bi cluster ions), it is difficult to detect secondary ions beyond approximately 1000 u. Moreover, the intensity of secondary ions related to biomolecules is not always high enough for imaging because of low concentrations, even if the masses are below 1000 u. However, for the observation of biomolecular distributions in tissues, it is important to detect low amounts of biological molecules from a particular area of tissue. Rat brain tissue samples were measured with ToF-SIMS (J105, Ionoptika, Ltd., Chandlers Ford, UK), using a continuous beam of Ar clusters as the primary ion source. ToF-SIMS with Ar clusters efficiently detects secondary ions related to biomolecules and larger molecules. Molecules detected by ToF-SIMS were examined by analyzing the data using multivariate analysis. Microspheres (45 μm diameter) were injected into the unilateral internal carotid artery of rats (MS rats) to cause cerebral microinfarction. The rat brain was sliced and then measured with ToF-SIMS. Brain samples from a normal rat and an MS rat were examined to find specific secondary ions related to important biomolecules, and the differences between them were investigated. Finally, specific secondary ions were found around vessels incorporating microspheres in the MS rat.
The results suggest that important biomolecules related to cerebral microinfarction can be detected by ToF-SIMS.
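The multivariate step can be sketched with synthetic data (assumed random "signatures", not real ToF-SIMS spectra): PCA on a pixels-by-peaks intensity matrix separates regions dominated by different chemical signatures, which is how fragment-ion images from complex tissue can be decomposed.

```python
import numpy as np

# Hedged sketch of the multivariate analysis (assumed synthetic data):
# 40 "pixels" x 30 "fragment-ion peaks", half the pixels dominated by
# signature A and half by signature B, with small measurement noise.
rng = np.random.default_rng(3)
sig_a = rng.random(30)                     # fragment-ion signature A
sig_b = rng.random(30)                     # fragment-ion signature B
frac = np.r_[np.zeros(20), np.ones(20)]    # pixel membership: A then B
X = np.outer(1 - frac, sig_a) + np.outer(frac, sig_b)
X += 0.01 * rng.standard_normal(X.shape)

Xc = X - X.mean(axis=0)                    # mean-center before PCA
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]                        # first principal-component scores

# PC1 separates the two pixel populations (opposite-signed score means).
assert scores[:20].mean() * scores[20:].mean() < 0
```

Plotting the PC1 scores back at their pixel positions would give a chemical-contrast image, the analogue of the biomolecular maps discussed in the abstract.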
Monaco, D; Riccio, A; Chianese, E; Adamo, P; Di Rosa, S; Fagnano, M
2015-10-01
In this paper, the behaviour and distribution patterns of heavy hydrocarbons and several polycyclic aromatic hydrocarbon (PAH) priority pollutants, as listed by the US Environmental Protection Agency, were evaluated in 891 soil samples. The samples were collected in three rural sites in Campania (southern Italy) where pollution was expected, as part of the LIFE11 ECOREMED project, funded by the European Commission, to test innovative agriculture-based soil restoration techniques. These sites were selected because they had been used for the temporary storage of urban and building waste (Teverola), were subject to illicit dumping of unknown material (Trentola-Ducenta), or were suspected to be polluted by metals due to agricultural practices (Giugliano). Chemical analysis of soil samples allowed the baseline pollution levels to be determined prior to any intervention. It was found that these areas can be considered contaminated for residential use, in accordance with Italian environmental law (Law Decree 152/2006). Statistical analysis applied to the data showed that mean concentrations of heavy hydrocarbons could be as high as 140 mg/kg of dry soil, with peaks of 700 mg/kg of dry soil, for the Trentola-Ducenta site; the median hydrocarbon (HC) concentration for the Trentola-Ducenta and Giugliano sites was 63 and 73.4 mg/kg dry soil, respectively; for Teverola, the median level was 35 mg/kg dry soil. Some PAHs (usually benzo(a)pyrene) also exceeded the maximum allowed level in all sites. From the principal component analysis applied to PAH concentrations, it emerged that pollutants can be supposed to derive from a single source for the three sites. Diagnostic ratios calculated to determine possible PAH sources suggest petroleum combustion or disposal practices.
Our sampling protocol also revealed considerable inhomogeneity in the spatial distribution of soil pollutants, even at a scale as small as 3.3 m, indicating that variability can emerge at very short spatial scales.
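Diagnostic ratios of the kind mentioned above can be computed directly from measured concentrations. A minimal sketch, using hypothetical concentrations (not values from this study) and the commonly cited Yunker et al. (2002) threshold conventions for the fluoranthene/pyrene ratio:

```python
# Illustrative PAH diagnostic-ratio screen. All concentrations are hypothetical;
# thresholds follow the commonly used Yunker et al. (2002) conventions.

def diagnostic_ratios(conc):
    """conc: dict of PAH concentrations (any consistent unit)."""
    fl_pyr = conc["fluoranthene"] / (conc["fluoranthene"] + conc["pyrene"])
    baa_chr = conc["benzo(a)anthracene"] / (conc["benzo(a)anthracene"] + conc["chrysene"])
    return fl_pyr, baa_chr

def interpret_fl_pyr(r):
    """Interpret Fl/(Fl+Pyr): <0.4 petroleum, 0.4-0.5 petroleum combustion."""
    if r < 0.4:
        return "petroleum"
    if r <= 0.5:
        return "petroleum combustion"
    return "biomass/coal combustion"

# Hypothetical soil sample
sample = {"fluoranthene": 42.0, "pyrene": 51.0,
          "benzo(a)anthracene": 18.0, "chrysene": 22.0}
r1, r2 = diagnostic_ratios(sample)
print(round(r1, 3), interpret_fl_pyr(r1))
```

In practice several such ratios are compared jointly, since individual ratios can be shifted by photodegradation between emission and sampling.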
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius to the relative velocity. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies.
In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity function over much of M31.
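The Einstein-timescale relation described in the abstract is standard: the Einstein radius in the lens plane scales as the square root of the lens mass, so the timescale inherits that scaling. A minimal sketch, where the distances and transverse velocity are illustrative round numbers, not values from the proposal:

```python
import math

# Einstein timescale t_E = R_E / v_rel, with R_E the Einstein radius in the
# lens plane. All parameter values below are illustrative assumptions.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
kpc = 3.086e19         # m

def einstein_timescale(M_kg, D_l_m, D_s_m, v_rel_ms):
    """R_E = sqrt(4 G M D_l (D_s - D_l) / (c^2 D_s)); returns t_E in seconds."""
    R_E = math.sqrt(4 * G * M_kg * D_l_m * (D_s_m - D_l_m) / (c**2 * D_s_m))
    return R_E / v_rel_ms

# Illustrative: a 0.5 M_sun halo lens at 770 kpc, source at 780 kpc, v = 200 km/s
t_E = einstein_timescale(0.5 * M_sun, 770 * kpc, 780 * kpc, 200e3)
print(round(t_E / 86400, 1), "days")
```

Since t_E scales as the square root of the lens mass, halving the uncertainty on t_E (the gain the proposal attributes to HST baselines) directly tightens the inferred mass distribution.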
1986-12-01
Force and other branches of the military are placing an increased emphasis on system reliability and maintainability. In studying current systems ... used in the research of proposed systems, by predicting MTTF and MTTR of the new parts and thus predicting the reliability of those parts. The statistics ... effectiveness of new systems. Aitchison's book on the lognormal distribution, printed and used by Cambridge University, highlighted the distributions
Khan, Muhammad Usman; Besis, Athanasios; Li, Jun; Zhang, Gan; Malik, Riffat Naseem
2017-10-01
Data regarding flame retardants (FRs) in indoor and outdoor air and population exposure to them are scarce, and especially unknown in the case of Pakistan. The current study was designed to probe FR concentrations and distribution patterns in indoor and outdoor air at different altitudinal zones (DAZs) of Pakistan, with special emphasis on their risk to the exposed population. In this study, passive air samplers for FR deposition were deployed in indoor and outdoor air at industrial, rural, and background/colder zones/sites. All the indoor and outdoor air samples collected from DAZs were analyzed for the target FRs (9.30-472.30 pg/m3), showing a decreasing trend as follows: ∑NBFRs > ∑PBDEs > ∑DP. Significant correlations among FRs in the indoor and outdoor air at DAZs indicated a similar source of FR origin in different consumer goods. Furthermore, air mass trajectories revealed that movement of air over industrial area sources influenced concentrations of FRs at rural sites. The FR concentrations, estimated daily intake (EDI), and hazard quotient (HQ) were higher in toddlers than in adults. In addition, indoor air samples showed higher FR levels, EDI, and HQ than outdoor air samples. Elevated FR concentrations and their prevalent exposure risks were recorded in the industrial zones, followed by rural and background zones. The HQ for BDE-47 and BDE-99 in the indoor and outdoor air samples at different industrial and rural sites was >1 in toddlers and adults, which further indicates a health risk in the population. The FR data for indoor and outdoor air provide a baseline for Pakistan, supporting further regulatory steps by the government and agencies. Copyright © 2017 Elsevier Ltd. All rights reserved.
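The EDI and HQ quantities used above follow standard exposure-assessment forms: intake scales with air concentration and inhalation rate and inversely with body weight, and the hazard quotient compares intake to a reference dose. A minimal sketch, where all parameter values (inhalation rates, body weights, reference dose) are illustrative, not those used in the study:

```python
# Generic inhalation-exposure sketch. The formulas are the standard EDI/HQ
# forms; every numeric parameter below is an illustrative assumption.

def edi_inhalation(conc_pg_m3, inhalation_m3_day, body_weight_kg):
    """Estimated daily intake in pg per kg body weight per day."""
    return conc_pg_m3 * inhalation_m3_day / body_weight_kg

def hazard_quotient(edi, reference_dose):
    """HQ = EDI / reference dose; HQ > 1 flags potential non-carcinogenic risk."""
    return edi / reference_dose

# Toddler vs. adult for the same indoor-air concentration
conc = 470.0                                     # pg/m3, near the reported upper bound
edi_toddler = edi_inhalation(conc, 8.0, 12.0)    # smaller body weight dominates
edi_adult = edi_inhalation(conc, 16.0, 70.0)
print(edi_toddler > edi_adult)                   # True
```

This is why toddlers show higher EDI and HQ than adults at identical air concentrations: the intake-to-body-weight ratio is larger even though absolute inhalation volume is smaller.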
Interpreting the spatio-temporal patterns of sea turtle strandings: Going with the flow
Hart, K.M.; Mooreside, P.; Crowder, L.B.
2006-01-01
Knowledge of the spatial and temporal distribution of specific mortality sources is crucial for management of species that are vulnerable to human interactions. Beachcast carcasses represent an unknown fraction of at-sea mortalities. While a variety of physical (e.g., water temperature) and biological (e.g., decomposition) factors as well as the distribution of animals and their mortality sources likely affect the probability of carcass stranding, physical oceanography plays a major role in where and when carcasses strand. Here, we evaluate the influence of nearshore physical oceanographic and wind regimes on sea turtle strandings to decipher seasonal trends and make qualitative predictions about stranding patterns along oceanfront beaches. We use results from oceanic drift-bottle experiments to check our predictions and provide an upper limit on stranding proportions. We compare predicted current regimes from a 3D physical oceanographic model to spatial and temporal locations of both sea turtle carcass strandings and drift bottle landfalls. Drift bottle return rates suggest an upper limit for the proportion of sea turtle carcasses that strand (about 20%). In the South Atlantic Bight, seasonal development of along-shelf flow coincides with increased numbers of strandings of both turtles and drift bottles in late spring and early summer. The model also predicts net offshore flow of surface waters during winter, the season with the fewest relative strandings. The drift bottle data provide a reasonable upper bound on how likely carcasses are to reach land from points offshore and bound the general timeframe for stranding post-mortem (< two weeks). Our findings suggest that marine turtle strandings follow a seasonal regime predictable from physical oceanography and mimicked by drift bottle experiments. Managers can use these findings to reevaluate incidental strandings limits and fishery takes for both nearshore and offshore mortality sources. © 2005 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Murray, J. R.
2016-12-01
Finite-fault source algorithms can greatly benefit earthquake early warning (EEW) systems. Estimates of finite-fault parameters provide spatial information, which can significantly improve real-time shaking calculations and help with disaster response. In this project, we have focused on integrating a finite-fault seismic-geodetic algorithm into the West Coast ShakeAlert framework. The seismic part is FinDer 2, a C++ version of the algorithm developed by Böse et al. (2012). It interpolates peak ground accelerations and calculates the best fault length and strike from template matching. The geodetic part is a C++ version of BEFORES, the algorithm developed by Minson et al. (2014) that uses a Bayesian methodology to search for the most probable slip distribution on a fault of unknown orientation. Ultimately, these two will be used together where FinDer generates a Bayesian prior for BEFORES via the methodology of Minson et al. (2015), and the joint solution will generate estimates of finite-fault extent, strike, dip, best slip distribution, and magnitude. We have created C++ versions of both FinDer and BEFORES using open source libraries and have developed a C++ Application Protocol Interface (API) for them both. Their APIs allow FinDer and BEFORES to contribute to the ShakeAlert system via an open source messaging system, ActiveMQ. FinDer has been receiving real-time data, detecting earthquakes, and reporting messages on the development system for several months. We are also testing FinDer extensively with Earthworm tankplayer files. BEFORES has been tested with ActiveMQ messaging in the ShakeAlert framework, and works off a FinDer trigger. We are finishing the FinDer-BEFORES connections in this framework, and testing this system via seismic-geodetic tankplayer files. This will include actual and simulated data.
Method and apparatus for reducing the harmonic currents in alternating-current distribution networks
Beverly, Leon H.; Hance, Richard D.; Kristalinski, Alexandr L.; Visser, Age T.
1996-01-01
An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employ a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer.
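The reason a Zig-Zag transformer is effective here is that triplen harmonics (3rd, 9th, ...) of balanced three-phase currents are zero-sequence: they are in phase across the three phases, so they add in the neutral rather than cancelling, and a zig-zag winding's low zero-sequence impedance circulates them locally. A numerical sketch with illustrative amplitudes (not from the patent):

```python
import math

# Demonstrate that 3rd-harmonic components of the three phase currents are
# zero-sequence: fundamentals cancel in the neutral, triplens add.
# Amplitudes (1.0 fundamental, 0.3 third harmonic) are illustrative.

def phase_current(t, phase_shift, i1=1.0, i3=0.3):
    """Fundamental plus 3rd harmonic for one phase."""
    return (i1 * math.sin(t - phase_shift)
            + i3 * math.sin(3 * (t - phase_shift)))

ts = [k * 2 * math.pi / 360 for k in range(360)]
shifts = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]   # phases a, b, c
neutral = [sum(phase_current(t, s) for s in shifts) for t in ts]

# sin(3(t - 2*pi/3)) = sin(3t - 2*pi) = sin(3t): all three triplens coincide,
# so the neutral carries 3 * i3 of pure 3rd harmonic.
peak = max(abs(i) for i in neutral)
print(round(peak, 3))  # 0.9 = 3 x 0.3
```

With the triplens confined to circulate in the zig-zag winding, the upstream neutral sees mainly the residual unbalance current, which is what the added neutral choke further suppresses.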
Method and apparatus for reducing the harmonic currents in alternating-current distribution networks
Beverly, L.H.; Hance, R.D.; Kristalinski, A.L.; Visser, A.T.
1996-11-19
An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employ a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer. 23 figs.
Bailly, Jean-Stéphane; Vinatier, Fabrice
2018-01-01
To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics derived from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads to explain the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e., the importance of the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized.
In addition, the vegetative filter strips could constitute new seed bank sources for species that are affected by the distance to natural lands and roads. PMID:29360857
Liu, Gang; Tao, Yu; Zhang, Ya; Lut, Maarten; Knibbe, Willem-Jan; van der Wielen, Paul; Liu, Wentso; Medema, Gertjan; van der Meer, Walter
2017-11-01
Biofilm formation, loose deposit accumulation and water quality deterioration in drinking water distribution systems have been widely reported. However, the accumulation and distribution of harbored elements and microbes in the different niches (loose deposits, PVC-U biofilm, and HDPE biofilm) and their corresponding potential contribution to water quality deterioration remain unknown. This precludes an in-depth understanding of water quality deterioration and the development of proactive management strategies. The present study quantitatively evaluated the distribution of elements, ATP, Aeromonas spp., and bacterial communities in distribution pipes (PVC-U, D = 110 mm, loose deposit and biofilm niches) and household connection pipes (HDPE, D = 32 mm, HDPE biofilm niches) at ten locations in an unchlorinated distribution system. The results show that loose deposits in PVC-U pipes, acting as sinks, constitute a hotspot (highest total amount per meter of pipe) for elements, ATP, and target bacteria groups (e.g., Aeromonas spp., Mycobacterium spp., and Legionella spp.). When drinking water distribution system niches with harbored elements and microbes become sources in the event of disturbances, the highest quality deterioration potential (QDP) is that of HDPE biofilm; this can be attributed to its high surface-to-volume ratio. 16S rRNA analysis demonstrates that, at the genus level, the bacterial communities in the water, loose deposits, PVC-U biofilm, and HDPE biofilm were dominated, respectively, by Polaromonas spp. (2-23%), Nitrospira spp. (1-47%), Flavobacterium spp. (1-36%), and Flavobacterium spp. (5-67%). The combined results of elemental composition and bacterial community analyses indicate that different dominant bio-chemical processes might occur within the different niches: for example, iron-arsenic oxidizing in loose deposits, bio-calumniation in PVC-U biofilm, and methane oxidizing in HDPE biofilm. 
The release of 20% of loose deposits, 20% of PVC-U biofilm and 10% of HDPE biofilm would cause significant changes in the water bacterial community. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Rudi, Gabrielle; Bailly, Jean-Stéphane; Vinatier, Fabrice
2018-01-01
To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics derived from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads to explain the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e., the importance of the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized.
In addition, the vegetative filter strips could constitute new seed bank sources for species that are affected by the distance to natural lands and roads.
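The validation metric used above, mean AUC over k folds, can be sketched compactly. The sketch below uses synthetic presence/absence data and a single synthetic predictor as the score (the study fitted GLM and Maxent models; for a univariate GLM the fitted score is a monotone transform of the predictor, so the AUC is unchanged):

```python
import numpy as np

# Rank-based (Mann-Whitney) AUC evaluated over k folds, on synthetic
# presence/absence data. Predictor, sample size, and slope are assumptions.

def auc(scores, labels):
    """P(score_pos > score_neg), counting ties as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                 # one geomorphological predictor (synthetic)
p = 1 / (1 + np.exp(-2 * x))           # true occurrence probability
y = rng.binomial(1, p)                 # presence/absence records

k = 5
folds = np.array_split(rng.permutation(n), k)
aucs = [auc(x[f], y[f]) for f in folds]   # evaluate the score fold by fold
print(round(float(np.mean(aucs)), 2))
```

An AUC near 0.5 would mean the predictor ranks presences no better than chance; values well above 0.5, as here, correspond to the "efficient" predictions reported for seven of the ten species.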
Plume mapping and isotopic characterisation of anthropogenic methane sources
NASA Astrophysics Data System (ADS)
Zazzeri, G.; Lowry, D.; Fisher, R. E.; France, J. L.; Lanoisellé, M.; Nisbet, E. G.
2015-06-01
Methane stable isotope analysis, coupled with mole fraction measurement, has been used to link isotopic signatures to methane emissions from landfill sites, coal mines and gas leaks in the United Kingdom. A mobile Picarro G2301 CRDS (Cavity Ring-Down Spectroscopy) analyser was installed on a vehicle, together with an anemometer and GPS receiver, to measure atmospheric methane mole fractions and their locations while driving at speeds up to 80 kph. In targeted areas, when the methane plume was intercepted, air samples were collected in Tedlar bags for δ13C-CH4 isotopic analysis by CF-GC-IRMS (Continuous Flow Gas Chromatography-Isotope Ratio Mass Spectrometry). This method provides high precision isotopic values, determining δ13C-CH4 to ±0.05‰. The bulk signature of the methane emitted to the atmosphere from the whole source area was obtained by Keeling plot analysis, and a δ13C-CH4 signature, with its associated uncertainty, was allocated to each methane source investigated. Both landfill and natural gas emissions in SE England have tightly constrained isotopic signatures. The averaged δ13C-CH4 for landfill sites is -58 ± 3‰. The δ13C-CH4 signature for gas leaks is also fairly constant around -36 ± 2‰, a value characteristic of homogenised North Sea supply. In contrast, signatures for coal mines in N. England and Wales fall in a range of -51.2 ± 0.3‰ to -30.9 ± 1.4‰, but can be tightly constrained by region. The study demonstrates that CRDS-based mobile methane measurement coupled with off-line high precision isotopic analysis of plume samples is an efficient way of characterising methane sources. It shows that isotopic measurements allow source-type identification and the possible location of previously unknown methane sources. In modelling studies, these measurements provide an independent constraint on the contributions of different sources to the regional methane budget and on the verification of inventory source distributions.
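The Keeling plot analysis mentioned above rests on a two-member mixing model: plotting measured δ13C against the inverse mole fraction gives a straight line whose intercept (1/[CH4] → 0) is the source signature. A minimal sketch with synthetic data, assuming a -58‰ landfill source (the value reported in the abstract) mixing into an illustrative background of 1900 ppb at -47.3‰:

```python
import numpy as np

# Keeling plot: delta_mix = delta_src + bg_c * (delta_bg - delta_src) * (1/c),
# so the regression intercept at 1/c -> 0 recovers the source signature.

bg_c, bg_d = 1900.0, -47.3      # assumed background mole fraction (ppb), d13C (permil)
src_d = -58.0                   # source signature to recover (landfill value above)

rng = np.random.default_rng(1)
excess = rng.uniform(100, 2000, 30)          # source-added CH4 in each sample (ppb)
c = bg_c + excess                            # measured mole fraction
# isotopic mass balance: delta_mix * c = delta_bg * bg_c + delta_src * excess
d = (bg_d * bg_c + src_d * excess) / c
d += rng.normal(0, 0.05, 30)                 # the +-0.05 permil IRMS precision

slope, intercept = np.polyfit(1.0 / c, d, 1)
print(round(intercept, 1))                   # close to -58.0
```

Because the intercept is an extrapolation to infinite mole fraction, the uncertainty quoted for each source signature grows when sampled plume enhancements above background are small.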
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
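The augmentation idea can be illustrated numerically. The sketch below is not the patented implementation, only a minimal demonstration on synthetic spectra: a CLS model is calibrated in the presence of an unmodeled interferent, the dominant residual spectrum is extracted, and appending it to the estimated pure-component spectra restores accurate predictions.

```python
import numpy as np

# Synthetic demonstration of residual-based augmentation of a CLS model.
# All spectra, concentrations, and interferent amounts are made up.

rng = np.random.default_rng(2)
nwave = 80
S = np.abs(rng.normal(size=(2, nwave)))      # true pure-component spectra
g = np.abs(rng.normal(size=nwave))           # unmodeled interferent spectrum

C = rng.uniform(0.1, 1.0, size=(12, 2))      # calibration concentrations
t = rng.uniform(0.0, 0.5, size=(12, 1))      # unknown interferent amounts
A = C @ S + t @ g[None, :]                   # measured calibration spectra

K = np.linalg.pinv(C) @ A                    # classical CLS estimate of spectra
R = A - C @ K                                # spectral residuals
v = np.linalg.svd(R)[2][0]                   # dominant residual spectrum (~ g)
K_aug = np.vstack([K, v])                    # augmented spectral model

# Predict a new mixture that also contains the interferent
c_true = np.array([0.7, 0.3])
a_new = c_true @ S + 0.4 * g
c_cls = a_new @ np.linalg.pinv(K)            # plain CLS: biased
c_acls = (a_new @ np.linalg.pinv(K_aug))[:2] # augmented: near-exact

print(np.abs(c_cls - c_true).max() > np.abs(c_acls - c_true).max())  # True
```

The residual matrix here is rank one by construction, so one augmentation vector suffices; with several unmodeled effects, more residual components would be appended, which is the flexibility the abstract attributes to ACLS relative to plain CLS.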
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-07-26
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Photocopy of photograph. Photographer unknown. Poster from the World War ...
Photocopy of photograph. Photographer unknown. Poster from the World War II period. During drives to encourage the purchase of war bonds, posters featuring female shipyard workers were widely distributed; purchasers were allowed one vote for each bond bought. Votes were cast and the woman who got the most votes was named "War Bond Girl." The contest was won by Kay McGinty, 4th row, 2nd column. - Naval Base Philadelphia-Philadelphia Naval Shipyard, League Island, Philadelphia, Philadelphia County, PA
The Swift-BAT Hard X-ray Transient Monitor
NASA Technical Reports Server (NTRS)
Krimm, Hans; Markwardt, C. B.; Sanwal, D.; Tueller, J.
2006-01-01
The Burst Alert Telescope (BAT) on the Swift satellite is a large field of view instrument that continually monitors the sky to provide the gamma-ray burst trigger for Swift. An average of more than 70% of the sky is observed on a daily basis. The survey mode data is processed on two sets of time scales: from one minute to one day as part of the transient monitor program, and from one spacecraft pointing (approx. 20 minutes) to the full mission duration for the hard X-ray survey program. The transient monitor has recently become public through the web site http://swift.gsfc.nasa.gov/docs/swift/results/transients/. Sky images are processed to detect astrophysical sources in the 15-50 keV energy band and the detected flux or upper limit is calculated for >100 sources on time scales up to one day. Light curves are updated each time that new BAT data become available (approx. 10 times daily). In addition, the monitor is sensitive to an outburst from a new or unknown source. Sensitivity as a function of time scale for catalog and unknown sources will be presented. The daily exposure for a typical source is approx. 1500-3000 seconds, with a 1-sigma sensitivity of approx. 4 mCrab. 90% of the sources are sampled at least every 16 days, but many sources are sampled daily. It is expected that the Swift-BAT transient monitor will become an important resource for the high energy astrophysics community.
Swift-BAT: Transient Source Monitoring
NASA Astrophysics Data System (ADS)
Barbier, L. M.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Krimm, H.; Markwardt, C.; Mushotzky, R.; Parsons, A.; Sakamoto, T.; Tueller, J.; Fenimore, E.; Palmer, D.; Skinner, G.; Swift-BAT Team
2005-12-01
The Burst Alert Telescope (BAT) on the Swift satellite is a large field of view instrument that continually monitors the sky to provide the gamma-ray burst trigger for Swift. An average of more than 70% of the sky is observed on a daily basis. The survey mode data is processed on two sets of time scales: from one minute to one day as part of the transient monitor program, and from one spacecraft pointing ( ˜20 minutes) to the full mission duration for the hard X-ray survey program. In the transient monitor program, sky images are processed to detect astrophysical sources in six energy bands covering 15-350 keV. The detected flux or upper limit in each energy band is calculated for >300 objects on time scales up to one day. In addition, the monitor is sensitive to an outburst from a new or unknown source. Sensitivity as a function of time scale for catalog and unknown sources will be presented. The daily exposure for a typical source is ˜1500 - 3000 seconds, with a 1-sigma sensitivity of ˜4mCrab. 90% of the sources are sampled at least every 16 days, but many sources are sampled daily. The BAT team will soon make the results of the transient monitor public to the astrophysical community through the Swift mission web page. It is expected that the Swift-BAT transient monitor will become an important resource for the high energy astrophysics community.
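The quoted numbers (roughly 4 mCrab at a typical 1500-3000 s daily exposure) can be scaled to other time scales under the simple assumption that the 1-sigma sensitivity improves as the inverse square root of exposure; this is only the background-limited idealization, and real BAT systematics make it approximate:

```python
import math

# Back-of-envelope sensitivity scaling under an assumed 1/sqrt(exposure) law.
# Reference values are the ~4 mCrab / ~2000 s figures quoted above.

def sensitivity_mcrab(exposure_s, ref_sens=4.0, ref_exposure=2000.0):
    """1-sigma sensitivity in mCrab, background-limited scaling."""
    return ref_sens * math.sqrt(ref_exposure / exposure_s)

one_snapshot = sensitivity_mcrab(500.0)            # a single ~500 s pointing
sixteen_days = sensitivity_mcrab(16 * 2000.0)      # 16 days of typical exposure
print(round(one_snapshot, 1), round(sixteen_days, 1))  # 8.0 1.0
```

This is why the monitor can report on time scales from one pointing up to one day and beyond: shorter windows catch bright outbursts, while accumulating exposure pushes the detection threshold toward the mCrab level.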
Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey
2017-01-01
The availability of spatially referenced environmental data and species occurrence records in online databases enable practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...
Design of Genetic Algorithms for Topology Control of Unmanned Vehicles
2010-01-01
We present genetic algorithms (GAs) as a decentralised topology control mechanism distributed among active running software agents to achieve a uniform spread of terrestrial unmanned vehicles ... inspired topology control algorithm. The topology control of UVs using a decentralised solution over an unknown geographical terrain is a challenging
Coaching the exploration and exploitation in active learning for interactive video retrieval.
Wei, Xiao-Yong; Yang, Zhen-Qun
2013-03-01
Conventional active learning approaches for interactive video/image retrieval usually assume the query distribution is unknown, as it is difficult to estimate with only a limited number of labeled instances available. Thus, it is easy to put the system in a dilemma whether to explore the feature space in uncertain areas for a better understanding of the query distribution or to harvest in certain areas for more relevant instances. In this paper, we propose a novel approach called coached active learning that makes the query distribution predictable through training and, therefore, avoids the risk of searching on a completely unknown space. The estimated distribution, which provides a more global view of the feature space, can be used to schedule not only the timing but also the step sizes of the exploration and the exploitation in a principled way. The results of the experiments on a large-scale data set from TRECVID 2005-2009 validate the efficiency and effectiveness of our approach, which demonstrates an encouraging performance when facing domain-shift, outperforms eight conventional active learning methods, and shows superiority to six state-of-the-art interactive video retrieval systems.
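The scheduling idea described above, spending more of the labeling budget on exploration while the query distribution estimate is uncertain and shifting toward exploitation as it firms up, can be illustrated with a toy scheduler. This is not the paper's coached-learning algorithm; the selection rules and the geometric decay schedule are assumptions for illustration only:

```python
import numpy as np

# Toy budget scheduler: each round, label a decaying fraction of the batch
# from the most uncertain items (exploration) and the rest from the most
# promising items (exploitation).

def select_batch(p_relevant, budget, round_idx, decay=0.5):
    """Return indices of items to label in this round."""
    explore_n = int(round(budget * decay ** (round_idx + 1)))
    uncertainty = -np.abs(p_relevant - 0.5)            # peaks at p = 0.5
    explore = np.argsort(uncertainty)[::-1][:explore_n]
    chosen = set(explore.tolist())
    ranked = np.argsort(p_relevant)[::-1]              # most relevant first
    exploit = [i for i in ranked if i not in chosen][:budget - explore_n]
    return np.concatenate([explore, np.array(exploit, dtype=int)])

rng = np.random.default_rng(3)
p = rng.uniform(size=100)                  # estimated relevance probabilities
batch0 = select_batch(p, budget=10, round_idx=0)   # 5 explore + 5 exploit
batch3 = select_batch(p, budget=10, round_idx=3)   # mostly exploit by now
print(len(batch0), len(batch3))            # 10 10
```

The paper's contribution is precisely that the estimated query distribution makes such a schedule principled rather than hand-tuned; the fixed decay here only mimics that behavior.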
NASA Astrophysics Data System (ADS)
Dumitru, Mircea; Djafari, Ali-Mohammad
2015-01-01
The recent developments in chronobiology call for an analysis of the variation of the periodic components of signals expressing biological rhythms. A precise estimation of the periodic components vector is required. The classical approaches, based on FFT methods, are inefficient considering the particularities of the data (short length). In this paper we propose a new method using sparsity prior information (a reduced number of non-zero components). The considered law is the Student-t distribution, viewed as the marginal of an Infinite Gaussian Scale Mixture (IGSM) defined via a hidden variable representing the inverse variances, modelled as a Gamma distribution. The hyperparameters are modelled using conjugate priors, i.e. Inverse Gamma distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables and hyperparameters is obtained, and the unknowns are then estimated via Joint Maximum A Posteriori (JMAP) and the Posterior Mean (PM). For the PM estimator, the posterior law is approximated by a separable one via Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally we show results on synthetic data in cancer treatment applications.
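The IGSM construction invoked above is the standard Student-t scale-mixture identity. Writing $\lambda_j$ for the hidden inverse variance (precision) of component $f_j$, with a Gamma prior whose shape and rate are tied to the degrees of freedom $\nu$:

```latex
p(f_j \mid \nu)
= \int_0^\infty \mathcal{N}\!\left(f_j \mid 0,\, \lambda_j^{-1}\right)
  \mathcal{G}\!\left(\lambda_j \mid \tfrac{\nu}{2}, \tfrac{\nu}{2}\right)\,
  \mathrm{d}\lambda_j
= \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}
       {\Gamma\!\left(\frac{\nu}{2}\right)\sqrt{\pi\nu}}
  \left(1 + \frac{f_j^2}{\nu}\right)^{-\frac{\nu+1}{2}}
```

The heavy tails of the marginal Student-t favor a few large components while shrinking the rest toward zero, which is what makes it a sparsity-enforcing prior for the periodic components vector, and the conditional Gaussianity given $\lambda_j$ is what keeps the JMAP and variational (BVA) updates tractable.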
The H.E.S.S. Galactic plane survey
NASA Astrophysics Data System (ADS)
H. E. S. S. Collaboration; Abdalla, H.; Abramowski, A.; Aharonian, F.; Benkhali, F. Ait; Angüner, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Tjus, J. Becker; Berge, D.; Bernhard, S.; Bernlöhr, K.; Blackwell, R.; Böttcher, M.; Boisson, C.; Bolmont, J.; Bonnefoy, S.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Büchele, M.; Bulik, T.; Capasso, M.; Carrigan, S.; Caroff, S.; Carosi, A.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Colafrancesco, S.; Condon, B.; Conrad, J.; Davids, I. D.; Decock, J.; Deil, C.; Devin, J.; deWilt, P.; Dirson, L.; Djannati-Ataï, A.; Domainko, W.; Donath, A.; Drury, L. O.'C.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Emery, G.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Förster, A.; Funk, S.; Füßling, M.; Gabici, S.; Gallant, Y. A.; Garrigoux, T.; Gast, H.; Gaté, F.; Giavitto, G.; Giebels, B.; Glawion, D.; Glicenstein, J. F.; Gottschall, D.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holch, T. L.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzyński, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khélifi, B.; King, J.; Klepser, S.; Klochkov, D.; Kluźniak, W.; Komin, Nu.; Kosack, K.; Krakau, S.; Kraus, M.; Krüger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lemière, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Lohse, T.; Lorentz, M.; Liu, R.; López-Coto, R.; Lypova, I.; Marandon, V.; Malyshev, D.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Meyer, M.; Mitchell, A. M. 
W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Morå, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Ndiyavala, H.; Niederwanger, F.; Niemiec, J.; Oakes, L.; O'Brien, P.; Odaka, H.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poireau, V.; Poon, H.; Prokhorov, D.; Prokoph, H.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Rauth, R.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Rinchiuso, L.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Safi-Harb, S.; Sahakian, V.; Saito, S.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schandri, M.; Schlickeiser, R.; Schüssler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Shiningayamwe, K.; Simoni, R.; Sol, H.; Spanier, F.; Spir-Jacob, M.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Steppa, C.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsirou, M.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der Walt, D. J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Völk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Wörnlein, A.; Wouters, D.; Yang, R.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. A.; Zech, A.; Zefi, F.; Ziegler, A.; Zorn, J.; Żywucka, N.
2018-04-01
We present the results of the most comprehensive survey of the Galactic plane in very high-energy (VHE) γ-rays, including a public release of Galactic sky maps, a catalog of VHE sources, and the discovery of 16 new sources of VHE γ-rays. The High Energy Spectroscopic System (H.E.S.S.) Galactic plane survey (HGPS) was a decade-long observation program carried out by the H.E.S.S. I array of Cherenkov telescopes in Namibia from 2004 to 2013. The observations amount to nearly 2700 h of quality-selected data, covering the Galactic plane at longitudes from ℓ = 250° to 65° and latitudes |b|≤ 3°. In addition to the unprecedented spatial coverage, the HGPS also features a relatively high angular resolution (0.08° ≈ 5 arcmin mean point spread function 68% containment radius), sensitivity (≲1.5% Crab flux for point-like sources), and energy range (0.2-100 TeV). We constructed a catalog of VHE γ-ray sources from the HGPS data set with a systematic procedure for both source detection and characterization of morphology and spectrum. We present this likelihood-based method in detail, including the introduction of a model component to account for unresolved, large-scale emission along the Galactic plane. In total, the resulting HGPS catalog contains 78 VHE sources, of which 14 are not reanalyzed here, for example, due to their complex morphology, namely shell-like sources and the Galactic center region. Where possible, we provide a firm identification of the VHE source or plausible associations with sources in other astronomical catalogs. We also studied the characteristics of the VHE sources with source parameter distributions. Sixteen of the sources were previously unknown or unpublished, and we individually discuss their identifications or possible associations. We firmly identified 31 sources as pulsar wind nebulae (PWNe), supernova remnants (SNRs), composite SNRs, or gamma-ray binaries. 
Among the 47 sources not yet identified, most of them (36) have possible associations with cataloged objects, notably PWNe and energetic pulsars that could power VHE PWNe. The source catalog is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A1
Liang, Jennifer L; Dziuban, Eric J; Craun, Gunther F; Hill, Vincent; Moore, Matthew R; Gelting, Richard J; Calderon, Rebecca L; Beach, Michael J; Roy, Sharon L
2006-12-22
Since 1971, CDC, the U.S. Environmental Protection Agency (EPA), and the Council of State and Territorial Epidemiologists have maintained a collaborative Waterborne Disease and Outbreaks Surveillance System for collecting and reporting data related to occurrences and causes of waterborne disease and outbreaks (WBDOs). This surveillance system is the primary source of data concerning the scope and effects of WBDOs in the United States. Data presented summarize 36 WBDOs that occurred during January 2003-December 2004 and nine previously unreported WBDOs that occurred during 1982-2002. The surveillance system includes data on WBDOs associated with drinking water, water not intended for drinking (excluding recreational water), and water of unknown intent. Public health departments in the states, territories, localities, and Freely Associated States (i.e., the Republic of the Marshall Islands, the Federated States of Micronesia, and the Republic of Palau, formerly parts of the U.S.-administered Trust Territory of the Pacific Islands) are primarily responsible for detecting and investigating WBDOs and voluntarily reporting them to CDC by using a standard form. During 2003-2004, a total of 36 WBDOs were reported by 19 states; 30 were associated with drinking water, three were associated with water not intended for drinking, and three were associated with water of unknown intent. The 30 drinking water-associated WBDOs caused illness among an estimated 2,760 persons and were linked to four deaths. Etiologic agents were identified in 25 (83.3%) of these WBDOs: 17 (68.0%) involved pathogens (i.e., 13 bacterial, one parasitic, one viral, one mixed bacterial/parasitic, and one mixed bacterial/parasitic/viral), and eight (32.0%) involved chemical/toxin poisonings. Gastroenteritis represented 67.7% of the illness related to drinking water-associated WBDOs; acute respiratory illness represented 25.8%, and dermatitis represented 6.5%. 
The classification of deficiencies contributing to WBDOs has been revised to reflect the categories of concerns associated with contamination at or in the source water, treatment facility, or distribution system (SWTD) that are under the jurisdiction of water utilities, versus those at points not under the jurisdiction of a water utility or at the point of water use (NWU/POU), which includes commercially bottled water. A total of 33 deficiencies were cited in the 30 WBDOs associated with drinking water: 17 (51.5%) NWU/POU, 14 (42.4%) SWTD, and 2 (6.1%) unknown. The most frequently cited NWU/POU deficiencies involved Legionella spp. in the drinking water system (n = 8 [47.1%]). The most frequently cited SWTD deficiencies were associated with distribution system contamination (n = 6 [42.9%]). Contaminated ground water was a contributing factor in seven times as many WBDOs (n = 7) as contaminated surface water (n = 1). Approximately half (51.5%) of the drinking water deficiencies occurred outside the jurisdiction of a water utility in situations not currently regulated by EPA. The majority of the WBDOs in which deficiencies were not regulated by EPA were associated with Legionella spp. or chemicals/toxins. Problems in the distribution system were the most commonly identified deficiencies under the jurisdiction of a water utility, underscoring the importance of preventing contamination after water treatment. The substantial proportion of WBDOs involving contaminated ground water provides support for the Ground Water Rule (finalized in October 2006), which specifies when corrective action is required for public ground water systems. CDC and EPA use surveillance data to identify the types of water systems, deficiencies, and etiologic agents associated with WBDOs and to evaluate the adequacy of current technologies and practices for providing safe drinking water. 
Surveillance data also are used to establish research priorities, which can lead to improved water-quality regulation development. The growing proportion of drinking water deficiencies that are not addressed by current EPA rules emphasizes the need to address risk factors for water contamination in the distribution system and at points not under the jurisdiction of water utilities.
Taheri, Mehdi; Sheikholeslam, Farid; Najafi, Majddedin; Zekri, Maryam
2017-07-01
In this paper, the consensus problem is considered for second-order multi-agent systems with unknown nonlinear dynamics under undirected graphs. A novel distributed control strategy is suggested for leaderless systems based on adaptive fuzzy wavelet networks, which are employed to compensate for the effect of the unknown nonlinear dynamics. Moreover, the proposed method is extended to leader-following systems, both with and without state time delays. Lyapunov functions are applied to prove uniformly ultimately bounded stability of the closed-loop systems and to obtain the adaptive laws. Three simulation examples are presented to illustrate the effectiveness of the proposed control algorithms. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
van Geel, Nanja; Speeckaert, Reinhart
2017-04-01
Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.
The long hold: Storing data at the National Archives
NASA Technical Reports Server (NTRS)
Thibodeau, Kenneth
1991-01-01
A description of the information collection and storage needs of the National Archives and Records Administration (NARA) is presented, and the unique situation of NARA is detailed. Two aspects that make the issue of obsolescence especially complex and costly are dealing with incoherent data and satisfying unknown and unknowable requirements. The data is incoherent because it comes from a wide range of independent sources, covers unrelated subjects, and is organized and encoded in ways that are not only uncontrolled but often unknown until received. NARA's mission to preserve and provide access to records with enduring value makes NARA, in effect, the agent of future generations. This responsibility to the future places NARA in a perpetual quandary: devotion to serving needs that are unknown.
Schindler, B K; Bruns, S; Lach, G
2015-03-15
Mushrooms have repeatedly been shown to contain nicotine. Speculation about the source of contamination has been widespread; however, the source of the nicotine remains unknown. Previous studies indicate that putrescine, an intermediate in nicotine biosynthesis, can be formed in mushrooms and might be metabolised to form nicotine. Thus, endogenous formation may be a possible cause of elevated nicotine levels in mushrooms. We present evidence from the literature that may support this hypothesis. Copyright © 2014 Elsevier Ltd. All rights reserved.
On estimating the phase of periodic waveform in additive Gaussian noise, part 2
NASA Astrophysics Data System (ADS)
Rauch, L. L.
1984-11-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
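For the baseline case of known constant frequency, the MAP estimator under a uniform phase prior reduces to the classical correlation (maximum-likelihood) estimator: correlate the record against a complex tone and take the angle. A minimal numerical sketch of that baseline, with illustrative signal parameters not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_phase(y, omega):
    """ML phase estimate (MAP under a uniform phase prior) for
    y[n] = A*cos(omega*n + phi) + white Gaussian noise, with omega known."""
    n = np.arange(len(y))
    z = np.sum(y * np.exp(-1j * omega * n))  # correlate against a complex tone
    return np.angle(z)                        # phi_hat = arg(z)

omega, phi, A, N = 0.3, 1.1, 1.0, 4096
n = np.arange(N)
y = A * np.cos(omega * n + phi) + rng.normal(0.0, 0.5, N)
print(ml_phase(y, omega))  # close to the true phase 1.1
```

The signal-space interpretation in the text corresponds to projecting the noisy record onto the two-dimensional subspace spanned by the in-phase and quadrature components of the tone.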
On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2
NASA Technical Reports Server (NTRS)
Rauch, L. L.
1984-01-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
Distributed Adaptive Neural Control for Stochastic Nonlinear Multiagent Systems.
Wang, Fang; Chen, Bing; Lin, Chong; Li, Xuehua
2016-11-14
In this paper, a consensus tracking problem of nonlinear multiagent systems is investigated under a directed communication topology. All the followers are modeled by stochastic nonlinear systems in nonstrict-feedback form, where the nonlinearities and stochastic disturbance terms are totally unknown. Based on the structural characteristic of neural networks (in Lemma 4), a novel distributed adaptive neural control scheme is put forward. The proposed control method not only effectively handles unknown nonlinearities in nonstrict-feedback systems, but also copes with the interactions among agents and coupling terms. Based on the stochastic Lyapunov functional method, it is shown that all the signals of the closed-loop system are bounded in probability and all followers' outputs converge to a neighborhood of the output of the leader. Finally, the efficiency of the control method is demonstrated by a numerical example.
2015-01-01
This study demonstrates the value of legacy literature and historic collections as a source of data on environmental history. Chenopodium vulvaria L. has declined in northern Europe and is of conservation concern in several countries, whereas in other countries outside Europe it has naturalised and is considered an alien weed. In its European range it is considered native in the south, but the northern boundary of its native range is unknown. It is hypothesised that much of its former distribution in northern Europe was the result of repeated introductions from southern Europe and that its decline in northern Europe is the result of habitat change and a reduction in the number of propagules imported to the north. A historical analysis of its ecology and distribution was conducted by mining legacy literature and historical botanical collections. Text analysis of habitat descriptions written on specimens and published in botanical literature covering a period of more than 200 years indicates that the habitat and introduction pathways of C. vulvaria have changed with time. Using the non-European naturalised range in a climate niche model, it is possible to project the range in Europe. Comparing this predicted model with a similar model created from all observations reveals a large discrepancy between the realized and predicted distributions. This is discussed together with the social, technological and economic changes that have occurred in northern Europe, with respect to their influence on C. vulvaria. PMID:25653906
NASA Astrophysics Data System (ADS)
Suenaga, Nobuaki; Ji, Yingfeng; Yoshioka, Shoichi; Feng, Deshan
2018-04-01
The downdip limit of seismogenic interfaces inferred from the subduction thermal regime by thermal models has been suggested to relate to the faulting instability caused by the brittle failure regime in various plate convergent systems. However, the three-dimensional thermal state, especially along the horizontal (trench-parallel) direction of a subducted oceanic plate, remains poorly constrained. To robustly investigate and map the trench-parallel distribution of the subduction thermal regime and the subsequently induced slab dewatering in a descending plate beneath a convergent margin, we construct a regional thermal model that incorporates an up-to-date three-dimensional slab geometry and the MORVEL plate velocity to simulate the plate subduction history in Hikurangi. Our calculations identify a thrust zone with pronounced slab dehydration near the Taupo volcanic arc in the North Island, distributed across the Kapiti, Manawatu, and Raukumara regions. The calculated average subduction-associated slab dehydration of 0.09 to 0.12 wt%/km is greater than the dehydration in other portions of the descending slab and possibly contributes to an along-arc variation in the interplate pore fluid pressure. Large-scale slab dehydration (>0.05 wt%/km) and a high thermal gradient (>4 °C/km) are also identified in the Kapiti, Manawatu, and Raukumara regions and are associated with frequent deep slow slip events. Intraslab dehydration exceeding 0.2 wt%/km beneath Manawatu, near the source region of tectonic tremors, suggests a previously unrecognized link to the genesis of slow earthquakes.
Kamata, Motoyuki; Asami, Mari; Matsui, Yoshihiko
2017-07-01
Triketone herbicides are becoming popular because of their herbicidal activity against sulfonylurea-resistant weeds. Among these herbicides, tefuryltrione (TFT) is the first registered herbicide for rice farming, and recently its distribution has grown dramatically. In this study, we developed analytical methods for TFT and its degradation product 2-chloro-4-methylsulfonyl-3-[(tetrahydrofuran-2-yl-methoxy) methyl] benzoic acid (CMTBA). TFT was found frequently in surface waters in rice production areas at concentrations as high as 1.9 μg/L. The maximum observed concentration was lower than but close to 2 μg/L, the Japanese reference concentration of ambient water quality for pesticides. However, TFT was not found in any drinking waters even though the source waters were purified by conventional coagulation and filtration processes; this was due to chlorination, which transforms TFT to CMTBA. The conversion rate of TFT to CMTBA on chlorination was almost 100%, and CMTBA was stable in the presence of chlorine. Moreover, CMTBA was found in drinking waters sampled from household water taps at a concentration similar to that of TFT in the source water of the water purification plant. Although the acceptable daily intake and the reference concentration of CMTBA are unknown, the highest concentration in drinking water exceeded 0.1 μg/L, the maximum allowable concentration for any individual pesticide and its relevant metabolites under the European Union Drinking Water Directive. Copyright © 2017 Elsevier Ltd. All rights reserved.
On Detecting Repetition from Fast Radio Bursts
NASA Astrophysics Data System (ADS)
Connor, Liam; Petroff, Emily
2018-07-01
Fast radio bursts (FRBs) are bright, millisecond-duration radio pulses of unknown origin. To date, only one (FRB 121102) out of several dozen has been seen to repeat, though the extent to which it is exceptional remains unclear. We discuss detecting repetition from FRBs, which will be very important for understanding their physical origin, and which also allows for host galaxy localization. We show how the combination of instrument sensitivity, beam shapes, and individual FRB luminosity functions affect the detection of sources with repetition that is not necessarily described by a homogeneous Poisson process. We demonstrate that the Canadian Hydrogen Intensity Mapping Experiment (CHIME) could detect many new repeating FRBs for which host galaxies could be subsequently localized using other interferometers, but it will not be an ideal instrument for monitoring FRB 121102. If the luminosity distributions of repeating FRBs are given by power laws with significantly more dim than bright bursts, CHIME’s repetition discoveries could preferentially come not from its own discoveries, but from sources first detected with lower-sensitivity instruments like the Australian Square Kilometer Array Pathfinder in fly’s eye mode. We then discuss observing strategies for upcoming surveys, and advocate following up sources at approximately regular intervals and with telescopes of higher sensitivity when possible. Finally, we discuss doing pulsar-like periodicity searching on FRB follow-up data, based on the idea that while most pulses are undetectable, folding on an underlying rotation period could reveal the hidden signal.
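The interplay the authors describe between instrument sensitivity and a power-law luminosity function can be sketched with a toy calculation: a steeper cumulative luminosity function means a lower flux threshold wins disproportionately more detectable bursts. All rates and indices below are hypothetical, and the homogeneous Poisson waiting time is only the simplest case the paper considers:

```python
import math

def detectable_rate(r0, s0, s_min, gamma):
    """Detectable burst rate above a flux threshold s_min, assuming a power-law
    cumulative luminosity function N(>S) proportional to S**-gamma."""
    return r0 * (s_min / s0) ** (-gamma)

def p_at_least_one(rate_per_hr, hours):
    """P(detect >= 1 burst) in an exposure, for a homogeneous Poisson process."""
    return 1.0 - math.exp(-rate_per_hr * hours)

# Hypothetical repeater: 0.1 detectable bursts/hr at a 1 (arbitrary-unit) threshold.
# A 10x more sensitive instrument multiplies the rate by 10**gamma.
for s_min in (1.0, 0.1):
    r = detectable_rate(0.1, 1.0, s_min, gamma=1.0)
    print(s_min, r, p_at_least_one(r, hours=10.0))
```

Clustered (non-Poissonian) repetition changes the second formula: the same mean rate can give a much lower chance of catching a burst in a short, fixed window, which is why the authors advocate regularly spaced follow-up.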
NASA Technical Reports Server (NTRS)
Bowman, Elizabeth M.; Carpenter, Joyce; Roy, Robert J.; Van Keuren, Steve; Wilson, Mark E.
2015-01-01
Since 2007, the Oxygen Generation System (OGS) on board the International Space Station (ISS) has been producing oxygen for crew respiration via water electrolysis. As water is consumed in the OGS recirculating water loop, make-up water is furnished by the ISS potable water bus. A rise in Total Organic Carbon (TOC) was observed beginning in February 2011 and continues through the present date. The increasing TOC is of concern because the organic constituents responsible for it had not been identified; hence their impacts on the operation of the electrolytic cell stack components and on microorganism growth rates and types are unknown. Identification of the compounds responsible for the TOC increase, their sources, and estimates of their loadings in the OGA, as well as possible mitigation strategies, are presented.
Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery
Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin
2017-01-01
This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
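The bilinear-to-linear viewpoint can be illustrated with the classical two-channel cross-relation identity y1*h2 = y2*h1, a simpler relative of the rank-1 lifting the authors use: stacking the unknown channels makes the constraint linear, and the channels are recovered, up to an unavoidable scale factor, from a null vector. A noise-free toy sketch (not the paper's algorithm, which additionally exploits a parametric channel model):

```python
import numpy as np

rng = np.random.default_rng(2)

def conv_matrix(y, L):
    """Matrix C such that C @ h == np.convolve(y, h) for len(h) == L."""
    C = np.zeros((len(y) + L - 1, L))
    for j in range(L):
        C[j:j + len(y), j] = y
    return C

# Unknown source and two unknown channel impulse responses (toy sizes).
N, L = 200, 8
s = rng.normal(size=N)
h1, h2 = rng.normal(size=L), rng.normal(size=L)
y1, y2 = np.convolve(s, h1), np.convolve(s, h2)

# Cross-relation: y1*h2 - y2*h1 = 0 is LINEAR in the stacked channel vector,
# so the true [h1; h2] spans the null space of A (up to scale).
A = np.hstack([-conv_matrix(y2, L), conv_matrix(y1, L)])
h_est = np.linalg.svd(A)[2][-1]          # right singular vector of smallest sigma
h1_est, h2_est = h_est[:L], h_est[L:]

# Resolve the inherent scale/sign ambiguity before comparing.
alpha = (h1 @ h1_est) / (h1_est @ h1_est)
print(np.max(np.abs(alpha * h1_est - h1)), np.max(np.abs(alpha * h2_est - h2)))
```

With noise, the exact null vector becomes a least-squares direction, and the low-rank relaxation described in the abstract is one principled way to keep the problem well-posed.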
Side-emitting fiber optic position sensor
Weiss, Jonathan D [Albuquerque, NM
2008-02-12
A side-emitting fiber optic position sensor and method of determining an unknown position of an object by using the sensor. In one embodiment, a concentrated beam of light source illuminates the side of a side-emitting fiber optic at an unknown axial position along the fiber's length. Some of this side-illuminated light is in-scattered into the fiber and captured. As the captured light is guided down the fiber, its intensity decreases due to loss from side-emission away from the fiber and from bulk absorption within the fiber. By measuring the intensity of light emitted from one (or both) ends of the fiber with a photodetector(s), the axial position of the light source is determined by comparing the photodetector's signal to a calibrated response curve, look-up table, or by using a mathematical model. Alternatively, the side-emitting fiber is illuminated at one end, while a photodetector measures the intensity of light emitted from the side of the fiber, at an unknown position. As the photodetector moves further away from the illuminated end, the detector's signal strength decreases due to loss from side-emission and/or bulk absorption. As before, the detector's signal is correlated to a unique position along the fiber.
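The two-ended readout described above admits a simple closed-form inversion if one assumes a single effective attenuation coefficient α for the combined side-emission and bulk-absorption loss: the unknown captured intensity I0 cancels in the log-ratio of the two end signals. A sketch under that idealized exponential model, with illustrative numbers:

```python
import math

def position_from_two_ends(i1, i2, alpha, length):
    """Axial position x of the illumination spot from the two end intensities,
    assuming I1 = I0*exp(-alpha*x) and I2 = I0*exp(-alpha*(length - x)).
    The unknown captured intensity I0 cancels in the log-ratio."""
    return 0.5 * (length - math.log(i1 / i2) / alpha)

# Forward-model check: spot at x = 0.75 m on a 2 m fiber, alpha = 0.5 /m.
alpha, L, x, i0 = 0.5, 2.0, 0.75, 1.0
i1 = i0 * math.exp(-alpha * x)
i2 = i0 * math.exp(-alpha * (L - x))
print(position_from_two_ends(i1, i2, alpha, L))  # recovers 0.75
```

A real sensor would use the calibrated response curve or look-up table mentioned in the text rather than this single-α model, since side-emission loss need not be uniform along the fiber.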
The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters
Suaria, Giuseppe; Avio, Carlo G.; Mineo, Annabella; Lattin, Gwendolyn L.; Magaldi, Marcello G.; Belmonte, Genuario; Moore, Charles J.; Regoli, Francesco; Aliani, Stefano
2016-01-01
The Mediterranean Sea has been recently proposed as one of the most impacted regions of the world with regards to microplastics; however, the polymeric composition of these floating particles is still largely unknown. Here we present the results of a large-scale survey of neustonic micro- and meso-plastics floating in Mediterranean waters, providing the first extensive characterization of their chemical identity as well as detailed information on their abundance and geographical distribution. All particles >700 μm collected in our samples were identified through FT-IR analysis (n = 4050 particles), shedding light for the first time on the polymeric diversity of this emerging pollutant. Sixteen different classes of synthetic materials were identified. Low-density polymers such as polyethylene and polypropylene were the most abundant compounds, followed by polyamides, plastic-based paints, polyvinyl chloride, polystyrene and polyvinyl alcohol. Less frequent polymers included polyethylene terephthalate, polyisoprene, poly(vinyl stearate), ethylene-vinyl acetate, polyepoxide, paraffin wax and polycaprolactone, a biodegradable polyester reported for the first time floating in off-shore waters. Geographical differences in sample composition were also observed, demonstrating sub-basin scale heterogeneity in plastics distribution and likely reflecting a complex interplay between pollution sources, sinks and residence times of different polymers at sea. PMID:27876837
Finn, C A; Sisson, T W; Deszcz-Pan, M
2001-02-01
Hydrothermally altered rocks can weaken volcanoes, increasing the potential for catastrophic sector collapses that can lead to destructive debris flows. Evaluating the hazards associated with such alteration is difficult because alteration has been mapped on few active volcanoes and the distribution and severity of subsurface alteration is largely unknown on any active volcano. At Mount Rainier volcano (Washington, USA), collapses of hydrothermally altered edifice flanks have generated numerous extensive debris flows and future collapses could threaten areas that are now densely populated. Preliminary geological mapping and remote-sensing data indicated that exposed alteration is contained in a dyke-controlled belt trending east-west that passes through the volcano's summit. But here we present helicopter-borne electromagnetic and magnetic data, combined with detailed geological mapping, to show that appreciable thicknesses of mostly buried hydrothermally altered rock lie mainly in the upper west flank of Mount Rainier. We identify this as the likely source for future large debris flows. But as negligible amounts of highly altered rock lie in the volcano's core, this might impede collapse retrogression and so limit the volumes and inundation areas of future debris flows. Our results demonstrate that high-resolution geophysical and geological observations can yield unprecedented views of the three-dimensional distribution of altered rock.
Mehrshad, Maliheh; Rodriguez-Valera, Francisco; Amoozegar, Mohammad Ali; López-García, Purificación; Ghai, Rohit
2018-03-01
The dark ocean microbiota represents the unknown majority in the global ocean waters. The SAR202 cluster belonging to the phylum Chloroflexi was the first microbial lineage discovered to specifically inhabit the aphotic realm, where they are abundant and globally distributed. The absence of SAR202 cultured representatives is a significant bottleneck towards understanding their metabolic capacities and role in the marine environment. In this work, we use a combination of metagenome-assembled genomes from deep-sea datasets and publicly available single-cell genomes to construct a genomic perspective of SAR202 phylogeny, metabolism and biogeography. Our results suggest that SAR202 cluster members are medium sized, free-living cells with a heterotrophic lifestyle, broadly divided into two distinct clades. We present the first evidence of vertical stratification of these microbes along the meso- and bathypelagic ocean layers. Remarkably, two distinct species of SAR202 cluster are highly abundant in nearly all deep bathypelagic metagenomic datasets available so far. SAR202 members metabolize multiple organosulfur compounds, many appear to be sulfite-oxidizers and are predicted to play a major role in sulfur turnover in the dark water column. This concomitantly suggests an unsuspected availability of these nutrient sources to allow for the high abundance of these microbes in the deep sea.
Rare earths in the Leadville Limestone and its marble derivates
Jarvis, J.C.; Wildeman, T.R.; Banks, N.G.
1975-01-01
Samples of unaltered and metamorphosed Leadville Limestone (Mississippian, Colorado) were analyzed by neutron activation for ten rare-earth elements (REE). The total abundance of the REE in the least-altered limestone is 4-12 ppm, and their distribution patterns are believed to be dominated by the carbonate minerals. The abundances of the REE in the marbles and their sedimentary precursors are comparable, but the distribution patterns are not. Eu is enriched over the other REE in the marbles, and stratigraphically upward in the formation (samples located progressively further from the heat source), the light REE become less enriched relative to the heavy REE. The Eu anomaly is attributed to its ability, unique among the REE, to change from the 3+ to the 2+ oxidation state. Whether this results in preferential mobilization of the other REE or reflects the composition of the pore fluid during metamorphism is unknown. Stratigraphically selective depletion of the heavy REE may be attributed to more competition for the REE between fluid and carbonate minerals in the lower strata relative to the upper strata. This competition could have been caused by changes in the temperature of the pore fluid or by the greater resistance to solution of the dolomite in the lower parts of the formation than the calcite in the upper parts. © 1975.
Aerogeophysical measurements of collapse-prone hydrothermally altered zones at Mount Rainier volcano
Finn, C.A.; Sisson, T.W.; Deszcz-Pan, M.
2001-01-01
Hydrothermally altered rocks can weaken volcanoes, increasing the potential for catastrophic sector collapses that can lead to destructive debris flows [1]. Evaluating the hazards associated with such alteration is difficult because alteration has been mapped on few active volcanoes [1-4] and the distribution and severity of subsurface alteration is largely unknown on any active volcano. At Mount Rainier volcano (Washington, USA), collapses of hydrothermally altered edifice flanks have generated numerous extensive debris flows [5,6], and future collapses could threaten areas that are now densely populated [7]. Preliminary geological mapping and remote-sensing data indicated that exposed alteration is contained in a dyke-controlled belt trending east-west that passes through the volcano's summit [3-5,8]. But here we present helicopter-borne electromagnetic and magnetic data, combined with detailed geological mapping, to show that appreciable thicknesses of mostly buried hydrothermally altered rock lie mainly in the upper west flank of Mount Rainier. We identify this as the likely source for future large debris flows. But as negligible amounts of highly altered rock lie in the volcano's core, this might impede collapse retrogression and so limit the volumes and inundation areas of future debris flows. Our results demonstrate that high-resolution geophysical and geological observations can yield unprecedented views of the three-dimensional distribution of altered rock.
Wireless Power Transfer for Distributed Estimation in Sensor Networks
NASA Astrophysics Data System (ADS)
Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji
2017-04-01
This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency-based energy harvesting technology. The sensors' observations are locally processed by using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, where they are coherently combined and the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of the mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems, each converging at least to a local optimum. In particular, the algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
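The BLUE fusion step described above can be sketched in a few lines. This is a minimal illustration under assumed gains and noise powers (not the paper's joint optimization of amplification, beamforming, and filtering): for observations y_i = h_i·θ + n_i with noise variances σ_i², the BLUE is θ̂ = (Σ h_i y_i/σ_i²)/(Σ h_i²/σ_i²), achieving MSE 1/(Σ h_i²/σ_i²).

```python
import numpy as np

def blue_estimate(y, h, noise_var):
    """Best linear unbiased estimate of a scalar source theta
    from observations y_i = h_i * theta + n_i, Var(n_i) = noise_var_i."""
    w = h / noise_var                  # per-sensor weights h_i / sigma_i^2
    theta_hat = np.dot(w, y) / np.dot(w, h)
    mse = 1.0 / np.dot(w, h)           # achievable distortion (MSE) of the BLUE
    return theta_hat, mse

rng = np.random.default_rng(0)
theta = 2.5                            # unknown scalar source (for the toy run)
h = rng.uniform(0.5, 1.5, size=20)     # hypothetical effective channel gains
noise_var = rng.uniform(0.1, 0.5, size=20)
y = h * theta + rng.normal(0, np.sqrt(noise_var))
est, mse = blue_estimate(y, h, noise_var)
```

With noiseless observations the estimator recovers the source exactly, which is a quick sanity check on the weighting.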
Hummel, Jürgen; Gee, Carole T; Südekum, Karl-Heinz; Sander, P Martin; Nogge, Gunther; Clauss, Marcus
2008-05-07
Sauropod dinosaurs, the dominant herbivores throughout the Jurassic, challenge general rules of large vertebrate herbivory. With body weights surpassing those of any other megaherbivore, they relied almost exclusively on pre-angiosperm plants such as gymnosperms, ferns and fern allies as food sources, plant groups that are generally believed to be of very low nutritional quality. However, the nutritive value of these taxa is virtually unknown, despite their importance in the reconstruction of the ecology of Mesozoic herbivores. Using a feed evaluation test for extant herbivores, we show that the energy content of horsetails and of certain conifers and ferns is at a level comparable to extant browse. Based on our experimental results, plants such as Equisetum, Araucaria, Ginkgo and Angiopteris would have formed a major part of sauropod diets, while cycads, tree ferns and podocarp conifers would have been poor sources of energy. Energy-rich but slow-fermenting Araucaria, which was globally distributed in the Jurassic, was probably targeted by giant, high-browsing sauropods with their presumably very long ingesta retention times. Our data make possible a more realistic calculation of the daily food intake of an individual sauropod and improve our understanding of how large herbivorous dinosaurs could have flourished in pre-angiosperm ecosystems.
Planktonic Euryarchaeota are a significant source of archaeal tetraether lipids in the ocean.
Lincoln, Sara A; Wai, Brenner; Eppley, John M; Church, Matthew J; Summons, Roger E; DeLong, Edward F
2014-07-08
Archaea are ubiquitous in marine plankton, and fossil forms of archaeal tetraether membrane lipids in sedimentary rocks document their participation in marine biogeochemical cycles for >100 million years. Ribosomal RNA surveys have identified four major clades of planktonic archaea but, to date, tetraether lipids have been characterized in only one, the Marine Group I Thaumarchaeota. The membrane lipid composition of the other planktonic archaeal groups--all uncultured Euryarchaeota--is currently unknown. Using integrated nucleic acid and lipid analyses, we found that Marine Group II Euryarchaeota (MG-II) contributed significantly to the tetraether lipid pool in the North Pacific Subtropical Gyre at shallow to intermediate depths. Our data strongly suggested that MG-II also synthesize crenarchaeol, a tetraether lipid previously considered to be a unique biomarker for Thaumarchaeota. Metagenomic datasets spanning 5 y indicated that depth stratification of planktonic archaeal groups was a stable feature in the North Pacific Subtropical Gyre. The consistent prevalence of MG-II at depths where the bulk of exported organic matter originates, together with their ubiquitous distribution over diverse oceanic provinces, suggests that this clade is a significant source of tetraether lipids to marine sediments. Our results are relevant to archaeal lipid biomarker applications in the modern oceans and the interpretation of these compounds in the geologic record.
Small Scale Chemical Segregation Within Keplerian Disk Candidate G35.20-0.74N
NASA Astrophysics Data System (ADS)
Allen, Veronica; van der Tak, Floris; Sánchez-Monge, Álvaro; Cesaroni, Riccardo; Beltrán, Maria T.
2016-06-01
In the study of high-mass star formation, hot cores are empirically defined stages in which chemically rich emission is detected toward a massive protostar. It is unknown whether the physical origin of this emission is a disk, inner envelope, or outflow cavity wall, and whether the hot core stage is common to all massive stars. With the advent of the highly sensitive sub-millimeter interferometer ALMA, it has become possible to chemically characterize high-mass star-forming regions other than Orion. In the up-and-coming field of observational astrochemistry, these sensitive high-resolution observations have opened up opportunities to find small-scale variations in young protostellar sources. We have done an in-depth analysis of high spatial resolution (~1000 AU) Cycle 0 ALMA observations of the high-mass star-forming region G35.20-0.74N, where Sánchez-Monge et al. (2013) found evidence for Keplerian rotation. After further chemical analysis, numerous complex organic species have been identified in this region, and we notice an interesting asymmetry in the distribution of the nitrogen-bearing species within this source. In my talk, I will briefly outline the case for the disk and the consequences of the chemical segregation we have seen for this hypothesis.
Microbiome profiling of drinking water in relation to incidence of inflammatory bowel disease.
Forbes, Jessica D; Van Domselaar, Gary; Sargent, Michael; Green, Chris; Springthorpe, Susan; Krause, Denis O; Bernstein, Charles N
2016-09-01
The etiology of inflammatory bowel disease (IBD) is unknown; current research is focused on determining environmental factors. One consideration is drinking water: water systems harbour considerable microbial diversity, with bacterial concentrations estimated at 10⁶-10⁸ cells/L. Differences in the microbial ecology of water sources may therefore contribute to differences in IBD incidence rates. Regions of Manitoba were geographically mapped according to incidence rates of IBD and identified as high (HIA) or low (LIA) incidence areas. Bulk water, filter material, and pipe wall samples were collected from public buildings in different jurisdictions, and their population structure was analyzed using 16S rDNA sequencing. At the phylum level, Proteobacteria were observed significantly less frequently (P = 0.02) in HIA than in LIA. The abundance of Proteobacteria was also found to vary according to water treatment distribution networks. Gammaproteobacteria was the most abundant class of bacteria and was observed more frequently (P = 0.006) in LIA. At the genus level, microbes found to associate with HIA include Bradyrhizobium (P = 0.02) and Pseudomonas (P = 0.02). Particular microbes were found to associate with LIA or HIA based on sample location and (or) type. This work lays out a basis for further studies exploring water as a potential environmental source of IBD triggers.
Amplification and Attenuation Across USArray Using Ambient Noise Wavefront Tracking
NASA Astrophysics Data System (ADS)
Bowden, Daniel C.; Tsai, Victor C.; Lin, Fan-Chi
2017-12-01
As seismic traveltime tomography continues to be refined using data from the vast USArray data set, it is advantageous to also exploit the amplitude information carried by seismic waves. We use ambient noise cross correlation to make observations of surface wave amplification and attenuation at shorter periods (8-32 s) than can be observed with only traditional teleseismic earthquake sources. We show that the wavefront tracking approach can be successfully applied to ambient noise correlations, yielding results quite similar to those from earthquake observations at periods of overlap. This consistency indicates that the wavefront tracking approach is viable for use with ambient noise correlations, despite concerns of the inhomogeneous and unknown distribution of noise sources. The resulting amplification and attenuation maps correlate well with known tectonic and crustal structure; at the shortest periods, our amplification and attenuation maps correlate well with surface geology and known sedimentary basins, while our longest period amplitudes are controlled by crustal thickness and begin to probe upper mantle materials. These amplification and attenuation observations are sensitive to crustal materials in different ways than traveltime observations and may be used to better constrain temperature or density variations. We also value them as an independent means of describing the lateral variability of observed Rayleigh wave amplitudes without the need for 3-D tomographic inversions.
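The core operation behind ambient noise correlation, cross-correlating long noise records between a station pair so that the coherent wavefront emerges at the inter-station travel time, can be sketched with a toy synthetic (hypothetical sampling rate and delay; not the USArray processing chain):

```python
import numpy as np

def noise_cross_correlation(a, b):
    """Full normalized cross-correlation of two noise records.
    Returns (lags, cc); a positive peak lag means record b lags record a."""
    n = len(a)
    cc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    cc /= (np.std(a) * np.std(b) * n)
    lags = np.arange(-n + 1, n)
    return lags, cc

rng = np.random.default_rng(1)
fs = 10.0                        # Hz, hypothetical sampling rate
src = rng.normal(size=5000)      # common "ambient noise" wavefield
delay = 30                       # samples of propagation between stations
sta_a = src + 0.3 * rng.normal(size=5000)            # station A record
sta_b = np.roll(src, delay) + 0.3 * rng.normal(size=5000)  # delayed at B
lags, cc = noise_cross_correlation(sta_a, sta_b)
travel_time = lags[np.argmax(cc)] / fs   # estimated inter-station delay, s
```

The correlation peak recovers the imposed 30-sample delay; in real processing the peak amplitude, not just its time, carries the amplification and attenuation information exploited above.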
Lasier, Peter J.; Washington, John W.; Hassan, Sayed M.; Jenkins, Thomas M.
2011-01-01
Concentrations of perfluorinated chemicals (PFCs) were measured in surface waters and sediments from the Coosa River watershed in northwest Georgia, USA, to examine their distribution downstream of a suspected source. Samples from eight sites were analyzed using liquid chromatography-tandem mass spectrometry. Sediments were also used in 28-d exposures with the aquatic oligochaete, Lumbriculus variegatus, to assess PFC bioaccumulation. Concentrations of PFCs in surface waters and sediments increased significantly below a land-application site (LAS) of municipal/industrial wastewater and were further elevated by unknown sources downstream. Perfluorinated carboxylic acids (PFCAs) with eight or fewer carbons were the most prominent in surface waters. Those with 10 or more carbons predominated sediment and tissue samples. Perfluorooctane sulfonate (PFOS) was the major homolog in contaminated sediments and tissues. This pattern among sediment PFC concentrations was consistent among sites and reflected homolog concentrations emanating from the LAS. Concentrations of PFCs in oligochaete tissues revealed patterns similar to those observed in the respective sediments. The tendency to bioaccumulate increased with PFCA chain length and the presence of the sulfonate moiety. Biota-sediment accumulation factors indicated that short-chain PFCAs with fewer than seven carbons may be environmentally benign alternatives in aquatic ecosystems; however, sulfonates with four to seven carbons may be as likely to bioaccumulate as PFOS.
14 CFR 25.1310 - Power source capacity and distribution.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...
14 CFR 25.1310 - Power source capacity and distribution.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...
14 CFR 25.1310 - Power source capacity and distribution.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...
Abundance and functional diversity of riboswitches in microbial communities
Kazanov, Marat D; Vitreschak, Alexey G; Gelfand, Mikhail S
2007-01-01
Background Several recently completed large-scale environmental sequencing projects have produced a large amount of genetic information about microbial communities ('metagenomes') that is not biased towards cultured organisms. This information is a good source for estimating the abundance of genes and regulatory structures in both known and unknown members of microbial communities. In this study we consider the distribution of RNA regulatory structures, riboswitches, in the Sargasso Sea, Minnesota Soil and Whale Falls metagenomes. Results Over three hundred riboswitches were found in about 2 Gbp of metagenome DNA sequences. The abundance of riboswitches in metagenomes was highest for the TPP, B12 and GCVT riboswitches; the S-box, RFN, YKKC/YXKD, and YYBP/YKOY regulatory elements showed lower but significant abundance, while the LYS, G-box, GLMS and YKOK riboswitches were rare. Regions downstream of identified riboswitches were scanned for open reading frames. Comparative analysis of identified ORFs revealed new riboswitch-regulated functions for several classes of riboswitches. In particular, we have observed phosphoserine aminotransferase serC (COG1932) and malate synthase glcB (COG2225) to be regulated by the glycine (GCVT) riboswitch; fatty acid desaturase ole1 (COG1398), by the cobalamin (B12) riboswitch; and 5-methylthioribose-1-phosphate isomerase ykrS (COG0182), by the SAM riboswitch. We also identified conserved riboswitches upstream of genes of unknown function: thiamine (TPP), cobalamin (B12), and glycine (GCVT, upstream of genes from COG4198). Conclusion This study demonstrates the applicability of bioinformatics to the analysis of RNA regulatory structures in metagenomes. PMID:17908319
Scholte, Johannes B J; van Mook, Walther N K A; Linssen, Catharina F M; van Dessel, Helke A; Bergmans, Dennis C J J; Savelkoul, Paul H M; Roekaerts, Paul M H J
2014-10-01
To explore the extent of surveillance culture (SC) implementation, the underlying motives for obtaining SCs, and decision making based on the results. A questionnaire was distributed to Heads of Department (HODs) and microbiologists within all intensive care departments in the Netherlands. Responses were provided by 75 (79%) of 95 HODs and 38 (64%) of 59 laboratories allied to an intensive care unit (ICU). Surveillance cultures were routinely obtained according to 55 (73%) of 75 HODs and 33 (87%) of 38 microbiologists. Surveillance cultures were obtained in more than 80% of higher-level ICUs and in 58% of lower-level ICUs (P < .05). Surveillance cultures were obtained twice weekly (88%) and sampled from the trachea (87%), pharynx (74%), and rectum (68%). Thirty (58%) of 52 HODs obtained SCs to optimize individual patient treatment. On suspicion of infection from an unknown source, microorganisms identified by SC were targeted according to 87%. One third of HODs targeted microorganisms identified by SC in the case of an infection not at the location where the SC was obtained; they did so significantly more often than microbiologists in cases of no infection (P = .02) or infection of unknown origin (P < .05). Surveillance culture implementation is common in Dutch ICUs to optimize individual patients' treatment. Consensus is lacking on how to deal with SC results when the focus of infection is not at the sampled site. Copyright © 2014 Elsevier Inc. All rights reserved.
Development and Simulation of Increased Generation on a Secondary Circuit of a Microgrid
NASA Astrophysics Data System (ADS)
Reyes, Karina
As fossil fuels are depleted and their environmental impacts remain, other sources of energy must be considered to generate power. Renewable sources, for example, are emerging to play a major role in this regard. In parallel, electric vehicle (EV) charging is evolving into a major load demand. To meet the reliability and resiliency goals demanded by the electricity market, interest in microgrids as distributed energy resources (DER) is growing. In this thesis, the effects of intermittent renewable power generation and random EV charging on secondary microgrid circuits are analyzed in the presence of a controllable battery, in order to characterize and better understand the dynamics associated with intermittent power production and random load demands in the context of the microgrid paradigm. For two reasons, a secondary circuit on the University of California, Irvine (UCI) Microgrid serves as the case study. First, the secondary circuit (UC-9) is heavily loaded and an integral component of a highly characterized and metered microgrid. Second, a unique "next-generation" distributed energy resource has been deployed at the end of the circuit that integrates photovoltaic power generation, battery storage, and EV charging. In order to analyze this system and evaluate the impact of the DER on the secondary circuit, a model was developed to provide a real-time load flow analysis. The research develops a power management system applicable to similarly integrated systems. The model is verified by metered data obtained from a network of high-resolution electric meters and by estimated load data for the buildings that have unknown demand. An increase in voltage is observed when the amount of photovoltaic power generation is increased. To mitigate this effect, a constant power factor is set. Should the real power change dramatically, the reactive power is changed to mitigate voltage fluctuations.
Botter, Alberto; Bourguignon, Mathieu; Jousmäki, Veikko; Hari, Riitta
2015-01-01
Cortex-muscle coherence (CMC) reflects coupling between magnetoencephalography (MEG) and surface electromyography (sEMG), being strongest during isometric contraction but absent, for unknown reasons, in some individuals. We used a novel nonmagnetic high-density sEMG (HD-sEMG) electrode grid (36 mm × 12 mm; 60 electrodes separated by 3 mm) to study effects of sEMG recording site, electrode derivation, and rectification on the strength of CMC. Monopolar sEMG from right thenar and 306-channel whole-scalp MEG were recorded from 14 subjects during 4-min isometric thumb abduction. CMC was computed for 60 monopolar, 55 bipolar, and 32 Laplacian HD-sEMG derivations, and two derivations were computed to mimic “macroscopic” monopolar and bipolar sEMG (electrode diameter 9 mm; interelectrode distance 21 mm). With unrectified sEMG, 12 subjects showed statistically significant CMC in 91–95% of the HD-sEMG channels, with maximum coherence at ∼25 Hz. CMC was about a fifth stronger for monopolar than bipolar and Laplacian derivations. Monopolar derivations resulted in the most uniform CMC distributions across the thenar and in the tightest cortical source clusters in the left rolandic hand area. CMC was 19–27% stronger for HD-sEMG than for “macroscopic” monopolar or bipolar derivations. EMG rectification reduced the CMC peak by a quarter, resulted in a more uniformly distributed CMC across the thenar, and provided more tightly clustered cortical sources than unrectified sEMG. Moreover, it revealed CMC at ∼12 Hz. We conclude that HD-sEMG, especially with monopolar derivation, can facilitate detection of CMC and that individual muscle anatomy cannot explain the high interindividual CMC variability. PMID:26354317
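The quantity computed here is magnitude-squared coherence, C_xy(f) = |P_xy(f)|² / (P_xx(f) P_yy(f)). A toy synthetic with a shared ~25 Hz drive illustrates the calculation (illustrative signals only, not the actual MEG/sEMG pipeline):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 1000.0                          # Hz, assumed sampling rate
t = np.arange(0, 60.0, 1.0 / fs)     # 60 s of synthetic "recording"
drive = np.sin(2 * np.pi * 25 * t)   # shared ~25 Hz oscillatory drive
meg  = drive + 2.0 * rng.normal(size=t.size)   # MEG-like channel + noise
semg = drive + 2.0 * rng.normal(size=t.size)   # sEMG-like channel + noise

# Welch-averaged magnitude-squared coherence between the two channels
f, cxy = coherence(meg, semg, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(cxy)]        # expected near 25 Hz
```

Because the 25 Hz component is common to both channels while the noise is independent, coherence approaches 1 at the drive frequency and the segment-averaging noise floor (~1/number of segments) elsewhere.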
NASA Astrophysics Data System (ADS)
Garcia-Santos, G.; Berdugo, M. B.
2010-07-01
Fog has been demonstrated to be the only source of moisture during the dry El Niño climate in the tropical Andean cloud forest of the Boyacá region in Colombia, yet its importance for the forest is virtually unknown. We assessed fog water distribution during the wet season inside the forest and outside in a practically deforested area. Water intercepted by plants was measured at different vertical strata. Soil moisture in the first centimetres was also measured. During the anomalously dry wet season there was a lack of rainfall, and the total recorded cloud water was lower compared with the same period during the previous year. Our results indicated that when a fog event starts, the upper part of the forest mass intercepts most of the fog water compared with lower strata. However, the uppermost strata dried rapidly after the event: water is released to the atmosphere because of strong heat fluctuations at the atmosphere-leaf interface caused by wind and solar radiation, flows towards regions of different water potential, and drips from the leaves. A low amount of fog dripped from the tree foliage into the soil, indicating a large water storage capacity of the epiphyte and bryophyte vegetation. Despite the small amount of throughfall, understory vegetation and litter remained wet, which might be explained by water flowing through the epiphyte vegetation or by the high capacity of the understory to absorb moisture from the air. Soil water did not infiltrate in depth, which underlines the importance of fog as a water and cooling source for seedling growth and shallow-rooted understory species, especially during drier conditions.
Avian Hepatitis E Virus in Chickens, Taiwan, 2013
Hsu, Ingrid W.-Y.
2014-01-01
A previously unidentified strain of avian hepatitis E virus (aHEV) is now endemic among chickens in Taiwan. Analysis showed that the virus is 81.5%–86.5% similar to other aHEVs. In Taiwan, aHEV infection has been reported in chickens without aHEV exposure, suggesting transmission from asymptomatic cases or repeated introduction through an unknown common source(s). PMID:24378180
13. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
13. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1926. Vol. I, Narrative and Photographs, RG 75, Entry 655, Box 29, National Archives, Washington, DC.) Photographer unknown. PIMA LATERAL DROP NEAR KENILWORTH, 5/10/26 - San Carlos Irrigation Project, Pima Lateral, Main Canal at Sacaton Dam, Coolidge, Pinal County, AZ
N. S. Wagenbrenner; S. H. Chung; B. K. Lamb
2017-01-01
Wind erosion of soils burned by wildfire contributes substantial particulate matter (PM) in the form of dust to the atmosphere, but the magnitude of this dust source is largely unknown. It is important to accurately quantify dust emissions because they can impact human health, degrade visibility, exacerbate dust-on-snow issues (including snowmelt timing, snow chemistry...
Earth-mass dark-matter haloes as the first structures in the early Universe.
Diemand, J; Moore, B; Stadel, J
2005-01-27
The Universe was nearly smooth and homogeneous before a redshift of z = 100, about 20 million years after the Big Bang. After this epoch, the tiny fluctuations imprinted upon the matter distribution during the initial expansion began to collapse because of gravity. The properties of these fluctuations depend on the unknown nature of dark matter, the determination of which is one of the biggest challenges in present-day science. Here we report supercomputer simulations of the concordance cosmological model, which assumes neutralino dark matter (at present the preferred candidate), and find that the first objects to form are numerous Earth-mass dark-matter haloes about as large as the Solar System. They are stable against gravitational disruption, even within the central regions of the Milky Way. We expect over 10¹⁵ to survive within the Galactic halo, with one passing through the Solar System every few thousand years. The nearest structures should be among the brightest sources of gamma-rays (from particle-particle annihilation).
Nunn, Peter B; Codd, Geoffrey A
2017-12-01
The non-encoded diaminomonocarboxylic acids, 3-N-methyl-2,3-diaminopropanoic acid (syn: α-amino-β-methylaminopropionic acid, MeDAP; β-N-methylaminoalanine, BMAA) and 2,4-diaminobutanoic acid (2,4-DAB), are distributed widely in cyanobacterial species in free and bound forms. Both amino acids are neurotoxic in whole animal and cell-based bioassays. The biosynthetic pathway to 2,4-DAB is well documented in bacteria and in one higher plant species, but has not been confirmed in cyanobacteria. The biosynthetic pathway to BMAA is unknown. This review considers possible metabolic routes, by analogy with reactions used in other species, by which these amino acids might be biosynthesised by cyanobacteria, which are a widespread potential environmental source of these neurotoxins. Where possible, the gene expression that might be implicated in these biosyntheses is discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
A High-Resolution LC-MS-Based Secondary Metabolite Fingerprint Database of Marine Bacteria
Lu, Liang; Wang, Jijie; Xu, Ying; Wang, Kailing; Hu, Yingwei; Tian, Renmao; Yang, Bo; Lai, Qiliang; Li, Yongxin; Zhang, Weipeng; Shao, Zongze; Lam, Henry; Qian, Pei-Yuan
2014-01-01
Marine bacteria are the most widely distributed organisms in the ocean environment and produce a wide variety of secondary metabolites. However, traditional screening for bioactive natural compounds is greatly hindered by the lack of a systematic way of cataloguing the chemical profiles of bacterial strains found in nature. Here we present a chemical fingerprint database of marine bacteria based on their secondary metabolite profiles, acquired by high-resolution LC-MS. To date, 1,430 bacterial strains spanning 168 known species collected from different marine environments have been cultured and profiled. Using this database, we demonstrated that secondary metabolite profile similarity is approximately, but not always, correlated with taxonomical similarity. We also validated the ability of this database to find species-specific metabolites, as well as to discover known bioactive compounds from previously unknown sources. An online interface to this database, as well as the accompanying software, is provided freely for the community to use. PMID:25298017
Extending Measurements to En=30 MeV and Beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, Dana Lynn
The majority of energy release in the fission process is due to the kinetic energy of the fission fragments. Average total kinetic energy (TKE) measurements for the major actinides over a wide range of incident neutron energies were performed at LANSCE using a Frisch-gridded ionization chamber. The experiments and results for the 238U(n,f) and 235U(n,f) reactions will be presented, including ⟨TKE⟩(En), ⟨TKE⟩(A), and mass yield distributions as a function of neutron energy. A preliminary ⟨TKE⟩(En) for 239Pu(n,f) will also be shown. The ⟨TKE⟩(En) shows a clear structure at multichance fission thresholds for all the reactions that we studied. The fragment masses are determined using the iterative double energy (2E) method, with a resolution of ΔA = 4-5 amu. The correction for the prompt fission neutrons is the main source of uncertainty, especially at high incident neutron energies, since the behavior of nubar(A,En) is largely unknown.
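The provisional-mass step of the 2E method rests on nonrelativistic momentum conservation for the two back-to-back fragments: A₁E₁ = A₂E₂ with A₁ + A₂ = A_CN, so A₁ = A_CN·E₂/(E₁+E₂). A minimal sketch follows (illustrative energies only; the iterative prompt-neutron correction identified above as the dominant uncertainty is omitted):

```python
def two_energy_masses(e1, e2, a_cn):
    """Provisional fragment masses from the double-energy (2E) method.
    Momentum conservation for back-to-back fragments gives A1*E1 = A2*E2,
    with A1 + A2 = a_cn (compound-nucleus mass number)."""
    a1 = a_cn * e2 / (e1 + e2)
    a2 = a_cn - a1
    return a1, a2

# 236U* (n + 235U) with illustrative fragment kinetic energies in MeV:
# the heavier fragment carries the smaller kinetic energy.
a1, a2 = two_energy_masses(e1=100.0, e2=70.0, a_cn=236)
```

In the full analysis this step is iterated: provisional masses set the prompt-neutron multiplicity correction, which updates the energies, which update the masses.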
Loss-tolerant measurement-device-independent quantum private queries
Zhao, Liang-Yuan; Yin, Zhen-Qiang; Chen, Wei; Qian, Yong-Jun; Zhang, Chun-Mei; Guo, Guang-Can; Han, Zheng-Fu
2017-01-01
Quantum private queries (QPQ) is an important cryptographic protocol aiming to protect both the user’s and the database’s privacy when the database is queried privately. Recently, a variety of practical QPQ protocols based on quantum key distribution (QKD) have been proposed. However, for QKD-based QPQ the user’s imperfect detectors can be subjected to detector-side-channel attacks launched by the dishonest owner of the database. Here, we present a simple example that shows how the detector-blinding attack can completely compromise the security of QKD-based QPQ. To remove all the known and unknown detector side channels, we propose a solution of measurement-device-independent QPQ (MDI-QPQ) with single-photon sources. The security of the proposed protocol has been analyzed under some typical attacks. Moreover, we prove that its security is completely loss independent. The results show that practical QPQ will retain the same degree of privacy as before even with seriously uncharacterized detectors. PMID:28051101
Georges Bank: a leaky incubator of Alexandrium fundyense blooms
McGillicuddy, D.J.; Townsend, D.W.; Keafer, B.A.; Thomas, M.A.; Anderson, D.M.
2012-01-01
A series of oceanographic surveys on Georges Bank document variability of populations of the toxic dinoflagellate Alexandrium fundyense on time scales ranging from synoptic to seasonal to interannual. Blooms of A. fundyense on Georges Bank can reach concentrations on the order of 10⁴ cells l⁻¹, and are generally bank-wide in extent. Georges Bank populations of A. fundyense appear to be quasi-independent of those in the adjacent coastal Gulf of Maine, insofar as they occupy a hydrographic niche that is colder and saltier than their coastal counterparts. In contrast to coastal populations that rely on abundant resting cysts for bloom initiation, very few cysts are present in the sediments on Georges Bank. Bloom dynamics must therefore be largely controlled by the balance between growth and mortality processes, which are at present largely unknown for this population. Based on correlations between cell abundance and nutrient distributions, ammonium appears to be an important source of nitrogen for A. fundyense blooms on Georges Bank. PMID:24976691
NASA Astrophysics Data System (ADS)
Ji, Qixing; Babbin, Andrew R.; Jayakumar, Amal; Oleynik, Sergey; Ward, Bess B.
2015-12-01
The Eastern Tropical South Pacific oxygen minimum zone (ETSP-OMZ) is a site of intense nitrous oxide (N2O) flux to the atmosphere. This flux results from production of N2O by nitrification and denitrification, but the contribution of the two processes is unknown. The rates of these pathways and their distributions were measured directly using 15N tracers. The highest N2O production rates occurred at the depth of peak N2O concentrations at the oxic-anoxic interface above the oxygen deficient zone (ODZ), because slightly oxygenated waters allowed (1) N2O production from both nitrification and denitrification and (2) higher nitrous oxide production yields from nitrification. Within the ODZ proper (i.e., anoxia), the only source of N2O was denitrification (i.e., nitrite and nitrate reduction), the rates of which were reflected in the abundance of nirS genes (encoding nitrite reductase). Overall, denitrification was the dominant pathway contributing to N2O production in the ETSP-OMZ.
Ichneumonidae (Hymenoptera) species new to the fauna of Norway
2014-01-01
The present paper contains new distributional records for 61 species of ichneumon wasps (Hymenoptera, Ichneumonidae) previously unknown from Norway; six of them are reported from Scandinavia for the first time. PMID:24855440
Editorially Speaking - Energy: World Needs and Reserves
ERIC Educational Resources Information Center
Journal of Chemical Education, 1974
1974-01-01
Discusses the world's energy requirements in contrast with the world's known and unknown energy reserves to illustrate the need for a stable and more equitable world-wide energy distribution system, especially for oil-importing countries. (CC)
Ye, Dan; Chen, Mengmeng; Li, Kui
2017-11-01
In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults using an observer-based method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, whose inputs are unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using global information. Furthermore, the proposed control protocol can guarantee that all the signals of the closed-loop systems are bounded and that all the followers converge, with bounded residual errors, to the convex hull formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness of the proposed method.
A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation.
Tkach, Itshak; Jevtić, Aleksandar; Nof, Shimon Y; Edan, Yael
2018-03-02
Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks whose locations and priorities are unknown a priori is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA), developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm-intelligence approach that minimizes task detection times. Sensors are allocated to tasks based on the sensors' performance, the tasks' priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to a Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based, fitted to the specific task. Simulation analyses revealed that MDBA achieved statistically significant improvements of 7% over DBA, the second-best algorithm, and 19% over the greedy algorithm, the worst, indicating its fitness for providing solutions for heterogeneous multi-sensor systems.
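The abstract does not give MDBA's exact scoring rule, but the swarm-style allocation it describes, with selection probability growing with task priority and shrinking with distance, can be sketched as follows. The score form and the exponents `alpha` and `beta` are illustrative assumptions, not the published algorithm:

```python
import math

def allocation_probabilities(sensor_pos, tasks, alpha=1.0, beta=1.0):
    """Stochastic task selection in the spirit of the Distributed Bees
    Algorithm: each sensor picks task j with probability proportional to
    priority_j ** alpha / distance_j ** beta (an assumed weighting).
    tasks is a list of (x, y, priority) tuples."""
    sx, sy = sensor_pos
    scores = []
    for tx, ty, priority in tasks:
        d = math.hypot(tx - sx, ty - sy) + 1e-9  # guard against zero distance
        scores.append(priority ** alpha / d ** beta)
    total = sum(scores)
    return [s / total for s in scores]

# A sensor at the origin facing a near high-priority task and a far low-priority one:
probs = allocation_probabilities((0.0, 0.0), [(1.0, 0.0, 2.0), (3.0, 4.0, 1.0)])
```

Drawing each sensor's task from such a distribution, rather than greedily taking the best score, is what keeps the allocation decentralized and spreads sensors across tasks.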
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
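For a toy forward model that is linear in the unknown source strength, with Gaussian measurement noise, the expected relative entropy of a candidate measurement location has a closed form, which makes the design step easy to illustrate. The surrogate `forward`, the prior, and the noise level below are invented stand-ins for the paper's contaminant transport model, not its actual setup:

```python
import math
import random
import statistics

rng = random.Random(0)

def forward(theta, x):
    # Toy 1-D transport surrogate: predicted concentration at location x
    # for source strength theta (stand-in for a transport-equation solve).
    return theta * math.exp(-0.5 * (x - 2.0) ** 2)

def expected_info_gain(x, prior_samples, noise_sd=0.1):
    """For a model linear in theta with Gaussian noise, the expected
    relative entropy (information gain) of one measurement at x is
    0.5 * log(1 + prior predictive variance / noise variance)."""
    pred = [forward(t, x) for t in prior_samples]
    return 0.5 * math.log1p(statistics.pvariance(pred) / noise_sd ** 2)

# Pick the candidate sampling location with maximum expected information gain.
prior = [rng.gauss(1.0, 0.5) for _ in range(2000)]
candidates = [0.1 * k for k in range(51)]
best = max(candidates, key=lambda x: expected_info_gain(x, prior))
```

Here the optimum falls where the forward model is most sensitive to the unknown; in the paper this maximisation runs over candidate well locations, with the sparse-grid surrogate standing in for the transport solve.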
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, 'drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to the source (due to limited range) and often lower in quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position ('leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from the source, and the non-Gaussian shape of close-range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux-plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close-range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
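A minimal version of the leak-localization step can be sketched as a search of a forward Gaussian plume model against concentration observations. The dispersion law (plume width growing linearly with downwind distance), the known emission rate, and the fixed wind direction are simplifying assumptions, not the paper's operational method:

```python
import math

def plume(q, sx, sy, x, y, u=3.0):
    """Toy ground-level Gaussian plume: source of strength q at (sx, sy),
    wind along +x at speed u, width growing linearly downwind (a stand-in
    for real stability-class dispersion curves)."""
    dx, dy = x - sx, y - sy
    if dx <= 0:
        return 0.0  # receptor is upwind of the source
    sig = 0.1 * dx
    return q / (math.pi * u * sig * sig) * math.exp(-dy * dy / (2.0 * sig * sig))

def locate_leak(obs, q=1.0):
    """Fit the source position by minimising squared residuals against
    observations [(x, y, concentration), ...] over a candidate grid:
    the iterative forward-model fit of the abstract in its simplest form."""
    grid = [(i * 0.5, j * 0.5) for i in range(-10, 11) for j in range(-10, 11)]
    def cost(s):
        return sum((plume(q, s[0], s[1], x, y) - c) ** 2 for x, y, c in obs)
    return min(grid, key=cost)

# Synthetic downwind observations of a leak at (-1.0, 0.5):
true_src = (-1.0, 0.5)
obs = [(x, y, plume(1.0, true_src[0], true_src[1], x, y))
       for x in (1.0, 2.0, 3.0) for y in (0.0, 0.5, 1.0)]
est = locate_leak(obs)
```

Noisy real data and uncertain wind direction would broaden the cost minimum, which is consistent with the abstract's finding that wind direction accuracy and close-range non-Gaussian plume shape dominate the uncertainty.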
Zischka, Melanie; Künne, Carsten T; Blom, Jochen; Wobser, Dominique; Sakιnç, Türkân; Schmidt-Hohagen, Kerstin; Dabrowski, P Wojtek; Nitsche, Andreas; Hübner, Johannes; Hain, Torsten; Chakraborty, Trinad; Linke, Burkhard; Goesmann, Alexander; Voget, Sonja; Daniel, Rolf; Schomburg, Dietmar; Hauck, Rüdiger; Hafez, Hafez M; Tielen, Petra; Jahn, Dieter; Solheim, Margrete; Sadowy, Ewa; Larsen, Jesper; Jensen, Lars B; Ruiz-Garbajosa, Patricia; Quiñones Pérez, Dianelys; Mikalsen, Theresa; Bender, Jennifer; Steglich, Matthias; Nübel, Ulrich; Witte, Wolfgang; Werner, Guido
2015-03-12
Enterococcus faecalis is a multifaceted microorganism known to act as a beneficial intestinal commensal bacterium. It is also a dreaded nosocomial pathogen causing life-threatening infections in hospitalised patients. Isolates of a distinct MLST type ST40 represent the most frequent strain type of this species, distributed worldwide and originating from various sources (animal, human, environmental) and different conditions (colonisation/infection). Since enterococci are known to be highly recombinogenic, we set out to analyse the microevolution and niche adaptation of this widely distributed clonal type. We compared a set of 42 ST40 isolates by assessing key molecular determinants, performing whole genome sequencing (WGS) and a number of phenotypic assays including resistance profiling, formation of biofilm and utilisation of carbon sources. We generated the first circular closed reference genome of an E. faecalis isolate D32 of animal origin and compared it with the genomes of other reference strains. D32 was used as a template for detailed WGS comparisons of high-quality draft genomes of 14 ST40 isolates. Genomic and phylogenetic analyses suggest a high level of similarity regarding the core genome, also demonstrated by similar carbon utilisation patterns. Distribution of known and putative virulence-associated genes did not differentiate between ST40 strains from a commensal and clinical background or an animal or human source. Further analyses of mobile genetic elements (MGE) revealed genomic diversity owing to: (1) a modularly structured pathogenicity island; (2) a site-specifically integrated and previously unknown genomic island of 138 kb in two strains putatively involved in exopolysaccharide synthesis; and (3) isolate-specific plasmid and phage patterns. Moreover, we used different cell-biological and animal experiments to compare the isolate D32 with a closely related ST40 endocarditis isolate whose draft genome sequence was also generated.
D32 generally showed a greater capacity of adherence to human cell lines and an increased pathogenic potential in various animal models in combination with an even faster growth in vivo (not in vitro). Molecular, genomic and phenotypic analysis of representative isolates of a major clone of E. faecalis MLST ST40 revealed new insights into the microbiology of a commensal bacterium which can turn into a conditional pathogen.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleinman, L.I.; Daum, P. H.; Lee, Y.-N.
2011-06-21
During the VOCALS Regional Experiment, the DOE G-1 aircraft was used to sample a varying aerosol environment pertinent to properties of stratocumulus clouds over a longitude band extending 800 km west from the Chilean coast at Arica. Trace gas and aerosol measurements are presented as a function of longitude, altitude, and dew point in this study. Spatial distributions are consistent with an upper-atmospheric source for O3 and South American coastal sources for marine boundary layer (MBL) CO and aerosol, most of which is acidic sulfate, in agreement with the dominant pollution source being SO2 from Cu smelters and power plants. Pollutant layers in the free troposphere (FT) can be a result of emissions to the north in Peru or long-range transport from the west. At a given altitude in the FT (up to 3 km), dew point varies by 40 °C, with dry air descending from the upper atmosphere and moist air having a BL contribution. Ascent of BL air to a cold high altitude results in the condensation and precipitation removal of all but a few percent of BL water, along with aerosol that served as CCN. Thus, aerosol volume decreases with dew point in the FT. Aerosol size spectra have a bimodal structure in the MBL and an intermediate-diameter unimodal distribution in the FT. Comparing cloud droplet number concentration (CDNC) and pre-cloud aerosol (Dp > 100 nm) gives a linear relation up to a number concentration of ~150 cm⁻³, followed by a less-than-proportional increase in CDNC at higher aerosol number concentration. A number balance between below-cloud aerosol and cloud droplets indicates that ~25% of aerosol in the PCASP size range are interstitial (not activated). One hundred and two constant-altitude cloud transects were identified and used to determine properties of interstitial aerosol. One transect is examined in detail as a case study.
Approximately 25 to 50% of aerosol with Dp > 110 nm were not activated, the difference between the two approaches possibly representing shattered cloud droplets or an unknown artifact. CDNC and interstitial aerosol were anti-correlated in all cloud transects, consistent with the occurrence of dry in-cloud areas due to entrainment or circulation mixing.
NASA Astrophysics Data System (ADS)
Charrier, J. G.; Richards-Henderson, N. K.; Bein, K. J.; McFall, A. S.; Wexler, A. S.; Anastasio, C.
2015-03-01
Recent epidemiological evidence supports the hypothesis that health effects from inhalation of ambient particulate matter (PM) are governed by more than just the mass of PM inhaled. Both specific chemical components and sources have been identified as important contributors to mortality and hospital admissions, even when these end points are unrelated to PM mass. Sources may cause adverse health effects via their ability to produce reactive oxygen species in the body, possibly due to the transition metal content of the PM. Our goal is to quantify the oxidative potential of ambient particle sources collected during two seasons in Fresno, CA, using the dithiothreitol (DTT) assay. We collected PM from different sources or source combinations into different ChemVol (CV) samplers in real time using a novel source-oriented sampling technique based on single-particle mass spectrometry. We segregated the particles from each source-oriented mixture into two size fractions, ultrafine (Dp ≤ 0.17 μm) and submicron fine (0.17 μm ≤ Dp ≤ 1.0 μm), and measured metals and the rate of DTT loss in each PM extract. We find that the mass-normalized oxidative potential of different sources varies by up to a factor of 8 and that submicron fine PM typically has a larger mass-normalized oxidative potential than ultrafine PM from the same source. Vehicular emissions, regional source mix, commute hours, daytime mixed layer, and nighttime inversion sources exhibit the highest mass-normalized oxidative potential. When we apportion DTT activity for total PM sampled to specific chemical compounds, soluble copper accounts for roughly 50% of total air-volume-normalized oxidative potential, soluble manganese accounts for 20%, and other unknown species, likely including quinones and other organics, account for 30%. During nighttime, soluble copper and manganese largely explain the oxidative potential of PM, while daytime has a larger contribution from unknown (likely organic) species.
Geological and hydrogeological investigations in west Malaysia
NASA Technical Reports Server (NTRS)
Ahmad, J. B. (Principal Investigator); Khoon, S. Y.
1977-01-01
The author has identified the following significant results. Large structures along the east coast of the peninsula were discovered. Of particular significance were the circular structures, which were believed to be associated with mineralization and whose existence was previously unknown. The distribution of the younger sediments along the east coast appeared to be more widespread than previously indicated. Along the southern end of the Pahang coast, small traces of raised beach lines were noted up to six miles inland. The existence of these beach lines had been unknown because of their isolation in large coastal swamps.
2013-09-01
...the satellite. The material constitutive laws of interest are the bidirectional reflectance distribution functions (BRDF) for diffuse and specular...solar panel can be related to each other using the BRDF definition. This creates a set of three independent equations and three unknowns, which can be
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
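The two ingredients named above, a truncated power-law size distribution and Poisson occurrence in time, combine straightforwardly. The truncated form and the rates below are illustrative, not the study's parameters:

```python
import math

def truncated_power_law_exceedance(rate_total, m, m_min, m_max, beta):
    """Annual rate of events with size >= m under one common truncated
    power-law form, normalised so that the rate at m_min is rate_total."""
    if m >= m_max:
        return 0.0
    num = m ** -beta - m_max ** -beta
    den = m_min ** -beta - m_max ** -beta
    return rate_total * num / den

def prob_at_least_one(source_rates, years):
    """Poisson aggregation: independent sources simply add their rates, so
    P(at least one event in `years`) = 1 - exp(-sum(rates) * years)."""
    return 1.0 - math.exp(-sum(source_rates) * years)

# e.g. three independent sources with assumed annual rates of qualifying events:
p30 = prob_at_least_one([0.001, 0.002, 0.0005], 30.0)
```

This additivity of rates is exactly what makes the Poisson assumption convenient for aggregating tsunami probabilities across earthquake and landslide sources.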
Excitation efficiency of an optical fiber core source
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.; Tai, Alan C.
1992-01-01
The exact field solution of a step-index profile fiber is used to determine the excitation efficiency of a distribution of sources in the core of an optical fiber. Previous results for a thin-film cladding source distribution are used for comparison with their core source counterpart. The behavior of power efficiency with the fiber parameters is examined and found to be similar to that exhibited by cladding sources. It is also found that a core-source fiber is two orders of magnitude more efficient than a fiber with a bulk distribution of cladding sources. This result agrees qualitatively with previous ones obtained experimentally.
The Competition Between a Localised and Distributed Source of Buoyancy
NASA Astrophysics Data System (ADS)
Partridge, Jamie; Linden, Paul
2012-11-01
We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local sources, specifically their ratio Ψ. The steady-state dynamics of the flow depend heavily on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers, and the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.
2014-06-01
high-throughput method has utility for evaluating a diversity of natural materials with unknown complex odor blends that can then be down-selected for further...
Turtle groups or turtle soup: dispersal patterns of hawksbill turtles in the Caribbean.
Blumenthal, J M; Abreu-Grobois, F A; Austin, T J; Broderick, A C; Bruford, M W; Coyne, M S; Ebanks-Petrie, G; Formia, A; Meylan, P A; Meylan, A B; Godley, B J
2009-12-01
Despite intense interest in conservation of marine turtles, spatial ecology during the oceanic juvenile phase remains relatively unknown. Here, we used mixed stock analysis and examination of oceanic drift to elucidate movements of hawksbill turtles (Eretmochelys imbricata) and address management implications within the Caribbean. Among samples collected from 92 neritic juvenile hawksbills in the Cayman Islands we detected 11 mtDNA control region haplotypes. To estimate contributions to the aggregation, we performed 'many-to-many' mixed stock analysis, incorporating published hawksbill genetic and population data. The Cayman Islands aggregation represents a diverse mixed stock: potentially contributing source rookeries spanned the Caribbean basin, delineating a scale of recruitment of 200-2500 km. As hawksbills undergo an extended phase of oceanic dispersal, ocean currents may drive patterns of genetic diversity observed on foraging aggregations. Therefore, using high-resolution Aviso ocean current data, we modelled movement of particles representing passively drifting oceanic juvenile hawksbills. Putative distribution patterns varied markedly by origin: particles from many rookeries were broadly distributed across the region, while others would appear to become entrained in local gyres. Overall, we detected a significant correlation between genetic profiles of foraging aggregations and patterns of particle distribution produced by a hatchling drift model (Mantel test, r = 0.77, P < 0.001; linear regression, r = 0.83, P < 0.001). Our results indicate that although there is a high degree of mixing across the Caribbean (a 'turtle soup'), current patterns play a substantial role in determining genetic structure of foraging aggregations (forming turtle groups). Thus, for marine turtles and other widely distributed marine species, integration of genetic and oceanographic data may enhance understanding of population connectivity and management requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
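A distribution-free order-statistics bound gives a feel for the reliability-versus-sample-count tradeoff the report quantifies. This generic illustration (the chance that the largest of n samples bounds a given quantile) is not one of the report's own methods:

```python
def coverage_of_sample_max(p, n):
    """Probability that the largest of n i.i.d. samples exceeds the
    p-quantile of the unknown source distribution: 1 - p**n.
    Distribution-free, so it holds whatever shape the source PDF has."""
    return 1.0 - p ** n

# Samples needed before the sample maximum bounds the 97.5th percentile
# (the upper edge of the central 95% of response) with 90% confidence:
n = 1
while coverage_of_sample_max(0.975, n) < 0.90:
    n += 1
```

With only a handful of samples the coverage is far below 90%, which is precisely the under-estimation risk that motivates the report's conservative estimators.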
Sepsis Within 30 Days of Geriatric Hip Fracture Surgery.
Bohl, Daniel D; Iantorno, Stephanie E; Saltzman, Bryan M; Tetreault, Matthew W; Darrith, Brian; Della Valle, Craig J
2017-10-01
Sepsis after hip fracture typically develops from one of 3 potential infectious sources: urinary tract infection (UTI), pneumonia, and surgical site infection (SSI). The purpose of this investigation is to determine (1) the proportion of cases of sepsis that arise from each of these potential infectious sources; (2) baseline risk factors for developing each of the potential infectious sources; and (3) baseline risk factors for developing sepsis. The National Surgical Quality Improvement Program database was searched for geriatric patients (aged >65 years) who underwent surgery for hip fracture during 2005-2013. Patients subsequently diagnosed with sepsis were categorized according to concomitant diagnosis with UTI, SSI, and/or pneumonia. Multivariate regression was used to test for associations while adjusting for baseline characteristics. Among the 466 patients who developed sepsis (2.4% of all patients), 157 (33.7%) also had a UTI, 135 (29.0%) also had pneumonia, and 36 (7.7%) also had an SSI. The rate of sepsis was elevated in patients who developed UTI (13.0% vs 1.7%; P < .001), pneumonia (18.2% vs 1.8%; P < .001), or SSI (14.8% vs 2.3%; P < .001). The mortality rate was elevated among those who developed sepsis (21.0% vs 3.8%; P < .001). Sepsis occurs in about 1 in 40 patients after geriatric hip fracture surgery. Of these septic cases, 1 in 3 is associated with UTI, 1 in 3 with pneumonia, and 1 in 15 with SSI. The source of sepsis is often unknown at clinical diagnosis, and this distribution of potential infectious sources helps clinicians identify and treat the source directly.
Sources of sub-micrometre particles near a major international airport
NASA Astrophysics Data System (ADS)
Masiol, Mauro; Harrison, Roy M.; Vu, Tuan V.; Beddows, David C. S.
2017-10-01
The international airport of Heathrow is a major source of nitrogen oxides, but its contribution to the levels of sub-micrometre particles is unknown and is the objective of this study. Two sampling campaigns were carried out during warm and cold seasons at a site close to the airfield (1.2 km). Size spectra were largely dominated by ultrafine particles: nucleation particles (< 30 nm) were found to be ~10 times higher than those commonly measured in urban background environments of London. Five clusters and six factors were identified by applying k-means cluster analysis and positive matrix factorisation (PMF), respectively, to particle number size distributions; their interpretation was based on their modal structures, wind directionality, diurnal patterns, road and airport traffic volumes, and on the relationship with weather and other air pollutants. Airport emissions, fresh and aged road traffic, urban accumulation mode, and two secondary sources were then identified and apportioned. The fingerprint of Heathrow has a characteristic modal structure peaking at < 20 nm and accounts for 30-35% of total particles in both seasons. Other main contributors are fresh (24-36%) and aged (16-21%) road traffic emissions and urban accumulation from London (around 10%). Secondary sources accounted for less than 6% of number concentrations but for more than 50% of volume concentration. The analysis of a strong regional nucleation event showed that both the cluster categorisation and the PMF contributions were affected during the first 6 h of the event. In 2016, the UK government provisionally approved the construction of a third runway; therefore the direct and indirect impact of Heathrow on local air quality is expected to increase unless mitigation strategies are applied successfully.
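The clustering step can be illustrated with a plain k-means over particle number size distributions, each spectrum a vector of per-size-bin concentrations. The three-bin synthetic spectra below (nucleation-mode-heavy versus accumulation-mode-heavy) are invented stand-ins for the measured spectra:

```python
import random

def kmeans(spectra, k, n_iter=50, seed=0):
    """Plain k-means on number size distributions: assign each spectrum
    to the nearest centre (squared Euclidean distance over size bins),
    recompute centres as per-bin means, and repeat."""
    rng = random.Random(seed)
    centres = rng.sample(spectra, k)
    groups = [[] for _ in range(k)]
    for _ in range(n_iter):
        groups = [[] for _ in range(k)]
        for s in spectra:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(s, centres[c])))
            groups[i].append(s)
        centres = [[sum(col) / len(g) for col in zip(*g)] if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Two nucleation-mode-heavy and two accumulation-mode-heavy spectra:
spectra = [[10, 1, 0], [9, 2, 0], [0, 1, 10], [0, 2, 9]]
centres, groups = kmeans(spectra, 2)
```

In the study each cluster centre is a candidate source fingerprint (e.g. the airport's < 20 nm mode); interpretation then leans on wind direction, diurnal patterns, and co-measured pollutants.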
Investigating the peculiar emission from the new VHE gamma-ray source H1722+119
Ahnen, M. L.
2016-03-28
The Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes observed the BL Lac object H1722+119 (redshift unknown) for six consecutive nights between 2013 May 17 and 22, for a total of 12.5 h. The observations were triggered by high activity in the optical band measured by the KVA (Kungliga Vetenskapsakademien) telescope. The source was detected for the first time in the very high energy (VHE, E > 100 GeV) γ-ray band with a statistical significance of 5.9σ. The integral flux above 150 GeV is estimated to be (2.0±0.5) per cent of the Crab Nebula flux. We used contemporaneous high energy (HE, 100 MeV < E < 100 GeV) γ-ray observations from Fermi-LAT (Large Area Telescope) to estimate the redshift of the source. Within the framework of the current extragalactic background light models, we estimate the redshift to be z = 0.34±0.15. Additionally, we used contemporaneous X-ray to radio data collected by the instruments on board the Swift satellite, the KVA, and the OVRO (Owens Valley Radio Observatory) telescope to study the multifrequency characteristics of the source. We found no significant temporal variability of the flux in the HE and VHE bands. The flux in the optical and radio wavebands, on the other hand, did vary with different patterns. The spectral energy distribution (SED) of H1722+119 shows surprising behaviour in the ~3×10¹⁴-10¹⁸ Hz frequency range. It can be modelled using an inhomogeneous helical jet synchrotron self-Compton model.
Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, Maha
1998-01-01
This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material for...
The Larva, ecology and distribution of Tinodes braueri McLachlan, 1878 (Trichoptera: Psychomyiidae)
GRAF, WOLFRAM; KUČINIĆ, MLADEN; PREVIŠIĆ, ANA; VUČKOVIĆ, IVAN; WARINGER, JOHANN
2016-01-01
The hitherto unknown larva of Tinodes braueri McLachlan, 1878, is described and discussed in the context of contemporary Psychomyiidae keys. In addition, zoogeographical and ecological notes are included. PMID:26973366
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
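The key point of this entry is that modelling the Poisson noise directly, rather than log-transforming the data, pays off. As a hedged illustration of a Poisson-aware, non-negativity-preserving iteration, here is the classical Richardson-Lucy multiplicative update for a generic linear model y ~ Poisson(Ax); the authors' algorithms handle the exponential of a linear operator via KKT conditions, which this simpler sketch does not attempt.

```python
def richardson_lucy(A, y, n_iter=200):
    """Multiplicative update for y ~ Poisson(A @ x) with x >= 0.
    A is a list of rows; y the observed counts. Starting from a
    positive x, every iterate stays non-negative by construction."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / Ax[i] if Ax[i] > 0 else 0.0 for i in range(m)]
        for j in range(n):
            if col_sums[j] > 0:
                x[j] *= sum(A[i][j] * ratio[i] for i in range(m)) / col_sums[j]
    return x

# Tiny consistent system for illustration (noiseless "counts"):
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 3.0, 5.0]          # generated from x_true = [2, 3]
x_est = richardson_lucy(A, y)
```

With noiseless data the iteration converges to the exact solution; with Poisson-noisy data it maximises the Poisson likelihood instead of the least-squares objective that a log-transform would implicitly assume.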
12. Interior view of cement and aggregate batch plant showing ...
12. Interior view of cement and aggregate batch plant showing storage bins. Photographer unknown, c. 1926. Source: Ralph Pleasant. - Waddell Dam, On Agua Fria River, 35 miles northwest of Phoenix, Phoenix, Maricopa County, AZ
NASA Astrophysics Data System (ADS)
O'Donoghue, Aileen A.; Haynes, Martha P.; Koopmann, Rebecca A.; Jones, Michael G.; Hallenbeck, Gregory L.; Giovanelli, Riccardo; Hoffman, Lyle; Craig, David W.; Undergraduate ALFALFA Team
2017-01-01
We have completed three “Harvesting ALFALFA” Arecibo observing programs in the direction of the Pisces-Perseus Supercluster (PPS) since ALFALFA observations were finished in 2012. The first was to perform follow-up observations on high signal-to-noise (S/N > 6.5) ALFALFA detections needing confirmation and low S/N sources lacking optical counterparts. A few more high S/N objects were observed in the second program, along with targets visually selected from the Sloan Digital Sky Survey (SDSS). The third program included low S/N ALFALFA sources having optical counterparts with redshifts that were unknown or differed from the ALFALFA observations. It also included more galaxies selected from SDSS by eye and by Structured Query Language (SQL) searches with parameters intended to select galaxies at the distance of the PPS (~6,000 km/s). We used pointed basic Total-Power Position-Switched Observations in the 1340-1430 MHz ALFALFA frequency range. For sources of known redshift, we used the Wideband Arecibo Pulsar Processors (WAPPs), while for sources of unknown redshift we utilized a hybrid/dual bandwidth Doppler tracking mode using the Arecibo Interim 50-MHz Correlator with 9-level sampling. Results confirmed that a few high S/N ALFALFA sources are spurious, as expected from the work of Saintonge (2007); that low S/N ALFALFA sources lacking an optical counterpart are all likely to be spurious; and that low S/N sources with optical counterparts are generally reliable. Of the optically selected sources, about 80% were detected and tended to be near the distance of the PPS. This work has been supported by NSF grant AST-1211005.
Stereoscopic augmented reality with pseudo-realistic global illumination effects
NASA Astrophysics Data System (ADS)
de Sorbier, Francois; Saito, Hideo
2014-03-01
Recently, augmented reality has become very popular and has appeared in our daily life through gaming, guiding systems and mobile phone applications. However, inserting objects in such a way that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as most previous methods propose, we propose a dynamic capture of the scene that allows adapting to live changes of the environment. Our approach, based on the update of an environment map, can also detect the position of the light sources. Combining information from the environment map, the light sources and the camera tracking, we can display virtual objects using stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions and shadows in real time.
Fungal diversity in the Atacama Desert.
Santiago, Iara F; Gonçalves, Vívian N; Gómez-Silva, Benito; Galetovic, Alexandra; Rosa, Luiz H
2018-03-07
Fungi are generally easily dispersed, able to colonise a wide variety of substrata and can tolerate diverse environmental conditions. However, despite these abilities, the diversity of fungi in the Atacama Desert is practically unknown. Most of the resident fungi in desert regions are ubiquitous. Some of them, however, seem to display specific adaptations that enable them to survive under the variety of extreme conditions of these regions, such as high temperature, low availability of water, osmotic stress, desiccation, low availability of nutrients, and exposure to high levels of UV radiation. For these reasons, fungal communities living in the Atacama Desert represent an unknown part of global fungal diversity and, consequently, may be a source of new species and of potential new biotechnological products. In this review, we focus on the current knowledge of the diversity, ecology, adaptive strategies, and biotechnological potential of the fungi reported in the different ecosystems of the Atacama Desert.
Theirrattanakul, Sirichai; Prelas, Mark
2017-09-01
Nuclear batteries based on silicon carbide betavoltaic cells have been studied extensively in the literature. This paper describes an analysis of design parameters, which can be applied to a variety of materials, but is specific to silicon carbide. In order to optimize the interface between a beta source and silicon carbide p-n junction, it is important to account for the specific isotope, angular distribution of the beta particles from the source, the energy distribution of the source as well as the geometrical aspects of the interface between the source and the transducer. In this work, both the angular distribution and energy distribution of the beta particles are modeled using a thin planar beta source (e.g., H-3, Ni-63, S-35, Pm-147, Sr-90, and Y-90) with GEANT4. Previous studies of betavoltaics with various source isotopes have shown that Monte Carlo based codes such as MCNPX, GEANT4 and Penelope generate similar results. GEANT4 is chosen because it has important strengths for the treatment of electron energies below one keV and it is widely available. The model demonstrates the effects of angular distribution, the maximum energy of the beta particle and energy distribution of the beta source on the betavoltaic and it is useful in determining the spatial profile of the power deposition in the cell. Copyright © 2017. Published by Elsevier Ltd.
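A rough, stdlib-only sketch of what sampling a beta energy spectrum looks like (the actual work uses GEANT4, which also handles the angular distribution and transport). The spectral shape below is the simplified allowed-decay form without the Fermi Coulomb correction, so the sampled mean energy is only approximate; the Ni-63 endpoint energy of about 66.95 keV is the only physical input.

```python
import math
import random

def beta_spectrum(E, Q, m=511.0):
    """Simplified allowed beta-decay shape in keV: p * E_tot * (Q - E)^2,
    ignoring the Fermi Coulomb correction. m is the electron rest mass."""
    if E <= 0.0 or E >= Q:
        return 0.0
    E_tot = E + m
    p = math.sqrt(E_tot ** 2 - m ** 2)  # electron momentum (keV/c units)
    return p * E_tot * (Q - E) ** 2

def sample_energies(Q, n, seed=0):
    """Rejection-sample n beta energies from the simplified spectrum."""
    rng = random.Random(seed)
    # envelope height: scan the shape on a coarse grid for its maximum
    peak = max(beta_spectrum(Q * i / 200.0, Q) for i in range(1, 200))
    out = []
    while len(out) < n:
        e = rng.uniform(0.0, Q)
        if rng.uniform(0.0, peak) < beta_spectrum(e, Q):
            out.append(e)
    return out

# Ni-63 endpoint ~ 66.95 keV; mean of a beta spectrum sits well below Q/2
energies = sample_energies(66.95, 5000)
mean_E = sum(energies) / len(energies)
```

Because the spectrum is strongly skewed toward low energies, the average particle energy lands near a third of the endpoint, which is why source/transducer interface design depends on the full energy distribution and not just the endpoint.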
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1981-01-01
The effects of source structure on radio interferometry measurements were investigated. The brightness distribution measurements for ten extragalactic sources were analyzed. Significant results are reported.
2dFLenS and KiDS: determining source redshift distributions with cross-correlations
NASA Astrophysics Data System (ADS)
Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian
2017-03-01
We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
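The inference in this entry is a minimum-variance quadratic estimator; the sketch below illustrates only the simplest underlying idea, namely that cross-correlation amplitudes in spectroscopic redshift bins, divided by the (assumed known) galaxy biases, trace the normalised redshift distribution. All amplitudes and bias values here are invented for illustration, and the real estimator must additionally break the degeneracy with source galaxy bias.

```python
# Hypothetical measured cross-correlation amplitudes of the photometric
# sample against spectroscopic samples in 4 redshift bins:
w = [0.02, 0.08, 0.06, 0.01]
b_ph = 1.2                    # assumed bias of the photometric sample
b_sp = [1.0, 1.1, 1.2, 1.3]   # assumed bias per spectroscopic bin

# Under linear bias, w_i ∝ (dN/dz)_i * b_ph * b_sp_i, so dividing out the
# biases and normalising gives a binned redshift probability distribution.
raw = [wi / (b_ph * bi) for wi, bi in zip(w, b_sp)]
total = sum(raw)
dNdz = [r / total for r in raw]
```

A Gaussian-process smoothing step, as in the paper, would then turn this binned step-wise distribution into a continuous one.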
9. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
9. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1928 Vol I. Irrigation District #4, California and Southern Arizona, RG 75, BIA-Phoenix, Box 40, National Archives, Pacific Southwest Region) Photographer unknown. CASA BLANCA CANAL, HEADING AND FLUME, APRIL 10, 1928 - San Carlos Irrigation Project, Casa Blanca Canal, Gila River, Coolidge, Pinal County, AZ
10. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
10. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1928. Vol I. Irrigation District #4, California and Southern Arizona, RG 75, BIA-Phoenix, Box 40, National Archives, Pacific Southwest Region) Photographer unknown. CASA BLANCA CANAL, HEADING AND FLUME, APRIL 10, 1928 - San Carlos Irrigation Project, Casa Blanca Canal, Gila River, Coolidge, Pinal County, AZ
15. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
15. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1927. Vol. I, Narrative and Photographs, District #4, RG 75, Entry 655, BOx 29, National Archives, Washington, DC.) Photographer unknown. PIMA LATERAL, MCCLELLAN WASH CONDUIT, LOOKING SOUTH-WEST, 4/16/27 - San Carlos Irrigation Project, Pima Lateral, Main Canal at Sacaton Dam, Coolidge, Pinal County, AZ
MTR BASEMENT. DOORWAY TO SOURCE STORAGE VAULT IS AT CENTER ...
MTR BASEMENT. DOORWAY TO SOURCE STORAGE VAULT IS AT CENTER OF VIEW; TO DECONTAMINATION ROOM, AT RIGHT. PART OF MAZE ENTRY IS VISIBLE INSIDE VAULT DOORWAY. INL NEGATIVE NO. 7763. Unknown Photographer, photo was dated as 3/30/1953, but this was probably an error. The more likely date is 3/30/1952. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Schwemin, Friedhelm
2010-12-01
The appendices of the first 54 volumes (1776-1829) of the Berliner Astronomisches Jahrbuch (BAJ), edited by Johann Elert Bode, contain a plethora of biographically relevant notes, which are listed here, alphabetically sorted, in short versions. In parts, the listing possesses the quality of a primary source, and contains information on 771 persons. Many of them are poorly known or unknown.
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-04-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. 
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.
Semenov, Yuri S; Novozhilov, Artem S
2015-08-01
We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.
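The probability-generating-function machinery this entry relies on can be illustrated numerically: for a distribution {p_k}, G(s) = Σ p_k s^k satisfies G(1) = 1 and G'(1) equals the mean. The truncated Poisson below is only a convenient stand-in for an equilibrium distribution, not the Crow-Kimura steady state itself.

```python
import math

def pgf_eval(p, s):
    """Evaluate G(s) = sum_k p[k] * s**k for probabilities p[0], p[1], ..."""
    return sum(pk * s ** k for k, pk in enumerate(p))

def pgf_mean(p):
    """Mean of the distribution, i.e. G'(1) = sum_k k * p[k]."""
    return sum(k * pk for k, pk in enumerate(p))

# Stand-in distribution: Poisson(2) truncated at k = 29 (the tail mass
# beyond that is negligible, so normalisation still holds to ~1e-20).
lam = 2.0
p = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(30)]
```

Recasting an eigenvalue problem as a functional equation for G(s), as the paper does, then lets one read distribution properties off derivatives of G at s = 1.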
Embolic Strokes of Unknown Source and Cryptogenic Stroke: Implications in Clinical Practice
Nouh, Amre; Hussain, Mohammed; Mehta, Tapan; Yaghi, Shadi
2016-01-01
Up to a third of strokes are rendered cryptogenic or of undetermined etiology. This number is particularly high in younger patients. At times, inadequate diagnostic workups, multiple causes, or an under-recognized etiology contribute to this statistic. Embolic stroke of undetermined source, a new clinical entity, refers particularly to patients with embolic stroke for whom the etiology of embolism remains unidentified despite thorough investigations ruling out established cardiac and vascular sources. In this article, we review the current classification and discuss important clinical considerations in these patients, highlighting cardiac arrhythmias and structural abnormalities, patent foramen ovale, paradoxical sources, and potentially under-recognized vascular, inflammatory, autoimmune, and hematologic sources in relation to clinical practice. PMID:27047443
Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J
2012-09-01
The aim of this work was to improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions.
A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.
Non-Poissonian Distribution of Tsunami Waiting Times
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2007-12-01
Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from an exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicate that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. 
Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
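The gamma-distribution model for waiting times can be illustrated with a quick method-of-moments fit: for Gamma(shape g, scale s), mean = g·s and variance = g·s², so g = mean²/variance. The sample below is synthetic, with a shape chosen near the γ ≈ 0.63-0.67 reported for the global catalogs; a shape below 1 produces exactly the over-abundance of short waiting times relative to an exponential distribution that the abstract describes.

```python
import random
import statistics

def gamma_shape_mom(samples):
    """Method-of-moments estimate of the gamma shape parameter:
    g = mean^2 / variance (exact for Gamma(shape=g, scale=s))."""
    m = statistics.fmean(samples)
    v = statistics.pvariance(samples)
    return m * m / v

rng = random.Random(42)
g_true = 0.65                      # clustering exponent, synthetic choice
waits = [rng.gammavariate(g_true, 1.0) for _ in range(20000)]
g_est = gamma_shape_mom(waits)
```

An estimated shape recovered near 0.65 (well below 1) flags clustering; a catalog consistent with a pure Poisson process would instead give a shape near 1, i.e. an exponential waiting-time distribution.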
Over-Distribution in Source Memory
Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.
2012-01-01
Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494
Analysis of the Source System of Nantun Group in Huhehu Depression of Hailar Basin
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Junhui; Wang, Qi; Lv, Bingyang; Zhang, Guannan
2017-10-01
Huhehu Depression will be a new exploration frontier in Hailar Basin, though at present it is at a low exploration level. Little research has been done on the source system of the Nantun Group, so fine depiction of the source system would be significant for sedimentary system reconstruction, reservoir distribution, and prediction of favorable areas. This paper comprehensively uses methods such as palaeo-landform analysis, light and heavy mineral assemblages, and seismic reflection characteristics to study the source system of the Nantun Group in detail, from different perspectives and at different levels. The results show that the source system in Huhehu Depression is fed from the east by the Xilinbeir bulge and from the west by the Bayan Mountain uplift, which surround the basin. The slope belt is the main source, and the southern bulge is the secondary source. The distribution of the source system determines the distribution of the sedimentary system and the regularity of the distribution of sand bodies.
NASA Astrophysics Data System (ADS)
Charrier, J. G.; Richards-Henderson, N. K.; Bein, K. J.; McFall, A. S.; Wexler, A. S.; Anastasio, C.
2014-09-01
Recent epidemiological evidence supports the hypothesis that health effects from inhalation of ambient particulate matter (PM) are governed by more than just the mass of PM inhaled. Both specific chemical components and sources have been identified as important contributors to mortality and hospital admissions, even when these endpoints are unrelated to PM mass. Sources may cause adverse health effects via their ability to produce reactive oxygen species, possibly due to the transition metal content of the PM. Our goal is to quantify the oxidative potential of ambient particle sources collected during two seasons in Fresno, CA using the dithiothreitol (DTT) assay. We collected PM from different sources or source combinations into different ChemVol (CV) samplers in real time using a novel source-oriented sampling technique based on single particle mass spectrometry. We segregated the particles from each source-oriented mixture into two size fractions - ultrafine (Dp ≤ 0.17 μm) and submicron fine (0.17 μm ≤ Dp ≤ 1.0 μm) - and measured metals and the rate of DTT loss in each PM extract. We find that the mass-normalized oxidative potential of different sources varies by up to a factor of 8 and that submicron fine PM typically has a larger mass-normalized oxidative potential than ultrafine PM from the same source. Vehicular Emissions, Regional Source Mix, Commute Hours, Daytime Mixed Layer and Nighttime Inversion sources exhibit the highest mass-normalized oxidative potential. When we apportion the volume-normalized oxidative potential, which also accounts for the source's prevalence, cooking sources account for 18-29% of the total DTT loss while mobile (traffic) sources account for 16-28%. 
When we apportion DTT activity for total PM sampled to specific chemical compounds, soluble copper accounts for roughly 50% of total air-volume-normalized oxidative potential, soluble manganese accounts for 20%, and other unknown species, likely including quinones and other organics, account for 30%. During nighttime, soluble copper and manganese largely explain the oxidative potential of PM, while daytime has a larger contribution from unknown (likely organic) species.
26. Evening view of concrete mixing plant, concrete placement tower, ...
26. Evening view of concrete mixing plant, concrete placement tower, cableway tower, power line and derrick. Photographer unknown, 1927. Source: MWD. - Waddell Dam, On Agua Fria River, 35 miles northwest of Phoenix, Phoenix, Maricopa County, AZ
Early signs of recovery of Acropora palmata in St. John, US Virgin Islands
Muller, E.M.; Rogers, Caroline S.; van Woesik, R.
2014-01-01
Since the 1980s, diseases have caused significant declines in the population of the threatened Caribbean coral Acropora palmata. Yet it is largely unknown whether the population densities have recovered from these declines and whether there have been any recent shifts in size-frequency distributions toward large colonies. It is also unknown whether colony size influences the risk of disease infection, the most common stressor affecting this species. To address these unknowns, we examined A. palmata colonies at ten sites around St. John, US Virgin Islands, in 2004 and 2010. The prevalence of white-pox disease was highly variable among sites, ranging from 0 to 53 %, and this disease preferentially targeted large colonies. We found that colony density did not significantly change over the 6-year period, although six out of ten sites showed higher densities through time. The size-frequency distributions of coral colonies at all sites were positively skewed in both 2004 and 2010; however, most sites showed a temporal shift toward more large-sized colonies. This increase in large-sized colonies occurred despite the presence of white-pox disease, a severe bleaching event, and several storms. This study provides evidence of slow recovery of the A. palmata population around St. John despite the persistence of several stressors.
Integral-moment analysis of the BATSE gamma-ray burst intensity distribution
NASA Technical Reports Server (NTRS)
Horack, John M.; Emslie, A. Gordon
1994-01-01
We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V_max. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides for the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
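A minimal sketch of the direct moment measurements the abstract describes (sample mean, variance, and skewness of an intensity distribution); the peak-flux values below are hypothetical, not BATSE data, and uncertainty propagation is omitted.

```python
def moments(samples):
    """Mean, (population) variance, and skewness of a sample, the three
    quantities an integral-moment analysis measures directly."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    skew = sum((x - mean) ** 3 for x in samples) / n / var ** 1.5
    return mean, var, skew

# Hypothetical burst peak-flux values (arbitrary units); a right-skewed
# sample like this yields positive skewness.
fluxes = [1.0, 1.2, 1.5, 2.0, 3.0, 5.0, 9.0]
m, v, s = moments(fluxes)
```

Unlike a single summary statistic such as V/V_max, the moment triple (mean, variance, skewness) constrains the shape of the convolved luminosity-number density distribution, not just its central tendency.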
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
An Analytical Methodology for Predicting Repair Time Distributions of Advanced Technology Aircraft.
1985-12-01
1984. 3. Barlow, Richard E. "Mathematical Theory of Reliability: A Historical Perspective." IEEE Transactions on Reliability, 33. 16-19 (April 1984...Technology (AU), Wright-Patterson AFB OH, March 1971. 11. Coppola, Anthony. "Reliability Engineering of Electronic Equipment," IEEE Transactions on...1982. 64. Woodruff, Brian W. et al. "Modified Goodness-of-Fit Tests for Gamma Distributions with Unknown Location and Scale Parameters," IEEE
Evaluation of nitrous acid sources and sinks in urban outflow
NASA Astrophysics Data System (ADS)
Gall, Elliott T.; Griffin, Robert J.; Steiner, Allison L.; Dibb, Jack; Scheuer, Eric; Gong, Longwen; Rutter, Andrew P.; Cevik, Basak K.; Kim, Saewung; Lefer, Barry; Flynn, James
2016-02-01
Intensive air quality measurements made from June 22-25, 2011 in the outflow of the Dallas-Fort Worth (DFW) metropolitan area are used to evaluate nitrous acid (HONO) sources and sinks. A two-layer box model was developed to assess the ability of established and recently identified HONO sources and sinks to reproduce observations of HONO mixing ratios. A baseline model scenario includes sources and sinks established in the literature and is compared to scenarios including three recently identified sources: volatile organic compound-mediated conversion of nitric acid to HONO (S1), biotic emission from the ground (S2), and re-emission from a surface nitrite reservoir (S3). For all mechanisms, ranges of parametric values span lower- and upper-limit values. Model outcomes for 'likely' estimates of sources and sinks generally show under-prediction of HONO observations, implying the need to evaluate additional sources and variability in estimates of parameterizations, particularly during daylight hours. Monte Carlo simulation is applied to model scenarios constructed with sources S1-S3 added independently and in combination, generally showing improved model outcomes. Adding sources S2 and S3 (scenario S2/S3) appears to best replicate observed HONO, as determined by the model coefficient of determination and residual sum of squared errors (r² = 0.55 ± 0.03, SSE = 4.6 × 10⁶ ± 7.6 × 10⁵ ppt²). In scenario S2/S3, source S2 is shown to account for 25% and 6.7% of the nighttime and daytime budget, respectively, while source S3 accounts for 19% and 11% of the nighttime and daytime budget, respectively. However, despite improved model fit, there remains significant underestimation of daytime HONO; on average, a 0.15 ppt/s unknown daytime HONO source, or 67% of the total daytime source, is needed to bring scenario S2/S3 into agreement with observation.
Estimates of 'best-fit' parameterizations across lower- to upper-limit values result in a moderate reduction of the unknown daytime source, from 0.15 to 0.10 ppt/s.
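The fit statistics used above (coefficient of determination and residual sum of squared errors) can be computed for any paired modeled/observed HONO series; a minimal sketch on synthetic, purely illustrative data:

```python
import numpy as np

# Synthetic stand-ins for observed and modeled HONO mixing ratios (ppt);
# values are illustrative, not the paper's data.
rng = np.random.default_rng(0)
observed = 200 + 150 * np.sin(np.linspace(0, 2 * np.pi, 96))  # diurnal-like cycle
modeled = observed * 0.8 + rng.normal(0, 20, observed.size)   # biased model + noise

residuals = observed - modeled
sse = float(np.sum(residuals ** 2))                    # residual sum of squared errors (ppt^2)
r2 = float(np.corrcoef(observed, modeled)[0, 1] ** 2)  # coefficient of determination

print(f"SSE = {sse:.3g} ppt^2, r^2 = {r2:.2f}")
```

The same two numbers computed here are the ones the abstract uses to rank scenarios S1-S3.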
Predicting species distributions from checklist data using site-occupancy models
Kery, M.; Gardner, B.; Monnerat, C.
2010-01-01
Aim: (1) To increase awareness of the challenges induced by imperfect detection, which is a fundamental issue in species distribution modelling; (2) to emphasize the value of replicate observations for species distribution modelling; and (3) to show how 'cheap' checklist data in faunal/floral databases may be used for the rigorous modelling of distributions by site-occupancy models. Location: Switzerland. Methods: We used checklist data collected by volunteers during 1999 and 2000 to analyse the distribution of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly in Switzerland. We used data from repeated visits to 1-ha pixels to derive 'detection histories' and apply site-occupancy models to estimate the 'true' species distribution, i.e. corrected for imperfect detection. We modelled blue hawker distribution as a function of elevation and year, and its detection probability as a function of elevation, year and season. Results: The best model contained cubic polynomial elevation effects for distribution and quadratic effects of elevation and season for detectability. We compared the site-occupancy model with a conventional distribution model based on a generalized linear model, which assumes perfect detectability (p = 1). The conventional distribution map looked very different from the distribution map obtained using site-occupancy models that accounted for the imperfect detection. The conventional model underestimated the species distribution by 60%, and the slope parameters of the occurrence-elevation relationship were also underestimated when assuming p = 1. Elevation was not only an important predictor of blue hawker occurrence, but also of the detection probability, with a bell-shaped relationship. Furthermore, detectability increased over the season. The average detection probability was estimated at only 0.19 per survey.
Main conclusions: Conventional species distribution models do not model species distributions per se but rather the apparent distribution, i.e. an unknown proportion of species distributions. That unknown proportion is equivalent to detectability. Imperfect detection in conventional species distribution models yields underestimates of the extent of distributions and covariate effects that are biased towards zero. In addition, patterns in detectability will erroneously be ascribed to species distributions. In contrast, site-occupancy models applied to replicated detection/non-detection data offer a powerful framework for making inferences about species distributions corrected for imperfect detection. The use of 'cheap' checklist data greatly enhances the scope of applications of this useful class of models. © 2010 Blackwell Publishing Ltd.
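The link between detectability and the apparent distribution can be made concrete: at an occupied site surveyed K times with per-survey detection probability p, the chance of at least one detection is 1 - (1 - p)^K. A sketch using the paper's estimate p = 0.19 and a hypothetical true occupancy value:

```python
# How imperfect detection shrinks the *apparent* distribution: at an occupied
# site surveyed K times with per-survey detection probability p, the chance of
# ever detecting the species is 1 - (1 - p)**K. p = 0.19 is the paper's
# estimate; the true occupancy value below is hypothetical.
p = 0.19
psi_true = 0.5          # hypothetical true occupancy probability
for K in (1, 2, 5, 10):
    p_star = 1 - (1 - p) ** K          # cumulative detection probability
    psi_apparent = psi_true * p_star   # expected naive (uncorrected) occupancy
    print(f"K={K:2d}: P(detect at occupied site)={p_star:.2f}, "
          f"apparent occupancy={psi_apparent:.2f} vs true {psi_true:.2f}")
```

With a single survey the apparent occupancy is only 19% of the true value, which is why the conventional (p = 1) model in the abstract underestimates the distribution so badly.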
NASA Astrophysics Data System (ADS)
Sakamoto, Kimiko M.; Laing, James R.; Stevens, Robin G.; Jaffe, Daniel A.; Pierce, Jeffrey R.
2016-06-01
Biomass-burning aerosols have a significant effect on global and regional aerosol climate forcings. To model the magnitude of these effects accurately requires knowledge of the size distribution of the emitted and evolving aerosol particles. Current biomass-burning inventories do not include size distributions, and global and regional models generally assume a fixed size distribution from all biomass-burning emissions. However, biomass-burning size distributions evolve in the plume due to coagulation and net organic aerosol (OA) evaporation or formation, and the plume processes occur on spatial scales smaller than global/regional-model grid boxes. The extent of this size-distribution evolution is dependent on a variety of factors relating to the emission source and atmospheric conditions. Therefore, accurately accounting for biomass-burning aerosol size in global models requires an effective aerosol size distribution that accounts for this sub-grid evolution and can be derived from available emission-inventory and meteorological parameters. In this paper, we perform a detailed investigation of the effects of coagulation on the aerosol size distribution in biomass-burning plumes. We compare the effect of coagulation to that of OA evaporation and formation. We develop coagulation-only parameterizations for effective biomass-burning size distributions using the SAM-TOMAS large-eddy simulation plume model. For the most-sophisticated parameterization, we use the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) to build a parameterization of the aged size distribution based on the SAM-TOMAS output and seven inputs: emission median dry diameter, emission distribution modal width, mass emissions flux, fire area, mean boundary-layer wind speed, plume mixing depth, and time/distance since emission. This parameterization was tested against an independent set of SAM-TOMAS simulations and yields R² values of 0.83 and 0.89 for Dpm and modal width, respectively.
The size distribution is particularly sensitive to the mass emissions flux, fire area, wind speed, and time, and we provide simplified fits of the aged size distribution to just these input variables. The simplified fits were tested against 11 aged biomass-burning size distributions observed at the Mt. Bachelor Observatory in August 2015. The simple fits captured over half of the variability in observed Dpm and modal width even though the freshly emitted Dpm and modal widths were unknown. These fits may be used in global and regional aerosol models. Finally, we show that coagulation generally leads to greater changes in the particle size distribution than OA evaporation/formation does, using estimates of OA production/loss from the literature.
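The qualitative coagulation effect underlying these parameterizations (fewer, larger particles as a plume ages) can be sketched with the monodisperse limit of the coagulation equation; the rate constant and concentrations below are illustrative, not fit values from the paper:

```python
import numpy as np

# Minimal monodisperse coagulation sketch: number concentration decays as
# dN/dt = -K N^2 while total aerosol volume is conserved, so the mean particle
# diameter grows with plume age. K, N0, and d0 are illustrative values only.
K = 1e-9      # coagulation coefficient, cm^3/s (typical order for fine aerosol)
N0 = 1e6      # initial number concentration, cm^-3 (dense young plume)
d0 = 100.0    # initial diameter, nm

t = np.linspace(0, 3600, 100)            # one hour of aging
N = N0 / (1 + K * N0 * t)                # analytic solution of dN/dt = -K N^2
d = d0 * (N0 / N) ** (1.0 / 3.0)         # volume conservation -> diameter growth

print(f"after 1 h: N={N[-1]:.3g} cm^-3, diameter={d[-1]:.0f} nm")
```

Note how the growth rate depends on N0, which is set by the mass emissions flux and fire area; this is the sub-grid sensitivity the paper's parameterization captures.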
Improved Ambient Pressure Pyroelectric Ion Source
NASA Technical Reports Server (NTRS)
Beegle, Luther W.; Kim, Hugh I.; Kanik, Isik; Ryu, Ernest K.; Beckett, Brett
2011-01-01
The detection of volatile vapors of unknown species in a complex field environment is required in many different applications. Mass spectroscopic techniques require subsystems including an ionization unit and a sample transport mechanism. All of these subsystems must have low mass, small volume, low power, and be rugged. A volatile molecular detector that meets these requirements, an ambient pressure pyroelectric ion source (APPIS), was recently reported by Caltech researchers for use in in situ environments.
12. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
12. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. PIMA LATERAL, LINING EQUIPMENT, 5/13/25 - San Carlos Irrigation Project, Pima Lateral, Main Canal at Sacaton Dam, Coolidge, Pinal County, AZ
14. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
14. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1927. Vol. I, Narrative and Photographs, District #4, RG 75, Entry 655, Box 29, National Archives, Washington, DC.) Photographer unknown. PIMA LATERAL, MCCLELLAN CONDUIT, ENTRANCE BEFORE POURING THE CONDUIT, 4/30/27 - San Carlos Irrigation Project, Pima Lateral, Main Canal at Sacaton Dam, Coolidge, Pinal County, AZ
21. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
21. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1926. Vol. I, Narrative and Photographs, RG 75, Entry 655, Box 29, National Archives, Washington, DC.) Photographer unknown. SACATON DAM, UPSTREAM SIDE FROM SOUTH END, 8/29/25 - San Carlos Irrigation Project, Sacaton Dam & Bridge, Gila River, T4S R6E S12/13, Coolidge, Pinal County, AZ
20. Photographic copy of photograph. (Source: U.S. Department of Interior. ...
20. Photographic copy of photograph. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1926. Vol. I, Narrative and Photographs, RG 75, Entry 655, Box 29, National Archives, Washington, DC.) Photographer unknown. SACATON DAM, BRIDGE FROM SOUTH END, 8/29/25 - San Carlos Irrigation Project, Sacaton Dam & Bridge, Gila River, T4S R6E S12/13, Coolidge, Pinal County, AZ
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. Those unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, supercomputer, multi-platform, workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
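The MCMC exploration of a posterior over unknown model parameters can be illustrated with a toy random-walk Metropolis sampler; the one-parameter forward model, prior bounds, and noise level below are hypothetical stand-ins for the paper's finite-element MT simulator:

```python
import numpy as np

# Toy Metropolis-Hastings sampler in the spirit of the Bayesian MT inversion:
# infer a single "resistivity" parameter m from noisy synthetic data through a
# forward model. Forward model and priors are hypothetical stand-ins.
rng = np.random.default_rng(1)

def forward(m):
    return np.array([m, 2.0 * m, m ** 2 / 10.0])  # hypothetical responses

m_true = 5.0
data = forward(m_true) + rng.normal(0, 0.1, 3)
sigma = 0.1

def log_post(m):
    if not (0.0 < m < 100.0):        # uniform prior bounds
        return -np.inf
    r = data - forward(m)
    return -0.5 * np.sum((r / sigma) ** 2)

samples, m = [], 1.0
for _ in range(5000):
    prop = m + rng.normal(0, 0.2)    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    samples.append(m)

post = np.array(samples[1000:])      # discard burn-in
print(f"posterior mean={post.mean():.2f}, std={post.std():.2f}")
```

The posterior standard deviation is the kind of per-parameter uncertainty information the abstract highlights as the method's main advantage over deterministic inversion.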
MoCha: Molecular Characterization of Unknown Pathways.
Lobo, Daniel; Hammelman, Jennifer; Levin, Michael
2016-04-01
Automated methods for the reverse-engineering of complex regulatory networks are paving the way for the inference of mechanistic comprehensive models directly from experimental data. These novel methods can infer not only the relations and parameters of the known molecules defined in their input datasets, but also unknown components and pathways identified as necessary by the automated algorithms. Identifying the molecular nature of these unknown components is a crucial step for making testable predictions and experimentally validating the models, yet no specific and efficient tools exist to aid in this process. To this end, we present here MoCha (Molecular Characterization), a tool optimized for the search of unknown proteins and their pathways from a given set of known interacting proteins. MoCha uses the comprehensive dataset of protein-protein interactions provided by the STRING database, which currently includes more than a billion interactions from over 2,000 organisms. MoCha is highly optimized, performing typical searches within seconds. We demonstrate the use of MoCha with the characterization of unknown components from reverse-engineered models from the literature. MoCha is useful for working on network models by hand or as a downstream step of a model inference engine workflow and represents a valuable and efficient tool for the characterization of unknown pathways using known data from thousands of organisms. MoCha and its source code are freely available online under the GPLv3 license.
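MoCha's core query (finding candidate unknown proteins that interact with a given set of known proteins) can be sketched as a neighbor-count search over an interaction graph; the edge list below is a hypothetical toy, not STRING data:

```python
# Toy neighbor-intersection search: rank candidate proteins by how many of the
# known "bait" proteins they interact with. The edge list is hypothetical.
from collections import Counter

edges = [("A", "X"), ("B", "X"), ("C", "X"), ("A", "Y"), ("C", "Y"), ("B", "Z")]
known = {"A", "B", "C"}

neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

# Count, for each candidate outside the known set, its links into the known set.
scores = Counter()
for k in known:
    for cand in neighbors.get(k, ()) - known:
        scores[cand] += 1

ranked = scores.most_common()
print(ranked)  # best candidate first: X interacts with all three known proteins
```

Real MoCha searches additionally weight STRING interaction confidence scores; this sketch only counts edges.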
Mapping the unknown: Modeling future scenarios of riverine fish communities
Riverscapes can be defined by spatial and temporal variation in a suite of environmental conditions that influence the distribution and persistence of riverine fish populations. Fish in riverscapes can exhibit extensive movements, require seasonally-distinct habitats for spawnin...
Finn, C.A.; Deszcz-Pan, M.; Anderson, E.D.; John, D.A.
2007-01-01
Hydrothermally altered rocks, particularly if water saturated, can weaken stratovolcanoes, thereby increasing the potential for catastrophic sector collapses that can lead to far-traveled, destructive debris flows. Evaluating the hazards associated with such alteration is difficult because alteration has been mapped on few active volcanoes and the distribution and intensity of subsurface alteration are largely unknown on any active volcano. At Mount Adams, some Holocene debris flows contain abundant hydrothermal minerals derived from collapse of the altered edifice. Intense hydrothermal alteration significantly reduces the resistivity and magnetization of volcanic rock, and therefore hydrothermally altered rocks can be identified with helicopter electromagnetic and magnetic measurements. Electromagnetic and magnetic data, combined with geological mapping and rock property measurements, indicate the presence of appreciable thicknesses of hydrothermally altered rock in the central core of Mount Adams north of the summit. We identify steep cliffs at the western edge of this zone as the likely source for future large debris flows. In addition, the electromagnetic data identified water in the brecciated core of the upper 100-200 m of the volcano. Water helps alter the rocks, reduces the effective stress, thereby increasing the potential for slope failure, and acts, with entrained melting ice, as a lubricant to transform debris avalanches into lahars. Therefore, knowing the distribution of water is also important for hazard assessments. Our results demonstrate that high-resolution geophysical and geological observations can yield unprecedented views of the three-dimensional distribution of altered rock and shallow pore water, aiding evaluation of the debris avalanche hazard.
Wang, Ruwei; Liu, Guijian; Zhang, Jiamei
2015-12-15
Coal-fired power plants (CFPPs) represent an important source of atmospheric PAHs; however, their emission characteristics are still largely unknown. In this work, the concentration, distribution and gas-particle partitioning of PM10- and gas-phase PAHs in flue gas emitted from different coal-fired utility boilers were investigated. Moreover, the concentration and distribution of airborne PAHs from different functional areas of the power plants were studied. People's inhalatory and dermal exposures to airborne PAHs at these sites were estimated and the resultant lung cancer and skin cancer risks were assessed. Results indicated that boiler capacity and operation conditions have a significant effect on PAH concentrations in both PM10 and gas phases due to variation in combustion efficiency, whereas they have a negligible effect on PAH distributions. Wet flue gas desulphurization (WFGD) has a significant effect on the scavenging of PAHs in both PM10 and gas phases; higher scavenging efficiencies were found for less volatile PAHs. PAH partitioning is dominated by absorption into organic matter, accompanied by adsorption onto PM10 surfaces. In addition, different partitioning mechanisms are observed for individual PAHs, which is assumed to arise from their chemical affinity and vapor pressure. Risk assessment indicates that both inhalation and dermal contact contribute greatly to the cancer risk for CFPP workers and nearby residents. People working in the workshops are exposed to greater inhalation and dermal risks than people living in the nearby vicinity or working in offices. Copyright © 2015. Published by Elsevier B.V.
A Study of Clinically Related Open Source Software Projects
Hogarth, Michael A.; Turner, Stuart
2005-01-01
Open source software development has recently gained significant interest due to several successful mainstream open source projects. This methodology has been proposed as being similarly viable and beneficial in the clinical application domain as well. However, the clinical software development venue differs significantly from the mainstream software venue. Existing clinical open source projects have not been well characterized nor formally studied so the ‘fit’ of open source in this domain is largely unknown. In order to better understand the open source movement in the clinical application domain, we undertook a study of existing open source clinical projects. In this study we sought to characterize and classify existing clinical open source projects and to determine metrics for their viability. This study revealed several findings which we believe could guide the healthcare community in its quest for successful open source clinical software projects. PMID:16779056
Stanton, David W G; Mulville, Jacqueline A; Bruford, Michael W
2016-04-13
Red deer (Cervus elaphus) have played a key role in human societies throughout history, with important cultural significance and as a source of food and materials. This relationship can be traced back to the earliest human cultures and continues to the present day. Humans are thought to be responsible for the movement of a considerable number of deer throughout history, although the majority of these movements are poorly described or understood. Studying such translocations allows us to better understand ancient human-wildlife interactions, and in the case of island colonizations, informs us about ancient human maritime practices. This study uses DNA sequences to characterise red deer genetic diversity across the Scottish islands (Inner and Outer Hebrides and Orkney) and mainland using ancient deer samples, and attempts to infer historical colonization events. We show that deer from the Outer Hebrides and Orkney are unlikely to have originated from mainland Scotland, implying that humans introduced red deer from a greater distance. Our results are also inconsistent with an origin from Ireland or Norway, suggesting long-distance maritime travel by Neolithic people to the outer Scottish Isles from an unknown source. Common haplotypes and low genetic differentiation between the Outer Hebrides and Orkney imply common ancestry and/or gene flow across these islands. Close genetic proximity between the Inner Hebrides and Ireland, however, corroborates previous studies identifying mainland Britain as a source for red deer introductions into Ireland. This study provides important information on the processes that led to the current distribution of the largest surviving indigenous land mammal in the British Isles. © 2016 The Authors.
Problem solving as intelligent retrieval from distributed knowledge sources
NASA Technical Reports Server (NTRS)
Chen, Zhengxin
1987-01-01
Distributed computing in intelligent systems is investigated from a different perspective. From the viewpoint that problem solving can be viewed as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.
Pharmaceuticals and Hormones in the Environment
Some of the earliest initial reports from Europe and the United States demonstrated that a variety of pharmaceuticals and hormones could be found in surface waters, source waters, drinking water, and influents and effluents from wastewater treatment plants (WWTPs). It is unknown...
Serebruany, Victor L; Cherepanov, Vasily; Kim, Moo Hyun; Litvinov, Oleg; Cabrera-Fuentes, Hector A; Marciniak, Thomas A
The US Food and Drug Administration Adverse Event Reporting System (FAERS) is a global passive surveillance database that relies on voluntary reporting by health care professionals and consumers as well as required mandatory reporting by pharmaceutical manufacturers. However, the initial filers and comparative patterns for oral P2Y12 platelet inhibitor reporting are unknown. We assessed who generated original FAERS reports for clopidogrel, prasugrel, and ticagrelor in 2015. From the FAERS database we extracted and examined adverse event cases coreported with oral P2Y12 platelet inhibitors. All adverse event filing originating sources were categorized as consumers, lawyers, pharmacists, physicians, other health care professionals, or unknown. Overall, 2015 annual adverse events were more commonly coreported with clopidogrel (n = 13,234) with known source filers (n = 12,818, or 96.9%) than with prasugrel (2,896, or 98.9%, of 2,927 cases) or ticagrelor (2,163, or 82.3%, of 2,627 cases). Overall, most adverse events were filed by consumers (8,336, or 44.4%), followed by physicians (5,290, or 28.2%), other health care professionals (2,997, or 16.0%), pharmacists (1,125, or 6.0%), and finally by lawyers (129, or 0.7%). The origin of 811 (4.7%) initial reports remains unknown. The adverse event filing sources differ among drugs. While adverse events coreported with clopidogrel and prasugrel most commonly originated from patients (40.4% and 84.3%, respectively), most ticagrelor reports (42.5%) were filed by physicians. The reporting quality and initial sources differ among oral P2Y12 platelet inhibitors in FAERS. The ticagrelor surveillance in 2015 was inadequate when compared to clopidogrel and prasugrel. Patients filed most adverse events for clopidogrel and prasugrel, while physicians originated most ticagrelor complaints.
These differences justify stricter compliance control for ticagrelor manufacturers and may be attributed to the confusion of treating physicians with unexpected fatal, cardiac, and thrombotic adverse events linked to ticagrelor. © 2017 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.
2018-03-01
In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy Engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and applying the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, a ²⁴¹Am-⁹Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.
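The zero crossing method mentioned above discriminates neutrons from gammas because the larger slow scintillation component of neutron pulses delays the zero crossing of a shaped pulse. A toy sketch; the decay constants, slow-light fractions, and smoothing-based shaping are illustrative assumptions, not calibrated detector values:

```python
import numpy as np

# Zero-crossing pulse-shape discrimination sketch: scintillator pulses from
# neutrons carry a larger slow-decay light component than gamma pulses, so a
# shaped (smoothed then differentiated) pulse crosses zero later. All pulse
# parameters below are illustrative, not detector-calibrated values.
t = np.arange(0.0, 500.0, 1.0)  # ns

def pulse(slow_fraction, tau_rise=2.0, tau_fast=5.0, tau_slow=120.0):
    rise = 1.0 - np.exp(-t / tau_rise)
    return rise * ((1 - slow_fraction) * np.exp(-t / tau_fast)
                   + slow_fraction * np.exp(-t / tau_slow))

def zero_cross_time(p, sigma=20.0):
    k = np.arange(-80, 81)
    g = np.exp(-0.5 * (k / sigma) ** 2)
    smooth = np.convolve(p, g / g.sum(), mode="same")
    deriv = np.diff(smooth)                 # bipolar: + on rise, - on tail
    idx = np.where((deriv[:-1] > 0) & (deriv[1:] <= 0))[0]
    return float(idx[0])                    # first +/- crossing time (ns)

t_gamma = zero_cross_time(pulse(slow_fraction=0.05))
t_neutron = zero_cross_time(pulse(slow_fraction=0.35))
print(f"zero crossing: gamma={t_gamma:.0f} ns, neutron={t_neutron:.0f} ns")
```

Setting a threshold between the two crossing times separates the particle types event by event, which is the essence of the zero crossing method.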
NASA Astrophysics Data System (ADS)
Cerovski-Darriau, C.; Stock, J. D.; Winans, W. R.
2016-12-01
Episodic storm runoff in West Maui (Hawai'i) brings plumes of terrestrially-sourced fine sediment to the nearshore ocean environment, degrading coral reef ecosystems. The sediment pollution sources were largely unknown, though suspected to be due to modern human disturbance of the landscape, and initially assumed to be from visibly obvious exposed soil on agricultural fields and unimproved roads. To determine the sediment sources and estimate a sediment budget for the West Maui watersheds, we mapped the geomorphic processes in the field and from DEMs and orthoimagery, monitored erosion rates in the field, and modeled the sediment flux using the mapped processes and corresponding rates. We found the primary source of fine sands, silts and clays to be previously unidentified fill terraces along the stream bed. These terraces, formed during legacy agricultural activity, are the banks along 40-70% of the streams where the channels intersect human-modified landscapes. Monitoring over the last year shows that a few storms erode the fill terraces 10-20 mm annually, contributing up to 100s of tonnes of sediment per catchment. Compared to the average long-term, geologic erosion rate of 0.03 mm/yr, these fill terraces alone increase the suspended sediment flux to the coral reefs by 50-90%. Stakeholders can use our resulting geomorphic process map and sediment budget to inform the location and type of mitigation effort needed to limit terrestrial sediment pollution. We compare our mapping, monitoring, and modeling (M3) approach to NOAA's OpenNSPECT model. OpenNSPECT uses empirical hydrologic and soil erosion models paired with land cover data to compare the spatially distributed sediment yield from different land-use scenarios. We determine the relative effectiveness of calculating a baseline watershed sediment yield from each approach, and the utility of calibrating OpenNSPECT with M3 results to better forecast future sediment yields from land-use or climate change scenarios.
Blind source separation by sparse decomposition
NASA Astrophysics Data System (ADS)
Zibulevsky, Michael; Pearlmutter, Barak A.
2000-04-01
The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.
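A minimal blind-source-separation demo in this spirit, using scikit-learn's FastICA as a generic stand-in for the authors' sparse-decomposition objectives (the mixing matrix and sources below are synthetic):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Blind source separation demo: mix two non-Gaussian sources with an unknown
# matrix, then recover them from the mixtures alone. FastICA is used here as a
# generic stand-in for the paper's sparse-decomposition method.
rng = np.random.default_rng(0)
n = 2000
s1 = rng.laplace(size=n)                      # sparse (heavy-tailed) source
s2 = np.sign(np.sin(np.linspace(0, 40, n)))   # square-wave source
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])        # "unknown" mixing matrix
X = S @ A.T                                   # observed mixtures only

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Match each recovered component to its best-correlated true source:
# separation is only defined up to permutation and scaling.
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(f"best correlations per source: {C.max(axis=1)}")
```

The permutation/scaling ambiguity handled in the last step is intrinsic to any blind method, including the sparse-dictionary approach the abstract describes.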
Defense Energy Support Center Fact Book, Fiscal Year 1999, Twenty-Second Edition
1999-01-01
Source: Facilities and Distribution Management Commodity Business Unit. OCONUS COCO: 10, 8,717,850...; GOCO: 7, 1,518,905...; DLA managed storage, FY 95 - FY 99.
NASA Technical Reports Server (NTRS)
Diederich, Franklin W; Zlotnick, Martin
1955-01-01
Spanwise lift distributions have been calculated for nineteen unswept wings with various aspect ratios and taper ratios and with a variety of angle-of-attack or twist distributions, including flap and aileron deflections, by means of the Weissinger method with eight control points on the semispan. Also calculated were aerodynamic influence coefficients which pertain to a certain definite set of stations along the span, and several methods are presented for calculating aerodynamic influence functions and coefficients for stations other than those stipulated. The information presented in this report can be used in the analysis of untwisted wings or wings with known twist distributions, as well as in aeroelastic calculations involving initially unknown twist distributions.
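The report's aerodynamic influence coefficients relate station angles of attack to section lifts; in aeroelastic use the twist is initially unknown and is found by solving a linear system. A small numerical sketch, where the matrices are hypothetical stand-ins, not Weissinger-method values:

```python
import numpy as np

# Sketch of using aerodynamic influence coefficients in an aeroelastic
# calculation: section lift l = q * A @ alpha, while structural flexibility
# adds twist alpha = alpha0 + E @ l. Eliminating alpha gives a linear system
# for l. All matrices below are small hypothetical stand-ins.
q = 1.0                                   # dynamic pressure (normalized)
A = np.array([[2.0, 0.3, 0.1],            # aerodynamic influence coefficients
              [0.3, 2.5, 0.3],
              [0.1, 0.3, 2.0]])
E = 0.05 * np.eye(3)                      # structural (twist) flexibility
alpha0 = np.deg2rad(np.array([4.0, 5.0, 4.0]))  # built-in angle of attack

# (I - q A E) l = q A alpha0  ->  lift including aeroelastic twist
l = np.linalg.solve(np.eye(3) - q * A @ E, q * A @ alpha0)
l_rigid = q * A @ alpha0                  # rigid-wing lift for comparison
print("flexible vs rigid lift:", l, l_rigid)
```

For this (statically stable) example the flexible lift exceeds the rigid lift at every station, illustrating why the initially unknown twist distribution matters in the aeroelastic calculations the report targets.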
NASA Astrophysics Data System (ADS)
Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.
2016-12-01
Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km²) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach.
Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.
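The inverse Monte Carlo screening described above (accept only parameter draws whose simulated fluxes match the sedimentary record) can be sketched with a toy forward model; the functional form, parameter ranges, and acceptance tolerance are all illustrative assumptions:

```python
import numpy as np

# Inverse Monte Carlo sketch in the spirit of the CASCADE screening: draw many
# random source parameters, run a cheap forward model, and keep only the
# realizations consistent with an "observed" record. The forward model and
# tolerance are hypothetical stand-ins.
rng = np.random.default_rng(0)

def forward(d50, supply):
    # hypothetical downstream sediment flux from grain size and supply rate
    return supply * np.exp(-d50 / 2.0)

observed = forward(1.0, 10.0)              # synthetic "sedimentary record"

n_trials = 7500
d50 = rng.uniform(0.1, 5.0, n_trials)      # candidate median grain sizes (mm)
supply = rng.uniform(1.0, 50.0, n_trials)  # candidate supply rates (kt/yr)
flux = forward(d50, supply)

accepted = np.abs(flux - observed) / observed < 0.01  # within 1% of record
print(f"accepted {accepted.sum()} of {n_trials} "
      f"({100 * accepted.mean():.1f}%) realizations")
```

As in the abstract, only a small fraction of random initializations survives the data check, and the surviving parameter combinations are the ones interpreted physically.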
Balss, Karin M; Long, Frederick H; Veselov, Vladimir; Orana, Argjenta; Akerman-Revis, Eugena; Papandreou, George; Maryanoff, Cynthia A
2008-07-01
Multivariate data analysis was applied to confocal Raman measurements on stents coated with the polymers and drug used in the CYPHER Sirolimus-eluting Coronary Stents. Partial least-squares (PLS) regression was used to establish three independent calibration curves for the coating constituents: sirolimus, poly(n-butyl methacrylate) [PBMA], and poly(ethylene-co-vinyl acetate) [PEVA]. The PLS calibrations were based on average spectra generated from each spatial location profiled. The PLS models were tested on six unknown stent samples to assess accuracy and precision. The difference between PLS predictions and laboratory assay values for sirolimus was less than 1 wt % for the composite of the six unknowns, while the differences for the polymer models were estimated at less than 0.5 wt % for the combined samples. The linearity and specificity of the three PLS models were also demonstrated. In contrast to earlier univariate models, the PLS models achieved mass balance with better accuracy. This analysis was extended to evaluate the spatial distribution of the three constituents. Quantitative bitmap images of drug-eluting stent coatings are presented for the first time to assess the local distribution of components.
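The calibration step described above can be illustrated with a minimal hand-rolled PLS1 (NIPALS) regression for a single constituent; the spectra and concentrations in any real use would come from the measured calibration set, and a production analysis would rely on a validated chemometrics package rather than this sketch.

```python
import numpy as np

def pls1(X, y, n_components):
    # Minimal NIPALS PLS1: returns regression coefficients b plus the
    # centring terms, so that predictions are (Xnew - x_mean) @ b + y_mean.
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        tt = t @ t
        p = Xr.T @ t / tt               # X loading
        q = (yr @ t) / tt               # y loading
        Xr = Xr - np.outer(t, p)        # deflate X
        yr = yr - q * t                 # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    b = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return b, x_mean, y_mean

def pls1_predict(Xnew, b, x_mean, y_mean):
    return (Xnew - x_mean) @ b + y_mean
```

One such model per constituent (sirolimus, PBMA, PEVA), each fitted on spatially averaged spectra, mirrors the three independent calibration curves in the abstract.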
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; ...
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
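A heavily simplified, fixed-hyperparameter version of this workflow can be sketched as follows: an onion-peeling discretisation of the forward Abel transform, inverted as a Gaussian-likelihood MAP estimate with a second-difference smoothness prior. The full hierarchical model additionally infers the prior parameters and the precision structure from the data; the grid, noise level, and prior weight here are purely illustrative assumptions.

```python
import numpy as np

def abel_matrix(n, h=1.0):
    # Onion-peeling discretisation of the forward Abel transform:
    # chord i (height y_i = i*h) accumulates the path length through
    # each annulus j >= i of radial width h.
    A = np.zeros((n, n))
    for i in range(n):
        y = i * h
        for j in range(i, n):
            r0, r1 = j * h, (j + 1) * h
            A[i, j] = 2.0 * (np.sqrt(max(r1**2 - y**2, 0.0))
                             - np.sqrt(max(r0**2 - y**2, 0.0)))
    return A

def map_inversion(A, p, sigma=1.0, lam=1e-8):
    # MAP estimate under a Gaussian likelihood (noise std sigma) and a
    # Gaussian second-difference smoothness prior with fixed weight lam:
    # solve (A^T A / sigma^2 + lam L^T L) f = A^T p / sigma^2.
    n = A.shape[1]
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lhs = A.T @ A / sigma**2 + lam * (L.T @ L)
    rhs = A.T @ p / sigma**2
    return np.linalg.solve(lhs, rhs)
```

Replacing the fixed sigma and lam with hyperpriors and sampling the joint posterior is what turns this point estimate into the uncertainty-quantified reconstruction of the paper.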
Effects of errors and gaps in spatial data sets on assessment of conservation progress.
Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C
2013-10-01
Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average of 402.8% and of 62.6% of species to be underestimated by an average of 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.
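The buffered-centroid issue can be illustrated on a toy raster: replacing a reserve polygon by an equal-area circle around its centroid changes the computed protected fraction of a species range. The grid, shapes, and masks below are invented for illustration and have no relation to the actual WDPA data.

```python
import numpy as np

def buffered_centroid(pa_mask):
    # Equal-area circle around the centroid of the true protected-area
    # polygon (the substitution applied to unknown-boundary records).
    iy, ix = np.nonzero(pa_mask)
    cy, cx = iy.mean(), ix.mean()
    radius = np.sqrt(pa_mask.sum() / np.pi)
    yy, xx = np.indices(pa_mask.shape)
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

def protected_fraction(species_mask, pa_mask):
    # Share of the species range cells lying inside the protected area.
    return (species_mask & pa_mask).sum() / species_mask.sum()
```

For an elongated reserve, the equal-area circle relocates protection toward the centroid, so species overlapping only one end of the reserve can be strongly over- or under-estimated.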
The industrial melanism mutation in British peppered moths is a transposable element.
Van't Hof, Arjen E; Campagne, Pascal; Rigden, Daniel J; Yung, Carl J; Lingley, Jessica; Quail, Michael A; Hall, Neil; Darby, Alistair C; Saccheri, Ilik J
2016-06-02
Discovering the mutational events that fuel adaptation to environmental change remains an important challenge for evolutionary biology. The classroom example of a visible evolutionary response is industrial melanism in the peppered moth (Biston betularia): the replacement, during the Industrial Revolution, of the common pale typica form by a previously unknown black (carbonaria) form, driven by the interaction between bird predation and coal pollution. The carbonaria locus has been coarsely localized to a 200-kilobase region, but the specific identity and nature of the sequence difference controlling the carbonaria-typica polymorphism, and the gene it influences, are unknown. Here we show that the mutation event giving rise to industrial melanism in Britain was the insertion of a large, tandemly repeated, transposable element into the first intron of the gene cortex. Statistical inference based on the distribution of recombined carbonaria haplotypes indicates that this transposition event occurred around 1819, consistent with the historical record. We have begun to dissect the mode of action of the carbonaria transposable element by showing that it increases the abundance of a cortex transcript, the protein product of which plays an important role in cell-cycle regulation, during early wing disc development. Our findings fill a substantial knowledge gap in the iconic example of microevolutionary change, adding a further layer of insight into the mechanism of adaptation in response to natural selection. The discovery that the mutation itself is a transposable element will stimulate further debate about the importance of 'jumping genes' as a source of major phenotypic novelty.
Kumwenda, Benjamin; Litthauer, Derek; Reva, Oleg
2014-09-25
Bacteria of the genus Thermus inhabit both man-made and natural thermal environments. Several Thermus species have shown biotechnological potential, such as the reduction of heavy metals (essential for remediating heavy-metal pollution), the removal of organic contaminants from water, the unclogging of pipes, and the control of global warming, among many others. Enzymes from thermophilic bacteria have exhibited higher activity and stability than synthetic enzymes or enzymes from mesophilic organisms. Using Meiothermus silvanus DSM 9946 as a reference genome, a high level of coordinated rearrangements was observed in extremely thermophilic Thermus, which may imply the existence of yet unknown evolutionary forces controlling the adaptive reorganization of whole genomes of thermo-extremophiles. However, no remarkable differences were observed across species in the distribution of functionally related genes on the chromosome, suggesting constraints imposed by metabolic networks. The metabolic networks exhibit evolutionary pressures similar to the levels of rearrangement measured by the cross-clustering index. Donor-recipient stratigraphic analysis revealed intensive gene exchange from Meiothermus species and some unknown sources to Thermus species, confirming a well-established DNA uptake mechanism as previously proposed. Global genome rearrangements were found to play an important role in the evolution of Thermus bacteria at both the genomic and metabolic network levels. A relatively higher level of rearrangement was observed in extremely thermophilic Thermus strains than in the thermo-tolerant Thermus scotoductus. Rearrangements did not significantly disrupt operons or functionally related genes. Thermus species appear to have a well-developed capability for acquiring DNA through horizontal gene transfer, as shown by the donor-recipient stratigraphic analysis.
NASA Astrophysics Data System (ADS)
Hasanov, Alemdar; Kawano, Alexandre
2016-05-01
Two types of inverse source problems of identifying asynchronously distributed spatial loads governed by the Euler-Bernoulli beam equation $\rho(x)w_{tt} + \mu(x)w_{t} + (EI(x)w_{xx})_{xx} - T_r w_{xx} = \sum_{m=1}^{M} g_m(t)f_m(x)$, $(x,t) \in \Omega_T := (0,l)\times(0,T)$, with hinged-clamped ends ($w(0,t) = w_{xx}(0,t) = 0$, $w(l,t) = w_x(l,t) = 0$, $t \in (0,T)$), are studied. Here $g_m(t)$ are linearly independent functions describing an asynchronous temporal loading, and $f_m(x)$ are the spatial load distributions. In the first identification problem the values $\nu_k(t)$, $k = \overline{1,K}$, of the deflection $w(x,t)$ are assumed to be known, as measured output data, in a neighbourhood of the finite set of points $P := \{x_k \in (0,l),\ k = \overline{1,K}\} \subset (0,l)$, corresponding to the internal points of a continuous beam, for all $t \in (0,T)$. In the second identification problem the values $\theta_k(t)$, $k = \overline{1,K}$, of the slope $w_x(x,t)$ are assumed to be known, as measured output data, in a neighbourhood of the same set of points $P$ for all $t \in (0,T)$. These inverse source problems are referred to subsequently as ISP1 and ISP2. The general purpose of this study is to develop mathematical concepts and tools capable of providing effective numerical algorithms for the solution of the considered class of inverse problems. Note that both measured output data $\nu_k(t)$ and $\theta_k(t)$ contain random noise. In the first part of the study we prove that each measured output $\nu_k(t)$ and $\theta_k(t)$, $k = \overline{1,K}$, can uniquely determine the unknown functions $f_m \in H^{-1}(0,l)$, $m = \overline{1,M}$.
In the second part of the study we introduce the input-output operators $\mathcal{K}_d : L^2(0,T) \mapsto L^2(0,T)$, $(\mathcal{K}_d f)(t) := w(x,t;f)$, $x \in P$, with $f(x) := (f_1(x),\ldots,f_M(x))$, and $\mathcal{K}_s : L^2(0,T) \mapsto L^2(0,T)$, $(\mathcal{K}_s f)(t) := w_x(x,t;f)$, $x \in P$, corresponding to the problems ISP1 and ISP2, and then reformulate these problems as the operator equations $\mathcal{K}_d f = \nu$ and $\mathcal{K}_s f = \theta$, where $\nu(t) := (\nu_1(t),\ldots,\nu_K(t))$ and $\theta(t) := (\theta_1(t),\ldots,\theta_K(t))$. Since both measured output data contain random noise, we use the most prominent regularisation method, Tikhonov regularisation, introducing the regularised cost functionals $J_1^\alpha(f) := \frac{1}{2}\|\mathcal{K}_d f - \nu\|^2_{L^2(0,T)} + \frac{\alpha}{2}\|f\|^2_{L^2(0,T)}$ and $J_2^\alpha(f) := \frac{1}{2}\|\mathcal{K}_s f - \theta\|^2_{L^2(0,T)} + \frac{\alpha}{2}\|f\|^2_{L^2(0,T)}$. Using a priori estimates for the weak solution of the direct problem and the Tikhonov regularisation method combined with the adjoint problem approach, we prove that the Fréchet gradients $J_1'(f)$ and $J_2'(f)$ of both cost functionals can be derived explicitly via the corresponding weak solutions of adjoint problems and the known temporal loads $g_m(t)$. Moreover, we show that these gradients are Lipschitz continuous, which allows the use of convergent gradient-type iteration algorithms. Two applications of the proposed theory are presented. It is shown that solvability results for inverse source problems related to the synchronous loading case, with a single interior measured datum, are special cases of the results obtained for asynchronously distributed spatial loads.
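The gradient-type iteration that the Lipschitz-continuity result licenses can be sketched in a discretised setting, with the input-output operator replaced by a plain matrix K; the step size, dimensions, and data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tikhonov_gradient_descent(K, nu, alpha, step, n_iter=5000):
    # Gradient iteration for the discrete Tikhonov functional
    #   J(f) = (1/2)||K f - nu||^2 + (alpha/2)||f||^2,
    # whose gradient K^T (K f - nu) + alpha f is the discrete analogue of
    # the adjoint-based Frechet gradient derived in the paper.
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        f -= step * (K.T @ (K @ f - nu) + alpha * f)
    return f
```

Because the gradient is Lipschitz with constant ||KᵀK|| + α, any fixed step below the reciprocal of that constant converges, and the iterate approaches the closed-form regularised solution (KᵀK + αI)⁻¹Kᵀν.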
Point and Compact Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.
2017-12-01
A variety of interesting objects, such as Wolf-Rayet stars, tight OB associations, planetary nebulae, and X-ray binaries, can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.′5 × 6.′5 of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10^{-15} erg cm^{-2} s^{-1}. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole caused by a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
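The forward-and-inverse procedure described here can be sketched for the simplest case of an unbounded homogeneous medium (the study itself uses a bounded spherical torso model): potentials are generated from a known dipole, and the equivalent dipole is recovered by a linear least-squares fit of the moment combined with a search over candidate positions. All geometry and names below are illustrative.

```python
import numpy as np

def dipole_potentials(pos, moment, electrodes, sigma=1.0):
    # Potential of a point dipole in an unbounded homogeneous medium:
    # V = p . (r - r0) / (4 pi sigma |r - r0|^3).
    d = electrodes - pos
    r = np.linalg.norm(d, axis=1)
    return (d @ moment) / (4 * np.pi * sigma * r**3)

def fit_equivalent_dipole(v_measured, electrodes, candidate_positions):
    # For each candidate position the best moment is a linear least-squares
    # fit; keep the position with the smallest residual (chi^2 surrogate,
    # with sigma = 1 as in dipole_potentials above).
    best = None
    for pos in candidate_positions:
        d = electrodes - pos
        r = np.linalg.norm(d, axis=1)
        G = d / (4 * np.pi * r**3)[:, None]   # lead field for this position
        m, *_ = np.linalg.lstsq(G, v_measured, rcond=None)
        chi2 = np.sum((G @ m - v_measured) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, pos, m)
    return best
```

For a genuinely distributed source, the minimising position is the "equivalent" location whose systematic offset from the source's geometrical centre the study quantifies.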